Re: Nodes failed to join the cluster after restarting

2020-11-18 Thread Cong Guo
Hi,

I have attached the log from the only working node while the two others are
restarted. There is no error message other than the "failed to join"
message, and I do not see any clue in the log. I cannot reproduce this issue
either; that's why I am asking about the code. Maybe you know of certain
suspicious places. Thank you.

On Wed, Nov 18, 2020 at 2:45 AM Ivan Bessonov  wrote:

> Sorry, I see that you use TcpDiscoverySpi.
>
> Wed, 18 Nov 2020 at 10:44, Ivan Bessonov :
>
>> Hello,
>>
>> these parameters are configured automatically; I know that you don't
>> configure them. And given that all the "automatic" configuration is
>> complete, the chances of seeing the same bug are low.
>>
>> Understanding the reason is tricky; we would need to debug the starting
>> node or at least add more logging. Is this possible? I see that you're
>> asking me about the code.
>>
>> Knowing the content of "ver" and "histCache.toArray()" in
>> "org.apache.ignite.internal.processors.metastorage.persistence.DistributedMetaStorageImpl#collectJoiningNodeData"
>> would certainly help.
>> More specifically: ver.id() and
>> Arrays.stream(histCache.toArray()).map(item -> Arrays.toString(item.keys())).collect(Collectors.joining(","))
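The joining expression suggested above can be exercised against a stand-in type; `HistItem` here is a hypothetical placeholder for the internal history item, modeling only the `keys()` accessor, so the snippet runs without Ignite internals:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class DmsDump {
    // Hypothetical stand-in for the internal history item; only keys() matters here.
    record HistItem(String[] keys) {}

    // The exact expression from the suggestion, applied to the stand-in array.
    static String dump(HistItem[] histCache) {
        return Arrays.stream(histCache)
                .map(item -> Arrays.toString(item.keys()))
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        HistItem[] hist = {
            new HistItem(new String[] {"baselineAutoAdjustEnabled"}),
            new HistItem(new String[] {"sql.disabledFunctions"})
        };
        // Prints one bracketed key list per history entry, comma-joined.
        System.out.println(dump(hist));
    }
}
```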
>>
>> Honestly, I have no idea how your situation is even possible; otherwise
>> we would have found the solution rather quickly. Needless to say, I can't
>> reproduce it. The error message that you see was created for the case when
>> you join your node to the wrong cluster.
>>
>> Do you have any custom code during the node start? And one more question
>> - what discovery SPI are you using? TCP or Zookeeper?
>>
>>
>> Wed, 18 Nov 2020 at 02:29, Cong Guo :
>>
>>> Hi,
>>>
>>> The parameter values on the two other nodes are the same. Actually, I do
>>> not configure these values; when you enable native persistence, you see
>>> these log lines by default. Nothing is special. When this error occurs on
>>> the restarting node, nothing happens on the two other nodes. When I restart
>>> the second node, it also fails with the same error.
>>>
>>> I will still need to restart the nodes in the future, one by one
>>> without stopping the service, so this issue may happen again. The
>>> workaround is to deactivate the cluster and stop the service, which does
>>> not work in a production environment.
>>>
>>> I think we need to fix this bug or at least understand the reason to
>>> avoid it. Could you please tell me where this version value could be
>>> modified when a node just starts? Do you have any guess about this bug now?
>>> I can help analyze the code. Thank you.
>>>
>>> On Tue, Nov 17, 2020 at 4:09 AM Ivan Bessonov 
>>> wrote:
>>>
>>>> Thank you for the reply!
>>>>
>>>> Right now the only existing distributed properties I see are these:
>>>> - Baseline parameter 'baselineAutoAdjustEnabled' was changed from
>>>> 'null' to 'false'
>>>> - Baseline parameter 'baselineAutoAdjustTimeout' was changed from
>>>> 'null' to '30'
>>>> - SQL parameter 'sql.disabledFunctions' was changed from 'null' to
>>>> '[FILE_WRITE, CANCEL_SESSION, MEMORY_USED, CSVREAD, LINK_SCHEMA,
>>>> MEMORY_FREE, FILE_READ, CSVWRITE, SESSION_ID, LOCK_MODE]'
>>>>
>>>> I wonder what values they have on nodes that rejected the new node. I
>>>> suggest sending logs of those nodes as well.
>>>> Right now I believe that this bug won't happen again on your
>>>> installation, but it only makes it more elusive...
>>>>
>>>> The most probable reason is that the node (somehow) initialized some
>>>> properties with defaults before joining the cluster, while the cluster
>>>> didn't have those values at all.
>>>> The rule is that an activated cluster can't accept changed properties from
>>>> a joining node. So the workaround would be deactivating the cluster, joining
>>>> the node, and activating it again. But as I said, I don't think you'll
>>>> see this bug ever again.
>>>>
>>>> Tue, 17 Nov 2020 at 07:34, Cong Guo :
>>>>
>>>>> Hi,
>>>>>
>>>>> Please find the attached log for a complete but failed reboot. You can
>>>>> see the exceptions.

Re: Nodes failed to join the cluster after restarting

2020-11-17 Thread Cong Guo
Hi,

The parameter values on the two other nodes are the same. Actually, I do not
configure these values; when you enable native persistence, you see these
log lines by default. Nothing is special. When this error occurs on the
restarting node, nothing happens on the two other nodes. When I restart the
second node, it also fails with the same error.

I will still need to restart the nodes in the future, one by one without
stopping the service, so this issue may happen again. The workaround is to
deactivate the cluster and stop the service, which does not work in a
production environment.

I think we need to fix this bug or at least understand the reason to avoid
it. Could you please tell me where this version value could be modified
when a node just starts? Do you have any guess about this bug now? I can
help analyze the code. Thank you.

On Tue, Nov 17, 2020 at 4:09 AM Ivan Bessonov  wrote:

> Thank you for the reply!
>
> Right now the only existing distributed properties I see are these:
> - Baseline parameter 'baselineAutoAdjustEnabled' was changed from 'null'
> to 'false'
> - Baseline parameter 'baselineAutoAdjustTimeout' was changed from 'null'
> to '30'
> - SQL parameter 'sql.disabledFunctions' was changed from 'null' to
> '[FILE_WRITE, CANCEL_SESSION, MEMORY_USED, CSVREAD, LINK_SCHEMA,
> MEMORY_FREE, FILE_READ, CSVWRITE, SESSION_ID, LOCK_MODE]'
>
> I wonder what values they have on nodes that rejected the new node. I
> suggest sending logs of those nodes as well.
> Right now I believe that this bug won't happen again on your installation,
> but it only makes it more elusive...
>
> The most probable reason is that the node (somehow) initialized some
> properties with defaults before joining the cluster, while the cluster didn't
> have those values at all.
> The rule is that an activated cluster can't accept changed properties from a
> joining node. So the workaround would be deactivating the cluster, joining
> the node, and activating it again. But as I said, I don't think you'll
> see this bug ever again.
>
> Tue, 17 Nov 2020 at 07:34, Cong Guo :
>
>> Hi,
>>
>> Please find the attached log for a complete but failed reboot. You can
>> see the exceptions.
>>
>> On Mon, Nov 16, 2020 at 4:00 AM Ivan Bessonov 
>> wrote:
>>
>>> Hello,
>>>
>>> there must be a bug somewhere during node start: the node updates its
>>> distributed metastorage content and then tries to join an already activated
>>> cluster, thus creating a conflict. It's hard to tell the exact data that
>>> caused the conflict, especially without any logs.
>>>
>>> Topic that you mentioned (
>>> http://apache-ignite-users.70518.x6.nabble.com/Question-about-baseline-topology-and-cluster-activation-td34336.html)
>>> seems to be about the same problem, but the issue
>>> https://issues.apache.org/jira/browse/IGNITE-12850 is not related to it.
>>>
>>> If you have logs from those unsuccessful restart attempts, it would be
>>> very helpful.
>>>
>>> Sadly, distributed metastorage is an internal component for storing
>>> settings and has no public documentation. The developer documentation is
>>> probably outdated and incomplete. But just in case, the "version id" that
>>> the message refers to is located in the field
>>> "org.apache.ignite.internal.processors.metastorage.persistence.DistributedMetaStorageImpl#ver";
>>> it's incremented on every distributed metastorage setting update. You can
>>> find your error message in the same class.
>>>
>>> Please follow up with more questions and logs if possible; I hope we'll
>>> figure it out.
>>>
>>> Thank you!
>>>
>>> Fri, 13 Nov 2020 at 02:23, Cong Guo :
>>>
>>>> Hi,
>>>>
>>>> I have a 3-node cluster with persistence enabled. All the three nodes
>>>> are in the baseline topology. The ignite version is 2.8.1.
>>>>
>>>> When I restart the first node, it encounters an error and fails to join
>>>> the cluster. The error message is "Caused by: org.apache.
>>>> ignite.spi.IgniteSpiException: Attempting to join node with larger
>>>> distributed metastorage version id. The node is most likely in invalid
>>>> state and can't be joined." I try several times but get the same error.
>>>>
>>>> Then I restart the second node, it encounters the same error. After I
>>>> restart the third node, the other two nodes can start successfully and join the cluster.

Re: Nodes failed to join the cluster after restarting

2020-11-16 Thread Cong Guo
Hi,

Please find the attached log for a complete but failed reboot. You can see
the exceptions.

On Mon, Nov 16, 2020 at 4:00 AM Ivan Bessonov  wrote:

> Hello,
>
> there must be a bug somewhere during node start: the node updates its
> distributed metastorage content and then tries to join an already activated
> cluster, thus creating a conflict. It's hard to tell the exact data that
> caused the conflict, especially without any logs.
>
> Topic that you mentioned (
> http://apache-ignite-users.70518.x6.nabble.com/Question-about-baseline-topology-and-cluster-activation-td34336.html)
> seems to be about the same problem, but the issue
> https://issues.apache.org/jira/browse/IGNITE-12850 is not related to it.
>
> If you have logs from those unsuccessful restart attempts, it would be
> very helpful.
>
> Sadly, distributed metastorage is an internal component for storing settings
> and has no public documentation. The developer documentation is probably
> outdated and incomplete. But just in case, the "version id" that the message
> refers to is located in the field
> "org.apache.ignite.internal.processors.metastorage.persistence.DistributedMetaStorageImpl#ver";
> it's incremented on every distributed metastorage setting update. You can
> find your error message in the same class.
>
> Please follow up with more questions and logs if possible; I hope we'll
> figure it out.
>
> Thank you!
>
> Fri, 13 Nov 2020 at 02:23, Cong Guo :
>
>> Hi,
>>
>> I have a 3-node cluster with persistence enabled. All the three nodes are
>> in the baseline topology. The ignite version is 2.8.1.
>>
>> When I restart the first node, it encounters an error and fails to join
>> the cluster. The error message is "Caused by: org.apache.
>> ignite.spi.IgniteSpiException: Attempting to join node with larger
>> distributed metastorage version id. The node is most likely in invalid
>> state and can't be joined." I try several times but get the same error.
>>
>> Then I restart the second node, it encounters the same error. After I
>> restart the third node, the other two nodes can start successfully and join
>> the cluster. When I restart the nodes, I do not change the baseline
>> topology. I cannot reproduce this error now.
>>
>> I find someone else has the same problem.
>> http://apache-ignite-users.70518.x6.nabble.com/Question-about-baseline-topology-and-cluster-activation-td34336.html
>>
>> The answer is corruption in the metastorage. I do not see any issue with
>> the metastorage files. However, it is very unlikely that files on two
>> different machines would be corrupted at the same time. Is it possible
>> that this is another bug like
>> https://issues.apache.org/jira/browse/IGNITE-12850?
>>
>> Do you have any document about how the version id is updated and read?
>> Could you please show me in the source code where the version id is read
>> when a node starts and where the version id is updated when a node stops?
>> Thank you!
>>
>>
>>
>
> --
> Sincerely yours,
> Ivan Bessonov
>




Nodes failed to join the cluster after restarting

2020-11-12 Thread Cong Guo
Hi,

I have a 3-node cluster with persistence enabled. All the three nodes are
in the baseline topology. The ignite version is 2.8.1.

When I restart the first node, it encounters an error and fails to join the
cluster. The error message is "Caused by: org.apache.
ignite.spi.IgniteSpiException: Attempting to join node with larger
distributed metastorage version id. The node is most likely in invalid
state and can't be joined." I try several times but get the same error.

Then I restart the second node, it encounters the same error. After I
restart the third node, the other two nodes can start successfully and join
the cluster. When I restart the nodes, I do not change the baseline
topology. I cannot reproduce this error now.

I find someone else has the same problem.
http://apache-ignite-users.70518.x6.nabble.com/Question-about-baseline-topology-and-cluster-activation-td34336.html

The answer is corruption in the metastorage. I do not see any issue with the
metastorage files. However, it is very unlikely that files on two different
machines would be corrupted at the same time. Is it possible that
this is another bug like https://issues.apache.org/jira/browse/IGNITE-12850?

Do you have any document about how the version id is updated and read?
Could you please show me in the source code where the version id is read
when a node starts and where the version id is updated when a node stops?
Thank you!


Re: Ignite test takes several days

2020-09-08 Thread Cong Guo
Hi, all

Thank you for your reply. As mentioned earlier, I want to test ignite-core
with a new patch, so -DskipTests is not an option for me. What test suite
should I use if I just want to test ignite-core?



On Tue, Sep 8, 2020 at 5:33 AM Petr Ivanov  wrote:

> Also, -DskipTests flag can be used to avoid running tests while building
> Apache Ignite.
>
> On 8 Sep 2020, at 12:19, Ilya Kasnacheev 
> wrote:
>
> Hello!
>
> You should never try to run all Ignite tests. They are not supposed to be
> run in that way. You should always build with -DskipTests.
>
> If you really want to run Ignite tests, you should run this test suite
> against your PR:
> https://ci.ignite.apache.org/buildConfiguration/IgniteTests24Java8_RunAll#all-projects
> This is the only meaningful way.
> Then you can use MTCGA to check your test results:
> https://mtcga.gridgain.com/
>
> Obviously, you need to create account in Apache Ignite CI.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 27 Aug 2020 at 16:33, Cong Guo :
>
>> Hi,
>>
>> I try to build the ignite-core on my workstation. I use the original
>> ignite-2.8.1 source package. The test, specifically
>> GridCacheWriteBehindStoreLoadTest, has been running for several days. Is it
>> normal? I run "mvn clean package" directly. Should I configure anything in
>> advance? Thank you.
>>
>
>


Re: Ignite test takes several days

2020-08-28 Thread Cong Guo
Hi,

I want to run all the tests. Actually I want to apply the patch for
https://issues.apache.org/jira/browse/IGNITE-10959 to 2.8.1. I find that
even for the original 2.8.1 source code, the test takes a long time. I
think there must be an env or configuration issue. Do I need any special
configuration for the ignite core test? Thank you.
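For reference, the two common ways to skip tests during a build, and a sketch of running only a single ignite-core test class; the module path `modules/core` and the Surefire `-Dtest` filter are assumptions about the Ignite build layout, not confirmed in this thread:

```shell
# Build without running tests (compiles and packages everything):
mvn clean package -DskipTests

# Skip test compilation as well:
mvn clean package -Dmaven.test.skip=true

# Run only one test class in the core module (illustrative; adjust the
# module path and class name to the actual source tree):
mvn test -pl modules/core -Dtest=GridCacheWriteBehindStoreLoadTest
```

`-DskipTests` still compiles the test sources, which is useful when a patch touches test code; `-Dmaven.test.skip=true` skips compiling them entirely.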


On Thu, Aug 27, 2020 at 9:53 AM Evgenii Zhuravlev 
wrote:

> Hi,
>
> No, it's not normal. Do you really want to run all the tests locally, or do
> you just want to build the project? If you just want to build it, I suggest
> skipping tests by using the -Dmaven.test.skip=true flag.
>
> Evgenii
>
> Thu, 27 Aug 2020 at 06:33, Cong Guo :
>
>> Hi,
>>
>> I try to build the ignite-core on my workstation. I use the original
>> ignite-2.8.1 source package. The test, specifically
>> GridCacheWriteBehindStoreLoadTest, has been running for several days. Is it
>> normal? I run "mvn clean package" directly. Should I configure anything in
>> advance? Thank you.
>>
>


Ignite test takes several days

2020-08-27 Thread Cong Guo
Hi,

I try to build the ignite-core on my workstation. I use the original
ignite-2.8.1 source package. The test, specifically
GridCacheWriteBehindStoreLoadTest, has been running for several days. Is it
normal? I run "mvn clean package" directly. Should I configure anything in
advance? Thank you.


Re: Are data in NearCache BinaryObject?

2019-12-12 Thread Cong Guo
Hi,

My application needs to read all entries in the cache frequently. The
entries may be updated by others. I'm thinking about two solutions to avoid
a lot of deserialization. First, I can maintain my own local hash map and
rely on continuous queries to get the update events. Second, I can use a
NearCache, but if the data in the NearCache are still serialized, this
method does not work for my application.

Thanks,
Nap
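The first solution (a local deserialized mirror kept up to date by update events) can be sketched without the Ignite wiring; the ContinuousQuery setup is omitted, and onUpdated() is a hypothetical stand-in for the query's local listener callback:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a local deserialized mirror of a cache. In the real setup, a
// continuous query's local listener would call onUpdated() for each event;
// reads then hit the local map with no per-read deserialization.
public class LocalMirror<K, V> {
    private final Map<K, V> mirror = new ConcurrentHashMap<>();

    // Called for each update event (null value modeled as a removal).
    public void onUpdated(K key, V newVal) {
        if (newVal == null)
            mirror.remove(key);
        else
            mirror.put(key, newVal);
    }

    // Reads are served entirely from the local map.
    public V get(K key) {
        return mirror.get(key);
    }
}
```

The trade-off versus a NearCache is that the application owns consistency: the mirror is only as fresh as the last delivered event.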

On Thu, Dec 12, 2019 at 5:37 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> It is actually hard to say without debugging. I expect that it is
> BinaryObject or primitive type or byte[].
>
> It is possible to enable on-heap caching, in which case objects will be held
> as is; you can also set copyOnRead=false, in which case objects will not even
> be copied.
> However, I'm not sure if Near Cache will interact with onheap caching.
>
> Why does it matter for your use case?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, 11 Dec 2019 at 22:54, Cong Guo :
>
>> Hi,
>>
>> Are the entries stored in local NearCache on my client node in the format
>> of deserialized java objects or BinaryObject? Will the entry in local
>> on-heap NearCache be deserialized from BinaryObject when I call the get
>> function?
>>
>> Thanks,
>> Nap
>>
>


Are data in NearCache BinaryObject?

2019-12-11 Thread Cong Guo
Hi,

Are the entries stored in local NearCache on my client node in the format
of deserialized java objects or BinaryObject? Will the entry in local
on-heap NearCache be deserialized from BinaryObject when I call the get
function?

Thanks,
Nap


RE: UnsupportedOperationException when updating a field in binary object

2018-07-23 Thread Cong Guo
Thank you for the reply. You are right. The function used to initially get the 
set returns an unmodifiableSet. That is the reason for the error.
However, when we use a Map as a field and initially write an unmodifiableMap, 
we can still modify the Map using getField. 

The code is like how we use the Set: 

BinaryObjectBuilder boBuilder = bo.toBuilder();
Map meta = boBuilder.getField(meta_FieldStr);
meta.put(key, value);
boBuilder.setField(meta_FieldStr, meta, Map.class);

What is the difference between Map and Set here?

Thanks,
Cong

-Original Message-
From: vkulichenko [mailto:valentin.kuliche...@gmail.com] 
Sent: 23 Jul 2018 17:16
To: user@ignite.apache.org
Subject: Re: UnsupportedOperationException when updating a field in binary 
object

This test stops working though if you replace line 30 with this:

builder.setField("set", Collections.unmodifiableSet(Sets.newHashSet("a",
"b", "c")));

If an unmodifiable set is written, it's then read back as an unmodifiable set as
well, and therefore can't be modified. I believe this is the reason for the error.

-Val
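The failure mode described above can be reproduced with plain collections; this minimal sketch (independent of Ignite) shows why add() fails on a set wrapped with Collections.unmodifiableSet():

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class UnmodifiableSetDemo {
    // Returns true iff adding an element throws UnsupportedOperationException.
    public static boolean addFails(Set<String> s) {
        try {
            s.add("d");
            return false;
        } catch (UnsupportedOperationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        Set<String> set = new HashSet<>(Set.of("a", "b", "c"));
        System.out.println(addFails(set));                              // false: plain HashSet is mutable
        System.out.println(addFails(Collections.unmodifiableSet(set))); // true: wrapper rejects add()
    }
}
```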



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


UnsupportedOperationException when updating a field in binary object

2018-07-23 Thread Cong Guo
Hi,

I use a Set class as a field. When I update the field via the 
BinaryObjectBuilder, I get an UnsupportedOperationException.

I use QueryEntity to set the field like:

fieldNameTypeMap.put(tags_FieldStr, Set.class.getName());

Then my update function (in my EntryProcessor) is like:

BinaryObjectBuilder boBuilder = bo.toBuilder();
Set tags = boBuilder.getField(tags_FieldStr);
tags.add(tag);
boBuilder.setField(tags_FieldStr, tags, Set.class);

The exception shows:

Caused by: java.lang.UnsupportedOperationException
at java.util.Collections$UnmodifiableCollection.add(Collections.java:1055)
at com.myproject.managers.common.BOHelperImpl$4.process(BOHelperImpl.java:390)
at 
org.apache.ignite.internal.processors.cache.EntryProcessorResourceInjectorProxy.process(EntryProcessorResourceInjectorProxy.java:68)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.runEntryProcessor(GridCacheMapEntry.java:5142)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4550)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:4367)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3051)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$6200(BPlusTree.java:2945)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1717)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1600)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1199)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:345)
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:1767)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2420)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1883)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1736)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:483)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:443)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1117)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.invoke0(GridDhtAtomicCache.java:826)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.invoke(GridDhtAtomicCache.java:784)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1359)
... 12 more

I check the document:
https://www.gridgain.com/sdk/pe/latest/javadoc/org/apache/ignite/binary/BinaryObjectBuilder.html#getField-java.lang.String-

"Collections and maps returned from this method are modifiable."

Does anyone know how I can update a Set field via the binary object?

Thanks,
Cong
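Given the later diagnosis in this thread (the stored set was an unmodifiableSet), one defensive workaround sketch is to copy the field value into a fresh mutable set before modifying it and write the copy back; withTag() is a hypothetical helper showing only the copy step in plain Java:

```java
import java.util.HashSet;
import java.util.Set;

public class CopyBeforeModify {
    // Copy-then-modify: safe even when 'current' is an unmodifiable view.
    public static Set<String> withTag(Set<String> current, String tag) {
        Set<String> copy = new HashSet<>(current); // fresh mutable copy
        copy.add(tag);
        return copy;
    }
}
```

Applied to the original snippet, the idea is to pass `new HashSet<>(boBuilder.getField(tags_FieldStr))` through this kind of copy before calling `setField`, rather than mutating the returned set in place.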





latency for replication

2018-07-19 Thread Cong Guo
Hi,

How can I measure the latency for updates to be replicated from the primary 
node to a backup node?
I use the PRIMARY_SYNC mode. I want to know the time for a backup node to catch 
up. Is there any API for the latency measurement? Do you have any suggestion?

Thanks,
Cong


RE: How to set JVM opts in the configuration xml

2018-07-06 Thread Cong Guo
Hi,

Please ignore this email. I lost common sense. Sorry.

From: Cong Guo
Sent: 6 Jul 2018 15:25
To: user@ignite.apache.org
Subject: How to set JVM opts in the configuration xml

Hi,

I start the Ignite node in my own code instead of using ignite.sh. How do I set 
JVM opts in the configuration xml?

Thanks,
Cong


How to set JVM opts in the configuration xml

2018-07-06 Thread Cong Guo
Hi,

I start the Ignite node in my own code instead of using ignite.sh. How do I set 
JVM opts in the configuration xml?

Thanks,
Cong


What is the difference between PRIMARY_SYNC and FULL_ASYNC

2018-06-29 Thread Cong Guo
Hi,

Does PRIMARY_SYNC mean waiting for only primary copies even if the primary
copies are on remote nodes, while FULL_ASYNC means waiting for only local
copies no matter whether the local copies are primary or not?

Could you please give me an example case to show different performance results 
with the two CacheWriteSynchronizationModes?

Thanks,
Cong


RE: SQL cannot find data of new class definition

2018-06-21 Thread Cong Guo
Hi,

I don't think this feature requires any change in the SQL API.

When we create a cache, even if the value object contains a nested object, the 
fields in the nested object can be mapped to columns in the table. Now we can 
do this using QueryEntity, for example, 

QueryEntity personEntity = new QueryEntity();
personEntity.setValueType(Person.class.getName());
personEntity.setKeyType(Long.class.getName());
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("addr.streetNum", Integer.class.getName());
fields.put("addr.streetName", String.class.getName());
personEntity.setFields(fields);

There will be two columns named streetNum and streetName in the table 
automatically.
So when we need to add a new field, say in "addr", we can use current ALTER 
TABLE to add a normal column, but now the problem is how to map the new field 
to the column. Now we cannot modify the QueryEntity dynamically, right? I think 
the problem is to support dynamic update of fields in QueryEntity. 

-Original Message-
From: slava.koptilin [mailto:slava.kopti...@gmail.com] 
Sent: 21 Jun 2018 11:27
To: user@ignite.apache.org
Subject: RE: SQL cannot find data of new class definition

Hello Cong,

> when we add a field to the first-level value object and add a column 
> to the table dynamically, they can be connected automatically.
Yes, that is correct.

> So now the problem is when we add a field to the nested object and add 
> a column to the table, they cannot be connected automatically.
It cannot be done via SQL API at runtime.
The reason for that constraint is that this feature would require custom SQL
syntax, which is obviously not ANSI SQL-99, and I don't think there are any
plans to support it.


Perhaps, it makes sense to start a discussion on the dev list.

Thanks!








RE: A bug in SQL "CREATE TABLE" and its underlying Ignite cache

2018-06-21 Thread Cong Guo
So can I add a field to a nested object dynamically (without restarting the 
cluster) by using annotations?

-Original Message-
From: slava.koptilin [mailto:slava.kopti...@gmail.com] 
Sent: 21 Jun 2018 10:44
To: user@ignite.apache.org
Subject: RE: A bug in SQL "CREATE TABLE" and its underlying Ignite cache

Hi,

> How can I write the “Create Table” statement to create columns for the 
> two members of Address in Table Person?
If I am not mistaken, SQL tables cannot contain nested objects.

The Apache Ignite SQL engine allows executing SQL queries over nested fields
if the cache was configured via annotations, for example.
Please take a look at this page:
https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-indexing-nested-objects


Thanks.





RE: SQL cannot find data of new class definition

2018-06-21 Thread Cong Guo
Hi,

I think we can map members in a nested object to table columns when we create 
the cache. And when we add a field to the first-level value object (which 
contains the nested object) and add a column to the table dynamically, they can 
be connected automatically. So now the problem is when we add a field to the 
nested object and add a column to the table, they cannot be connected 
automatically.  May I know how this part is implemented in Ignite? Could you 
please create a ticket and fix this in the future?

Thanks,
Cong

-Original Message- 
From: slava.koptilin [mailto:slava.kopti...@gmail.com] 
Sent: 21 Jun 2018 9:25
To: user@ignite.apache.org
Subject: RE: SQL cannot find data of new class definition

Hello,

> How should I write the "alter table Person" statement if I want to add 
> a new member to class Address after the cache has been created?
I don't think that there is a way to do it for nested objects, unfortunately.

In that case, I think that you need to update your configuration [1] and 
restart the cluster.
[1]
https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-annotation-based-configuration

Thanks!





RE: A bug in SQL "CREATE TABLE" and its underlying Ignite cache

2018-06-20 Thread Cong Guo
Thank you for your reply! I have another question. If the Person class has a 
member of another class, say Address. Class Address has two members, int 
streetNo, String streetName. How can I write the “Create Table” statement to 
create columns for the two members of Address in Table Person? Do I have to 
create another table for Address? Thank you.


From: Вячеслав Коптилин [mailto:slava.kopti...@gmail.com]
Sent: 20 Jun 2018 14:55
To: user@ignite.apache.org
Subject: Re: A bug in SQL "CREATE TABLE" and its underlying Ignite cache

Hi,

> How should I use BinaryObject in the CREATE TABLE statement?
If you want to use binary objects, then there is no need to specify 
'VALUE_TYPE'.

Please use the following code:

String createTableSQL = "CREATE TABLE Persons (id LONG, orgId LONG, firstName VARCHAR, " +
        "lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) " +
        "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, " +
        "WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, CACHE_NAME=" + PERSON_CACHE_NAME + "\"";

Thanks!

Tue, 19 Jun 2018 at 16:43, Cong Guo 
mailto:cong.g...@huawei.com>>:
Hi,

How should I use BinaryObject in the CREATE TABLE statement? I try using 
BinaryObject.class.getName() as the value_type, but get the following exception:

class org.apache.ignite.IgniteCheckedException: Failed to initialize property 
'ORGID' of type 'java.lang.Long' for key class 'class java.lang.Long' and value 
class 'interface org.apache.ignite.binary.BinaryObject'. Make sure that one of 
these classes contains respective getter method or field.

I want to use BinaryObject here for some flexibility.

Thanks,
Cong

From: Вячеслав Коптилин 
[mailto:slava.kopti...@gmail.com<mailto:slava.kopti...@gmail.com>]
Sent: 18 Jun 2018 18:10
To: user@ignite.apache.org<mailto:user@ignite.apache.org>
Subject: Re: A bug in SQL "CREATE TABLE" and its underlying Ignite cache

Hello,

It seems that the root cause of the issue is wrong values of 'KEY_TYPE' and 
'VALUE_TYPE' parameters.
In your case, there is no need to specify 'KEY_TYPE' at all, and 'VALUE_TYPE' 
should Person.class.getName() I think.

Please try the following:
String createTableSQL = "CREATE TABLE Persons (id LONG, orgId LONG, firstName VARCHAR, " +
        "lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) " +
        "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, " +
        "WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, CACHE_NAME=" + PERSON_CACHE_NAME +
        ", VALUE_TYPE=" + Person.class.getName() + "\"";

Best regards,
Slava.

Mon, 18 Jun 2018 at 21:50, Cong Guo 
mailto:cong.g...@huawei.com>>:
Hi,

I need to use both SQL and non-SQL APIs (key-value) on a single cache. I follow 
the document in:
https://apacheignite-sql.readme.io/docs/create-table

I use “CREATE TABLE” to create the table and its underlying cache. I can use 
both SQL “INSERT” and put to add data to the cache. However, when I run a 
SqlFieldsQuery, only the row added by SQL “INSERT” can be seen. The Ignite 
version is 2.4.0.

You can reproduce the bug using the following code:

CacheConfiguration dummyCfg = new CacheConfiguration<>("DUMMY");
dummyCfg.setSqlSchema("PUBLIC");

try (IgniteCache dummyCache = ignite.getOrCreateCache(dummyCfg)) {
    String createTableSQL = "CREATE TABLE Persons (id LONG, orgId LONG, firstName VARCHAR, " +
            "lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) " +
            "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, " +
            "WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, CACHE_NAME=" + PERSON_CACHE_NAME +
            ", KEY_TYPE=String, VALUE_TYPE=BinaryObject\"";

    dummyCache.query(new SqlFieldsQuery(createTableSQL)).getAll();

    SqlFieldsQuery firstInsert = new SqlFieldsQuery(
            "INSERT INTO Persons (id, orgId, firstName, lastname, resume, salary) VALUES (?,?,?,?,?,?)");
    firstInsert.setArgs(1L, 1L, "John", "Smith", "PhD", 1.0d);

    dummyCache.query(firstInsert).getAll();

RE: SQL cannot find data of new class definition

2018-06-20 Thread Cong Guo
Thank you for your reply! This method works, but I have another question. If 
the Person class has a member which is another class, say Address. Class 
Address has two members int streetNo and String streetName. When I create the 
cache, I can use QueryEntity to map the members of Address to table 
columns/fields. How should I write the "alter table Person" statement if I want 
to add a new member to class Address after the cache has been created? Thank 
you!


-Original Message-
From: slava.koptilin [mailto:slava.kopti...@gmail.com] 
Sent: June 20, 2018 14:50
To: user@ignite.apache.org
Subject: RE: SQL cannot find data of new class definition

Hello,

> Can I add fields without restarting the cluster?
Yes, it can be done via a DDL command, as Ilya Kasnacheev mentioned.

Let's assume that you created a cache:

CacheConfiguration cfg = new CacheConfiguration(PERSON_CACHE_NAME)
    .setIndexedTypes(Long.class, Person.class);

IgniteCache cache = ignite.getOrCreateCache(cfg);

where the Person class has two fields, 'id' and 'firstName'.

After that, you want to add a new field, for example 'secondName'.

// please take a look at the details:
// https://apacheignite-sql.readme.io/docs/alter-table
String ddl = "alter table Person add column secondName varchar";

// execute the DDL command
cache.query(new SqlFieldsQuery(ddl)).getAll();

// the new field should be queryable
List<List<?>> rows = cache.query(new SqlFieldsQuery("select secondName from Person")).getAll();

hope it helps.

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: A bug in SQL "CREATE TABLE" and its underlying Ignite cache

2018-06-19 Thread Cong Guo
Hi,

How should I use BinaryObject in the CREATE TABLE statement? I try using 
BinaryObject.class.getName() as the value_type, but get the following exception:

class org.apache.ignite.IgniteCheckedException: Failed to initialize property 
'ORGID' of type 'java.lang.Long' for key class 'class java.lang.Long' and value 
class 'interface org.apache.ignite.binary.BinaryObject'. Make sure that one of 
these classes contains respective getter method or field.

I want to use BinaryObject here for some flexibility.

Thanks,
Cong

From: Вячеслав Коптилин [mailto:slava.kopti...@gmail.com]
Sent: June 18, 2018 18:10
To: user@ignite.apache.org
Subject: Re: A bug in SQL "CREATE TABLE" and its underlying Ignite cache

Hello,

It seems that the root cause of the issue is wrong values of 'KEY_TYPE' and 
'VALUE_TYPE' parameters.
In your case, there is no need to specify 'KEY_TYPE' at all, and 'VALUE_TYPE' 
should be Person.class.getName(), I think.

Please try the following:

String createTableSQL = "CREATE TABLE Persons (id LONG, orgId LONG, firstName VARCHAR, " +
    "lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) " +
    "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, " +
    "CACHE_NAME=" + PERSON_CACHE_NAME + ", VALUE_TYPE=" + Person.class.getName() + "\"";

Best regards,
Slava.

Mon, Jun 18, 2018 at 21:50, Cong Guo <cong.g...@huawei.com>:
Hi,

I need to use both SQL and non-SQL APIs (key-value) on a single cache. I follow 
the document in:
https://apacheignite-sql.readme.io/docs/create-table

I use “CREATE TABLE” to create the table and its underlying cache. I can use 
both SQL “INSERT” and put to add data to the cache. However, when I run a 
SqlFieldsQuery, only the row added by SQL “INSERT” can be seen. The Ignite 
version is 2.4.0.

You can reproduce the bug using the following code:

CacheConfiguration dummyCfg = new CacheConfiguration<>("DUMMY");
dummyCfg.setSqlSchema("PUBLIC");

try (IgniteCache dummyCache = ignite.getOrCreateCache(dummyCfg)) {
    String createTableSQL = "CREATE TABLE Persons (id LONG, orgId LONG, firstName VARCHAR, " +
        "lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) " +
        "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, " +
        "CACHE_NAME=" + PERSON_CACHE_NAME + ", KEY_TYPE=String, VALUE_TYPE=BinaryObject\"";

    dummyCache.query(new SqlFieldsQuery(createTableSQL)).getAll();

    SqlFieldsQuery firstInsert = new SqlFieldsQuery(
        "INSERT INTO Persons (id, orgId, firstName, lastname, resume, salary) VALUES (?,?,?,?,?,?)");
    firstInsert.setArgs(1L, 1L, "John", "Smith", "PhD", 1.0d);
    dummyCache.query(firstInsert).getAll();

    try (IgniteCache personCache = ignite.cache(PERSON_CACHE_NAME)) {
        Person p2 = new Person(2L, 1L, "Hello", "World", "Master", 1000.0d);
        personCache.put("Hello", p2);

        IgniteCache binaryCache = personCache.withKeepBinary();
        System.out.println("Size of the cache is: " + binaryCache.size(CachePeekMode.ALL));

        binaryCache.query(new ScanQuery<>(null)).forEach(entry -> System.out.println(entry.getKey()));

        System.out.println("Select results: ");
        SqlFieldsQuery qry = new SqlFieldsQuery("select * from Persons");
        QueryCursor<List<?>> answers = personCache.query(qry);

RE: SQL cannot find data of new class definition

2018-06-18 Thread Cong Guo
Can I add fields without restarting the cluster? My requirement is to do 
rolling upgrade.

From: Вячеслав Коптилин [mailto:slava.kopti...@gmail.com]
Sent: June 18, 2018 17:35
To: user@ignite.apache.org
Subject: Re: SQL cannot find data of new class definition

Hello,

>  I use BinaryObject in the first place because the document says BinaryObject 
> “enables you to add and remove fields from objects of the same type”
Yes, you can dynamically add fields to a BinaryObject using BinaryObjectBuilder, 
but fields that you want to query have to be specified at node startup, for 
example through QueryEntity.
Please take a look at this page: 
https://apacheignite.readme.io/v2.5/docs/indexes#queryentity-based-configuration

I would suggest specifying a new field via QueryEntity in XML configuration 
file and restart your cluster. I hope it helps.

Thanks!

Mon, Jun 18, 2018 at 16:47, Cong Guo <cong.g...@huawei.com>:
Hi,

Does anyone have experience using both the cache and SQL interfaces at the same 
time? How do you handle the possible upgrade? Is my problem a bug in 
BinaryObject? Should I debug the Ignite source code?

From: Cong Guo
Sent: June 15, 2018 10:12
To: 'user@ignite.apache.org' <user@ignite.apache.org>
Subject: RE: SQL cannot find data of new class definition

I run the SQL query only after the cache size has changed. The new data should 
be already in the cache when I run the query.


From: Cong Guo
Sent: June 15, 2018 10:01
To: user@ignite.apache.org
Subject: RE: SQL cannot find data of new class definition

Hi,

Thank you for the reply. In my original test, I do not create a table using 
SQL. I just create a cache. I think a table using the value class name is 
created implicitly. I add the new field/column using ALTER TABLE before I put 
new data into the cache, but I still cannot find the data of the new class in 
the table with the class name.

It is easy to reproduce my original test. I use the Person class from ignite 
example.

In the old code:

CacheConfiguration personCacheCfg = new CacheConfiguration<>(PERSON_CACHE_NAME);
personCacheCfg.setCacheMode(CacheMode.REPLICATED);
personCacheCfg.setQueryEntities(Arrays.asList(createPersonQueryEntity()));
try (IgniteCache personCache = ignite.getOrCreateCache(personCacheCfg)) {
    // add some data here
    Person p1 = new Person(…);
    personCache.put(1L, p1);
    // keep the node running and run the SQL query
}

private static QueryEntity createPersonQueryEntity() {
    QueryEntity personEntity = new QueryEntity();

    personEntity.setValueType(Person.class.getName());
    personEntity.setKeyType(Long.class.getName());

    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("id", Long.class.getName());
    fields.put("orgId", Long.class.getName());
    fields.put("firstName", String.class.getName());
    fields.put("lastName", String.class.getName());
    fields.put("resume", String.class.getName());
    fields.put("salary", Double.class.getName());
    personEntity.setFields(fields);

    personEntity.setIndexes(Arrays.asList(
        new QueryIndex("id"),
        new QueryIndex("orgId")
    ));

    return personEntity;
}

The SQL query is:

IgniteCache binaryCache = personCache.withKeepBinary();
SqlFieldsQuery qry = new SqlFieldsQuery("select salary from Person");

QueryCursor<List<?>> answers = binaryCache.query(qry);
List<List<?>> salaryList = answers.getAll();
for (List<?> row : salaryList) {
    Double salary = (Double) row.get(0);
    System.out.println(salary);
}

In the new code:

I add a member to the Person class: "private int addOn".

try (IgniteCache personCache = ignite.cache(PERSON_CACHE_NAME)) {
    // add the new data and then check the cache size
    Person p2 = new Person(…);
    personCache.put(2L, p2);
    System.out.println("

A bug in SQL "CREATE TABLE" and its underlying Ignite cache

2018-06-18 Thread Cong Guo
Hi,

I need to use both SQL and non-SQL APIs (key-value) on a single cache. I follow 
the document in:
https://apacheignite-sql.readme.io/docs/create-table

I use "CREATE TABLE" to create the table and its underlying cache. I can use 
both SQL "INSERT" and put to add data to the cache. However, when I run a 
SqlFieldsQuery, only the row added by SQL "INSERT" can be seen. The Ignite 
version is 2.4.0.

You can reproduce the bug using the following code:

CacheConfiguration dummyCfg = new CacheConfiguration<>("DUMMY");
dummyCfg.setSqlSchema("PUBLIC");

try (IgniteCache dummyCache = ignite.getOrCreateCache(dummyCfg)) {
    String createTableSQL = "CREATE TABLE Persons (id LONG, orgId LONG, firstName VARCHAR, " +
        "lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) " +
        "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, " +
        "CACHE_NAME=" + PERSON_CACHE_NAME + ", KEY_TYPE=String, VALUE_TYPE=BinaryObject\"";

    dummyCache.query(new SqlFieldsQuery(createTableSQL)).getAll();

    SqlFieldsQuery firstInsert = new SqlFieldsQuery(
        "INSERT INTO Persons (id, orgId, firstName, lastname, resume, salary) VALUES (?,?,?,?,?,?)");
    firstInsert.setArgs(1L, 1L, "John", "Smith", "PhD", 1.0d);
    dummyCache.query(firstInsert).getAll();

    try (IgniteCache personCache = ignite.cache(PERSON_CACHE_NAME)) {
        Person p2 = new Person(2L, 1L, "Hello", "World", "Master", 1000.0d);
        personCache.put("Hello", p2);

        IgniteCache binaryCache = personCache.withKeepBinary();
        System.out.println("Size of the cache is: " + binaryCache.size(CachePeekMode.ALL));

        binaryCache.query(new ScanQuery<>(null)).forEach(entry -> System.out.println(entry.getKey()));

        System.out.println("Select results: ");
        SqlFieldsQuery qry = new SqlFieldsQuery("select * from Persons");
        QueryCursor<List<?>> answers = personCache.query(qry);
        List<List<?>> personList = answers.getAll();
        for (List<?> row : personList) {
            String fn = (String) row.get(2);
            System.out.println(fn);
        }
    }
}


The output is:

Size of the cache is: 2
Hello
String [idHash=213193302, hash=-900113201, FIRSTNAME=John]
Select results:
John

The bug is that the SqlFieldsQuery cannot see the data added by "put".
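For comparison, the WITH clause only makes key-value puts visible to SQL when KEY_TYPE and VALUE_TYPE name the types the cache actually stores (as also suggested elsewhere in this thread), not the literal strings "String" and "BinaryObject". The following plain-Java sketch just builds the corrected DDL string; the cache name "PersonCache" and the com.example.Person package are assumptions for illustration:

```java
// Sketch: corrected DDL for the report above. KEY_TYPE/VALUE_TYPE must be the
// fully-qualified names of the stored types. Cache name and Person's package
// are hypothetical.
public class CreateTableDdl {
    static final String PERSON_CACHE_NAME = "PersonCache";        // assumed cache name
    static final String PERSON_VALUE_TYPE = "com.example.Person"; // assumed FQN of Person

    static String createTableSql() {
        return "CREATE TABLE Persons (id LONG, orgId LONG, firstName VARCHAR, "
            + "lastName VARCHAR, resume VARCHAR, salary FLOAT, PRIMARY KEY(firstName)) "
            + "WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, "
            + "WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, "
            + "CACHE_NAME=" + PERSON_CACHE_NAME
            + ", KEY_TYPE=java.lang.String"            // the PRIMARY KEY column is a String
            + ", VALUE_TYPE=" + PERSON_VALUE_TYPE + "\"";
    }

    public static void main(String[] args) {
        System.out.println(createTableSql());
    }
}
```

With matching type names, an entry written via personCache.put(...) is indexed under the same SQL type as rows written via INSERT, so both show up in one SqlFieldsQuery.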


RE: SQL cannot find data of new class definition

2018-06-18 Thread Cong Guo
Hi,

Does anyone have experience using both the cache and SQL interfaces at the same 
time? How do you handle the possible upgrade? Is my problem a bug in 
BinaryObject? Should I debug the Ignite source code?

From: Cong Guo
Sent: June 15, 2018 10:12
To: 'user@ignite.apache.org' 
Subject: RE: SQL cannot find data of new class definition

I run the SQL query only after the cache size has changed. The new data should 
be already in the cache when I run the query.


From: Cong Guo
Sent: June 15, 2018 10:01
To: user@ignite.apache.org
Subject: RE: SQL cannot find data of new class definition

Hi,

Thank you for the reply. In my original test, I do not create a table using 
SQL. I just create a cache. I think a table using the value class name is 
created implicitly. I add the new field/column using ALTER TABLE before I put 
new data into the cache, but I still cannot find the data of the new class in 
the table with the class name.

It is easy to reproduce my original test. I use the Person class from ignite 
example.

In the old code:

CacheConfiguration personCacheCfg = new CacheConfiguration<>(PERSON_CACHE_NAME);
personCacheCfg.setCacheMode(CacheMode.REPLICATED);
personCacheCfg.setQueryEntities(Arrays.asList(createPersonQueryEntity()));
try (IgniteCache personCache = ignite.getOrCreateCache(personCacheCfg)) {
    // add some data here
    Person p1 = new Person(…);
    personCache.put(1L, p1);
    // keep the node running and run the SQL query
}

private static QueryEntity createPersonQueryEntity() {
    QueryEntity personEntity = new QueryEntity();

    personEntity.setValueType(Person.class.getName());
    personEntity.setKeyType(Long.class.getName());

    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("id", Long.class.getName());
    fields.put("orgId", Long.class.getName());
    fields.put("firstName", String.class.getName());
    fields.put("lastName", String.class.getName());
    fields.put("resume", String.class.getName());
    fields.put("salary", Double.class.getName());
    personEntity.setFields(fields);

    personEntity.setIndexes(Arrays.asList(
        new QueryIndex("id"),
        new QueryIndex("orgId")
    ));

    return personEntity;
}

The SQL query is:

IgniteCache binaryCache = personCache.withKeepBinary();
SqlFieldsQuery qry = new SqlFieldsQuery("select salary from Person");

QueryCursor<List<?>> answers = binaryCache.query(qry);
List<List<?>> salaryList = answers.getAll();
for (List<?> row : salaryList) {
    Double salary = (Double) row.get(0);
    System.out.println(salary);
}

In the new code:

I add a member to the Person class: "private int addOn".

try (IgniteCache personCache = ignite.cache(PERSON_CACHE_NAME)) {
    // add the new data and then check the cache size
    Person p2 = new Person(…);
    personCache.put(2L, p2);
    System.out.println("Size of the cache is: " + personCache.size(CachePeekMode.ALL));
}

I can only get the data of the old class P1 using the SQL query, but there is 
no error.

I use BinaryObject in the first place because the document says BinaryObject 
“enables you to add and remove fields from objects of the same type”

https://apacheignite.readme.io/docs/binary-marshaller

I can get the data of different class definitions using get(key), but I also 
need the SQL fields query.

IgniteCache binaryCache = personCache.withKeepBinary();
BinaryObject bObj = binaryCache.get(1L);
System.out.println(bObj.type().field("firstName").value(bObj) + " " + bObj.type().field("salary").value(bObj));
System.out.println("" + bObj.type().field("addON").value(bObj));

BinaryObject bObj2 = binaryCache.get(2L);
System.out.println(bObj2.type().field("firstName").value(bObj2) + " " + bObj2.type().field("salary").value(bObj2));
System.out.println("

RE: SQL cannot find data of new class definition

2018-06-15 Thread Cong Guo
I run the SQL query only after the cache size has changed. The new data should 
be already in the cache when I run the query.


From: Cong Guo
Sent: June 15, 2018 10:01
To: user@ignite.apache.org
Subject: RE: SQL cannot find data of new class definition

Hi,

Thank you for the reply. In my original test, I do not create a table using 
SQL. I just create a cache. I think a table using the value class name is 
created implicitly. I add the new field/column using ALTER TABLE before I put 
new data into the cache, but I still cannot find the data of the new class in 
the table with the class name.

It is easy to reproduce my original test. I use the Person class from ignite 
example.

In the old code:

CacheConfiguration personCacheCfg = new CacheConfiguration<>(PERSON_CACHE_NAME);
personCacheCfg.setCacheMode(CacheMode.REPLICATED);
personCacheCfg.setQueryEntities(Arrays.asList(createPersonQueryEntity()));
try (IgniteCache personCache = ignite.getOrCreateCache(personCacheCfg)) {
    // add some data here
    Person p1 = new Person(…);
    personCache.put(1L, p1);
    // keep the node running and run the SQL query
}

private static QueryEntity createPersonQueryEntity() {
    QueryEntity personEntity = new QueryEntity();

    personEntity.setValueType(Person.class.getName());
    personEntity.setKeyType(Long.class.getName());

    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("id", Long.class.getName());
    fields.put("orgId", Long.class.getName());
    fields.put("firstName", String.class.getName());
    fields.put("lastName", String.class.getName());
    fields.put("resume", String.class.getName());
    fields.put("salary", Double.class.getName());
    personEntity.setFields(fields);

    personEntity.setIndexes(Arrays.asList(
        new QueryIndex("id"),
        new QueryIndex("orgId")
    ));

    return personEntity;
}

The SQL query is:

IgniteCache binaryCache = personCache.withKeepBinary();
SqlFieldsQuery qry = new SqlFieldsQuery("select salary from Person");

QueryCursor<List<?>> answers = binaryCache.query(qry);
List<List<?>> salaryList = answers.getAll();
for (List<?> row : salaryList) {
    Double salary = (Double) row.get(0);
    System.out.println(salary);
}

In the new code:

I add a member to the Person class: "private int addOn".

try (IgniteCache personCache = ignite.cache(PERSON_CACHE_NAME)) {
    // add the new data and then check the cache size
    Person p2 = new Person(…);
    personCache.put(2L, p2);
    System.out.println("Size of the cache is: " + personCache.size(CachePeekMode.ALL));
}

I can only get the data of the old class P1 using the SQL query, but there is 
no error.

I use BinaryObject in the first place because the document says BinaryObject 
“enables you to add and remove fields from objects of the same type”

https://apacheignite.readme.io/docs/binary-marshaller

I can get the data of different class definitions using get(key), but I also 
need the SQL fields query.

IgniteCache binaryCache = personCache.withKeepBinary();
BinaryObject bObj = binaryCache.get(1L);
System.out.println(bObj.type().field("firstName").value(bObj) + " " + bObj.type().field("salary").value(bObj));
System.out.println("" + bObj.type().field("addON").value(bObj));

BinaryObject bObj2 = binaryCache.get(2L);
System.out.println(bObj2.type().field("firstName").value(bObj2) + " " + bObj2.type().field("salary").value(bObj2));
System.out.println("" + bObj2.type().field("addON").value(bObj2));



Thanks,
Cong



From: Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
Sent: June 15, 2018 9:37
To: user@ignite.apache.org
Subject: Re: SQL cannot find data of new class definition

Hello!

You can add fields to existing SQL-backed cache using ALTER TABLE ... ADD 
COLUMN comm

RE: SQL cannot find data of new class definition

2018-06-15 Thread Cong Guo
Hi,

Thank you for the reply. In my original test, I do not create a table using 
SQL. I just create a cache. I think a table using the value class name is 
created implicitly. I add the new field/column using ALTER TABLE before I put 
new data into the cache, but I still cannot find the data of the new class in 
the table with the class name.

It is easy to reproduce my original test. I use the Person class from ignite 
example.

In the old code:

CacheConfiguration personCacheCfg = new CacheConfiguration<>(PERSON_CACHE_NAME);
personCacheCfg.setCacheMode(CacheMode.REPLICATED);
personCacheCfg.setQueryEntities(Arrays.asList(createPersonQueryEntity()));
try (IgniteCache personCache = ignite.getOrCreateCache(personCacheCfg)) {
    // add some data here
    Person p1 = new Person(…);
    personCache.put(1L, p1);
    // keep the node running and run the SQL query
}

private static QueryEntity createPersonQueryEntity() {
    QueryEntity personEntity = new QueryEntity();

    personEntity.setValueType(Person.class.getName());
    personEntity.setKeyType(Long.class.getName());

    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("id", Long.class.getName());
    fields.put("orgId", Long.class.getName());
    fields.put("firstName", String.class.getName());
    fields.put("lastName", String.class.getName());
    fields.put("resume", String.class.getName());
    fields.put("salary", Double.class.getName());
    personEntity.setFields(fields);

    personEntity.setIndexes(Arrays.asList(
        new QueryIndex("id"),
        new QueryIndex("orgId")
    ));

    return personEntity;
}

The SQL query is:

IgniteCache binaryCache = personCache.withKeepBinary();
SqlFieldsQuery qry = new SqlFieldsQuery("select salary from Person");

QueryCursor<List<?>> answers = binaryCache.query(qry);
List<List<?>> salaryList = answers.getAll();
for (List<?> row : salaryList) {
    Double salary = (Double) row.get(0);
    System.out.println(salary);
}

In the new code:

I add a member to the Person class: "private int addOn".

try (IgniteCache personCache = ignite.cache(PERSON_CACHE_NAME)) {
    // add the new data and then check the cache size
    Person p2 = new Person(…);
    personCache.put(2L, p2);
    System.out.println("Size of the cache is: " + personCache.size(CachePeekMode.ALL));
}

I can only get the data of the old class P1 using the SQL query, but there is 
no error.

I use BinaryObject in the first place because the document says BinaryObject 
“enables you to add and remove fields from objects of the same type”

https://apacheignite.readme.io/docs/binary-marshaller

I can get the data of different class definitions using get(key), but I also 
need the SQL fields query.

IgniteCache binaryCache = personCache.withKeepBinary();
BinaryObject bObj = binaryCache.get(1L);
System.out.println(bObj.type().field("firstName").value(bObj) + " " + bObj.type().field("salary").value(bObj));
System.out.println("" + bObj.type().field("addON").value(bObj));

BinaryObject bObj2 = binaryCache.get(2L);
System.out.println(bObj2.type().field("firstName").value(bObj2) + " " + bObj2.type().field("salary").value(bObj2));
System.out.println("" + bObj2.type().field("addON").value(bObj2));



Thanks,
Cong



From: Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
Sent: June 15, 2018 9:37
To: user@ignite.apache.org
Subject: Re: SQL cannot find data of new class definition

Hello!

You can add fields to existing SQL-backed cache using ALTER TABLE ... ADD 
COLUMN command:
https://apacheignite-sql.readme.io/docs/alter-table

The recommendation for your use case, where the layout of data is expected to 
change, is to just use SQL (DDL) defined tables and forget about BinaryObjects.

With regards to your original case, i.e., a different class de

SQL cannot find data of new class definition

2018-06-15 Thread Cong Guo
Hi all,

I am trying to use BinaryObject to support data of different class definitions 
in one cache. This is for my system upgrade. I first start a cache with data, 
and then launch a new node to join the cluster and put new data into the 
existing cache. The new data has a different class definition. I use the same 
class name, but add a member to the class. I can add the objects of this new 
class to the cache. The cache size changes and I can get both the new and old 
data using keys. However, when I use SqlFieldsQuery like "select id from 
myclassname", where id is a member that exists in both versions of the class, I 
can get only the old data. There is no error or exception. SQL just cannot find 
the data of the new class definition.

How can I use SQL queries to find both the new and old data? The new data is in 
the cache, but it seems not being in the table using my class name. Where is 
the new data? Is there a new table? If yes, what is the table name? I do not 
expect to see the new column using the old query. I just hope to see the old 
fields of new data using the old queries.

BTW, I use QueryEntity to set up fields and indexes in my code. Does anyone 
have an example of how to add fields to an existing cache dynamically? Thank you!


RE: ClassCastException When Using CacheEntryProcessor in StreamVisitor

2018-06-08 Thread Cong Guo
Static classes do not work either. Has anyone used a CacheEntryProcessor in a 
StreamVisitor? Does Ignite allow that?
My requirement is to update certain fields of the value in the cache based on 
a data stream. I want to update only several fields in a large object. Do I 
have to get the whole value object and then put it back?
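For the partial-update part of the question, one commonly used pattern is an entry processor over the cache's keep-binary view, so the large value never has to be fully deserialized. The sketch below assumes a Long key and a "salary" field, and needs a running Ignite node plus ignite-core on the classpath; it is an illustration, not the poster's actual code:

```java
import javax.cache.processor.EntryProcessorException;
import javax.cache.processor.MutableEntry;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;
import org.apache.ignite.cache.CacheEntryProcessor;

/**
 * Sketch: update a single field in place. The processor works on the binary
 * form, so the whole value is never deserialized on the caller's side.
 * The field name "salary" and the Long key are assumptions.
 */
public class SalaryUpdater implements CacheEntryProcessor<Long, BinaryObject, Void> {
    @Override public Void process(MutableEntry<Long, BinaryObject> entry, Object... args)
        throws EntryProcessorException {
        BinaryObject current = entry.getValue();
        if (current != null) {
            // Rebuild the binary value, changing only the one field.
            BinaryObjectBuilder builder = current.toBuilder();
            builder.setField("salary", (Double) args[0]);
            entry.setValue(builder.build());
        }
        return null;
    }
}

// Invoke it through the keep-binary view of the cache; otherwise the
// processor receives a deserialized Person and the cast to BinaryObject
// fails, which matches the ClassCastException quoted later in this thread:
//
//   IgniteCache<Long, BinaryObject> binaryCache =
//       ignite.<Long, Person>cache(PERSON_CACHE_NAME).withKeepBinary();
//   binaryCache.invoke(2L, new SalaryUpdater(), 2000.0d);
```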

Thanks,
Cong

From: Cong Guo
Sent: June 6, 2018 9:50
To: user@ignite.apache.org
Subject: RE: ClassCastException When Using CacheEntryProcessor in StreamVisitor

Hi,

I put the same jar on the two nodes. The codes are the same. Why does lambda 
not work here? Thank you.


From: Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
Sent: June 6, 2018 9:11
To: user@ignite.apache.org
Subject: Re: ClassCastException When Using CacheEntryProcessor in StreamVisitor

Hi,

Is it possible that you changed the lambda code between calls? Or maybe the 
classes differ between the nodes?
Try to replace the lambdas with static classes in your code. Will that work for you?
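A sketch of that suggestion: a named static receiver class instead of a lambda, so the same compiled class is present in the jar deployed on every node and there is no synthetic lambda class to mismatch between peers. The cache name, key/value types, and per-entry logic are assumptions, and this needs a running Ignite cluster:

```java
import java.util.Collection;
import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.IgniteException;
import org.apache.ignite.Ignition;
import org.apache.ignite.stream.StreamReceiver;

public class StaticReceiverExample {
    /** Named static class: identical bytecode on all nodes, unlike a lambda. */
    public static class SalaryReceiver implements StreamReceiver<Long, Double> {
        @Override public void receive(IgniteCache<Long, Double> cache,
            Collection<Map.Entry<Long, Double>> entries) throws IgniteException {
            for (Map.Entry<Long, Double> e : entries)
                cache.put(e.getKey(), e.getValue()); // replace with the per-entry update logic
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, Double> cache = ignite.getOrCreateCache("salaries");
            try (IgniteDataStreamer<Long, Double> streamer = ignite.dataStreamer(cache.getName())) {
                streamer.receiver(new SalaryReceiver());
                streamer.addData(1L, 1500.0d);
            }
        }
    }
}
```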

On Tue, Jun 5, 2018 at 10:28 PM, Cong Guo <cong.g...@huawei.com> wrote:
Hi,

The stacktrace is as follows. Do I use the CacheEntryProcessor in the right 
way? May I have an example about how to use CacheEntryProcessor in 
StreamVisitor, please? Thank you!

javax.cache.processor.EntryProcessorException: java.lang.ClassCastException: 
com.huawei.clusterexperiment.model.Person cannot be cast to 
org.apache.ignite.binary.BinaryObject
at 
org.apache.ignite.internal.processors.cache.CacheInvokeResult.get(CacheInvokeResult.java:102)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1361)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1405)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.invoke(GatewayProtectedCacheProxy.java:1362)
at 
com.huawei.clusterexperiment.Client.lambda$streamUpdate$531c8d2f$1(Client.java:337)
at org.apache.ignite.stream.StreamVisitor$1.apply(StreamVisitor.java:50)
at org.apache.ignite.stream.StreamVisitor$1.apply(StreamVisitor.java:48)
at org.apache.ignite.stream.StreamVisitor.receive(StreamVisitor.java:38)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:137)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:397)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:302)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:59)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:89)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:505)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassCastException: 
com.huawei.clusterexperiment.model.Person cannot be cast to 
org.apache.ignite.binary.BinaryObject
at com.huawei.clusterexperiment.Client$2.process(Client.java:340)
at 
org.apache.ignite.internal.processors.cache.EntryProcessorResourceInjectorProxy.process(EntryProcessorResourceInjectorProxy.java:68)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onEntriesLocked(GridDhtTxPrepareFuture.java:421)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1231)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:671)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1048)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareAsyncLocal(GridNearTxLocal.java:3452)
at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareColocatedTx(IgniteTxHandler.java:257)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.proceedPrepare(GridNearOptimisticTxPrepareFuture.java:578)
at 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepareSingle(GridNearOptimisticTxPrepareFuture.java:405

RE: ClassCastException When Using CacheEntryProcessor in StreamVisitor

2018-06-06 Thread Cong Guo
Hi,

I put the same jar on the two nodes. The codes are the same. Why does lambda 
not work here? Thank you.


From: Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
Sent: June 6, 2018 9:11
To: user@ignite.apache.org
Subject: Re: ClassCastException When Using CacheEntryProcessor in StreamVisitor

Hi,

Is it possible that you changed the lambda code between calls? Or maybe the 
classes differ between the nodes?
Try to replace the lambdas with static classes in your code. Will that work for you?

On Tue, Jun 5, 2018 at 10:28 PM, Cong Guo <cong.g...@huawei.com> wrote:
Hi,

The stacktrace is as follows. Do I use the CacheEntryProcessor in the right 
way? May I have an example about how to use CacheEntryProcessor in 
StreamVisitor, please? Thank you!

javax.cache.processor.EntryProcessorException: java.lang.ClassCastException: 
com.huawei.clusterexperiment.model.Person cannot be cast to 
org.apache.ignite.binary.BinaryObject
at 
org.apache.ignite.internal.processors.cache.CacheInvokeResult.get(CacheInvokeResult.java:102)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1361)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1405)
at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.invoke(GatewayProtectedCacheProxy.java:1362)
at com.huawei.clusterexperiment.Client.lambda$streamUpdate$531c8d2f$1(Client.java:337)
at org.apache.ignite.stream.StreamVisitor$1.apply(StreamVisitor.java:50)
at org.apache.ignite.stream.StreamVisitor$1.apply(StreamVisitor.java:48)
at org.apache.ignite.stream.StreamVisitor.receive(StreamVisitor.java:38)
at org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:137)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:397)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:302)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:59)
at org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:89)
at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
at org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:505)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassCastException: com.huawei.clusterexperiment.model.Person cannot be cast to org.apache.ignite.binary.BinaryObject
at com.huawei.clusterexperiment.Client$2.process(Client.java:340)
at org.apache.ignite.internal.processors.cache.EntryProcessorResourceInjectorProxy.process(EntryProcessorResourceInjectorProxy.java:68)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onEntriesLocked(GridDhtTxPrepareFuture.java:421)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1231)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:671)
at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1048)
at org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.prepareAsyncLocal(GridNearTxLocal.java:3452)
at org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareColocatedTx(IgniteTxHandler.java:257)
at org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.proceedPrepare(GridNearOptimisticTxPrepareFuture.java:578)
at org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepareSingle(GridNearOptimisticTxPrepareFuture.java:405)
at org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.prepare0(GridNearOptimisticTxPrepareFuture.java:348)
at org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepareOnTopology(GridNearOptimisticTxPrepareFutureAdapter.java:137)
at org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepare(GridNearOptimisticTxPrepareFutureAdapter.java:74)

RE: ClassCastException When Using CacheEntryProcessor in StreamVisitor

2018-06-05 Thread Cong Guo
 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.invokeAsync(GridNearTxLocal.java:407)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$25.op(GridCacheAdapter.java:2486)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter$25.op(GridCacheAdapter.java:2478)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4088)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.invoke0(GridCacheAdapter.java:2478)
at org.apache.ignite.internal.processors.cache.GridCacheAdapter.invoke(GridCacheAdapter.java:2456)
at org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.invoke(GridCacheProxyImpl.java:588)
at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1359)
... 17 more




From: Alexey Goncharuk [mailto:alexey.goncha...@gmail.com]
Sent: 5 June 2018 12:32
To: user 
Subject: Re: ClassCastException When Using CacheEntryProcessor in StreamVisitor

Hello,

Can you please share the full stacktrace so we can see where the original 
ClassCastException is initiated? If it is not printed on a client, it should be 
printed on one of the server nodes.

Thanks!

Tue, 5 Jun 2018 at 18:35, Cong Guo <cong.g...@huawei.com>:
Hello,

Can anyone see this email?

From: Cong Guo
Sent: 1 June 2018 13:11
To: 'user@ignite.apache.org' <user@ignite.apache.org>
Subject: ClassCastException When Using CacheEntryProcessor in StreamVisitor

Hi,

I want to use IgniteDataStreamer to handle data updates. Is it possible to use a
CacheEntryProcessor inside a StreamVisitor? I wrote a simple program, shown below.
It works on a single node, but throws a ClassCastException on two nodes running on
two physical machines. I have set peerClassLoadingEnabled to true on both nodes.
How do I use CacheEntryProcessor in StreamVisitor?
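
For reference, peer class loading is switched on through IgniteConfiguration. A minimal sketch of the setting mentioned above (the class name PeerClassLoadingExample is illustrative, not from this thread):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerClassLoadingExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Allow nodes to ship classes (e.g. closures passed to stream
        // receivers) to peers that lack them on their local classpath.
        cfg.setPeerClassLoadingEnabled(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Peer class loading: "
                + ignite.configuration().isPeerClassLoadingEnabled());
        }
    }
}
```

Note that classes of cache values (such as Person here) still generally need to be deployed on server nodes; peer class loading covers compute closures and receivers, not persisted value types.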

The function is like:

private static void streamUpdate(Ignite ignite, IgniteCache<Long, Person> personCache) {
    CacheConfiguration<Long, Double> updateCfg = new CacheConfiguration<>("updateCache");

    try (IgniteCache<Long, Double> updateCache = ignite.getOrCreateCache(updateCfg)) {
        try (IgniteDataStreamer<Long, Double> updateStmr =
                 ignite.dataStreamer(updateCache.getName())) {

            updateStmr.receiver(StreamVisitor.from((cache, e) -> {
                Long id = e.getKey();
                Double newVal = e.getValue();

                personCache.<Long, BinaryObject>withKeepBinary().invoke(id,
                    new CacheEntryProcessor<Long, BinaryObject, Object>() {
                        @Override public Object process(MutableEntry<Long, BinaryObject> entry,
                                Object... objects) throws EntryProcessorException {
                            BinaryObjectBuilder bldr = entry.getValue().toBuilder();
                            double salary = bldr.getField("salary");
                            bldr.setField("salary", salary + newVal);
                            entry.setValue(bldr.build());
                            return null;
                        }
                    });
            }));

            Random generator = new Random();
            for (long i = 1; i <= EXP_SIZE; i++) {
                long rankey = 1 + generator.nextInt(EXP_SIZE);
                updateStmr.addData(rankey, 10.0);
            }
        } // end second try
    } // end first try
}
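
One pattern worth trying, sketched below: instead of capturing the client-side personCache proxy inside the receiver closure, resolve the cache on the node that actually executes the visitor via Ignition.localIgnite(). This is only a sketch, not a confirmed fix from this thread; the cache name "person", the field "salary", and the class/method names are assumptions carried over from the code above.

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;
import org.apache.ignite.cache.CacheEntryProcessor;
import org.apache.ignite.stream.StreamVisitor;

public class SalaryUpdateReceiver {
    /** Pure update logic, separated so it can be exercised without a cluster. */
    static double bumpSalary(double salary, double delta) {
        return salary + delta;
    }

    /**
     * Builds a receiver that looks up the target cache on the node that
     * runs the visitor, instead of serializing a client-side cache proxy
     * inside the closure.
     */
    static StreamVisitor<Long, Double> visitor() {
        return StreamVisitor.from((cache, e) -> {
            // Resolve the cache locally on the executing node and switch it
            // to binary mode there, so the entry processor sees BinaryObject.
            IgniteCache<Long, BinaryObject> persons =
                Ignition.localIgnite().<Long, BinaryObject>cache("person").withKeepBinary();

            persons.invoke(e.getKey(),
                (CacheEntryProcessor<Long, BinaryObject, Object>) (entry, args) -> {
                    BinaryObjectBuilder bldr = entry.getValue().toBuilder();
                    double salary = bldr.getField("salary");
                    bldr.setField("salary", bumpSalary(salary, e.getValue()));
                    entry.setValue(bldr.build());
                    return null;
                });
        });
    }
}
```

The streamer would then be wired up with updateStmr.receiver(SalaryUpdateReceiver.visitor()) in place of the inline lambda.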

Here the Person class is from the ignite example. There is no exception on a single node.

RE: ClassCastException When Using CacheEntryProcessor in StreamVisitor

2018-06-05 Thread Cong Guo
Hello,

Can anyone see this email?

From: Cong Guo
Sent: 2018年6月1日 13:11
To: 'user@ignite.apache.org' 
Subject: ClassCastException When Using CacheEntryProcessor in StreamVisitor


ClassCastException When Using CacheEntryProcessor in StreamVisitor

2018-06-01 Thread Cong Guo