otherwise
>> we would find the solution rather quickly. Needless to say, I can't
>> reproduce it. The error message you see was created for the case when you
>> join your node to the wrong cluster.
>>
>> Do you have any custom code during the node start? And one more qu
cluster didn't
> have those values at all.
> The rule is that an activated cluster can't accept changed properties from a
> joining node. So, the workaround would be to deactivate the cluster, join
> the node, and activate it again. But as I said, I don't think that you'll
> see this
ce.DistributedMetaStorageImpl#ver",
> it's incremented on every distributed metastorage setting update. You can
> find your error message in the same class.
>
> Please follow up with more questions and logs if possible; I hope we'll
> figure it out.
>
> Thank you!
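The deactivate / join / activate workaround described above can be driven from the command line with control.sh; a sketch only, with host and flags illustrative. Note that deactivating a cluster pauses cache operations, so plan a maintenance window:

```shell
# Illustrative only: adjust host/port for your setup.
# Deactivation pauses cache operations until the cluster is activated again.
bin/control.sh --host 127.0.0.1 --deactivate --yes

# ... start the node that carries the changed distributed properties ...

bin/control.sh --host 127.0.0.1 --activate
```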
>
Fri
Hi,
I have a 3-node cluster with persistence enabled. All the three nodes are
in the baseline topology. The ignite version is 2.8.1.
When I restart the first node, it encounters an error and fails to join the
cluster. The error message is "Caused by:
org.apache.ignite.spi.IgniteSpiException:
cga.gridgain.com/
>
> Obviously, you need to create an account in Apache Ignite CI.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, Aug 27, 2020 at 16:33, Cong Guo :
>
>> Hi,
>>
>> I try to build the ignite-core on my workstation. I use the original
>> i
y using the -Dmaven.test.skip=true flag.
>
> Evgenii
>
> Thu, Aug 27, 2020 at 06:33, Cong Guo :
>
>> Hi,
>>
>> I try to build the ignite-core on my workstation. I use the original
>> ignite-2.8.1 source package. The test, specifically
>> GridCach
Hi,
I try to build the ignite-core on my workstation. I use the original
ignite-2.8.1 source package. The test, specifically
GridCacheWriteBehindStoreLoadTest, has been running for several days. Is it
normal? I run "mvn clean package" directly. Should I configure anything in
advance? Thank you.
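As suggested above, the long-running test suite can simply be skipped when building; a sketch of the two common Maven flags (either one is enough):

```shell
# -DskipTests compiles the tests but does not run them;
# -Dmaven.test.skip=true skips compiling them as well.
mvn clean package -DskipTests
# or
mvn clean package -Dmaven.test.skip=true
```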
er for your use case?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, Dec 11, 2019 at 22:54, Cong Guo :
>
>> Hi,
>>
>> Are the entries stored in local NearCache on my client node in the format
>> of deserialized java objects or BinaryObject? Will the
Hi,
Are the entries stored in local NearCache on my client node in the format
of deserialized java objects or BinaryObject? Will the entry in local
on-heap NearCache be deserialized from BinaryObject when I call the get
function?
Thanks,
Nap
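Whatever form the near cache stores internally, the deserialization behavior of get can be controlled explicitly with withKeepBinary. A non-runnable sketch: the Ignite instance, the "person" cache, and the Person class are all assumptions for illustration:

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

// Fragment only: "ignite", the "person" cache, and Person are assumed to exist.
IgniteCache<Integer, Person> typed = ignite.cache("person");
Person p = typed.get(1);  // get() hands back a deserialized Person

IgniteCache<Integer, BinaryObject> binary = ignite.cache("person").withKeepBinary();
BinaryObject bo = binary.get(1);  // stays in binary form; no deserialization on get
```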
Thank you for the reply. You are right. The function used to initially get the
set returns an unmodifiableSet. That is the reason for the error.
However, when we use a Map as a field and initially write an unmodifiableMap,
we can still modify the Map using getField.
The code is like how we use
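The UnsupportedOperationException described above can be reproduced without Ignite at all: a Set wrapped with Collections.unmodifiableSet rejects every mutation. A minimal, self-contained demonstration (class and value names are invented):

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal reproduction of the error above: an unmodifiable wrapper
// throws UnsupportedOperationException on any write attempt.
public class UnmodifiableFieldDemo {
    static boolean mutationRejected(Set<String> tags) {
        try {
            tags.add("new-tag");
            return false;                       // mutation succeeded
        } catch (UnsupportedOperationException e) {
            return true;                        // wrapper rejected the write
        }
    }

    public static void main(String[] args) {
        Set<String> tags = Collections.unmodifiableSet(new HashSet<>(List.of("a", "b")));
        System.out.println(mutationRejected(tags));              // true
        System.out.println(mutationRejected(new HashSet<>()));   // false: plain sets are mutable
    }
}
```

The Map case likely behaves differently because the value is serialized and later materialized as a plain, mutable HashMap rather than the original unmodifiable wrapper; that is an assumption worth verifying in your setup.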
Hi,
I use a Set class as a field. When I update the field via the
BinaryObjectBuilder, I get an UnsupportedOperationException.
I use QueryEntity to set the field like:
fieldNameTypeMap.put(tags_FieldStr, Set.class.getName());
Then my update function (in my EntryProcessor) is like:
Hi,
How can I measure the latency for updates to be replicated from the primary
node to a backup node?
I use the PRIMARY_SYNC mode. I want to know the time for a backup node to catch
up. Is there any API for the latency measurement? Do you have any suggestion?
Thanks,
Cong
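I am not aware of a dedicated API for this, but the catch-up time can be probed generically: write through the primary path, then poll the backup's view until the new value appears. The sketch below shows only the measurement technique, with a plain map standing in for the backup read path; all names are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Generic "time until a replica sees the update" probe. In Ignite, the read
// would go through a backup node's view (e.g. reads with readFromBackup);
// a plain map stands in here, so this is a sketch of the technique only.
public class ReplicaLagProbe {
    static long timeUntilVisible(Map<String, String> backupView,
                                 String key, String expected, long timeoutNanos) {
        long start = System.nanoTime();
        long deadline = start + timeoutNanos;
        while (System.nanoTime() < deadline) {
            if (expected.equals(backupView.get(key))) {
                return System.nanoTime() - start;   // observed replication lag
            }
            Thread.onSpinWait();                    // busy-wait politely
        }
        return -1;  // not visible within the timeout
    }

    public static void main(String[] args) {
        Map<String, String> backup = new ConcurrentHashMap<>();
        backup.put("k", "v1");  // pretend the update already replicated
        long lag = timeUntilVisible(backup, "k", "v1", 1_000_000_000L);
        System.out.println(lag >= 0);  // true
    }
}
```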
Hi,
Please ignore this email. I lost common sense. Sorry.
From: Cong Guo
Sent: July 6, 2018 15:25
To: user@ignite.apache.org
Subject: How to set JVM opts in the configuration xml
Hi,
I start the Ignite node in my own code instead of using ignite.sh. How do I set
JVM opts in the configuration xml
Hi,
I start the Ignite node in my own code instead of using ignite.sh. How do I set
JVM opts in the configuration xml?
Thanks,
Cong
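JVM options cannot be set inside the Spring configuration XML itself; they belong to the command line of the JVM that runs your code, while the XML is what you pass to Ignition.start(). A sketch, with paths and class names as placeholders:

```shell
# Placeholders throughout: heap/GC settings go on the java command line,
# and the Spring XML is handed to your application (e.g. Ignition.start(args[0])).
java -Xms2g -Xmx2g -XX:+UseG1GC \
     -cp myapp.jar:libs/* com.example.MyIgniteApp ignite-config.xml
```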
Hi,
Does PRIMARY_SYNC mean waiting for only the primary copies even if they are on
remote nodes, while FULL_ASYNC means waiting for only the local copies, whether
or not they are primary?
Could you please give me an example case to show different performance results
with
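Per the Ignite documentation, PRIMARY_SYNC waits for the primary copy to be updated (even if it is remote), while FULL_ASYNC does not wait for any copy at all, local or not. The mode is set per cache; a sketch of the Spring XML, with the cache name illustrative:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="backups" value="1"/>
    <!-- PRIMARY_SYNC: ack after the primary applies the write;
         FULL_ASYNC: do not wait for any copy -->
    <property name="writeSynchronizationMode" value="PRIMARY_SYNC"/>
</bean>
```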
Hi,
I don't think this feature requires any change in the SQL API.
When we create a cache, even if the value object contains a nested object, the
fields in the nested object can be mapped to columns in the table. Now we can
do this using QueryEntity, for example,
QueryEntity personEntity =
So can I add a field to a nested object dynamically (without restarting the
cluster) by using annotations?
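A sketch of how a nested field might be declared through QueryEntity using a dotted path plus an alias. The class and field names here are invented for illustration, and dotted-path support should be verified against your Ignite version:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;

// Fragment only: a "Person" value type with a nested "address" object is assumed.
QueryEntity personEntity = new QueryEntity(Integer.class.getName(), "Person");
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("name", String.class.getName());
fields.put("address.city", String.class.getName());  // nested field via dotted path
personEntity.setFields(fields);
// SQL column name for the nested field
personEntity.setAliases(Collections.singletonMap("address.city", "city"));
```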
-Original Message-
From: slava.koptilin [mailto:slava.kopti...@gmail.com]
Sent: June 21, 2018 10:44
To: user@ignite.apache.org
Subject: RE: A bug in SQL "CREATE TABLE" and its
ARY
KEY(firstName))" +
"WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL,
WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, CACHE_NAME=" + PERSON_CACHE_NAME +
"\"";
Thanks!
Tue, Jun 19, 2018 at 16:43, Cong Guo
mailto:cong.g...@huawei.com>>:
Hi,
How should I
UPS=1, ATOMICITY=TRANSACTIONAL,
WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, CACHE_NAME=" + PERSON_CACHE_NAME +
", VALUE_TYPE=" + Person.class.getName() + "\"";
Best regards,
Slava.
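Pieced together, the WITH clause pattern from the snippets above looks roughly like this; the table, cache, and class names are illustrative:

```sql
CREATE TABLE Person (
  id        INT,
  firstName VARCHAR,
  PRIMARY KEY (firstName)
) WITH "BACKUPS=1, ATOMICITY=TRANSACTIONAL, WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, CACHE_NAME=PersonCache, VALUE_TYPE=com.example.Person";
```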
Mon, Jun 18, 2018 at 21:50, Cong Guo
mailto:cong.g...@huawei.com>>:
Hi
hrough QueryEntity.
Please take a look at this page:
https://apacheignite.readme.io/v2.5/docs/indexes#queryentity-based-configuration
I would suggest specifying a new field via QueryEntity in XML configuration
file and restart your cluster. I hope it helps.
Thanks!
Mon, Jun 18, 2018 at 16:47, Co
Hi,
I need to use both SQL and non-SQL (key-value) APIs on a single cache. I follow
the documentation at:
https://apacheignite-sql.readme.io/docs/create-table
I use "CREATE TABLE" to create the table and its underlying cache. I can use
both SQL "INSERT" and put to add data to the cache. However,
Hi,
Does anyone have experience using both the cache and SQL interfaces at the same
time? How do you handle the upgrade problem? Is my problem a bug in
BinaryObject? Should I debug the Ignite source code?
From: Cong Guo
Sent: June 15, 2018 10:12
To: 'user@ignite.apache.org'
Subject: RE: SQL cannot
I run the SQL query only after the cache size has changed. The new data should
already be in the cache when I run the query.
From: Cong Guo
Sent: June 15, 2018 10:01
To: user@ignite.apache.org
Subject: RE: SQL cannot find data of new class definition
Hi,
Thank you for the reply. In my original
e case, where the layout of data is expected to
change, is to just use SQL (DDL) defined tables and forget about BinaryObjects.
With regards to your original case, i.e., a different class definition: I could
spend time debugging it if you had more info, but this approach is not
recommended anyway
Hi all,
I am trying to use BinaryObject to support data of different class definitions
in one cache. This is for my system upgrade. I first start a cache with data,
and then launch a new node to join the cluster and put new data into the
existing cache. The new data has a different class
object and then put it back?
Thanks,
Cong
From: Cong Guo
Sent: June 6, 2018 9:50
To: user@ignite.apache.org
Subject: RE: ClassCastException When Using CacheEntryProcessor in StreamVisitor
Hi,
I put the same jar on the two nodes. The code is the same. Why does the lambda
not work here? Thank you.
From
Hi,
Is it possible you changed the lambda code between calls? Or maybe the classes
differ between the nodes?
Try replacing the lambdas with static classes in your code. Does that work for you?
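The advice to prefer static classes over lambdas can be illustrated without Ignite: a serialized lambda is tied to a synthetic method name (e.g. lambda$main$0) that can shift whenever the enclosing source changes, while a named static class keeps a stable identity on every node. A self-contained sketch; the interface name is invented:

```java
import java.io.Serializable;

// Why named static classes travel better between JVMs than lambdas:
// the lambda compiles to a synthetic, compiler-assigned class/method,
// while UpperCaseUpdater has a stable name on every node.
public class ClosureStyles {
    interface EntryUpdater extends Serializable {
        String apply(String oldValue);
    }

    // Stable, named alternative to a lambda closure.
    static class UpperCaseUpdater implements EntryUpdater {
        @Override public String apply(String oldValue) {
            return oldValue.toUpperCase();
        }
    }

    public static void main(String[] args) {
        EntryUpdater lambda = v -> v.toUpperCase();     // synthetic class, fragile across builds
        EntryUpdater named  = new UpperCaseUpdater();   // same class name on every node
        System.out.println(lambda.apply("ok"));  // OK
        System.out.println(named.apply("ok"));   // OK
    }
}
```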
On Tue, Jun 5, 2018 at 10:28 PM, Cong Guo
mailto:cong.g...@huawei.com>> wrote:
Hi,
The stacktrace is as follows
When Using CacheEntryProcessor in StreamVisitor
Hello,
Can you please share the full stacktrace so we can see where the original
ClassCastException is initiated? If it is not printed on a client, it should be
printed on one of the server nodes.
Thanks!
Tue, Jun 5, 2018 at 18:35, Cong Guo
Hello,
Can anyone see this email?
From: Cong Guo
Sent: June 1, 2018 13:11
To: 'user@ignite.apache.org'
Subject: ClassCastException When Using CacheEntryProcessor in StreamVisitor
Hi,
I want to use IgniteDataStreamer to handle data updates. Is it possible to use
CacheEntryProcessor
Hi,
I want to use IgniteDataStreamer to handle data updates. Is it possible to use
CacheEntryProcessor in StreamVisitor? I write a simple program as follows. It
works on a single node, but gets a ClassCastException on two nodes. The two
nodes are on two physical machines. I have set