Val,
Thank you for the response. As per your suggestion, I will first test it
with the write-through approach before moving to JTA.
About JTA, I still have some doubts about how to use it.
As per the example given in this link
https://apacheignite.readme.io/v2.4/docs/transactions (Integration With JT
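For the write-through approach mentioned above, a minimal configuration sketch might look like the following (the cache name and the `ProductStore` class are assumptions for illustration, not from this thread; `ProductStore` stands for some `CacheStoreAdapter` subclass that writes to the database):

```java
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

// Hypothetical sketch: with write-through enabled, every cache put is
// synchronously propagated to the underlying CacheStore implementation.
CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("products");
cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(ProductStore.class)); // assumed store class
cfg.setWriteThrough(true); // propagate writes to the store
cfg.setReadThrough(true);  // load missing entries from the store
```

With this in place the database stays in sync on every write, without involving JTA.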
Hi Roman, Denis,
Thanks for your responses.
You are right. The primary use case is to keep product data along with items
in the cache so an item can be fetched in a single lookup for display purposes.
Therefore, each item is updated at write time with product data instead of
doing it at read time. We do crea
I would like to have a strongly consistent (K,V) cluster of 3 nodes, with a
quorum of 2.
1 node can be down / partitioned, but it won't survive the failure of 2
nodes.
In my mind, if I start a single node when there are 3 nodes specified in
TcpDiscoveryVmIpFinder, it should not process requests till
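The discovery part of such a setup can be sketched like this (host names and ports are placeholders; note that the IP finder only controls discovery and does not by itself enforce a 2-of-3 quorum before serving requests):

```java
import java.util.Arrays;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

// Sketch: statically list all three cluster nodes for discovery.
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("host1:47500", "host2:47500", "host3:47500"));

TcpDiscoverySpi discovery = new TcpDiscoverySpi();
discovery.setIpFinder(ipFinder);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(discovery);
```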
Hi Chris,
STOPPED state is final. A node will not become STARTED again after that.
Why do you need to create a NodeKiller? The node's process will be closed
automatically once the lifecycle beans have handled the BEFORE_NODE_STOP and
AFTER_NODE_STOP events.
The difference between these events is that BEFORE_NODE_S
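For context, reacting to these events from a lifecycle bean can be sketched as below (the class name is made up for illustration; such a bean is registered via `IgniteConfiguration#setLifecycleBeans`):

```java
import org.apache.ignite.lifecycle.LifecycleBean;
import org.apache.ignite.lifecycle.LifecycleEventType;

// Sketch: a bean that observes the two stop events discussed above.
public class LoggingLifecycleBean implements LifecycleBean {
    @Override public void onLifecycleEvent(LifecycleEventType evt) {
        if (evt == LifecycleEventType.BEFORE_NODE_STOP)
            System.out.println("Node is about to stop");
        else if (evt == LifecycleEventType.AFTER_NODE_STOP)
            System.out.println("Node has stopped");
    }
}
```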
I actually don't think it relates to initial query, although it definitely
guarantees exactly-once for listener updates. Andrew, am I wrong?
-Val
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi Naveen,
No, that GoldenGate integration I gave a link to doesn't work with
Ignite. It requires you to use GridGain instead.
-
Denis
On Fri, Mar 23, 2018 at 12:45 AM, Naveen wrote:
> Thanks Denis.
>
> I did go thru the link, so the full details are not furnished.
> Can this be achieved wit
Hi,
I have a question about the Ignite lifecycle and have not been able to find
the info by googling.
Under what conditions does a node get into the STOPPED state?
And is that final?
In other words, once a node is STOPPED, can it somehow become STARTED again
in the natural flow of things?
I ask b
Thanks for the suggestion Andrey.
We wish to keep our code base compatible with Redis, so no
Ignite-specific code.
But we do need support for the Redis KEYS command...
Jose
I have the same problem on GKE with 2.4. I found some helpful info there,
but it seems Ignite needs more permissions.
I think this should be added to the Ignite Kubernetes deployment instructions.
https://stackoverflow.com/a/49405879/2578137
Hi,
I'm not sure that this kind of benchmarking makes sense. I've loaded much
more than 300 million rows, but my numbers won't tell you anything, because
they depend entirely on object size, node count, etc.
From my experience, I would say that the bottleneck in your case will be
the CPU or I/O on
Hi,
Ignite stores keys in binary format, and for now the only way is to use a
ScanQuery with a filter (to filter out entries) and a transformer (to return
keys instead of whole entries).
It will look something like:
cache.query(new ScanQuery(filterPredicate), transformerClosure)
Is this what you are looking
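Filled out with types, that call might look like the sketch below (the key/value types and the filter condition are assumptions for illustration; `cache` is assumed to be an `IgniteCache<Long, String>`):

```java
import java.util.List;
import javax.cache.Cache;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.lang.IgniteBiPredicate;
import org.apache.ignite.lang.IgniteClosure;

// Sketch: scan the cache, keep matching entries, return only their keys.
IgniteBiPredicate<Long, String> filterPredicate =
    (key, val) -> val.startsWith("A");        // assumed filter condition
IgniteClosure<Cache.Entry<Long, String>, Long> transformerClosure =
    Cache.Entry::getKey;                      // return keys, not whole entries

List<Long> keys = cache.query(
    new ScanQuery<>(filterPredicate), transformerClosure).getAll();
```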
Hi,
Is there any intention of implementing support for the Redis KEYS command in
the near future?
Jose
Hi,
I have loaded data into the cache with the persistence property set to true,
but I'm not able to find the path of the data persisted on disk. I have
checked the IgniteHome/work/db directory; it is not present in that location.
Please help me find the path of the persisted data on one of the node
Hi Swetha.
Please check whether the IGNITE_HOME env. variable or JVM property is
properly configured.
If you are on Linux, check the /tmp/ignite directory as the default one.
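To make the location predictable, the work directory can be set explicitly; a sketch assuming native persistence is enabled (the path is a placeholder):

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch: pin the work directory so persistence files land in a known place
// (under <workDir>/db) instead of a default such as /tmp/ignite.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setWorkDirectory("/opt/ignite/work");

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
cfg.setDataStorageConfiguration(storageCfg);
```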
On Tue, Mar 27, 2018 at 4:37 PM, Swetha wrote:
> Hi,
>
> I have loaded data into cache by setting persistence property to true.
>
> Bu
Hi,
Thanks for looking into the issue. I don't think it's an IDE issue.
I reproduced the same thing again with the steps below:
1. Run the program from the IDE as a client.
2. Copy the jar to the Ignite libs folder and start the server.
3. Then follow the steps from my previous message to reproduce the issue.
https:/
Hi,
Yes, Ignite stores the configuration on disk if native persistence is enabled.
Would you please share a reproducer?
On Tue, Mar 27, 2018 at 3:04 PM, Mikael wrote:
> Ok, I will have a look at it and see what I can figure out, it's only a
> test computer running so it is only a single node.
>
> One
Ok, I will have a look at it and see what I can figure out; it's only a
test computer, so it is running just a single node.
One question though: does Ignite save information about the running services
to disk when a node is stopped? It looks like that,
otherwise it would not know about
Mikael,
Please let us know if the issue occurs again while no work directories were
deleted and no files are shared between nodes.
We'll investigate this. Any reproducer would be appreciated.
On Tue, Mar 27, 2018 at 1:40 PM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:
> Hi Mikael,
>
>
Hi Mikael,
Please check that the Ignite work directories were not cleaned in between.
Also check that every node has a separate work directory and that no files
are shared between nodes.
Otherwise, it looks like a race.
As a workaround you can specify the classes that can be serialized (service
classes, key/value classes)
i
Hi!
I stopped my application without problems last week; today, when I
started it up (no changes to the code or anything), I got the exception
below. Does anyone have a clue what it could be?
It's 2.4, and I have native persistence on.
12:13:13 [srvc-deploy-#56] ERROR: Failed to initialize service
For example: MySQL has a table with two fields, created with: create table
testa (id bigint not null, name varchar(20), primary key (id)). The
corresponding entity class in Ignite has the fields private Long id; and
private String name;. When starting with Spring Boot, the client creates an
entity class object and uses the object's set methods to co
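A cleaned-up sketch of the entity class described above (a plain POJO mirroring the table; the class name is an assumption, and any Ignite SQL annotations are omitted to keep it self-contained):

```java
// Plain POJO mirroring: create table testa (id bigint not null,
// name varchar(20), primary key (id))
public class TestA {
    private Long id;
    private String name;

    public TestA(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```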
Hi Vinokurov,
I ran your code for 30 minutes, monitored with “atop”.
The average write speed was about 2151.55 KB per second.
The performance is better, but there is still a gap compared with your test
result. Is there anything I can improve?
Thanks.
Here are my hardware specifications.
Hi,
A continuous query guarantees exactly-once delivery [1].
Duplicate events are rejected by the continuous query internals.
[1]
https://apacheignite.readme.io/docs/continuous-queries#section-events-delivery-guarantees
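For reference, subscribing with a continuous query can be sketched as below (the cache types are assumptions; `cache` is assumed to be an `IgniteCache<Long, String>`; each update reaches the local listener exactly once, per the guarantee above):

```java
import javax.cache.Cache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

// Sketch: listen for updates; the listener fires once per update.
ContinuousQuery<Long, String> qry = new ContinuousQuery<>();
qry.setLocalListener(events ->
    events.forEach(e -> System.out.println("Updated: " + e.getKey())));
qry.setInitialQuery(new ScanQuery<>()); // optional: current snapshot first

// The cursor must stay open for as long as updates should be delivered.
try (QueryCursor<Cache.Entry<Long, String>> cur = cache.query(qry)) {
    cur.forEach(e -> System.out.println("Initial: " + e.getKey()));
}
```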
On Mon, Mar 26, 2018 at 8:02 PM, au.fp2018 wrote:
> Cool thanks for the confirmation
>
Hi,
Sorry, I understand almost nothing.
Would you please clarify what you mean by "ID" and "normal content"?
Would you please share your code, with comments at the places where you got
unexpected results?
On Tue, Mar 27, 2018 at 10:31 AM, hulitao198758 wrote:
> Hello, the project is on my side t
Hello, on my side the project is a platform built on Ignite + MySQL + Spring
Boot, with MySQL doing the persistent storage. Records saved through the
client program return normal content and IDs when queried, but when the
client inserts data into the cache via JDBC INSERT INTO, at the time of the