Hi!
I don't think there is an easy answer: the configuration depends on so
many things. Try the default configuration, see how it goes, and work
your way up from there. The documentation is great and explains well what
all the options do, so it's easy to experiment.
Just configure a
Hi lmark58 -
I have run into a problem similar to the one you describe. The performance of
_cache.get()_ gets worse when write-behind is enabled, so I want to know
whether performance will improve if WriteBehindCoalescing is disabled.
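For reference, coalescing is controlled by a flag on the cache configuration. A minimal sketch, assuming a hypothetical cache named "myCache"; the CacheStore factory that write-behind requires is omitted:

```java
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindConfig {
    public static CacheConfiguration<Long, String> cacheConfig() {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");

        // Write-behind builds on write-through (a CacheStore factory must be set).
        ccfg.setWriteThrough(true);
        ccfg.setWriteBehindEnabled(true);

        // Coalescing merges queued updates to the same key before flushing
        // to the store; disable it to measure the effect on get() latency.
        ccfg.setWriteBehindCoalescing(false);

        return ccfg;
    }
}
```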
I’m a new user of apache ignite.
I want to know whether my data can be partitioned among 3 nodes, with
each node's data backed up on the other two nodes. The
data is JSON key-value pairs in 150 columns and 1 million rows.
What will the configuration file look like?
Can
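For the layout described above (data partitioned across 3 nodes, each entry also held on the other two), a partitioned cache with two backups is the usual shape. A minimal sketch, not a complete configuration file; the cache name is illustrative:

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PartitionedCacheConfig {
    public static IgniteConfiguration igniteConfig() {
        CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("jsonCache");

        // Primary copies of partitions are spread across all nodes.
        ccfg.setCacheMode(CacheMode.PARTITIONED);

        // Each partition is additionally backed up on 2 other nodes,
        // so every entry lives on all 3 nodes of a 3-node cluster.
        ccfg.setBackups(2);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(ccfg);
        return cfg;
    }
}
```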
Hello!
Can you provide logs from your nodes prior to seeing these exceptions?
Note that in a distributed system you can expect to see this problem
sometimes, i.e. a large lazy result set may fail in mid-iteration due to
changes in the cluster. I'm not sure if there's anything more
Hello!
Apache Ignite is an SQL engine on top of key-value storage, not vice versa.
This means that no SQL statements are fired when you update the underlying
key-value store. Instead, old and new values are fed to the Indexing SPI or
its SQL equivalent.
see
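To illustrate the point above, a hedged sketch (cache name and value type are illustrative) of a plain key-value put that becomes visible to SQL through a QueryEntity, with no SQL statement executed by the put itself:

```java
import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class KvVisibleToSql {
    public static void main(String[] args) {
        // Declare the key/value types to SQL; the table name defaults
        // to the value type's simple name ("String").
        QueryEntity qe = new QueryEntity(Long.class, String.class);

        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("names");
        ccfg.setQueryEntities(Arrays.asList(qe));

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(ccfg);

            // Plain key-value update: no SQL is executed here; the old and
            // new values are handed to the indexing machinery directly.
            cache.put(1L, "Alice");

            // The same entry is immediately visible to SQL.
            cache.query(new SqlFieldsQuery("select _key, _val from String"))
                 .getAll()
                 .forEach(System.out::println);
        }
    }
}
```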
Hello!
I think you should write to the Developers list if you have suggestions
on this topic.
As far as my understanding goes, the current behavior is consistent with the
PME (partition map exchange) concept of Apache Ignite, which waits for all
operations to finish.
Regards,
--
Ilya Kasnacheev
Sun, Dec 2, 2018
Hello!
This is somewhat expected, since memory grids are very fast: a disk-based
database can't match a memory grid's in-memory performance numbers.
Still, it should not degrade endlessly. What version are you on? I guess
you may see improvements in the upcoming Apache Ignite 2.7.
Hello.
The error does not happen every time: it disappears when we restart our
application, but it keeps repeating once the first error appears. The row
number is not fixed when this error happens. We are really confused by this
exception.
ilya.kasnacheev wrote
> Hello!
>
> Does this happen every
Hello Ilya,
The thing is, I just want to apply a few bug fixes to Ignite 2.6.
If my understanding is correct, the nightly release will include all the new
tickets merged to master since 2.6.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Another question:
How do the client APIs get or put data while the cluster is rebalancing
(async mode) after a new node is added: from the old nodes or the new node?
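For context, the rebalance mode is set per cache; ASYNC is the default, so reads and writes proceed while partitions move. A minimal sketch (cache name is illustrative):

```java
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class RebalanceConfig {
    public static CacheConfiguration<Long, String> cacheConfig() {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");

        // ASYNC (the default): the cache stays fully usable during
        // rebalancing; requests are routed by the current affinity
        // mapping, so they may reach either old or new partition owners.
        ccfg.setRebalanceMode(CacheRebalanceMode.ASYNC);

        return ccfg;
    }
}
```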
I'm trying to seed about 500 million rows from a Spark DataFrame into a clean
Ignite database, but I'm running into serious performance issues once Ignite
runs out of durable memory. I'm running 4 Ignite nodes on a Kubernetes cluster
backed by AWS i3.2xl instances (8 CPUs per node, 60 GB memory, 2 TB SSD
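As an aside, for initial bulk loads of this size the IgniteDataStreamer is the usual tool, since it batches entries per node instead of doing one put per row. A hedged sketch (cache name, key/value types, and row count are illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class BulkLoad {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("rows");

            // The streamer buffers entries and ships them in per-node batches.
            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("rows")) {
                // Faster for a clean initial load, where no keys exist yet.
                streamer.allowOverwrite(false);

                for (long i = 0; i < 1_000L; i++)
                    streamer.addData(i, "row-" + i);
            } // close() flushes the remaining buffered batches
        }
    }
}
```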
Hi Ilya,
Any update on your investigation of this issue?
Your comments that 'streaming mode' in the client driver, and the client
driver itself, are near-deprecated are very surprising and concerning!
1. Are you saying that Apache Ignite SQL will cease to be accessible via
standard JDBC?
2. If
Hi Amir,
I have two server nodes and 1 client node. I have two caches: one that holds
entire accounts from the DB, and another counter cache that is used for
counter operations. The server nodes are deployed on two different machines
and clustered together. A client that is also on one of the machines of
When persistence is enabled and IgniteCache.put(k, v) is called with a `NEW`
key-value pair, it's obvious that a new `INSERT` SQL statement is generated
and the values are inserted into the backend table.
But in the case where I want to update the properties of the value entity
(e.g., the weight