clear which Ignite API this code should be using.
Thanks,
Victor
Hi
In Ignite 2.16, I am seeing that the newValue() method of a cache event
returns a BinaryObjectImpl. Is it possible to make it return the actual
user-defined type contained in the cache, via some config value? I wasn't
able to find relevant info in the docs.
Thanks,
Victor
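For reference, a minimal sketch of pulling the typed value out of the event yourself, assuming Ignite's Java API ('ignite' is an already-started Ignite instance and 'Employee' is a placeholder for your actual value class):

```java
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

// Listen for put events and deserialize the binary value manually.
// EVT_CACHE_OBJECT_PUT must be enabled via IgniteConfiguration.setIncludeEventTypes(...).
IgnitePredicate<CacheEvent> lsnr = evt -> {
    Object v = evt.newValue();

    if (v instanceof BinaryObject) {
        // deserialize() turns the binary form back into the user-defined type.
        Employee e = ((BinaryObject) v).deserialize();
        System.out.println("Put: " + e);
    }

    return true; // keep listening
};

ignite.events().localListen(lsnr, EventType.EVT_CACHE_OBJECT_PUT);
```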
Update,
Interestingly, I took the value down to the wire, setting 'emptyPagesPoolSize' to
510 (< 512), and that seems to have done the trick. At least in my past 5 test
runs the preload has gone through fine.
Right now I have set it to pretty much the max value, since there is no good
way to identify w
Correction: max size is indeed in MB; 'offHeapSize' is in bytes, which maps
to max size. I was looking for a similar attribute for initial size in bytes,
but I didn't find one.
That is OK, I guess. The MB value works too.
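For what it's worth, on the configuration side both sizes can be set in bytes via DataRegionConfiguration; what units the monitoring tools display is a separate matter. A sketch (region name is hypothetical):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;

// Both initial and max size are specified in bytes on the API,
// even if metrics views show them in different units.
DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("myRegion")                 // hypothetical region name
    .setInitialSize(100L * 1024 * 1024)  // 100 MB initial, in bytes
    .setMaxSize(500L * 1024 * 1024);     // 500 MB max, in bytes
```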
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
On further reading around DataRegionConfiguration, I see
'setEmptyPagesPoolSize'. It says:
"Increase this parameter if cache can contain very big entries (total size
of pages in this pool should be enough to contain largest cache entry).
Increase this parameter if IgniteOutOfMemoryException occurr
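The setter quoted above can be applied like this (a sketch; 510 is the value that was experimented with elsewhere in this thread, and the region name is hypothetical):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;

// Reserve a larger pool of empty pages so that the largest cache entry
// always fits; per the Javadoc this helps avoid IgniteOutOfMemoryException
// when page eviction is enabled.
DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("myRegion")            // hypothetical region name
    .setEmptyPagesPoolSize(510);
```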
Thanks Evgenii, this helps get some perspective.
It's confusing that maxSize is shown in bytes while initialSize is shown
in MB.
Does enabling metrics entail any performance drop, or can it be enabled
in production as well? I'll run my tests anyway.
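On the metrics question, the flag lives on the region configuration; a sketch (region name is hypothetical):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;

// Per-region memory metrics are off by default; enabling them adds some
// bookkeeping overhead, so it is worth measuring before relying on it
// in production.
DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("myRegion")       // hypothetical region name
    .setMetricsEnabled(true);  // exposes DataRegionMetrics (JMX / ignite.dataRegionMetrics())
```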
gnite-core-2.8.1.jar:2.8.1]
    at org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:108) [ignite-core-2.8.1.jar:2.8.1]
...
Not sure why the expiration policy is not kicking in. Is this a bug?
Any inputs appreciated.
Victor
atedSize already at 8 MB, even before data was added to the cache.
5. Finally, if "TotalAllocatedSize" is indeed the attribute to track the size
increment, I should expect eviction to kick in when its value reaches 90% of
the max size. Is this understanding correct?
I'll run some more t
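If it helps, the eviction threshold is configurable per region along with the page eviction mode; a sketch (RANDOM_2_LRU is just one of the available modes, and the region name is hypothetical):

```java
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;

// Page eviction starts once used memory crosses the eviction threshold,
// expressed as a fraction of max size (0.9 by default).
DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("myRegion")                                     // hypothetical region name
    .setMaxSize(50L * 1024 * 1024)
    .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU)
    .setEvictionThreshold(0.9);
```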
From what I have read so far, it seems that if I have 2 caches, each to be
configured with 50MB, then I need to define 2 identical regions with unique
names and apply one to each of the caches.
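A sketch of that setup in the Java config API (cache and region names are hypothetical):

```java
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Two identical 50MB regions, one per cache, so each cache gets its own cap.
DataRegionConfiguration regionA = new DataRegionConfiguration()
    .setName("regionA").setMaxSize(50L * 1024 * 1024);
DataRegionConfiguration regionB = new DataRegionConfiguration()
    .setName("regionB").setMaxSize(50L * 1024 * 1024);

DataStorageConfiguration storage = new DataStorageConfiguration()
    .setDataRegionConfigurations(regionA, regionB);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(storage)
    .setCacheConfiguration(
        new CacheConfiguration<>("cacheA").setDataRegionName("regionA"),
        new CacheConfiguration<>("cacheB").setDataRegionName("regionB"));
```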
Thanks Denis, this helps.
Btw, one thing to check: if you create a 25MB region and multiple caches
are associated with that region, do all the caches share the region
capacity, or do they just share the configuration, with each cache
allocated its own 25MB capacity?
), be able to inspect
the capacity a row can hold based on the column datatypes?
2. Can the data region be set after the cache is created?
Thanks,
Victor
Ilya / Denis,
I am aware of the Data Region with max size in bytes. I was looking for a
way to do this via the number of records.
Anyway, it seems there is no way in Ignite to do this today.
I am looking at a couple of other options to achieve this; for that I am
looking at a way to calculate the stora
Hi Denis,
For our product we are using an in-house solution where we expose the size
in records. Now we are looking at replacing it with Ignite, so we wanted to
make the experience seamless and let the end users continue to set the same
configuration they are familiar with, rather than adding a n
Hi,
I am looking to set the cache size for off-heap via a maximum number of
records instead of max bytes, similar to how the LRU eviction policy for
on-heap can set it.
I haven't found anything around that. Is that doable, and if yes, how do I
go about it?
Thanks,
Vic
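For context, the record-count limit referenced above applies to the on-heap tier only; a sketch of that configuration (the entry limit is illustrative, and it does not cap the off-heap region, which remains byte-based):

```java
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.CacheConfiguration;

// LRU eviction capped by entry count applies to the on-heap cache only;
// off-heap memory is still bounded by the data region's byte-based max size.
CacheConfiguration<Integer, Object> ccfg = new CacheConfiguration<>("empCache");
ccfg.setOnheapCacheEnabled(true);
ccfg.setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000)); // max 100k on-heap entries
```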
then I will have to implement
the server nodes as well, right?
Regards,
Victor
any pros/cons with not having the config on the server end:
performance, n/w hops, etc.?
Thanks,
Victor
Denis, I get that about when to scale. And with my test, these are fairly
large boxes with lots of headroom, hardly doing 10% CPU-wise.
But why do I see a performance difference when I put data into 1 node vs 3
nodes with PRIMARY_SYNC turned on?
Shouldn't the behavior and workflow be the
I am aware of those nuances around distributed systems.
What I am trying to understand is: with sync mode as PRIMARY_SYNC, the
response does not wait for updates to backups, as long as the primary node
is updated. With this setting, why should 1 node vs 3 nodes matter?
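A sketch of the setting being discussed. One plausible factor, under the assumption of a partitioned cache: with 3 nodes the client's keys map to different primary nodes, so requests fan out across the cluster even though backups are not awaited.

```java
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

// PRIMARY_SYNC: the operation completes once the primary copy is updated;
// backup updates proceed asynchronously.
CacheConfiguration<Integer, Object> ccfg = new CacheConfiguration<Integer, Object>("empCache")
    .setBackups(1)
    .setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
```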
Any pointers to understand this behavior?
Hi Folks,
I am doing a putAll test with a simple Employee POJO, stored as binary. The
cache is configured with:
Atomicity Mode = Transactional
Write Sync Mode = Full Sync
Backup Count = 1
Deployment config is 2 large Linux boxes:
Box 1 - 3 server nodes
Box 2 - 1 client node
500k load with batc
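The cache settings listed above, expressed as a sketch in the Java config API (the cache name is hypothetical):

```java
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

// Transactional cache, fully synchronous writes, one backup copy.
CacheConfiguration<Integer, Object> ccfg = new CacheConfiguration<Integer, Object>("empCache")
    .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
    .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
    .setBackups(1);
```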
Update,
1. So there were 2 issues: there was an old batch-processing app that
periodically ran and loaded a lot of data into memory, which I think was
causing some memory contention, so I shut it down for my tests.
2. Thread dumps showed some odd wait times between 2 get calls. I had
overly complic
Not sure I follow. The data is on the server node(s). Even for
single/multiple requests, a 'get' from a client will need to make a n/w
round trip if the server and client are on different boxes vs both being on
the same box. So n/w latency becomes quite relevant.
Performed one more test: moved the client onto the same box, and changed
the off-heap and on-heap values.
The Employee record is barely about 75-100 bytes, so 500k records would
range between 40-50MB + 1 backup, so another 40-50MB, so about 100MB worth
of data.
I set the off-heap to 1GB and -Xmx to 1G
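The back-of-the-envelope math above, written out (assuming 100 bytes per record as the upper bound; note that Ignite adds per-entry overhead, so real memory usage will be somewhat higher):

```java
public class SizingEstimate {
    public static void main(String[] args) {
        long records = 500_000;
        long bytesPerRecord = 100;          // upper end of the 75-100 byte estimate

        long primaryBytes = records * bytesPerRecord;
        long totalBytes = primaryBytes * 2; // primary + 1 backup copy

        System.out.println(primaryBytes / (1024 * 1024) + " MB primary");     // ~47 MB
        System.out.println(totalBytes / (1024 * 1024) + " MB with backup");   // ~95 MB
    }
}
```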
Yes, I ran Cassandra on the same box. Similar config: 3 nodes on one box
and the client on another. We have about 75G on both boxes.
However, for now I am keeping Cassandra aside, since my primary goal in
evaluating Ignite is to see similar performance numbers for "get" as seen
in the benchmark.
Thanks Denis for confirming the benchmarks are real.
I am using the latest Ignite version, i.e. 2.6.7.
I tried with Atomic as well and don't see much variation, just marginal
changes. So currently, in my test, I am using
/examples/config/persistentstore/example-persistent-store.xml, with
persistence dis
It's 500k unique gets, spread across multiple threads. The max I tried
with was 30 threads.
I can't use getAll for this use case, since it is user-driven and the user
will load one record at a time. In any case, I expected even the single
gets to be pretty fast. Given the benchmark reference -
https
I am running some comparison tests (Ignite vs Cassandra) to check how to
improve the performance of the 'get' operation. The data is fairly
straightforward: a simple Employee object (10-odd fields), being stored as
a BinaryObject in the cache as
IgniteCache empCache;
The cache is configured with, Write
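A sketch of the get path being tested, assuming withKeepBinary() is used to avoid deserialization on the client ('ignite' is an existing Ignite instance; "name" is a hypothetical field):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

// Read the value in binary form; individual fields can be inspected
// without deserializing the whole object.
IgniteCache<Integer, BinaryObject> binCache =
    ignite.<Integer, Object>cache("empCache").withKeepBinary();

BinaryObject emp = binCache.get(42);
String name = emp.field("name"); // hypothetical field name
```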
When C3P0 attempts to create a new connection pool, it first tries to set
the isolation level for each transaction through the connection provided by
the JDBC driver. At this point, when the method getTransactionIsolation()
is called on the Connection, the Ignite driver throws the
SQLFeatureNotSup
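A minimal reproduction of the call path described above, assuming the thin JDBC driver URL (adjust host/port; this mimics what a pool like C3P0 does internally when validating a new connection):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLFeatureNotSupportedException;

try (Connection conn =
         DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/")) {
    // The call that fails per the report above.
    int level = conn.getTransactionIsolation();
    System.out.println("isolation = " + level);
}
catch (SQLFeatureNotSupportedException e) {
    System.out.println("driver does not support isolation levels: " + e.getMessage());
}
```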
Hi Ignite community,
I'm making the switch from SparkSQL to Ignite, but I noticed the Ignite
JDBC driver doesn't seem to support any transactions at all.
As a result, I've been having some difficulty setting up a connection pool.
I've tried with C3P0 and HikariCP, but again due to the driver not