When you enable the on-heap cache, it saves anything that you access. If
there is no eviction policy and you constantly update/read new keys, you
will very likely run out of heap memory. Configure an eviction policy if you
want to limit the size of the on-heap cache.
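As a sketch, an LRU eviction policy for the on-heap cache can be configured like this (the cache name and maxSize are illustrative):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- Enable the optional on-heap cache on top of off-heap storage. -->
    <property name="onheapCacheEnabled" value="true"/>
    <!-- Evict on-heap entries once the cache holds more than 100,000 of them. -->
    <property name="evictionPolicy">
        <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
            <property name="maxSize" value="100000"/>
        </bean>
    </property>
</bean>
```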
-Val
--
View this message in conte
This simply means that the environment variable is not picked up by the
process. If you run in Eclipse, you probably just need to restart it.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Setting-custom-Log-location-log4j-tp16106p16121.html
Sent from the Apac
Ravi,
If you need to speed up SQL, you should make sure Ignite uses indexes to
execute queries. I think you can do the following:
- Create Hive RDD and map it to RDD of key value pairs.
- Create new IgniteRDD on top of a cache and use IgniteRDD#savePairs method
to load data from Hive to Ignite.
-
Ravi,
Have you seen the Hadoop Accelerator?
https://apacheignite-fs.readme.io/docs/hadoop-accelerator
It also provides a custom implementation of the MapReduce engine.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Apache-Tez-support-for-Ignite-tp16086p16114.html
Sen
Jessie,
You still call the atomicLong() method from Service#init(). As I already
mentioned, this is what causes the startup hang. You should move the
IgniteAtomicLong creation out of the init() method to avoid it.
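A minimal sketch of the fix (class and atomic names are illustrative, and this requires ignite-core on the classpath): create the atomic long lazily in execute() instead of init(), because init() runs synchronously with node startup and ignite.atomicLong() would block there.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicLong;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class CounterService implements Service {
    @IgniteInstanceResource
    private transient Ignite ignite;

    private transient IgniteAtomicLong counter;

    @Override public void init(ServiceContext ctx) {
        // Do NOT call ignite.atomicLong() here: init() is invoked
        // synchronously with node startup, so the call would hang.
    }

    @Override public void execute(ServiceContext ctx) {
        // Safe here: execute() runs after the node has fully started.
        counter = ignite.atomicLong("counter", 0, true);
        counter.incrementAndGet();
    }

    @Override public void cancel(ServiceContext ctx) {
        // No-op.
    }
}
```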
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Streaming-
Hi Roger,
It's a known issue and fixed in master. You can try to build from there or
check the latest nightly build:
https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Viso
The path for log files is ${IGNITE_HOME}/work/log/, as specified in the log4j
file. If the log4j logger is used and the configuration was not changed, then
most likely the change you made to the IGNITE_HOME property was not
picked up by the process. You can check it in the log - Ignite prints out
IGNITE_H
Hi Ravi,
I don't think it currently will, because this would require integration with
data frames. We have this in our plans, but it is not implemented yet. I
think you should use IgniteRDD or Ignite APIs directly.
Can you describe the business use case you're trying to implement?
-Val
--
View this message
1. These are just two different protocols for communication between the
client and the cluster. With the current implementations of both, a client
node would most likely provide better performance. We're already working on a
new thin client implementation though.
2. Can you provide the exact query you run, execut
I can't reproduce it, your project works fine for me. Can you attach thread
dumps?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Streaming-test-tp14039p16088.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi Luqman,
I meant the service deployment. Most likely, you will do this in init()
method - just call Class.forName() and create the instance of the class.
Makes sense?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Platform-updates-tp15998p16069.html
Se
Jessie,
What do you mean by "stuck"? Did you check thread dumps? Is it possible you
have memory/GC issues?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Streaming-test-tp14039p16066.html
Sent from the Apache Ignite Users mailing list archive at Nabble.co
IgniteContext does not create any executors, Spark does. And frankly I don't
know why you have so many; I was not able to reproduce it myself.
As for the large number of tasks, the only assumption I can make is that your
cache doesn't actually have any data, which forces Spark to scan ALL partitions to
fi
As long as all this data fits in memory, yes.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Warming-up-RAM-from-durable-memory-tp16017p16062.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
When you add persistence, the latency of each individual update gets bigger,
because you update the disk as well. If you test with the same number of
parallel threads, throughput will obviously go down. However, if you
increase the load, i.e. execute more operations in parallel, throughput will
go back
I think the most straightforward way would be to run SQL query that selects
required subset.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Warming-up-RAM-from-durable-memory-tp16017p16039.html
Sent from the Apache Ignite Users mailing list archive at Nabb
Looks like the executors start additional server nodes, while they should be
in client mode when standalone mode is used. I would recommend explicitly
setting the clientMode=true property in the configuration provided to
IgniteContext.
I also created a ticket for this:
https://issues.apache.org/jira/b
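As a sketch, the client-mode flag goes into the Spring configuration passed to IgniteContext:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Force executor-side nodes to join the topology as clients, not servers. -->
    <property name="clientMode" value="true"/>
</bean>
```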
Hi Luqman,
It depends on what change you do exactly.
If the change is in the data model, then it's best not to deploy these classes
on server nodes at all and take advantage of binary objects. In this case you
don't even need to restart nodes - just start using the new schema and it will be
picked up transpa
1 & 2. Starting with 2.0 all data is stored in off-heap memory, regardless of
persistence configuration. Memory mode configuration is removed.
3. Yes. Data is always stored in pages which can transparently reside both
in memory and on disk. If persistence is not enabled, you have only
in-memory pag
Aaron,
Please show the full exception trace and attach your configuration.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Did-Ignite-query-support-embed-DML-tp15120p15973.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Czeslaw,
To achieve best performance you should do data remodeling, get rid of
collections, store different types of objects in different caches and
reference them using foreign keys.
An L2 cache can also be an option, but performance-wise it will be less
effective.
-Val
--
View this message in
Ignite doesn't have auto-increment functionality (at least for now), so you
need to generate IDs manually. I would load the data, query the existing data
to get the largest ID value, initialize an atomic sequence with it, and then
use this sequence to generate new IDs.
-Val
--
View this message in context:
http:
John,
There are multiple places I believe. You can check PageMemory interface and
its implementations, for example.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Persistence-Store-and-sun-misc-UNSAFE-tp15920p15967.html
Sent from the Apache Ignite Users m
James,
First of all, please properly subscribe to the mailing list so that the
community can receive email notifications for your messages. To subscribe,
send empty email to user-subscr...@ignite.apache.org and follow simple
instructions in the reply.
It's not recommended to run sync operations i
It's actually better to discuss this on dev list. However, I agree with
Nikolay, I doubt such patch can be accepted.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Do-not-store-javax-persistence-Transient-fields-tp15604p15924.html
Sent from the Apache Igni
Sami,
Streamer is running on a client and it doesn't store any data. It just
accumulates updates in local queues, batches them and sends to server nodes.
The cache will then store the data on the server side in off-heap memory.
-Val
--
View this message in context:
http://apache-ignite-users.7051
Carsten,
The build process was changed. See DEVNOTES.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-2-1-build-script-issue-tp15787p15790.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Roger,
Yeah, that's weird, and it's hard to tell why this happens. Obviously there is
some kind of race condition. In any case, I would do everything correctly
first, and then check whether the issue persists.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Missin
Roger,
If they are not needed, they should be part of the value object rather than
the key object. When you create a key object to do a cache lookup or other
operation, you should provide exact values for all fields included in it.
-Val
--
View this message in context:
http://apache-ignite-users.7051
Well, the world has to know about this :) One of our committers, Roman
Shtykh, wrote an article about Ignite's latest features in Japanese!
Such a bummer I can't read it, but it looks awesome:
https://techblog.yahoo.co.jp/oss/ignite_newmemory/
-Val
--
View this message in context:
http:/
Sorry, copy-paste problem :)
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/ignite-TcpDiscoveryKubernetesIpFinder-properties-tp15741p15749.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Roger,
Since Ignite stores data in binary format, it basically ignores your
hashCode and equals implementations. However, I see that in these methods
you use only two fields out of four, which makes me think that you can create
two keys with the same portId and dateTime, but different hour and bucke
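A minimal, self-contained sketch of the mismatch (the class and field names are hypothetical, modeled on the fields mentioned above): the Java equals()/hashCode() treat the two keys as equal because they ignore hour and bucket, while a field-by-field comparison over all four fields would treat them as distinct.

```java
import java.util.Objects;

// Hypothetical key: only portId and dateTime participate in
// equals()/hashCode(); hour and bucket are ignored.
final class StatsKey {
    final String portId;
    final long dateTime;
    final int hour;
    final int bucket;

    StatsKey(String portId, long dateTime, int hour, int bucket) {
        this.portId = portId;
        this.dateTime = dateTime;
        this.hour = hour;
        this.bucket = bucket;
    }

    @Override public boolean equals(Object o) {
        if (!(o instanceof StatsKey))
            return false;
        StatsKey k = (StatsKey)o;
        return portId.equals(k.portId) && dateTime == k.dateTime;
    }

    @Override public int hashCode() {
        return Objects.hash(portId, dateTime);
    }
}

public class KeyMismatchDemo {
    public static void main(String[] args) {
        StatsKey a = new StatsKey("p1", 1000L, 1, 1);
        StatsKey b = new StatsKey("p1", 1000L, 2, 9);

        // Equal per the Java methods even though hour and bucket differ,
        // so a HashMap would collapse them into one key - but a
        // comparison over all four fields would not.
        System.out.println(a.equals(b)); // prints "true"
    }
}
```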
Franck,
I see only POM file attached. Can you attach the whole project you're
referring to?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Failure-to-deserialize-simple-model-object-tp15440p15745.html
Sent from the Apache Ignite Users mailing list archive
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/ignite-TcpDiscoveryKubernetesIpFinder-properties-tp15741p15744.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
For pagination you should use 'OFFSET .. LIMIT ..' syntax. Note that you
should also order the result set (use ORDER BY) in this case, because
otherwise ordering is undefined and you can get weird results.
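For example (table and column names are illustrative; Ignite's H2-based SQL accepts LIMIT/OFFSET):

```sql
-- Page 3 with a page size of 20: rows 41-60 in a stable order.
SELECT id, name
FROM Person
ORDER BY id
LIMIT 20 OFFSET 40;
```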
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Pag
This is answered on SO:
https://stackoverflow.com/questions/45337691/b-tree-and-index-page-in-apache-ignite
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/B-Tree-Index-Pages-and-Memory-Region-tp15740p15742.html
Sent from the Apache Ignite Users mailing lis
Hi Franck,
The latest version of Ignite is 2.0.0 and it's available on Maven Central.
If you're using Ignite, that's the one you should use.
GridGain uses the mentioned repo to provide maintenance builds to their
customers.
-Val
--
View this message in context:
http://apache-ignite-users.705
This exception is optional and does not affect anything.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Do-not-store-javax-persistence-Transient-fields-tp15604p15704.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Jessie,
The atomicLong() method will do this for you automatically. It either creates
a new instance or returns the existing one. There is no need for a separate
initialization step.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Streaming-test-tp14039p15646.html
Sen
Hi Roger,
Can you show the FcPortStatsKey class?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Missing-object-when-loading-from-Cassandra-with-multiple-queries-tp15644p15645.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Ignite also honors the standard 'transient' keyword.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Do-not-store-javax-persistence-Transient-fields-tp15604p15640.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Jessie,
Ignite#atomicLong method will return existing instance if it is already
initialized, so you actually don't need to manage this manually.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Streaming-test-tp14039p15525.html
Sent from the Apache Ignite U
Hi Martin,
1. Generally yes, I would recommend to use the same configuration. Even if
you do otherwise, you need to make sure that discovery configuration is
correct everywhere.
1.1. Classes that are part of configuration like cache store implementation,
must be explicitly deployed on all nodes in
Configuration contains only the bean name, not the bean itself, so it has to
be provided explicitly on every node. The reasoning behind this is that a
cache store implementation can rarely be serialized.
On the client node the cache store is needed for transactional caches, because
in this mode the store is updated
messageQueueLimit provides back pressure. If there are too many pending
outbound messages in the queue, the sender will wait until at least one of the
messages is sent.
slowClientQueueLimit is the maximum number of outbound messages to a client
node before that client is kicked out of the topology. This
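A sketch of both limits on the communication SPI (the values are illustrative; slowClientQueueLimit is typically set below messageQueueLimit):

```xml
<property name="communicationSpi">
    <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
        <!-- Back pressure: senders block once 1024 messages are queued. -->
        <property name="messageQueueLimit" value="1024"/>
        <!-- A client with more than 1000 queued messages is dropped. -->
        <property name="slowClientQueueLimit" value="1000"/>
    </bean>
</property>
```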
What does this table mean? Please provide proper description of your problem
- what you're trying to achieve, what you tried, what doesn't work as
expected, etc.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite2-0-Scaling-tp15261p15266.html
Sent from
SQL schema is currently not dynamic. I.e. you can dynamically add a field to a
binary object, but it can't be used in queries. In the future we will support
ALTER TABLE syntax that will allow this.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Cache-Store-i
I responded here:
http://apache-ignite-users.70518.x6.nabble.com/Streaming-test-td14039.html
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Use-AtomicLong-or-AtomicSequence-in-Streaming-tp14350p15264.html
Sent from the Apache Ignite Users mailing list arch
I ran the test. Atomic sequence initialization should be moved out of init(),
because init() is called synchronously with node start, and they wait for each
other.
After I did this, everything worked fine, although the CSV file has only 348
lines and it gets loaded multiple times in a loop.
Is there anything else
This looks like a part of QueryEntity configuration which is related to
Ignite SQL, not to persistence. Can you clarify what you want to achieve,
show full configuration, and explain what exactly is not working?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.c
SQL queries are currently not enlisted in transactions. You should use
key-value API if you need full transactional support. For SQL it will be
added in future versions, hopefully by the end of this year.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/same
Can you provide a reproducer?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/issue-in-REPLICATED-model-cache-between-server-and-client-tp15154p15256.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Can you provide a reproducer?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/about-write-behind-problems-tp15160p15255.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Did you try the web console?
https://apacheignite-tools.readme.io/docs/ignite-web-console
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Issue-with-Ignite-Zeppelin-tp14990p15254.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Aaron,
On-heap can be used as a cache for off-heap; its size is controlled by the
eviction policy. Note that on-heap will hold copies of entries - everything
that is on-heap is stored off-heap as well.
Off-heap is the primary storage, it holds all the in-memory data. It is
controlled by memory polici
Did you set the writeThrough property? It's needed to enable writes to the
cache store, regardless of whether write-behind mode is used or not.
cacheCfg.setWriteThrough(true);
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/about-write-behind-problems-tp15160p1521
You should provide full class name in configuration.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/RE-Custom-SQL-Functions-ClassNotFoundException-tp15215p15220.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Not sure why injection doesn't work, but you can always get Ignite instance
via Ignition.ignite().
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Data-loss-validation-TopologyValidator-tp15147p15214.html
Sent from the Apache Ignite Users mailing list archi
Amit Pundir wrote
> Does that mean the on-heap will benefit only when we have more 'get'
> operations but will degrade the performance with 'put' operations?
Yes, I think that's accurate. However, even for reads the performance benefit
will be noticeable only in certain scenarios. As I already mentioned, this i
Performance of the Java API is much better than the REST API, for multiple
reasons. One of the major ones is that an Ignite client node is aware of
affinity and can always send a request to the node that holds a particular
piece of data, while with REST there is one node which acts as a router.
-Val
--
View th
Amit,
I think I already answered that :) See above.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Large-value-of-idleConnectionTimeout-in-TcpCommunicationSpi-tp15138p15206.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi Lucky,
Write-behind affects how the underlying store is updated when you update the
cache. The load cache process is different and is not related to this. More
information here: https://apacheignite.readme.io/docs/persistent-store
-Val
--
View this message in context:
http://apache-ignite-
Aaron,
Memory policy is not required on client, but it must be configured on
server. Do you have memory policy configuration for Account_Region?
Take a look at MemoryPoliciesExample, it demonstrates how to do this.
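A sketch of a named memory policy on the server side, with a cache bound to it (the size is illustrative and this assumes the Ignite 2.1 MemoryConfiguration API):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="memoryConfiguration">
        <bean class="org.apache.ignite.configuration.MemoryConfiguration">
            <property name="memoryPolicies">
                <list>
                    <bean class="org.apache.ignite.configuration.MemoryPolicyConfiguration">
                        <property name="name" value="Account_Region"/>
                        <!-- 512 MB cap for this region. -->
                        <property name="maxSize" value="#{512L * 1024 * 1024}"/>
                    </bean>
                </list>
            </property>
        </bean>
    </property>
    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="accountCache"/>
            <!-- Bind the cache to the policy above. -->
            <property name="memoryPolicyName" value="Account_Region"/>
        </bean>
    </property>
</bean>
```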
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nab
Hi Lucky,
I don't understand the use case. Are you loading data from DB to cache, or
writing to cache? Can you provide exact steps describing what you're doing?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/write-behind-problems-tp15152p15201.html
Sent f
Jaipal, Ramesh,
Do any of you have a reproducer that can be shared? This looks like a
serious issue, but it obviously can be reproduced only under some specific
circumstances; I doubt someone will be able to do this without your help.
-Val
--
View this message in context:
http://apache-igni
Query cursor results are paginated automatically while you iterate over the
cursor. I.e. if the page size is 1024 (default), you will never have more
than 1024 entries in local memory. After you finish iterating through the
first page, the second one will be requested, and so on. This allows to avoi
If you just calculate the sum, then you don't need to join, right? Just
remove the WHERE clause.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Did-Ignite-query-support-embed-DML-tp15120p15198.html
Sent from the Apache Ignite Users mailing list archive at
Amit,
Current architecture assumes that all data is stored off-heap. On-heap is
just a cache. I.e., when you update, you update both anyway.
Also serialization overhead exists regardless of whether you use on-heap or
off-heap, because values are always stored in binary format. To reduce this
over
Ajay,
Visor and Web Console are two tools we have. If you need something that is
not available there, you can always create your own application that will
collect the required information.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cache-Consol
Yes, you're right. Apparently I was looking at the wrong version of the code.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Large-value-of-idleConnectionTimeout-in-TcpCommunicationSpi-tp15138p15195.html
Sent from the Apache Ignite Users mailing list archi
Ajay,
To get better distribution you should have more partitions. Ideally, the
number of partitions should be at least an order of magnitude bigger than the
number of nodes. I also recommend using a power of 2 for the number of
partitions (128, 256, 512, ...).
The default value is 1024 and it works fine in the majority o
Hi Zbyszek,
You're subscribed now, all good.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/java-lang-IllegalStateException-Data-streamer-has-been-closed-tp15059p15190.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
You don't have table A in the subquery, I guess that's the reason for the
failure. What exactly are you trying to achieve with this query? What is the
purpose of 'WHERE b.id = a.id' clause?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Did-Ignite-query-su
Your code works for me. What exactly do you mean by 'not working'?
And btw, you should ALWAYS call Transaction#close() in a finally block after
the transaction is finished. Your current code causes unfinished transactions
and hangs.
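A minimal sketch of the pattern (the cache and key names are illustrative, and this requires ignite-core plus a started node): Transaction implements AutoCloseable, so try-with-resources gives the same guarantee as a finally block.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

public class TxExample {
    static void transfer(Ignite ignite) {
        IgniteCache<String, Integer> cache = ignite.cache("accounts");

        // close() runs automatically at the end of the block: it rolls
        // the transaction back if commit() was never reached (e.g. when
        // an exception is thrown), so no transaction is left hanging.
        try (Transaction tx = ignite.transactions().txStart()) {
            Integer from = cache.get("a");
            Integer to = cache.get("b");

            cache.put("a", from - 100);
            cache.put("b", to + 100);

            tx.commit();
        }
    }
}
```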
-Val
--
View this message in context:
http://apache-ignite-user
It looks like you didn't add Ignite libraries to Zeppelin. IgniteJdbcDriver
is included in ignite-core.jar and this JAR has to be on classpath.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Issue-with-Ignite-Zeppelin-tp14990p15144.html
Sent from the Apach
Amit,
Default idle timeout is actually 10 minutes. Therefore the connection will be
closed only if it is not used for at least 10 minutes, not 30 seconds. Are you
sure that delay is caused by reopening a closed connection? If yes, are you
sure it's caused by idle timeout?
Answering your questions:
1. Ge
Hi Amit,
Primary data storage in 2.0 is off-heap and it always has all the in-memory
data. You can have an additional on-heap cache, but it's generally limited and
it duplicates the off-heap data. Why don't you want to use off-heap?
-Val
--
View this message in context:
http://apache-ignite-users
Bob,
The original query is wrong for sure: \\\" means that you actually have a
backslash in the SQL string, which is obviously wrong. As for the second
error, it looks like a misconfiguration. Please provide more information on
what exactly you did, your configuration, etc.
-Val
--
View this message in con
Matt,
undeployTask method is about task deployment [1], it's unrelated to the
discussion.
[1] https://apacheignite.readme.io/docs/deployment-spi
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Compute-Grid-Questions-tp14980p15091.html
Sent from the Apache
Hi Menglei,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
menglei wrote
> Hi,
>
> Currently we are heavily usi
Hi Roger,
There is a bug in the code, I created a ticket:
https://issues.apache.org/jira/browse/IGNITE-5781. Thanks for letting us
know about the issue.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Problem-with-Visor-and-Cassandra-Cache-Store-tp15076p15
Mimmo,
Can you describe the use case? What are you trying to achieve exactly?
Generally, sending a lot of data across the network is something that you
should avoid, of course. So it's hard to tell if your approach is correct or
not without knowing all details.
-Val
--
View this message in co
Hi Zbyszek,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
zbyszek wrote
> Hello All,
>
> I was wondering if an
Luqman,
Got it. This is not available out of the box. But I think you can just
create a cluster singleton service that will periodically call
IgniteCache#loadCache to poll the data. Will this work?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/database-t
One more point. You don't have to and should not pass IgniteCache as a
parameter. Instead, you can inject Ignite into transient field in the
CacheStore implementation and then use it to acquire cache:
@IgniteInstanceResource
private transient Ignite ignite;
-Val
--
View this message in context
Hi Czeslaw,
Does myEntity object that is returned from DAO have old or new version
value?
Generally, your approach can work, but this looks like a serious performance
overhead. Can you clarify, how exactly this versioning is used? Do you
actually need it after switching to Ignite?
-Val
--
Vie
Correct, you can use either annotations or query entities to configure SQL
model for Ignite. More information here:
https://apacheignite.readme.io/docs/indexes
There is no way to use Hibernate configurations directly unless Ignite is
used as L2 cache for Hibernate:
https://apacheignite-mix.readme.
You showed hot methods, not memory consumption... Do you have a project that
you can share with us? This way I will be able to run on my side and
investigate.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-2-0-vs-Ignite-1-7-or-later-tp14401p15028.ht
Hi Luqman,
What exactly do you mean by "new rows"? Is the database updated directly and
you then need to refresh cache with these updates?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/database-table-polling-tp15006p15027.html
Sent from the Apache Ignite
Saji,
Ignite node in this case is embedded into the application. So it's up to
this application to control the lifecycle. To stop an embedded node just
call Ignition.stop().
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Can-java-code-look-up-cache-in-ser
I can't reproduce this behavior. Can you create a small project that I can
run on my side and investigate?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Information-tp14330p15025.html
Sent from the Apache Ignite Users mailing list archive at Nabble
Check this:
https://cwiki.apache.org/confluence/display/IGNITE/Persistent+Store+Overview
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Data-eviction-to-disk-tp14535p15024.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi,
Did you check execution plans?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/query-performance-degrade-when-adding-a-filter-criteria-order-by-tp14624p15023.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
I think creating your own pool on the client side is the easiest way to
support your case.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Async-Service-proxy-tp14478p15022.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi Priya,
I would encourage you to learn the basics about how Java heap memory works.
Basically, if you have different Xms and Xmx, JVM will start with lower
bound, and then will allocate memory as needed until upper bound is reached.
Log shows the amount of currently allocated memory.
-Val
--
Yitzhak,
Ignite doesn't allow reconnection of server nodes after segmentation. In the
general case this causes data inconsistency, so it was a design decision to
make sure segmented nodes are fully restarted.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Net
In 2.1 Ignite will have a persistent store that will allow executing queries
without preloading the data. But this will be an internal storage
implementation; read-through for queries is impossible with an arbitrary
CacheStore implementation.
-Val
--
View this message in context:
http://apache-
Setting port ranges should help:
X.X.X.1:47500..47509
X.X.X.2:47500..47509
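The addresses above map to a static IP finder configuration like this (the IPs are the placeholders from the message):

```xml
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
            <property name="addresses">
                <list>
                    <!-- Each host is probed on ports 47500-47509. -->
                    <value>X.X.X.1:47500..47509</value>
                    <value>X.X.X.2:47500..47509</value>
                </list>
            </property>
        </bean>
    </property>
</bean>
```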
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Two-ignite-instances-on-a-vm-tp14965p15018.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
1. I'm not sure this is correct. It's actually assumed that CacheStore is
shared across nodes, i.e. it's a single entry point for persisting data.
Underlying store itself can be distributed or not, but this must be
transparent to Ignite. If each node writes to its own local storage, it all
gets muc
Hi Matt,
1. Each task or closure execution creates a session that has an ID. You can
cast returned IgniteFuture to ComputeTaskFuture (unfortunately there is no
other way now) and then use getTaskSession() method to get the session
description. However, this information is available only on the nod