I would recommend creating a ticket with a design proposal and sharing it on
the dev list. You will get much better feedback there.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Simple-Compute-API-for-SQL-returned-data-tp9962p9980.html
Sent from the Apache Ignit
If there is no value in cache yet, you can still set the new one in the
transformer (which is actually an entry processor). I'm not sure I
understand what is not working for you here, can you give an example?
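The set-if-absent logic inside such a transformer can be sketched with plain JDK types. Map.compute below stands in for IgniteCache.invoke with an entry processor; it is an analogy only, not Ignite API, and the string values are made up:

```java
import java.util.HashMap;
import java.util.Map;

public class SetIfAbsentSketch {
    public static void main(String[] args) {
        Map<Integer, String> cache = new HashMap<>();

        // Entry-processor style logic: read the current value and,
        // if there is none yet, set a new one.
        cache.compute(1, (key, current) ->
            current == null ? "initial" : current + "-updated");
        System.out.println(cache.get(1)); // initial

        // Second invocation sees the existing value and updates it.
        cache.compute(1, (key, current) ->
            current == null ? "initial" : current + "-updated");
        System.out.println(cache.get(1)); // initial-updated
    }
}
```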
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/
Hi,
Can you still provide a test case?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/grouped-index-sort-vs-filter-tp9885p9977.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
This issue is also discussed here:
http://stackoverflow.com/questions/41522571/build-issues-with-ignite
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/jar-file-issues-tp9957p9978.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
You can serialize the object and check the length of the byte array:

byte[] arr = ignite.configuration().getMarshaller().marshal(
    new Person(10L, "first", "last"));
System.out.println(arr.length);

Then refer to this page to calculate the total cache capacity:
https://apacheignite.readme.io/docs/cap
Anil,
I mean total count. This happens automatically.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/NOT-IN-in-ignite-tp9861p9950.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi Tejas,
Did you check the execution plan? Are there any scans?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Affinity-tp9744p9943.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi,
The aggregation should happen on the client and you should get the correct
result. Are nodes discovering each other? Can you prepare a test case that
reproduces the issue?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/NOT-IN-in-ignite-tp9861p9940.htm
Hi,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
hitendrapratap wrote
> I have Spring Boot Ignite Application
Hi,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
--
View this message in context:
http://apache-ignite-use
I'm not aware of such plans.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cluster-tp9527p9915.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi Gaurav,
How many nodes do you have? I think your results are only possible with a
local deployment where there is only one server node. When the network comes
into the picture, streamer batching should provide a big improvement.
If this is not the case, please provide a test case that we can run to
repr
Currently there is no way to manually control which index to use. Do you see
a difference in performance? I'm not sure ordering is taking advantage of
the index there in any case.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/grouped-index-sort-vs-filter-
Sam,
There is no exact date as of now. I would recommend monitoring the dev list
for activity around this.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9883.html
Sent from the Apache Ignite Users mailing list arc
The thread continues here:
http://apache-ignite-users.70518.x6.nabble.com/Affinity-td9744i20.html
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Affinity-tp9744p9882.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
The answer is here:
http://apache-ignite-users.70518.x6.nabble.com/NearCache-can-be-used-through-ODBC-interface-td9859.html
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Near-cache-can-be-used-with-SQL-Database-tp9805p9881.html
Sent from the Apache Ignite
I would recommend using one of the non-empty constructors for QueryIndex;
they all set SORTED as the default. Frankly, I would remove the parameterless
one, as it doesn't make much sense and is error-prone.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.co
It is supported if you provide key and value classes in CacheConfiguration,
but this is possible only in code, not in XML.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/JCache-and-CacheManager-getCache-tp9847p9853.html
Sent from the Apache Ignite Users ma
You don't need to provide types to get a cache. Just do this:
Cache cache = cacheManager.getCache("myCache");
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/JCache-and-CacheManager-getCache-tp9847p9851.html
Sent from the Apache Ignite Users mailing list a
Can you show the trace?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/JCache-and-CacheManager-getCache-tp9847p9848.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
This is just the version the ignite-aws module was developed against, and I
don't think newer versions were ever tested. However, I believe it should work
with 1.11.76. Can you try it out and let us know if it does?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Old-
Tejas,
You can send a closure to a server node and iterate through local data using
the IgniteCache.localEntries() method. This should give you a clear picture.
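A sketch of that approach under a few assumptions: Ignite is on the classpath, the cache name "myCache" is an example, and the closure class is deployed on the servers (or peer class loading is enabled):

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;

// Closure that runs on a server node and walks only that node's data.
public class LocalScanClosure implements IgniteRunnable {
    @IgniteInstanceResource
    private transient Ignite ignite; // injected on the remote node

    @Override public void run() {
        IgniteCache<Long, String> cache = ignite.cache("myCache");

        // localEntries() iterates entries stored on this node only.
        for (Cache.Entry<Long, String> e : cache.localEntries())
            System.out.println(e.getKey() + " -> " + e.getValue());
    }
}

// Usage (from the client):
// ignite.compute(ignite.cluster().forServers()).broadcast(new LocalScanClosure());
```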
BTW, is this the correct thread? Looks like you posted in a wrong one by
mistake.
-Val
--
View this message in context:
http://apache-ign
Hi,
Can you create a unit test that will reproduce the issue and share it with
us? It's hard to tell anything having only this trace.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Streaming-Exception-on-client-nothing-on-server-side-tp9807p9808.html
Sent
Hi,
Can you please clarify the question? What are you trying to achieve?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Near-cache-can-be-used-with-SQL-Database-tp9805p9806.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
This discussion seems to be a complete duplicate of this one:
http://apache-ignite-users.70518.x6.nabble.com/Affinity-td9744.html. Let's
continue there.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Re-Afinity-Key-tp9774p9804.html
Sent from the Apache Ign
Anil,
Can you create a unit test that will demonstrate the problem?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Affinity-tp9744p9803.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi,
This is possible with Scan query only. Use ScanQuery.setPartition(..) method
to specify a partition you want to scan.
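A hedged sketch of that API usage (assumes a running Ignite instance; the cache name and partition number are examples):

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class PartitionScanExample {
    static void scanPartition(Ignite ignite, int part) {
        IgniteCache<Long, String> cache = ignite.cache("myCache");

        // Restrict the scan to a single partition.
        ScanQuery<Long, String> qry = new ScanQuery<>();
        qry.setPartition(part);

        try (QueryCursor<Cache.Entry<Long, String>> cursor = cache.query(qry)) {
            for (Cache.Entry<Long, String> e : cursor)
                System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```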
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Partitions-within-single-node-in-Apache-Ignite-tp9726p9802.html
Sent from the Apache I
Dmitry,
I probably do not correctly understand what is wrong, but it feels like
something needs to be fixed anyway. If you agree, can you please move this
to dev list?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Error-while-writethrough-operation-in-Ig
Yes, this is possible. See this thread:
http://apache-ignite-users.70518.x6.nabble.com/Does-Ignite-support-query-and-index-the-embedded-object-column-td9663.html
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/How-to-query-data-from-List-tp9766p9785.html
Se
Hi Anil,
Because that's how it works :) The main reason behind this is to ensure data
consistency. When you restart the node, its memory is cleaned up and it
joins as a brand-new node. Rejoining with existing data would require
additional processing and in the general case would not guarantee consistency.
Mak
Hi Kumar,
You can't query collections this way. You should store each Person as a
separate entry to achieve this.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/How-to-query-data-from-List-tp9766p9767.html
Sent from the Apache Ignite Users mailing list ar
If collocation doesn't work, you should get incorrect results unless you
enabled distributed joins (which you should not do if you're not going to
utilize them).
Performance depends on many factors, number of entries for example.
Regardless of what you do, you will scan the portion of Product tabl
The code snippet you provided is actually only for the case when String is
the whole value, not wrapped in some other object. But this is not your
case, right? I think we're missing something here, reproducer would really
help :)
rebuildIndexes() method is on the private API and I don't see any us
Anil,
NOOP means that you have to create your own custom listener for the
EVT_NODE_SEGMENTED event, so it can't be a default policy. Doing nothing
will not work, because a node can't rejoin the topology after segmentation.
A full node restart is needed anyway.
-Val
--
View this message in context:
ht
IgniteDataStreamer and loadCache() are different approaches and separate
APIs. If you want to use IgniteDataStreamer to load the data, simply start a
client node, fetch data from database and stream it into the cluster through
streamer.
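A minimal sketch of that flow (assumes Ignite on the classpath; the cache name and generated values stand in for real rows fetched from the database):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class StreamerLoad {
    static void load(Ignite ignite) {
        // The streamer batches entries and sends them asynchronously.
        try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
            // Hypothetical stand-in for iterating a JDBC result set.
            for (long id = 0; id < 1_000_000; id++)
                streamer.addData(id, "value-" + id);
        } // close() flushes any remaining buffered data
    }
}
```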
-Val
--
View this message in context:
http://apache-ignit
Dmitry,
Can this be detected and reported with a better exception? Sounds like a
usability issue to me.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Error-while-writethrough-operation-in-Ignite-tp9696p9760.html
Sent from the Apache Ignite Users mailing
Option 1 is not usually good, because large heaps are likely to cause long GC
pauses. You should either start several smaller nodes, or use off-heap
memory [1].
As for query performance, there is a chance that you can improve *latency*
by starting several nodes (generally, as many nodes as many co
Anil,
This will work. There is only one rule - everything with the same affinity
key value will be mapped to the same partition, and therefore will reside on
the same node.
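As an illustration of that rule only (Ignite's real affinity function is more involved than this toy modulo mapping):

```java
public class AffinityRuleSketch {
    static final int PARTITIONS = 1024;

    // Toy mapping: the partition depends on the affinity key only,
    // never on the rest of the entry's key.
    static int partition(String affinityKey) {
        return Math.abs(affinityKey.hashCode() % PARTITIONS);
    }

    public static void main(String[] args) {
        // Two different entries sharing affinity key "customer-42"
        // map to the same partition, hence to the same node.
        int p1 = partition("customer-42");
        int p2 = partition("customer-42");
        System.out.println(p1 == p2); // true
    }
}
```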
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Affinity-tp9744p9757.html
Sent from
Hi Steve,
Currently this is not possible. If what you're saying is true, then it
probably makes sense to add such an option, but I want to reproduce it myself
first. Can you at least provide timings for different dataset sizes with and
without indexes? A reproducer would be even better. Also are you s
Hi Ankit,
If the shared memory library is not loaded, nodes will simply switch to TCP and
everything will work fine. Actually, shared memory communication is known to
be not very stable, so we usually recommend disabling it. Most likely it
will not be the default anymore in 2.0, or probably will be even
Shawn,
You just should keep in mind that this is an object that will be serialized,
sent across the network and invoked on the other side. The purpose of an entry
processor is to be executed on the server side and to atomically read and
update a single entry. Having said that, the logic you described doesn't make m
Hi,
I tried to reproduce this behavior, but without success. Data loading time
increased linearly for me when I increased number of entries. Is it possible
for you to provide a reproducer that I would be able to run and investigate?
-Val
--
View this message in context:
http://apache-ignite-u
Shawn,
#1 - Yes, peer class loading is needed if you don't have entry processor
implementation class deployed on server nodes.
#2 - Correct. Moreover, I would recommend not using lambdas or anonymous
classes for anything that can be serialized, but using static classes
instead (this is actually
Anil,
Nothing is stopped. Once again, in case of segmentation you will have two
independently running clusters. And in one of the previous messages you said
that this is the desirable behavior for you, as you have a read-only use case.
Am I missing something? Did you run any test that gave a different result
Steve,
Can you show your indexing configuration?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/LoadCache-Performance-decreases-with-the-size-of-the-cache-tp9645p9688.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Ignite does not track the size of each entry. However, you can use
CacheMetrics.getOffHeapAllocatedSize() and
CacheMetrics.getOffHeapEntriesCount() to get an approximation.
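For example, dividing the two metrics gives a rough average entry footprint (the numbers below are made up, not real CacheMetrics output):

```java
public class AvgEntrySize {
    public static void main(String[] args) {
        // Hypothetical values as returned by CacheMetrics.
        long offHeapAllocated = 512L * 1024 * 1024; // getOffHeapAllocatedSize()
        long entryCount = 2_000_000L;               // getOffHeapEntriesCount()

        // Integer division gives an approximate per-entry size in bytes.
        long avgBytesPerEntry = offHeapAllocated / entryCount;
        System.out.println(avgBytesPerEntry); // 268
    }
}
```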
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Off-heap-memory-contents-in-linux-tp
Anil,
All this is very use case specific. STOP is used by default because it's the
safest option and gives more control. However, it does imply some manual
operations to resolve the segmentation.
In any case, the above is about a feature that is available only in GridGain,
not in Ignite. In Ignite
Hi Shawn,
The cache is not updated because you never update it :) i.e., entry.setValue() is
never called.
Bad performance can be caused by the fact that you do multiple operations
one after another in a single thread without any batching. Consider using
invokeAll or IgniteDataStreamer.
As for OOME, i
Hi Roman,
1. I don't think this is normal behavior, but I don't think there is any
hardcoded value like this either. Any idea what these 10 seconds are spent
for? Did you do any debugging?
2. I don't think there are plans for this. Feel free to create a JIRA ticket (you
can even contribute the implemen
Hi,
Generally, having large heaps is not a very good practice, because
eventually it will likely cause long GC pauses and performance degradations.
I would recommend scaling out by adding more nodes, or using off-heap memory
to store your data.
[1] https://apacheignite.readme.io/docs/off-heap-memo
Hi Piali,
To my knowledge there are ODBC connectors for Python available, so if ODBC
API is enough for you in C++, you can do the same for Python. Does this make
sense? Can you please clarify what is missing for you?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.na
Embedded objects are supported, but the flat schema is created, i.e. 'street'
and 'zip' fields will appear directly in the 'Organization' table. The query
will look like this:
select street from Organization where id= ?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6
I meant the compute as well. With compute you can send your code to any node
and have full control on this.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Wait-till-the-cache-loading-is-complete-before-querying-through-ignite-client-tp9643p9662.html
Sent f
With LOCAL cache you always have to run the query locally. I.e., you have to
send a closure to the server node you want to query and call the cache API
there.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Wait-till-the-cache-loading-is-complete-before-que
Hi Chris,
See my comments below...
Chris Berry wrote
> * How does a Clustered & Partitioned IgniteCache handle a
> `cache.getAll(Set<> keys)` behind the scenes ??
> Will it transparently fan those requests out to the appropriate Partition
> for me (from the Client – or is it a “double hop” — on
Hi,
You're right, this heavily depends on a lot of factors and it's almost
impossible to tell if these numbers are good or bad. For large data sets you
can try partition-aware data loading [1]. It can give a performance
improvement and is also pretty scalable. I.e. you can make it even better by
add
Hi Steve,
Did you do any profiling? I would recommend using VisualVM or JFR to
see what the hotspots are.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/LoadCache-Performance-decreases-with-the-size-of-the-cache-tp9645p9652.html
Sent from the Apache
Hi,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
pmazumdar wrote
> I have a cache loader job that puts data in
pmazumdar wrote
> Can we do the same from java code or config file?
No, the amount of heap memory can be specified only prior to JVM startup.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Increase-Java-heap-space-for-Ignite-tp1768p9650.html
Sent from the
Ignite doesn't have full segmentation resolution support out of the box. With
Ignite you will always end up with two independent working clusters in case
of segmentation. You will then have to restart one of these clusters, but if
there were updates on the restarted cluster after the segmentation, th
Tom,
I am not sure there is a memory leak at all, because I have no idea what your
code is doing. I'm just guessing here, but without success so far. Looking
through the code didn't give anything useful either; it looks correct. So if
you need my help further, please provide something that will help me
Hi Shawn,
You can dynamically change the schema when using binary format (default
format for storing data in Ignite):
https://apacheignite.readme.io/docs/binary-marshaller
However, with SQL this is not currently possible. DDL support is currently
in progress and will be ready sometime first-secon
Well, I need to reproduce it to investigate further. I tried to dig into the
code, but I don't see anything that could lead to a memory leak, or even to so
many instances of CacheConfiguration in this map. Is it possible for you to
create a reproducer for this? That would be really helpful.
-Val
-
Hi,
It looks like you're not subscribed to the list. Please properly subscribe
to the mailing list so that the community can receive email notifications
for your messages. To subscribe, send empty email to
user-subscr...@ignite.apache.org and follow simple instructions in the
reply.
As for the is
I would also create indexes on 'id' fields at least. And do not forget to
check the execution plan [1] to make sure everything works as expected.
[1] http://apacheignite.gridgain.org/docs/performance-and-debugging
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabbl
Hi Randy,
There is no "must" for having indexes, but indexes in many cases can speed
up the execution. If you remove index from 'salary' field in the example, it
will still work, but will imply full scan. However, sometimes there can be
queries that do not benefit from any indexes. Like calculatin
Hi,
By a project on GitHub I actually meant something that I can run right
away and be sure that I'm doing exactly the same thing that you do. Basically,
a unit test.
I tried to run your code anyway, but it works fine for me. Attached is the
test I created; please try to run it and check what you're
Hi Sam,
I reproduced the issue using your code and created a ticket:
https://issues.apache.org/jira/browse/IGNITE-4450
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9616.html
Sent from the Apache Ignite Users maili
Are you sure that all of them are in this map? What is the size of rmtCfgs
map?
Actually this map can be non-empty only on a node which is not fully started
yet. Basically, when a new node joins the topology, it collects the
configurations from all nodes for a validation check, does the check and clean
This happens because Ignite adds predefined _key and _val fields to every
table. They represent the whole key and value objects and are deserialized on
the client when you select them. They are also selected when you do 'select
*'. This will be improved in the future, but for now you should avoid doing this
an
Can you show your configuration?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Failed-to-connect-tp9556p9613.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
vahan wrote
> Imagine we have cluster of nodes with replicate
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/How-can-I-save-the-job-result-which-was-ran-on-server-which-lost-the-connection-tp9462p9610.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi Rishi,
Your job is executed on a server node, which doesn't have the Person class on
its classpath. To deploy your code on the server side, simply create a JAR
with the required classes and put it into the IGNITE_HOME/libs folder before
starting the nodes.
Another way to overcome the issue is to utilize the BinaryObject A
Hi,
@QuerySqlField is an annotation for Ignite SQL [1]; it has nothing to do with
the Cassandra integration. To specify a column name which differs from the
Java field name, you should use 'field' tags inside 'valuePersistence', as
shown in the example [2].
[1]
https://apacheignite.readme.io/v1.8/docs/index
An embedded instance is one started with Ignition.start().
I looked at the histogram and see that there are more than 88,000 instances of
CacheConfiguration in each application, while the number of caches is only
6. So these instances are not really used by Ignite, but are saved
somewhere, most
I understand what you want to do technically, but what is the purpose of this
exercise? Basically, such a distribution limits you to only two nodes, which is
not scalable. Why not rely on the partitioning strategy provided by Ignite?
If you still need to do this, you can implement your own AffinityFu
Where are the links? Can you show an example?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Online-documentation-tp9561p9568.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi Rishi,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
rishireddy.bokka wrote
> I recently started using Ignit
Dmitry,
Basically this exception means that you tried to deserialize a binary object
that was never serialized on this cluster. When a type is serialized for the
first time, Ignite saves the meta data for this type in a system cache, so
once you deserialize this object (potentially on a different
It looks like you changed the IP finder configuration and pointed it to a port
range starting with 48500. However, a node binds to 47500 by default, so you
can't connect. Does it work with the default configuration?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.c
Hi,
1. Yes, in case write-through is used, the CacheStore will be updated from the
node that runs the transaction. If the client doesn't have access, you can send
a closure to one of the servers and run the transaction there. Another option
is to use write-behind, but in this case you won't have transactional
Hi,
Main documentation is still on apacheignite.readme.io. For example, here is
the write-behind page (link is not changed from previous version):
https://apacheignite.readme.io/docs/persistent-store#write-behind-caching
apacheignite-mix.readme.io contains only sections about integrations with
ot
The CacheStore can be called on a client node, in particular if a TRANSACTIONAL
cache is used. A transactional cache updates the store from the node which
coordinates the transaction, which is usually a client.
This is not needed for ATOMIC caches though and can be improved; we have a
ticket for this [1]. For no
clear() only removes entries from the in-memory cache. remove() is a
full-fledged update operation which does write-through, can be enlisted in a
transaction, guarantees consistency, etc.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Difference-between-cache-cle
When configuring with QueryEntity, Ignite doesn't know the exact field type,
so it creates a regular sorted index by default. You need to specify that
this is a geospatial index explicitly:
new QueryIndex("coords", QueryIndexType.GEOSPATIAL)
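For completeness, a hedged configuration sketch around that line (the "MapPoint" value type, "points" cache name and "coords" field are examples; assumes the indexing and geospatial modules are available):

```java
// Sketch only: configure a geospatial index via QueryEntity.
QueryEntity entity = new QueryEntity(Long.class.getName(), "MapPoint");

LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("coords", "com.vividsolutions.jts.geom.Geometry");
entity.setFields(fields);

// Explicitly mark the index as geospatial instead of the default SORTED.
entity.setIndexes(Collections.singletonList(
    new QueryIndex("coords", QueryIndexType.GEOSPATIAL)));

CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("points");
ccfg.setQueryEntities(Collections.singletonList(entity));
```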
-Val
--
View this message in context:
http://apache-
First of all, I'm not sure how it works with Dataframes. Since we don't have
Dataframe support yet, only RDD, using Dataframes may potentially not work as
we expect (I don't have enough Spark expertise to tell whether this is the
case). The only way to check this is to create tests.
Other than th
It seems to me you have a lot of embedded instances that are not properly
stopped and/or disconnected. Can this be the case? How many instances of
GridKernalContextImpl do you have in the heap?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-1-6-0-suspec
Encryption between nodes is possible, but it's enabled separately from REST
[1].
Encryption of stored data is not supported out of the box, but you can try
to use CacheInterceptor [1] to support this. Note that it will break SQL
query execution, so if you're going to use them, I don't think encryp
Hi,
Yes, all Ignite APIs including IgniteCache are thread-safe, and it's OK to
use them concurrently from multiple threads. Actually, the Ignite.cache()
method does not create a new instance but returns an existing one, so you
will end up reusing the same instance anyway.
-Val
--
View this mess
How many caches do you have?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-1-6-0-suspected-memory-leak-from-DynamicCacheDescriptor-tp9443p9517.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
It looks like an issue related to [1]. Let's investigate one by one.
[1]
http://apache-ignite-users.70518.x6.nabble.com/Ignite-1-6-0-suspected-memory-leak-from-DynamicCacheDescriptor-td9443.html
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-1-6-0-
Any idea why you have so many CacheConfiguration objects? Who holds
references to them?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-1-6-0-suspected-memory-leak-from-DynamicCacheDescriptor-tp9443p9515.html
Sent from the Apache Ignite Users mailing
No, unfortunately not. Probably someone closer to this topic can tell if
there are any plans to add this support.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/How-can-I-save-the-job-result-which-was-ran-on-server-which-lost-the-connection-tp9462p9514.htm
I believe the only workaround is to create your own filter. You can use
ignite.cluster().nodes() to get all nodes in the topology and ClusterNode.order()
to determine a node's order. The one with the minimal order is the oldest one.
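The selection logic can be sketched with plain JDK collections. NodeInfo below stands in for ClusterNode, and its order field mirrors ClusterNode.order(); this is an analogy, not Ignite API:

```java
import java.util.Comparator;
import java.util.List;

public class OldestNodeSketch {
    // Stand-in for ClusterNode: only the join order matters here.
    record NodeInfo(String id, long order) {}

    static NodeInfo oldest(List<NodeInfo> nodes) {
        // The node with the minimal order joined the topology first.
        return nodes.stream()
            .min(Comparator.comparingLong(NodeInfo::order))
            .orElseThrow();
    }

    public static void main(String[] args) {
        List<NodeInfo> topology = List.of(
            new NodeInfo("a", 3), new NodeInfo("b", 1), new NodeInfo("c", 2));
        System.out.println(oldest(topology).id()); // b
    }
}
```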
-Val
--
View this message in context:
http://apache-ignite-users.70518.x
Hi Murali,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
muralisajja wrote
> I want to know how read the csv fi
CacheConfiguration objects consume 0% though, so there should be something
else. Do you have any understanding of what is actually consuming the space? Can
you upload the whole .hprof file somewhere?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-1-6-0-su
Hi Ganesh,
1. An index is updated each time an entry is updated. It doesn't actually
matter what initiates the update - a put() operation, read-through,
rebalancing or anything else. Thus, once rebalancing is completed, all
indexes are up to date.
2. Rebalancing mode only controls the use of public AP
Hi,
To be honest, I don't quite understand these diagrams :) Can you give some
comments?
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Apache-Spark-Ignite-Integration-tp8556p9494.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Anil,
You're setting a 100-row limit on the result set; that's why you see only 100
rows:
statement.setMaxRows(100);
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Fetching-large-number-of-records-tp9267p9491.html
Sent from the Apache Ignite Users mailing li