It sounds like you simply need a cache replicated across all client
applications, with no need for standalone server nodes. If that's the case,
start an embedded node using Ignition.start() in each application and
configure a cache with cache mode set to REPLICATED.
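Something like this (untested sketch; class and cache names are arbitrary):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class EmbeddedNodeExample {
    public static void main(String[] args) {
        CacheConfiguration<Integer, String> cacheCfg =
            new CacheConfiguration<>("sharedCache");

        // Every node keeps a full copy of the data.
        cacheCfg.setCacheMode(CacheMode.REPLICATED);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(cacheCfg);

        // Starts an embedded server node inside the application JVM.
        Ignite ignite = Ignition.start(cfg);

        ignite.cache("sharedCache").put(1, "value");
    }
}
```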
-Val
Rishi,
The secondary file system acts as persistent storage for the in-memory FS.
It sounds like you're looking for a simple shared network file server.
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-as-a-shared-file-sysytem-tp10361p10367.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi Shawn,
Built-in expiration is cache expiration, which by definition doesn't touch
the database. This means that you need to manually use the remove()
operation to delete from both the cache and the store. I think approach #1
is the best way to go.
-Val
ignite_user2016 wrote
> I have clustered web app,I would like to share files between web app
> hosted on different machine. I am thinking I can make use of
> IgniteFileSystem across both host ?
>
> How does it handle read and write across set of Ignite clusters ? and how
> the files physically st
Hi Rishi,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
I've never seen such an approach :) I doubt it will be efficient, but you
can give it a try.
-Val
Responded on StackOverflow:
http://stackoverflow.com/questions/41960996/apache-ignite-pool-size-in-executor
-Val
Hi Tab,
The client exchanges heartbeats with only one of the server nodes. If there
is a problem with this connection, it affects only the client node, not the
server topology.
As for non-discovery communication (i.e. any operation requests and
responses), however, the client can connect to any node in the topology.
Rafael,
Basically allowOverwrite=false is intended for initial data loading. For
this task write-through doesn't make sense.
-Val
Take a look at the AffinityFunction interface and its implementations
provided by Ignite. This should give you a good understanding of how it
works.
-Val
Anil,
The implementation of eviction policy is not provided out of the box, so you
will have to do it yourself. But it's not a very difficult task in my view.
-Val
Why do you need to automatically execute a query? Who will get the result?
Please clarify your use case.
StreamVisitor is just a closure that is invoked for each entry arrived to a
server node. It's up to you what logic to put there.
-Val
Hi Rafael,
Please try to set allowOverwrite flag before streaming:
streamer.allowOverwrite(true);
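For context, a sketch of the full streaming loop (Person, people and the
cache name are just placeholders):

```java
try (IgniteDataStreamer<Long, Person> streamer =
         ignite.dataStreamer("personCache")) {
    // Without this flag the streamer skips keys that already exist,
    // and write-through to the persistent store is not triggered.
    streamer.allowOverwrite(true);

    for (Person p : people)
        streamer.addData(p.getId(), p);
} // close() flushes any remaining buffered entries
```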
-Val
Hi Tab,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
Tab Spangler wrote
> (1) Are the Clients truly clients of
1. You will need to make sure that the partitioning strategy is the same in
Ignite and Kafka. I believe it is possible, but I haven't seen any examples
of this.
2. You can always implement your own AffinityFunction. This is not a
trivial task though; you need to make sure everything works as you expect.
1. There is no standard practice actually; it depends on how your
application is organized. Use whatever fits your particular case best.
2. https://apacheignite.readme.io/docs/load-balancing
-Val
Hi,
Atomic configuration must be consistent on all nodes. General cache
configuration also must be consistent on all nodes if it's provided on
startup. However, it's OK if the configuration for a particular cache
exists only on a client - it will be distributed during cache start.
-Val
Hi,
That sounds like a very weird use case. Why don't you use partitioned cache?
-Val
Sam,
Collections can't be indexed in Ignite. You can easily implement a CONTAINS
function as you described, but as I mentioned, using it will trigger a full
scan.
Another option you have is to implement IndexingSpi and use SpiQuery to
execute queries. But this will imply implementing indexing and querying
Hi Sam,
To do the search you can create a custom function that will do the job [1].
But note that this search will not be indexed. To create an index you need
to store each element of collection as a separate cache entry.
https://apacheignite.readme.io/docs/miscellaneous-features#custom-sql-funct
pragmaticbigdata wrote
> I will spend sometime in understanding what this means but by "Hadoop
> compliant implementation" are you hinting that HDFS needs to be running
> even if I have S3 as the secondary file system?
It's any FS that has a connector that implements
org.apache.hadoop.fs.FileSystem.
Got it. I believe there is no mechanism to inject a custom analyzer. I'm
not a big Lucene expert; do you have an example of how this can be done?
-Val
Anil,
What exactly does not exist? There is a swap space implementation out of the
box, and you only need to implement eviction policy.
-Val
Surprisingly, the name of the thread is 'ttl-cleanup-worker' :) It's started
only if there is at least one cache using it.
-Val
I meant that Cassandra itself will be involved only when you load the data
into caches, which is a separate step that should happen prior to query
execution. When an Ignite query is executed, Cassandra is not touched.
The answer to your question is yes - any joins are possible, similar to any
relational database.
Binary marshaller is the default internal format; there is no need to set
it explicitly in configuration. If you remove the 'marshaller' property
from the configuration, it will be used. If it still doesn't work, please
attach the configuration and the exception trace.
-Val
Hi,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
Jenny B. wrote
> I am exploring Apache Ignite on top of Cassa
What do you mean by usage? Which metrics are you looking for exactly?
-Val
Hi Anil,
Take a look at this discussion:
http://stackoverflow.com/questions/40403598/can-you-evict-ignite-cache-backups-to-disk
-Val
Hi,
Your test is incorrect. These lines are not synchronized and can be
reordered with each other and with the validation that happens in another
thread:
simpleCacheValidationWithLock.setIndex(i - 1);
simpleCacheValidationWithLock.setValueToValidate(valueCounter);
After I moved them into the synchronized block
Anil,
Take a look at lifecycle beans:
https://apacheignite.readme.io/docs/ignite-life-cycle
-Val
Hi,
Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.
avpdiver wrote
> Is there possibility to use my custom lucene
Hi Sam,
According to the code, there is actually a thread per node, not per cache.
Do you observe different behavior?
Also, I didn't find anything about "thread per cache" in the JavaDoc. Can
you please show where you read this?
-Val
You can implement the CacheStore interface to provide integration with any
kind of storage: https://apacheignite.readme.io/docs/persistent-store
There is no Kafka based implementation out of the box.
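A rough sketch of such an implementation, using the CacheStoreAdapter
convenience class; the class name and the readFromStorage/writeToStorage/
deleteFromStorage calls are placeholders for your actual storage access:

```java
public class KafkaBackedStore extends CacheStoreAdapter<Long, String> {
    @Override public String load(Long key) {
        // Called on read-through when the key is not in the cache.
        return readFromStorage(key);
    }

    @Override public void write(
            Cache.Entry<? extends Long, ? extends String> entry) {
        // Called on write-through after a cache update.
        writeToStorage(entry.getKey(), entry.getValue());
    }

    @Override public void delete(Object key) {
        deleteFromStorage((Long) key);
    }
}
```

The store is then plugged in via CacheConfiguration.setCacheStoreFactory(),
with read-through/write-through enabled on the cache.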
-Val
I don't think I will be able to explain it better than it's done in the
ticket :) The current approach is naive and not always efficient, for
multiple reasons.
-Val
Lukas,
Check that the nodes can connect to each other (i.e. there are no network
issues, no firewalls blocking, the required ports are open, etc.). Another
possible reason is GC - make sure that you have enough heap memory.
-Val
Random means anywhere in the empty memory space. When you create an entry,
a new block is created for this value only. If you remove an entry, the
memory is released. Once released, it can be used by any application,
including Ignite. Basically, the actual location where memory is allocated
is defined by the OS.
Hi,
In current implementation connections can be established either way, so
servers must be able to connect to clients. Therefore client nodes should
also have address resolvers exposing their public addresses.
-Val
Hi Peter,
Current implementation randomly allocates memory for each value when it's
added, and frees the memory when the entry is removed. Therefore,
fragmentation is possible if you constantly remove entries and create new
ones. In case you usually *update* instead, especially with low throughput
Are you expecting these queries to be long-running? If not, this ticket
will not help you in any way and you should work on resolving the root
cause of bad query performance. Check that the execution plan is correct
and that you have enough resources (memory and CPU) to execute all
queries. You
Shawn,
It looks like you had too many asynchronous operations executing at the
same time, and therefore too many futures created in memory. The async
approach requires more care; if everything works for you with sync
operations, I would use them.
-Val
I would suggest creating the IgniteConfiguration programmatically (use the
IgniteContext constructor that accepts a closure instead of an XML file
path). However, it looks like there is room for improvement; I created a
ticket:
https://issues.apache.org/jira/browse/IGNITE-4593
-Val
Hi Chris,
Did you check the heap dump? What is consuming memory?
-Val
Partitions are needed because the number of keys is infinite, while the
number of partitions is fixed. If the topology changes, you need to remap
only 1K partitions (the default value) instead of remapping all the keys
in the cache. Distribution can indeed be uneven in case collocation is
used, but in most cases this
The path to the configuration file you provided will be used on all nodes,
both driver and workers. So the actual file should be replicated, and it
looks like you have different versions of it on different Spark nodes. Can
this be the case?
-Val
Hi Jim,
If there are two nodes, but only one entry in the resolution map, how can
both nodes provide a proper mapping? Basically, it looks like discovery is
working, but communication is not. Note that both nodes must be able to
establish a connection to each other.
-Val
Sam,
That's actually interesting :) I believe this command behaves this way
because its purpose is to fetch the Ignite log, not an arbitrary file. On
the other hand, the logic in GridLogCommandHandler looks weird and
limited. Do you have any ideas about how it SHOULD work?
My current
Sam,
What if you do not provide the 'path' parameter? Does it work?
-Val
Sam,
I don't think there is a reason, feel free to create a ticket for this
feature.
-Val
Are you sure the configuration change is properly applied on Spark worker
nodes? It must start Ignite nodes in client mode when you set this property.
The one client you have is most likely the one running on the driver.
-Val
Correct.
-Val
Yakov,
Makes sense.
-Val
Hi Sam,
That's an interesting point. Ignite provides a plugin framework (see the
org.apache.ignite.plugin package), which allows you to add new components
to Ignite. However, this way you would completely replace the REST
processor with something new, while you only want to make the endpoint
pluggable. Would you
Sam,
NONE mode is something that should be used very carefully. It means what
it says - no rebalancing happens when the topology changes, unless you
trigger the rebalancing manually. For example, if you have a cache with
one backup and you lose one of the nodes, you end up having only one copy
Hi Sam,
Ignite starts its own HTTP server for the REST API, so it's definitely
possible when running in JBoss, as well as anywhere else. You just need to
add the ignite-rest-http module with its dependencies to the classpath and
the endpoint will start automatically.
If you're using Maven, add this to pom.xml:
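The snippet itself did not survive the archive; it was presumably the
standard dependency (adjust the version to your Ignite release):

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-rest-http</artifactId>
    <version>${ignite.version}</version>
</dependency>
```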
I think Dmitry meant that indexes are updated synchronously with transaction
commit. However, note that SQL queries are currently not transactional, so
you can still get dirty reads in the result set.
-Val
Hi Jim,
Please show your AddressResolver implementation and your configuration,
and describe the deployment in more detail (how many nodes, how addresses
are assigned to them, etc.).
-Val
Hi,
This actually sounds like a use case for continuous queries [1]. They will
automatically notify the client about all updates that satisfy a filter,
providing ordering and exactly-once delivery guarantees.
[1] https://apacheignite.readme.io/docs/continuous-queries
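A sketch of the API (filter condition and cache types are illustrative):

```java
ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

// Optional remote filter: only matching updates are sent to the client.
qry.setRemoteFilterFactory(() -> evt -> evt.getValue().startsWith("alert"));

// Local listener invoked on the client for every notification.
qry.setLocalListener(evts -> {
    for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
        System.out.println(e.getKey() + " -> " + e.getValue());
});

// Keep the returned cursor open for as long as you want updates.
QueryCursor<?> cur = cache.query(qry);
```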
-Val
Try to add this in the configuration file to force client mode for nodes
started within Spark:
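The XML snippet was stripped by the archive; it was most likely the
standard client-mode property on the IgniteConfiguration bean:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Force nodes started inside Spark executors to be clients. -->
    <property name="clientMode" value="true"/>
    <!-- ... rest of the configuration ... -->
</bean>
```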
Make sure not to do this for your standalone server nodes, of course.
-Val
1.a. Help with what? Do you know how Spark behaves in this case and what
guarantees it provides? To be honest, I'm still struggling to understand
why you don't want to use the Ignite API directly for updates. Is there a
use case that you tried to implement, but it didn't work for some reason?
1.b.
Hi,
1. Rebalancing mode only changes the behavior of the public API, i.e. if
you call IgniteCache.put(), for example, it will wait for rebalancing to
finish. If it's a standalone node where you do not use the API,
rebalancing mode does not have any effect.
2. A newly added node starts serving cache requests
Hi,
1.a. I think this depends on Spark and how it handles failover in such
cases. Basically, loading data into Ignite from a Spark RDD is a simple
iteration through all partitions of this RDD.
1.b. You will not lose any data if you have at least one backup.
2. Can you clarify this? igniteContext.fro
Hi,
Can you show how you create the IgniteContext? Are you using XML or creating
IgniteConfiguration in code?
-Val
Hi,
1. This is correct.
2. In a transactional cache, a new session is created per transaction. The
connection for a session is acquired from the provided data source.
-Val
Hi Chris,
This happens because you tried to create a near cache on a server node. If
that's what you're trying to achieve, then provide near cache configuration
in CacheConfiguration.setNearConfiguration(..) method.
However, the most common use case for a near cache is to have it on a client
node
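For reference, sketches of both options (cache name is illustrative):

```java
// Option 1: near cache declared in the server-side cache configuration.
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setNearConfiguration(new NearCacheConfiguration<>());
ignite.getOrCreateCache(ccfg);

// Option 2 (the common case): near cache created dynamically on a client.
IgniteCache<Integer, String> cache =
    ignite.getOrCreateNearCache("myCache", new NearCacheConfiguration<>());
```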
Are you sure the nodes are discovering each other and there are no
topology changes in the middle? It sounds like you're sporadically losing
some data, which can happen when you lose too many nodes at a time. Can
you try to change the cache mode to REPLICATED or increase the number of
backups?
-Val
--
View
Hi,
Can you provide more details? What does the deployment look like, what are
you doing, what is the result and why is it not what you expect, etc.
-Val
Can you try to upgrade to 1.8 and check how 'delete from ...' query performs
in your case? From what I hear, this is the most appropriate solution.
-Val
Affinity key is part of the configuration and can't be changed at runtime.
The only way to do this is to create another cache with the new
configuration and migrate the data.
-Val
Hi Mani,
Cache configuration will indeed be broadcast - that's how the cache start
procedure currently works, and it doesn't depend on cache mode. However,
if the cache mode is LOCAL, the data will be stored only on the node where
you created it.
-Val
Answered on SO:
http://stackoverflow.com/questions/41612845/igniterdd-doesnt-return-rows-in-the-dataframe
-Val
Well, you should've mentioned that you're using KafkaStreamer :) If so,
then this was indeed fixed in IGNITE-4140, but as it's a
backward-incompatible change, it will be available only in 2.0.
-Val
Yes, by batching I mean removeAll or IgniteDataStreamer. However, I would
consider using expiration, I think it can really help you.
-Val
First of all, I would check that there are no memory issues under load.
This (as the message states) is likely to happen when one or more nodes
suffer from long GC pauses. If this is not the case, I would set the
failure detection timeout to several seconds and then try several values
to check which one works
RoaringBitmap is Externalizable with custom serialization logic, and
therefore can't be represented as a BinaryObject. In such cases the
toBinary() method returns the original object without changes.
-Val
What is the exception? Can you show the trace?
-Val
Sounds like you can simply store all 5 latest entries as a single entry
and update the collection atomically using an entry processor and the
invoke() method (check the current size within the entry processor and
remove the oldest element if needed). Once updated, you can do the
computation. Will this work for you?
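A sketch of the invoke() approach, assuming the value is a list of
readings and a maximum of 5 (the types and the newReading argument are
illustrative):

```java
final int MAX = 5;

cache.invoke(key, (entry, args) -> {
    List<Double> latest = entry.getValue() == null
        ? new ArrayList<>()
        : new ArrayList<>(entry.getValue());

    latest.add((Double) args[0]); // append the new reading

    while (latest.size() > MAX)
        latest.remove(0);         // drop the oldest one

    entry.setValue(latest);       // applied atomically on the primary node
    return null;
}, newReading);
```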
The current API doesn't really allow this if you use specific generics.
However, you can do it in an unchecked manner with raw types. A little
dirty, but it should work.
-Val
I just noticed that you set ipFinder.setShared(true); which is actually
causing this behavior. If you remove this line, B will wait for A in your
case, as I described before.
-Val
Shawn,
BinaryObject always returns another BinaryObject for your custom type,
because otherwise it would mean that you have the classes, and most likely
there would be no reason to use BinaryObject and withKeepBinary in the
first place. However, you can always call BinaryObject.deserialize() to
convert to an instance
Gaurav,
Thread pool and collision SPI are separate from each other. The former is
just a set of threads used to run user code (jobs, closures, event
listeners, etc.), while the latter is specific to Compute Grid and allows
you to regulate how jobs are executed (limits, ordering, etc.).
-Val
Not sure why the flag doesn't exist in .NET API, but you can overcome this by
configuring caches statically in the XML file (they use Java-based beans).
Will this work for you?
-Val
Hi,
What is 'partition key' and how is it used? Can you give an example for this
use case?
-Val
You're removing entries one by one, and each remove is a synchronous
distributed operation, so this is going to be very slow. You should use
batching or even IgniteDataStreamer for this. BTW, maybe you can utilize
automatic expiration [1] for this?
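For reference, expiration can be configured like this (cache name, value
type and duration are illustrative):

```java
// Entries are removed automatically 30 minutes after creation.
CacheConfiguration<Long, Trade> ccfg = new CacheConfiguration<>("trades");
ccfg.setExpiryPolicyFactory(
    CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 30)));
// With eagerTtl enabled (the default), expired entries are cleaned up
// in the background, so no bulk delete is needed at all.
```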
As for the jammed cluster, this is weird. The first thing to check
Mani,
1. The specific timeout value depends on your environment, network speed,
etc. Basically, the client will automatically reconnect to the server when
the latter is restarted, but it will fail to do that if the timeout
expires.
2. Local cache must work; I would assume you're doing something wrong.
Please prepare a test
Hi,
Scan query doesn't lock anything. It behaves like any concurrent map - if
a concurrent update happens on an entry that has not been visited by the
iterator yet, you will get the new value.
What kind of locking are you looking for? It sounds like you want to block
the whole cache, which is generally
Mikhail,
From what I hear, you can simply use SQL for this task. Is there something
in particular that doesn't work for you?
-Val
Shawn,
The map() method is executed locally on the master node, so you can do all
the checks outside the task and then execute it only if needed. Will this
work?
-Val
If a node can't connect to any of the addresses and can't bind to any of
them, it will actually wait and will join once someone else has started. I
probably don't understand the problem; please explain in more detail.
-Val
Hi Tejas,
I don't see anything obvious in the plan. Note that when you join, you
still have to scan one of the sides, so if the intermediate datasets after
applying the conditions are still large, performance may not be very good.
Joins are applied in the order they appear in the plan, so you can go
through
Hi,
What timestamps are you referring to? If you want to measure the latency of
a particular call, surround it with timestamps and print out the duration.
-Val
Lukas,
A new node connects to the one that is already running, not the other way
around. Actually, for a node to start, it must be able to connect to one
of the addresses in the IP finder, OR be able to bind to one of these
addresses. So the fact that your node started means that its own address
was provided
Lukas,
According to the log, this node is the first one in the topology and it
started successfully; this is the correct behavior. Can you clarify what
is wrong?
BTW, you see all these errors only because you have DEBUG logging enabled.
-Val
getHeapMemoryUsed gives the total amount of heap memory used. This
includes all overheads introduced by Ignite, any short-lived objects and
even garbage that is not collected yet. Monitoring it is a good idea when
running tests or benchmarks with all data loaded, but it will not give you
an estimate
Hi,
Shared memory is known to be not very stable under load, and it will no
longer be the default in 2.0 (and will probably be deprecated). I would
recommend disabling it; just add the following to the configuration:
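The snippet was lost in the archive; disabling shared memory is done by
setting the shared-memory port of TcpCommunicationSpi to -1:

```xml
<property name="communicationSpi">
    <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
        <!-- -1 disables the shared memory endpoint. -->
        <property name="sharedMemoryPort" value="-1"/>
    </bean>
</property>
```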
-Val
Hi Gaurav,
There is no limitation and this approach should work. The number of jobs
executed in parallel will be limited by the number of threads in the
public pool and by the collision SPI [1].
[1] https://apacheignite.readme.io/docs/job-scheduling
-Val
I see what you're saying now. The streamed value is passed to your
implementation as an argument, so you can do something like this:
stmr.receiver(new StreamTransformer() {
    @Override public Object process(MutableEntry e, Object... args) {
        Object streamed = args[0];
        // Entry processor logic goes here, e.g. merge the streamed
        // value into the current entry value.
        return null;
    }
});
Can you attach your configuration?
-Val
Hi Shawn,
This means what it says - the ComputeTask.map() method returned no jobs,
i.e. null or an empty map.
-Val
It works properly for me when I remove the collocated flag. Not sure why
that is; I will ask on the dev list.
-Val