Subash,
Yes, I've reproduced it now. It looks like a bug to me; I created a ticket:
https://issues.apache.org/jira/browse/IGNITE-8027
-Val
Subash,
When you run the test, is this the only node, or are there some other nodes
running? I actually can't reproduce the issue even with your code. Here is
what I do:
- Clean up the work directory.
- Run the test, wait for output, stop.
- Comment out 'put' invocations.
- Run again.
The output
Subash,
This is weird; I'm doing exactly the same thing and am not able to reproduce the
issue. Can you share your whole test so that I can run it as-is?
-Val
I was actually not correct here. Although there are some known issues,
expiration is supposed to work with both memory and persistence.
Subash, can you provide more details on how you reproduce the issue? I'm
getting null even after I restart the node. Are you doing anything else
there?
-Val
Currently an expired entry is removed only from memory. The same will be
supported for the persistence layer after this ticket is implemented:
https://issues.apache.org/jira/browse/IGNITE-5874.
-Val
Naveen,
Ignite provides an out-of-the-box integration for RDBMS. The easiest way to
integrate would be to use Web Console to generate all the required POJO
classes and configurations:
https://apacheignite-tools.readme.io/docs/automatic-rdbms-integration
-Val
Vishwas,
I updated the doc and the example. Thanks for pointing this out!
-Val
Prasad,
The SQL engine never deserializes your objects and always works with binary
data; this does not depend on how you created and configured your schema.
As for the "does querying table using jdbc code work faster than querying
cache" question, can you clarify it, please? What exactly are you comparing?
The build works for me (and most likely for everyone else, as there are no
complaints), so it looks like a local issue. I would try the following:
- Clean up the local Maven repo.
- Run without the custom settings.xml.
- Run with verbose output to see if Maven provides more details on the
issue.
-Val
Looks like you provided the AddressResolver on TcpDiscoverySpi, so the
communication SPI is not aware of it. Try setting it on IgniteConfiguration
instead; that will make both components pick it up.
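For example, a minimal sketch (the address mapping values are illustrative):

Map<String, String> addrMap = new HashMap<>();
addrMap.put("10.0.0.1", "203.0.113.10"); // internal address -> external address

IgniteConfiguration cfg = new IgniteConfiguration();

// Setting the resolver on the configuration makes both discovery and
// communication SPIs pick it up.
cfg.setAddressResolver(new BasicAddressResolver(addrMap));

Ignition.start(cfg);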
-Val
Prasad,
1. In write-behind mode, database updates are not transactional. If you want
Ignite to maintain the underlying Oracle transaction to make sure the cache
and DB are always consistent, you need to use synchronous write-through. As
for compute, you can execute a transaction within a compute task, but you can't
Nicolas,
Can you please show the whole trace?
-Val
Oleksandr,
Generally, this heavily depends on your use case, workload, amount of data,
etc. I don't see any issues with your configuration in particular, but you
need to run your tests to figure out whether it provides the results you expect.
-Val
Shawn,
I doubt this feature makes much sense as it doesn't really add value. If you
need to avoid data loss, using backups (and sometimes persistence) is a
must, and with backups you can easily achieve the functionality you're
describing.
-Val
Hi Oleksandr,
1. Yes, Ignite is production ready.
2. I doubt there is a fully fledged example for this, but I don't see any
reason why it wouldn't be possible. You can run anything you want within a
service.
3. Ignite is always and fully free. If you're asking about commercial
products built on top
Rajesh,
The Cassandra session is already released to the pool when onSessionEnd is
called. I don't think there is currently a way to acquire this information.
Here is a ticket for improvement:
https://issues.apache.org/jira/browse/IGNITE-7798
-Val
Can you try to reproduce the issue in a smaller project that you would be
able to share? Honestly, the issue is not clear to me at all.
-Val
The default page size in Ignite 2.1 was 2048 (2K); in 2.3 it was increased to
4096 (4K). Since your storage was created on 2.1 with a page size of 2K, you
need to restore it with the same size. To achieve this, explicitly set the
DataStorageConfiguration#pageSize property to 2048 when starting Ignite.
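Something like this (a minimal sketch, assuming the rest of your storage
configuration is unchanged):

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setPageSize(2048); // must match the page size the storage was created with

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);

Ignition.start(cfg);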
Matt,
GridCacheWriteBehindStore#stop() is an internal method that is invoked when
the cache is destroyed or the node is stopped; you should not call it
explicitly. Alexey's suggestion makes sense to me; I think you should
implement it this way.
-Val
Mike,
Not sure I understand the question. Can you show an example of your data
model and clarify what kind of collocation strategy you want to achieve?
-Val
Hi Luqman,
I don't see why not. It will probably require a pretty big cluster, but it
looks like your Redis cluster is not very small either :) Ignite is a highly
scalable system, so you can test with smaller clusters of different sizes,
check what maximum throughput they provide, and then extrapolate.
That's correct. Custom SQL functions must be explicitly deployed on all nodes
and can't be deployed dynamically.
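For reference, a minimal sketch of such a function (the class, method and
cacheCfg names are illustrative):

public class MyFunctions {
    @QuerySqlFunction
    public static int square(int x) {
        return x * x;
    }
}

// The class must be on the classpath of every node and registered explicitly:
cacheCfg.setSqlFunctionClasses(MyFunctions.class);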
-Val
Mike,
Looks like you already have a solution, right?
http://apache-ignite-users.70518.x6.nabble.com/RE-Dates-before-epoch-java-sql-Date-tp20034.html
-Val
Pavel,
Can you please elaborate on this? What is generated by Spring and why is it
not suitable for Ignite?
-Val
Hi John,
1. True.
2. The blog actually provides the answer:
When the Backup Nodes detect the failure, they will notify the Transaction
coordinator that they committed the transaction successfully. In this
scenario, there is no data loss because the data are backed up and can still
be accessed an
Hi Tim,
Cache configuration is defined when the cache is started, so a @QuerySqlField
annotation on a new field does not take effect unless you restart the cluster
or at least destroy the cache and recreate it with the new configuration.
Fields are added at the object level transparently, but to modify the SQL
schema you need to recreate the cache.
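For clarity, a sketch of the annotation (the Person class is illustrative):

public class Person {
    @QuerySqlField(index = true)
    private String name; // visible to SQL only if present when the cache is created
}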
Rajarshi,
You don't have to add dependencies explicitly of course. Ignite is a
standard Maven project and there is no additional magic that you need to
consider. Just add the Ignite modules required for your project to the POM
file and Maven will do the rest.
-Val
Hi Prasad,
I understand that the example you provided may be a simplified one; however,
I wanted to mention that this particular piece of code does not require a
transaction at all. You can just execute a single invoke() operation,
optionally returning the required value (its API allows that). This will
This is also discussed on StackOverflow:
https://stackoverflow.com/questions/48723261/is-there-any-work-around-for-indexing-list-in-apche-ignite-and-use-in-where-clau
-Val
Colin,
Unfortunately, this is not possible at the moment; you need to restart nodes
to change the service implementation. There is a feature request for improving
this: https://issues.apache.org/jira/browse/IGNITE-6069
-Val
Why are you using a local cache instead of a partitioned one in dev? If you
have local caches, then you will get exactly the behavior you described: you
remove an entry from one node, but it is not removed on the others. Can this
be the case?
-Val
I would also recommend taking a look at EntryProcessor [1]. It allows you to
do co-located atomic updates, so it doesn't require a transaction, and it
reduces the amount of data transferred across the network, so it should give
the best performance results.
[1] https://apacheignite.readme.io/docs/jcache#entryproce
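A minimal sketch (the ignite instance, cache name and key are illustrative):

IgniteCache<String, Integer> cache = ignite.cache("myCache");

cache.invoke("someKey", new CacheEntryProcessor<String, Integer, Void>() {
    @Override public Void process(MutableEntry<String, Integer> entry, Object... args) {
        Integer cur = entry.getValue();
        entry.setValue(cur == null ? 1 : cur + 1); // runs on the node that owns the key
        return null;
    }
});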
Created a ticket for this improvement:
https://issues.apache.org/jira/browse/IGNITE-7641
-Val
ChainedTransactionManager doesn't guarantee that both transactions will be
rolled back in both cases; here is a quote from the JavaDoc:
Using this implementation assumes that errors causing a transaction rollback
will usually happen before the transaction completion or during the commit
of the *most i
Hi Jonathan,
You can't rename/move a directory if the destination is a subdirectory of the
source. For example, you can't do this:
/A/B -> /A/B/C
Looks like the JavaDoc has an incorrect example; I will check this and fix it.
-Val
Artёm,
Which version of Ignite is this? By any chance, do you have a reproducer
that you can share with us (a small project on GitHub would be the best
option)?
-Val
By standalone cluster I just mean a regular Ignite cluster running
independently from Spark. The easiest way to start a node is using the
ignite.sh script with a proper configuration file.
Once you switch IgniteContext to standalone mode, all nodes started within
Spark processes will run in client mode.
Well, then you need IGNITE-3653 to be fixed I believe. Unfortunately, it's
not assigned to anyone currently, so apparently no one is working on it. Are
you willing to pick it up and contribute?
-Val
ak47,
What exactly doesn't work? Can you describe the issue you're having?
-Val
If the client node based driver [1] is used, then you can also add the
transactionsAllowed=true parameter to the URL to overcome this error.
This will NOT enable transactional support, but it will force the driver to go
through transaction-related methods without throwing exceptions. So, for
example, if there are several
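For example, a sketch with the client-node based driver (the config path and
cache name are illustrative):

Connection conn = DriverManager.getConnection(
    "jdbc:ignite:cfg://cache=myCache:transactionsAllowed=true@file:///path/to/ignite-config.xml");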
Ganesh,
If you provide a full path, you don't need the classpath: prefix. If you
choose to have this file on the classpath, then you should use the prefix and
provide the path relative to one of the classpath roots. Also note that it has
to be on the classpath of your application; I don't know if the $CLASSPATH variable
Ranjit,
Generally, removing and adding nodes in an unpredictable way (which happens
in embedded mode because we basically rely on Spark here) is a very bad
anti-pattern when working with distributed data. It can have serious
performance implications as well as data loss.
Data nodes are supposed to be
This looks like this issue: https://issues.apache.org/jira/browse/IGNITE-3653
Do you have P2P class loading enabled? If yes, can you try to disable it?
-Val
Umur,
No, it doesn't use shared memory, and I doubt what you describe is even
possible. However, I'm still not sure I understand the purpose of all this.
What is your ultimate goal here?
-Val
Ariel,
There is no way to do this with the current API. What is the use case for
this?
-Val
Looks like you have a node filter in the cache configuration and use a lambda
to provide it. I would recommend creating a static class instead, deploying it
on all nodes in the topology (both clients and servers) and then restarting.
Most likely the issue will go away.
-Val
Ranjit,
Is it really a frequent event for a node to crash in the middle of the
loading process? If so, I think you should fix that instead of working around
it by disabling rebalancing. Such a configuration definitely has a lot of
drawbacks and therefore can cause issues.
-Val
Umur,
When you talk about "physical page mappings", what exactly are you referring
to? Can you please elaborate a bit more on what you're trying to achieve and
why? What is the issue you're trying to solve?
-Val
Ganesh,
The thin driver uses one of the nodes as a gateway, so once you add a second
node, half of the updates have to make two network hops instead of one, so a
slowdown is expected. However, it should not get worse further when you add a
third, fourth, and so on, node.
The best option for this case would
Queries are actually always async, meaning that the query method itself doesn't
return any data. You get a cursor and the data is then fetched while you
iterate.
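For example (a sketch; the cache instance and query text are illustrative):

QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery("select name from Person"));

for (List<?> row : cursor)  // pages are fetched from the server lazily, during iteration
    System.out.println(row.get(0));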
-Val
Rajesh,
Ignite has only non-unique indexes. For information on how to create them
please refer to the documentation: https://apacheignite-sql.readme.io/docs.
You can do this either via cache configuration or using the CREATE INDEX
command, depending on your use case.
As for the logging, here is some i
When you create a table via SQL, you already fully describe its schema, so
there is no need for QueryEntity. Can you clarify what you're trying to
achieve?
-Val
Ranjit,
Then it sounds like you're recreating embedded mode on your own :) In this
case the deprecation will not affect you, of course, but this is still NOT a
recommended way to use Ignite with Spark.
-Val
Rajesh,
Whether you work with POJOs or with the BinaryObject API, data is stored in
the same binary format, which allows getting field values without
deserialization. Therefore, indexing on binary data is possible. Please go
through the documentation page I provided earlier; it gives a more detailed
description.
You should check the trace for the root cause; it will likely give you more
pointers. Generally, this error just means that the cache store was not able
to update your underlying database. Obviously, there are a lot of possible
reasons for that.
-Val
Rajesh,
This actually sounds exactly like the binary format Ignite uses to store the
data: https://apacheignite.readme.io/docs/binary-marshaller
Doing this manually (i.e. explicitly saving some byte array and creating
indexes over this array) would not be possible, but I don't think you really
need it.
Hi Rajesh,
It's not clear what you're trying to achieve. So are you going to store the
attributes as a map or in serialized form as a byte array? And what exactly
should be indexed?
-Val
No, you have to call loadCache explicitly in your code. If you want to do
this on startup, you can try utilizing a lifecycle bean:
https://apacheignite.readme.io/docs/ignite-life-cycle. Although keep in mind
that you need to make sure that all nodes are started before you start
loading.
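A sketch of such a bean (the cache name is illustrative):

public class CacheLoadBean implements LifecycleBean {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override public void onLifecycleEvent(LifecycleEventType evt) {
        if (evt == LifecycleEventType.AFTER_NODE_START)
            ignite.cache("myCache").loadCache(null); // null filter loads all entries
    }
}

Register it via IgniteConfiguration#setLifecycleBeans before starting the node.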
-Val
2.4 (soon to be released) will have a binary protocol that will provide a
limited API for various cache operations. Basically, you will be able to
create a socket, connect to an Ignite server and then run these operations.
Here is the documentation (still a work in progress, but it gives a good idea
about what
Ranjit,
Embedded mode in Spark RDD will be deprecated in 2.4 which is about to be
released. My recommendation would be to use standalone mode instead.
-Val
Aleksey,
Are you talking about a use case where a transaction spans a distributed
cache AND a local cache? If so, this sounds like a very weird use case. Do we even
allow this?
-Val
Jeff,
By default you will have a single data region limited to 80% of physical
memory, and all caches will be assigned to this region.
-Val
zbyszek,
Would you mind creating a Jira ticket describing the issue and the proposed
solution? Even better if you provide a patch for it; it sounds like you're a
bigger expert in Lucene than I am :)
-Val
Matt,
Can you please clarify what you mean by a custom cache adapter?
-Val
zbyszek,
Generally, the answer is no. The binary format depends on internal Ignite
context, so there is no clean way to create a binary object without starting
Ignite. The code that was provided in the referenced thread is a hacky
workaround which probably worked in one of the previous versions,
Ganesh,
ignite.sh itself doesn't look for any properties files; it's your XML that
has a reference to one. It looks like it's currently looking for a classpath
resource. Make sure it's available there, or fix the path in the XML if the
location is different.
-Val
Mikael,
First of all, the trace should contain the cause with more details. What
does it say? If this doesn't help to figure out the reason for the failure,
please show the cache configuration.
-Val
Hi Franz,
Running an Ignite client node embedded into an application would be the best
way to interact with the cluster. You will have all the APIs available, and
it's also the preferred option from a performance standpoint.
Ignite already has a thread pool preconfigured; there is no need to specify
Ignite stores data in off-heap memory, so you can't limit the number of
entries, but you can limit the amount of memory allocated for caches. This
is done via data region configuration:
https://apacheignite.readme.io/docs/memory-configuration
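For example, a sketch (the region name, size and cache name are illustrative):

DataRegionConfiguration region = new DataRegionConfiguration();
region.setName("myRegion");
region.setMaxSize(512L * 1024 * 1024); // cap the region at 512 MB off-heap

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.setDataRegionConfigurations(region);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);

// Assign a cache to the region:
CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
cacheCfg.setDataRegionName("myRegion");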
-Val
Updates are supported. You're probably using an older version of Ignite.
-Val
Indexes are not used during joins, at least in the current implementation.
The current integration is implemented as a regular Spark data source which
provides each relation separately. Spark then performs the join by itself, so
Ignite indexes do not help.
The easiest way to get binaries would be to use
Which version are you on?
-Val
zbyszek,
Ignite fetches query results in pages. When you call next() or hasNext() for
the first time, the client requests the first page and receives it from the
server. It then iterates through this page, which is obviously much faster (no
network communication). Once the page is exhausted, the next page is requested.
Tim,
Can you try replacing the lambda with a static class? Does it help?
-Val
"Optimal consumption" doesn't mean that you give high ingestion throughput
for free. Data streamer is highly optimized for a particular use case and if
you try to achieve same results with putAll API, you will likely get worse
consumption.
If low memory consumption is more important for you than h
I can't reproduce this behavior, and it actually sounds very weird that a
scan query can hang on an empty cache. Do you have a complete test reproducing the
problem that you can share with us?
-Val
Can you please show what exactly you're doing? How do you create the table?
How do you enable the console and what are your next steps? Is the issue
reproduced only in the H2 console (i.e. what if you execute the query in Web
Console, for example)?
-Val
Hi Humphrey,
Can you show the code?
-Val
No, it was added in 2.3.
-Val
Cody,
If you want to dynamically change the object schema, then you should avoid
deploying classes on server nodes. The Ignite binary format [1] will make sure
that the change happens transparently, so you don't even need to perform a
rolling upgrade.
However, this will not affect the SQL schema, i.e. will not
AffinityKeyMapper is deprecated and should not be used. I would recommend
decoupling the data models for Ignite and JPA, and then properly optimizing
the Ignite model. Using the same model for two completely different tasks can
be tricky.
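If you need collocation in the Ignite model, a sketch with the
@AffinityKeyMapped annotation (the model is illustrative):

public class OrderKey {
    private long orderId;

    @AffinityKeyMapped
    private long customerId; // all orders of a customer are stored on the same node
}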
-Val
Web Console currently does not allow specifying affinity keys on the
configuration screen; there is a ticket for this improvement:
https://issues.apache.org/jira/browse/IGNITE-4709
For now, the only option is to manually set the required annotations in the
generated classes.
-Val
Hi Dave,
Yes, TcpCommunicationSpi can establish connections both ways, so server
nodes should be able to connect to client nodes.
Thanks,
Valentin
You used a try-with-resources block when starting the instance:

try (Ignite ignited = Ignition.start(System.getProperty(IGNITE_CONFIG_FILE_PROP))) {
    ignited.active(true);
    ctIgnite = ignited;
}

When this block completes, the instance is closed, therefore you get
exceptions afterwards. Get rid of the try-with-resources block and keep the
instance open for as long as you need it.
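i.e. something like this (a sketch; close the instance only on application
shutdown):

Ignite ignited = Ignition.start(System.getProperty(IGNITE_CONFIG_FILE_PROP));

ignited.active(true);
ctIgnite = ignited; // keep the reference; don't close it here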
Ignition.start never returns null; it either throws an exception or returns a
ready-to-use Ignite instance. Please check your code, and if you can't find
the reason, create a separate project that reproduces the issue and share it
with us somehow (e.g. via GitHub). This way we will be able to help you.
Hi,
You should specify the same version for all Ignite artifacts in your
project. If you're using 2.3, ignite-schedule:2.3.0 should be used.
Artifacts for this module are not deployed to Maven Central after 1.2 due to
licensing restrictions (it uses LGPL dependencies). So you should either
build
I replied to this on StackOverflow:
https://stackoverflow.com/questions/47931737/how-to-connect-apache-ignite-node-by-static-ip-address
-Val
The result set is different when you add EXPLAIN: it only returns execution
plans:
https://apacheignite-sql.readme.io/docs/performance-and-debugging#using-explain-statement
However, you're trying to get values for multiple columns which are not
there; therefore it fails.
-Val
John,
This should be done via BinaryConfiguration:
https://apacheignite.readme.io/docs/binary-marshaller#configuring-binary-objects
The Javadoc is incorrect; I will file a ticket to fix it.
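A sketch of the wiring (the type name is illustrative):

BinaryTypeConfiguration typeCfg = new BinaryTypeConfiguration("org.example.MyType");

BinaryConfiguration binaryCfg = new BinaryConfiguration();
binaryCfg.setTypeConfigurations(Collections.singletonList(typeCfg));

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setBinaryConfiguration(binaryCfg);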
-Val
If it's a single node, I would just create a special local cache for these
entries and deploy it on that node. Will this work?
-Val
Oops, I misread! This is not supported. There is a ticket, but it doesn't
seem to be active at the moment:
https://issues.apache.org/jira/browse/IGNITE-1417
-Val
Ryamond,
It is supported, see here:
https://apacheignite-sql.readme.io/docs/net-sql-api
-Val
Biren,
I meant that you can have a standalone cluster and embed a client node into
the application instead of a server node. Making these caches replicated can
also be an option; in this case all reads will be local and fast.
-Val
Hi,
Your assumptions are correct. There is also an issue [1] that is likely
causing this behavior. As a workaround, you can try to force IgniteContext to
start everything in client mode. To achieve this, use setClientMode(true) in
the closure that creates the IgniteConfiguration.
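A sketch using the Java API (variable names are illustrative; IgniteOutClosure
is the closure type the context expects):

IgniteOutClosure<IgniteConfiguration> cfgClo = () -> {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setClientMode(true); // nodes started inside Spark executors join as clients
    return cfg;
};

JavaIgniteContext<Integer, String> ic = new JavaIgniteContext<>(sparkCtx, cfgClo);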
Then the insert syntax I provided should work.
-Val
I don't think raksja had an issue with only one record in the RDD.
IgniteRDD#count redirects directly to IgniteCache#size, so if it returns 1,
you indeed have only one entry in the cache for some reason.
-Val
Amit,
Does it work if you start without nohup?
-Val
Biren,
If half of your operations became several times slower, why would you expect
throughput to increase? If you don't use collocation, I would recommend
switching to a client-server deployment. Initial performance with two nodes
and a fully replicated cache can be slower than now, but you
Looks like you're running in embedded mode. In this mode, server nodes are
started within Spark executors, so when an executor is stopped, some of the
data is lost. Try starting a separate Ignite cluster and creating
IgniteContext with standalone=true.
-Val
Biren,
That's a wrong expectation, because a local in-memory read is drastically
faster than a network read. How do you choose a server node to read from?
What is the overall use case?
-Val