Hi. Both platforms (Spark and Ignite) use memory for computing. Instead of
loading data into the Ignite cache, we can also load data into Spark memory and
cache it on the Spark nodes. If we can do this (caching on a Spark node), why
would we load data into the Ignite cache? Loading into the Ignite cache has a benefit only
Hello,
Please try to apply and consider generic optimization techniques for the
persistence:
https://apacheignite.readme.io/docs/durable-memory-tuning
In the meantime:
- Try to keep investigating the cause of the GC pause unless you're 100%
sure it's caused by rebalancing
- Increase Ignite
Check the following pages:
- Ignite tooling: https://apacheignite-sql.readme.io/docs/sql-tooling
- Performance and debugging:
https://apacheignite-sql.readme.io/docs/performance-and-debugging
- Ignite console: console.gridgain.com
--
Denis
On Wed, Jan 2, 2019 at 5:49 AM Lokesh Sharma
Yes, a custom affinity function is what you need to control entries
distribution across physical machines. It's feasible to do. Worked with one
of Ignite customers who did something similar for their needs - the code is
not open sourced.
--
Denis
On Wed, Jan 2, 2019 at 10:17 AM Mikael wrote:
Hello,
Please share the actual queries and the execution plan for each. Use "EXPLAIN
SELECT" in Ignite for this (not the H2 version).
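For example, prefixing a query with EXPLAIN returns Ignite's plan for it (the table and column names below are just placeholders):

```sql
-- Run through any Ignite SQL client (JDBC thin driver, sqlline, etc.).
-- Table and column names here are illustrative only.
EXPLAIN SELECT p.name, c.name
FROM Person p
JOIN City c ON p.city_id = c.id
WHERE c.name = 'Delhi';
```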
--
Denis
On Wed, Jan 2, 2019 at 10:58 AM JayX wrote:
> Hello!
>
> I have been working on Ignite and comparing its performance to traditional
> databases. However, the p
Are you using JDBC/ODBC drivers? Just want to know why it's hard to execute
SQL queries outside of transactions.
Can you switch to pessimistic transactions instead?
--
Denis
On Wed, Jan 2, 2019 at 7:24 AM whiteman wrote:
> Hi guys,
>
> As far as I am concerned this is a breaking behaviour. In
Is there a way to name a table differently from the type without resorting to
DDL? I'm using the annotation approach to create a table, and I seem to be
restricted to having the tables named after the type of the annotated class.
Hello!
I have been working with Ignite and comparing its performance to traditional
databases. However, the performance was not significantly better, as I had
expected it to be. Any advice on how to improve the performance would be
greatly appreciated!
The dataset I have is around 10M records, stored on a cluster
Hi!
By default you cannot assign a specific affinity key to a specific node,
but I think that could be done with a custom affinity function. You can do
pretty much whatever you want with that; for example, set an attribute in
the XML file and use that to match with a specific affinity key value.
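To make the idea concrete, here is a minimal standalone sketch of that routing logic: keys whose value matches a node attribute go to that node, everything else is spread by hash. This only models the concept; Ignite's real AffinityFunction interface has a different signature, and all names here (class, attribute, node ids) are made up for illustration.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch only: maps an affinity key to the node whose
// hypothetical "region" attribute (set in XML config) matches it,
// falling back to hash-based distribution for all other keys.
public class AttributeAffinitySketch {
    private final Map<String, String> nodeRegion; // node id -> "region" attribute
    private final List<String> nodeIds;

    public AttributeAffinitySketch(Map<String, String> nodeRegion, List<String> nodeIds) {
        this.nodeRegion = nodeRegion;
        this.nodeIds = nodeIds;
    }

    // Pick the node whose attribute equals the affinity key, otherwise
    // spread keys across all nodes by hash.
    public String nodeFor(String affinityKey) {
        for (Map.Entry<String, String> e : nodeRegion.entrySet()) {
            if (e.getValue().equals(affinityKey))
                return e.getKey();
        }
        int idx = Math.abs(affinityKey.hashCode() % nodeIds.size());
        return nodeIds.get(idx);
    }

    public static void main(String[] args) {
        AttributeAffinitySketch aff = new AttributeAffinitySketch(
            Map.of("node1", "India"),
            List.of("node1", "node2", "node3"));
        System.out.println(aff.nodeFor("India")); // matched by attribute: node1
    }
}
```

A real implementation would plug this decision into Ignite's partition-to-node mapping rather than returning node ids directly.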
Thanks for the responses. To summarise:
* JVM Heap (Xmx) - Not normally used by Ignite for caching data.
* MaxDirectMemorySize - Used by Ignite for some file operations but not for
caching data. As per above, 256m is usually sufficient.
* DataRegion maxSize - Used by Ignite to determine how much m
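For reference, DataRegion maxSize is set on the data region in the IgniteConfiguration. A minimal Spring XML fragment (the region name and size below are placeholders, not recommendations):

```xml
<!-- Fragment of an IgniteConfiguration bean (Spring XML); values are illustrative. -->
<property name="dataStorageConfiguration">
  <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <property name="defaultDataRegionConfiguration">
      <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
        <property name="name" value="Default_Region"/>
        <!-- maxSize caps the off-heap memory used for cached data: 4 GB here -->
        <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
      </bean>
    </property>
  </bean>
</property>
```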
Thanks Mikael.
I did come across that link before, but I am not sure it addresses my
concern. I want to see how I need to size my physical VMs based on affinity
keys. How would I say, for the India affinity key, use this super-sized VM,
and for the others use the smaller ones, so the data doesn't get shu
Hi guys,
As far as I am concerned this is a breaking behaviour. In Apache Ignite
v2.5 it was possible to have a SQL query inside an optimistic serializable
transaction. The point here is that the SQL query might not be part of the
transaction (no guarantees), but it was at least performed. In 2.7 this cod
You can find some information about capacity planning here:
https://apacheignite.readme.io/docs/capacity-planning
About your India example you can use affinity keys to keep data together
in groups to avoid network traffic.
https://apacheignite.readme.io/docs/affinity-collocation
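The collocation idea can be sketched in a few lines: two entries that share an affinity field hash to the same partition and therefore live on the same node, so a join between them needs no network hop. This is a standalone model of the concept only (in Ignite it is done via the affinity key mechanism, e.g. the @AffinityKeyMapped annotation); the partition count and field values are made up.

```java
// Illustrative sketch of affinity collocation: any two keys with the same
// affinity field value hash to the same partition, so they end up on the
// same node and can be joined locally.
public class CollocationSketch {
    static final int PARTITIONS = 1024;

    static int partitionFor(Object affinityKey) {
        return Math.abs(affinityKey.hashCode() % PARTITIONS);
    }

    public static void main(String[] args) {
        // A customer keyed by country, and one of their orders using the
        // same country as its affinity field (field values are made up).
        int customerPartition = partitionFor("India");
        int orderPartition = partitionFor("India");
        System.out.println(customerPartition == orderPartition); // true: collocated
    }
}
```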
Mikael
Den
Is there a way to query an Ignite database that is deployed remotely?
I use the H2 Debug Console locally, but this doesn't work for remote purposes.
Thanks Naveen.
-- Cache Groups: When would I start considering cache groups? My system is
growing, and sooner or later I will have to add more caches, so I need to
know: 1) should I start grouping now (I'd think yes); 2) if not, at what
number of caches should I?
-- Capacity Planning: So, there is
Unfortunately I could not just change my pom, because we use Ignite in
Docker and it is part of the modules inside the Docker image.
Of course, as a solution I can build my own Docker image, but this is not
very convenient.
Also, you can see that the tests of the Cassandra modules fail :(
https://github.com/apache/ignite/t
Hi,
Apache Ignite 2.7 updated the versions of a number of dependencies. This
was done for security reasons, to remove potentially vulnerable versions of
components from the Apache Ignite distribution.
Perhaps you know: which version of Guava is compatible with the Apache
Cassandra store? Are there any
Hi All
I got exceptions in ignite after update to 2.7.0
2019-01-02 10:09:52,824 ERROR [cassandra-cache-loader-#101]
log4j.Log4JLogger (Log4JLogger.java:586) - Failed to execute Cassandra
loadContactsCache operation
class org.apache.ignite.IgniteException: Failed to execute Cassandra
loadContac
Where does the data in your Spark DataFrame come from? As I understand it,
that would all be in Spark's memory anyway.
Anyway, I didn't test this exact scenario, but it seems that writing
directly to an Ignite DataFrame should work; why did you think it wouldn't?
I can't say whether it woul
That’s a great investigation! I think the developer mailing list
(http://apache-ignite-developers.2346864.n4.nabble.com) would be a better place
to discuss the best way to fix it, though.
Regards,
Stephen
> On 2 Jan 2019, at 07:20, otorreno wrote:
>
> Hi everyone,
>
> After the new release (