Hi,
I have read the reply below.
http://apache-ignite-users.70518.x6.nabble.com/Eager-TTL-and-query-td11662.html
It mentions that TTL is only updated by IgniteCache API operations (put/get/invoke,
etc.), and that SQL queries won't update TTL.
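For reference, here is a minimal sketch of how an expiry policy is typically attached to a cache configuration (the cache name and duration are hypothetical); a TouchedExpiryPolicy resets the TTL on IgniteCache reads and writes, but SQL queries do not touch entries:

```java
import java.util.concurrent.TimeUnit;

import javax.cache.expiry.Duration;
import javax.cache.expiry.TouchedExpiryPolicy;

import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");

// TTL is refreshed on put/get/invoke through the IgniteCache API;
// SQL queries do not touch entries, so they leave the TTL unchanged.
cacheCfg.setExpiryPolicyFactory(
    TouchedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 5)));

// With eager TTL, a background thread removes expired entries.
cacheCfg.setEagerTtl(true);
```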
I use PostgreSQL as persistence store.
Is there any method to
How do I configure a cluster as a persistent, replicated SQL datastore? And
can it perform as an active-active, high-availability solution that can be
scaled horizontally?
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi Stan,
Thanks. I am evaluating IgniteRDD. It is cool! I am new to Spark. I can get a sqlContext via ds.sqlContext. If I want to query everything with a SELECT, what's the table name? See below in spark-shell; I want to do something like > sc.sql("select * from "). Where is the table
Here I have five entries as below:
(14,4);
(21,1);
(32,2);
(113,3);
(15,5);
and I want to use SortedEvictionPolicy to keep at most 3 entries (sorted by
value in descending order, keeping the top 3 items):
15-->5
14-->4
113-->3
The actual output is:
21-->1
32-->2
113-->3
14-->4
15-->5
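For comparison, the expected top-3 selection (sort by value descending, keep the largest three) can be sketched in plain Java, outside of Ignite, using the same entries:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TopByValue {
    // Sort entries by value in descending order and keep the top n --
    // the entries an eviction policy with this ordering should retain.
    public static List<Map.Entry<Integer, Integer>> topN(Map<Integer, Integer> entries, int n) {
        return entries.entrySet().stream()
            .sorted(Map.Entry.<Integer, Integer>comparingByValue(Comparator.reverseOrder()))
            .limit(n)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<Integer, Integer> entries = new HashMap<>();
        entries.put(14, 4);
        entries.put(21, 1);
        entries.put(32, 2);
        entries.put(113, 3);
        entries.put(15, 5);

        // Prints 15-->5, 14-->4, 113-->3 (the expected survivors).
        for (Map.Entry<Integer, Integer> e : topN(entries, 3))
            System.out.println(e.getKey() + "-->" + e.getValue());
    }
}
```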
issue 1: The
Hi,
I have a use case in a Windows environment where we want to cache huge blobs,
ranging from 20KB to 2MB, on boxes where other services are also running
(so the cache should consume little CPU). The blobs need to survive a system
crash or process restart. I am using Intel Xeon
Nicolas,
Can you please show the whole trace?
-Val
Igniters,
It's a pleasure for me to announce that the ASF and the Ignite community have
officially accepted the genetic algorithm (GA) contribution into the Ignite
machine learning code base:
http://incubator.apache.org/ip-clearance/ga-grid-ignite.html
Thanks to Turik Campbell for the contribution and Ignite ML contributors
and
Oleksandr,
Generally, this heavily depends on your use case, workload, amount of data,
etc. I don't see any issues with your configuration in particular, but you
need to run your own tests to figure out whether it produces the results you expect.
-Val
Not sure how to check the GC log, but here's a minimal complete example using
two java classes:
Try these two queries.
SELECT COUNT(DISTINCT((idnumber,value))) FROM athing
SELECT COUNT (*) FROM (SELECT COUNT(*) FROM athing GROUP BY idnumber,value)
Both queries do the same thing, you'll find
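Outside of SQL, the equivalence of the two counts can be illustrated with a small plain-Java sketch over hypothetical (idnumber, value) rows: counting distinct pairs gives the same number as counting the GROUP BY groups.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DistinctPairs {
    // Count distinct (idnumber, value) pairs -- the same number that
    // COUNT(DISTINCT (idnumber, value)) and a count of GROUP BY groups return.
    public static int countDistinctPairs(List<int[]> rows) {
        Set<List<Integer>> pairs = new HashSet<>();
        for (int[] r : rows)
            pairs.add(Arrays.asList(r[0], r[1]));
        return pairs.size();
    }

    public static void main(String[] args) {
        List<int[]> rows = Arrays.asList(
            new int[] {1, 10}, new int[] {1, 10},  // duplicate pair
            new int[] {1, 20}, new int[] {2, 10});
        System.out.println(countDistinctPairs(rows)); // prints 3
    }
}
```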
Yeah, that's exactly what I am doing, and it seems to have improved things. The
frequency is set at 5000, i.e. 5 seconds.
Thanx and Regards,
KR Kumar
Hi Mike,
Have you checked GC log? Have you seen long pauses?
Is it possible to share SQL query and corresponding execution plan [1]?
Also, please share cache configurations.
[1]
https://apacheignite-sql.readme.io/docs/performance-and-debugging#using-explain-statement
Thanks!
Unfortunately, at this stage in development I'm only doing runs on one machine,
and though I am using partitioned data to get query parallelism, it seems I lose
that in the GROUP BY. Does GROUP BY distribute at all?
Might a spark layer on top give a better distribution path?
Mike
Hi Mike,
It seems that GROUP BY requires fetching the whole dataset into the Java heap
(in order to sort the data), which may lead to long GC pauses.
I think that data collocation [1] should improve performance when using
GROUP BY.
[1] https://apacheignite.readme.io/docs/affinity-collocation
Thanks!
Hello,
> However, it seems that currently cache data has to be "declared" to use
> SQL
> AHEAD of time BEFORE data is loaded in order to use SQL. Is this correct?
Yes, you are correct. The cache configuration must be set up properly
beforehand.
You cannot query data via SQL API without
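As a sketch (the cache, type, and field names here are hypothetical), SQL-visible types and fields are declared on the cache configuration ahead of loading the data:

```java
import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("personCache");

// Declare the key/value types and the fields SQL should see.
QueryEntity entity = new QueryEntity("java.lang.Long", "Person");
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("name", "java.lang.String");
fields.put("age", "java.lang.Integer");
entity.setFields(fields);

// SQL can only query caches whose query entities (or indexed types)
// were declared before the data was loaded.
cfg.setQueryEntities(Collections.singletonList(entity));
```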
Hi,
I guess the problem is in setSwapPath(...); it is not the path for
persistence.
Try to do something like this:
String storePath = "/data/ignite2/swap/";
cfg.setDataStorageConfiguration(
    new DataStorageConfiguration()
        .setWriteThrottlingEnabled(true)
        .setPageSize(4 * 1024)
        .setStoragePath(storePath));
Oh, also, during the slow part of the run, only 1-2 cores seem to be involved.
Mike
From: Williams, Michael
Sent: Monday, February 26, 2018 9:40 AM
To: user@ignite.apache.org
Subject: Slow Group-By
All,
Any advice on speeding up group-by? I'm getting great performance before the
group-by clause (on a fairly decent-sized data set), but adding it slows things
down horribly. Without the group-by clause, the query takes about a minute and
all cores are fully in use. With the group-by, cores
Hi guys,
What is the recommended hardware for an Ignite node if it is used as a
distributed cache and persistent store?
I see you mentioned that separate SSDs are recommended for WAL, index and
data:
https://apacheignite.readme.io/docs/durable-memory-tuning
What can you say about such minimal
The issue is fixed and the fix is already in master.
Best Regards,
Igor
On Thu, Feb 1, 2018 at 1:26 PM, Igor Sapego wrote:
> I'm currently working on the fix.
> Sorry guys, I was not able to fit it into 2.4.
> I can share a separate patch when it's ready, though, if you
I think he means that when *write-through* and *read-through* modes are enabled
on the 3rd-party store, data might be written to / read from only one of those
persistence storages (not both).
So if you save data "A", it might be stored in the 3rd-party persistence and
not in the native one. When data "A" is
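As a toy illustration, independent of Ignite (all names here are hypothetical): with write-through every put reaches the backing store, and with read-through a cache miss is loaded from it.

```java
import java.util.HashMap;
import java.util.Map;

public class WriteThroughCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, String> store; // stands in for the 3rd-party store

    public WriteThroughCache(Map<String, String> store) {
        this.store = store;
    }

    // Write-through: every put goes to the cache AND the backing store.
    public void put(String key, String value) {
        cache.put(key, value);
        store.put(key, value);
    }

    // Read-through: on a cache miss, load from the store and cache it.
    public String get(String key) {
        String v = cache.get(key);
        if (v == null) {
            v = store.get(key);
            if (v != null)
                cache.put(key, v);
        }
        return v;
    }

    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        WriteThroughCache c = new WriteThroughCache(store);
        c.put("A", "1");                    // lands in cache and store
        System.out.println(store.get("A")); // prints 1
        store.put("B", "2");                // present only in the store
        System.out.println(c.get("B"));     // prints 2 (loaded on miss)
    }
}
```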
Hi Shawn,
You can use Ignite standalone and you can also use it together with Spark.
Please take a look at this SO question and an article:
https://stackoverflow.com/questions/36036910/apache-spark-vs-apache-ignite
Hi,
Were you able to resolve this issue? If yes, can you share your solution?