[ANNOUNCE] Apache Ignite 2.14.0 Released

2022-10-11 Thread Taras Ledkov
The Apache Ignite Community is pleased to announce the release of
Apache Ignite 2.14.0.

Apache Ignite® is an in-memory computing platform for transactional,
analytical, and streaming workloads delivering in-memory speeds at
a petabyte scale.
https://ignite.apache.org

For the full list of changes, refer to the RELEASE_NOTES, which catalogue
the most significant improvements in this version of the platform.
https://ignite.apache.org/releases/2.14.0/release_notes.html

Download the latest Ignite version from here:
https://ignite.apache.org/download.cgi

Please let us know if you encounter any problems:
https://ignite.apache.org/community/resources.html#ask

--
With best regards,
Taras Ledkov


Re: What does javax.cache.CacheException: Failed to execute map query on remote node mean?

2022-08-03 Thread Taras Ledkov
Hi John and Don,

I suspect the root cause is a data type mismatch between the table schema and
the actual data in the store, or the type of a query parameter.
To narrow down the gap, it would be very helpful if you could provide a small
reproducer (a standalone project or a PR somewhere).

> In my case I'm not even using UUID fields. Also the same code in 2 diff 
> environments (dev vs prod) doesn't cause the issue. I'm lucky enough that it's 
> on dev and prod is ok.
> 
> But that last part might be misleading because in prod I think it happened 
> early on during upgrade and all I did was recreate the sql table.
> 
> So before I do the same on dev... I want to see what the issue is.
> 
> On Tue., Aug. 2, 2022, 6:06 p.m. ,  wrote:
> 
>> I‘m only speculating but this looks very similar to the issue I had last 
>> week and reported to the group here.
>>
>> Caused by: org.h2.message.DbException: Hexadecimal string with odd number of 
>> characters: "5" [90003-197]


--
With best regards,
Taras Ledkov


Re:Cache Exception for specific parameter values

2022-07-29 Thread Taras Ledkov
Hi,

Could you provide the original exception from the map node?
It should be available in the log files of the map node.


Re: Failed to Scan query data by partition index after insert data using DML

2021-05-11 Thread Taras Ledkov

Hi,

Please provide steps to reproduce.
I can't identify the case from the stack trace alone.

On 08.05.2021 3:02, Henric wrote:

Hi,
Thanks for the reply.
I tried to use CACHE_NAME, but I still get the exception below. I have
specified the cache name, and I don't know why I still get this error.
I tried to set WRAP_VALUE to false, but it only works for a single column.
Did I miss something important?

Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException:
SQL_PUBLIC_CITY_5c1c4ecf_745a_4a99_bfbf_fde6de0bc215
at
org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:689)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1757)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
at
org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:796)
at
org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:142)
at
org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:176)
at
org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:62)
at
org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinariesIfNeeded(CacheObjectUtils.java:135)
at
org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinariesIfNeeded(CacheObjectUtils.java:77)
at
org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinariesIfNeeded(GridCacheContext.java:1796)
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryFutureAdapter.onPage(GridCacheQueryFutureAdapter.java:351)
at
org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager.processQueryResponse(GridCacheDistributedQueryManager.java:403)
at
org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager.access$000(GridCacheDistributedQueryManager.java:64)
at
org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager$1.apply(GridCacheDistributedQueryManager.java:94)
at
org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager$1.apply(GridCacheDistributedQueryManager.java:92)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1142)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:591)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$800(GridCacheIoManager.java:109)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1707)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1907)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$5200(GridIoManager.java:241)
at
org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:3916)
at
org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1862)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$5500(GridIoManager.java:241)
at
org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1829)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException:
SQL_PUBLIC_CITY_5c1c4ecf_745a_4a99_bfbf_fde6de0bc215
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at
org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:8900)
at
org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:376)
at
org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:680)
... 27 more



Re: Failed to Scan query data by partition index after insert data using DML

2021-05-07 Thread Taras Ledkov

Hi,

Please take a look at the documentation of the CREATE TABLE [1]

Use:

CREATE TABLE tableName (id LONG PRIMARY KEY, name VARCHAR)
WITH "CACHE_NAME=cacheName"

Please pay attention to the other options; some of them may be useful when
you use the cache API and SQL together.

Also be careful when combining the cache API and SQL:
you can get unexpected behavior in several cases. See [2]


1. https://apacheignite-sql.readme.io/docs/create-table
2. https://youtu.be/0lQy5J5hLJI?t=2051
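For illustration, a minimal sketch (not from this thread; the table name, cache names, and node setup are assumptions) of creating a table with an explicit cache name and then reaching the same data through the cache API:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class CreateTableWithCacheName {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Any existing cache can be used to run DDL.
            IgniteCache<?, ?> cache = ignite.getOrCreateCache("dummy");

            // CACHE_NAME makes the underlying cache accessible by a predictable name.
            cache.query(new SqlFieldsQuery(
                "CREATE TABLE person (id LONG PRIMARY KEY, name VARCHAR) " +
                "WITH \"CACHE_NAME=personCache\"")).getAll();

            // The same data is now reachable through the cache API.
            IgniteCache<Long, Object> personCache = ignite.cache("personCache");
            System.out.println("Size: " + personCache.size());
        }
    }
}
```

This is only a sketch: it assumes a single embedded node and will create real Ignite work directories when run.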

On 07.05.2021 17:10, Henric wrote:

I tried to query data by partition index. When I insert data using the cache API,
I can get the data successfully; when I insert data using DML, I can't.

*I can get data by partition index using cache API*

IgniteCache cache = ignite.getOrCreateCache("cacheName");
cache.put(1, "v1");
ScanQuery sq = new ScanQuery(1); //1 is the id of partition used to store
entry created above
cache.query(sq).getAll();

*I can't get data by partition index when it was inserted by DML*

IgniteCache cache = ignite.getOrCreateCache("tableName");
cache.query(new SqlFieldsQuery("CREATE TABLE tableName (id LONG PRIMARY KEY,
name VARCHAR)")).getAll();
SqlFieldsQuery qry = new SqlFieldsQuery("INSERT INTO tableName (id, name)
value (?,?)");
cache.query(qry.setArgs(11L, "Mary Major")).getAll();

ScanQuery sq = new ScanQuery(11); // 11 is the id of the partition used to store
// the entry created above
cache.query(sq).getAll(); // nothing returned here!
I tried SQL_PUBLIC_TABLENAME as the cache name and got an exception:
java.lang.ClassNotFoundException:
SQL_PUBLIC_TABLENAME_7b146bba_cd7f_452f_8abc

*Q: How can I query data inserted by DML using partition index? Thanks.*
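For readers hitting the BinaryInvalidTypeException above: SQL-created caches store rows as binary objects that have no matching Java class, so a scan usually has to stay in binary form. A hedged sketch (not confirmed by this thread; it assumes the table was created WITH "CACHE_NAME=tableCache" as suggested elsewhere in the thread, and all names are illustrative):

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.query.ScanQuery;

public class ScanSqlCacheByPartition {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // withKeepBinary() avoids deserializing SQL-created value types
            // (which have no Java class, hence the ClassNotFoundException).
            IgniteCache<Object, BinaryObject> cache =
                ignite.cache("tableCache").withKeepBinary();

            // Compute the partition of the key instead of guessing it:
            // the key value is not the partition id.
            int part = ignite.affinity("tableCache").partition(11L);
            ScanQuery<Object, BinaryObject> sq = new ScanQuery<>(part);

            for (Cache.Entry<Object, BinaryObject> e : cache.query(sq).getAll())
                System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```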



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Understanding SQL join performance

2021-04-29 Thread Taras Ledkov

Hi,

Unfortunately, I don't fully understand the root of the problem.
The performance looks linearly dependent on the row count:
10k ~ 0.1s
65k ~ 0.65s
650k ~ 7s

> Is Ignite doing the join and filtering at each data node and then sending
> the 650K total rows to the reducer before aggregation?

Which aggregation do you mean?
Please provide the query plan and data schema for details.

On 24.04.2021 3:24, William.L wrote:

Hi,

I am trying to understand why my colocated join between two tables/caches
is taking so long compared to the individual table filters.

TABLE1

Returns 1 count -- 0.13s

TABLE2

Returns 65000 count -- 0.643s


 JOIN TABLE1 and TABLE2

Returns 650K count -- 7s

Both analysis_input and analysis_output have an index on (cohort_id, user_id,
timestamp). The affinity key is user_id. How do I analyze the performance
further?

Here's the explain which does not tell me much:



Is Ignite doing the join and filtering at each data node and then sending
the 650K total rows to the reducer before aggregation? If so, is it possible
for Ignite to do some aggregation at the data node first and then send
the first-level aggregation results to the reducer?






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: SQL query performance with JOIN and ORDER BY or WHERE

2021-04-13 Thread Taras Ledkov

Hi,

Please try to force the use of an index for the sorted select.
It makes sense when the filter produces a lot of results.

The full table will be scanned via the ordered index, but the result will
not be materialized on the map nodes.

Using the example of the first message, it looks like this:

SELECT JQ._KEY
FROM "JobQueue".JOBQUEUE AS JQ USE INDEX(IDX_JQ_QUEUED)
INNER JOIN "Jobs".JOBS AS J ON JQ.jobid=J._key
WHERE JQ.STATUS = 2
ORDER BY JQ.QUEUED ASC
LIMIT 20

where IDX_JQ_QUEUED is an index created on JOBQUEUE (QUEUED).
I cannot guarantee a performance boost; it is just worth a try.
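For reference, a sketch of the index assumed above (DDL only; the schema and column names follow the thread's example):

```sql
-- Index on the ORDER BY column, so the sorted scan can stream results
-- through the ordered index instead of materializing them on map nodes.
CREATE INDEX IDX_JQ_QUEUED ON "JobQueue".JOBQUEUE (QUEUED);
```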

On 12.04.2021 23:02, Thomas Kramer wrote:


Hi Ilya,

unfortunately this also didn't help improve query performance. Not 
sure what else I can try. Or maybe it is expected? In my opinion it 
shouldn't take that long, as the query without the ORDER BY clause is 
super fast. Since there is an index on the order field, I would expect 
this to be fast.


Btw, I noticed that for some other queries the first call has this 
long execution time, while every following call with the same SQL 
statement returns within half a second. But I guess this is caching 
related and not the issue I see here?


Best Regards,
Thomas.


On 12.04.21 13:15, Ilya Kasnacheev wrote:

Hello!

I think you can try a (QUEUEID, STATUS) index.

Or maybe a (STATUS, QUEUEID), probably makes sense to try both.

Regards,
--
Ilya Kasnacheev


Sat, Apr 10, 2021 at 00:22, <mailto:don.tequ...@gmx.de>>:


The QUEUED field is a BIGINT that contains a timestamp from
System.currentTimeMillis(), so it should be pretty easy to sort,
shouldn't it? It looks like the field STATUS (used in the WHERE
clause) and the field QUEUED (used in the ORDER BY clause) do not
work optimally when used together. Does this make sense? Do I need to
create an index on both together?

I will take a look at UNION and WHERE EXISTS, I‘m not familiar
with these statements.

Thanks!


On 09.04.21 at 17:37, Ilya Kasnacheev wrote:

From: "Ilya Kasnacheev" mailto:ilya.kasnach...@gmail.com>>
Date: 9. April 2021
To: user@ignite.apache.org <mailto:user@ignite.apache.org>
Cc:
Subject: Re: SQL query performance with JOIN and ORDER BY or WHERE
Hello!

ORDER BY will have to sort the whole table.

I think that using an index on QUEUED will be optimal here. What is
the selectivity of this field? If it is boolean, you might as well
use UNION queries.

Have you tried joining JOBS via WHERE EXISTS?

Regards,
-- 
Ilya Kasnacheev




Fri, Apr 9, 2021 at 01:03, DonTequila mailto:don.tequ...@gmx.de>>:

Hi,

I have a SQL performance issue. There are indexes on both
fields that are
used in the ORDER BY clause and the WHERE clause.

The following statement takes about 133941 ms with several
warnings from
IgniteH2Indexing:

SELECT JQ._KEY
FROM "JobQueue".JOBQUEUE AS JQ
INNER JOIN "Jobs".JOBS AS J ON JQ.jobid=J._key
WHERE JQ.STATUS = 2
ORDER BY JQ.QUEUED ASC
LIMIT 20

But when I remove the ORDER BY part or the WHERE part from
the statement it
returns in <10ms.

What may I do wrong?

Thanks,
Thomas.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Slow SQL query when joining on a REPLICATED Cache

2021-03-31 Thread Taras Ledkov

Hi,

Please provide the plans for the specified cases:
- original plan
- without condition
- with subquery (optional)

Also please provide the indexes schemas.

I suspect a suboptimal index may be chosen.
E.g., given the two indexes idx_doc_col1 and idx_doc_doc_id,
idx_doc_col1 may be chosen because there is an EQ condition on COL1.

I guess you can try to create a composite index on col1 and doc_id and 
make the performance better than expected %)

e.g.:

CREATE INDEX IDX_DOCS_DOC_ID_COL1 ON DOCS(DOC_ID, COL1)

On 31.03.2021 17:39, wiesenfe wrote:

Good afternoon,

I am facing a strange performance issue when doing SQL queries on my
cluster.
Here is the scenario (I cannot use real config etc because this source code
is protected):

I have 3 caches (a subset of a STAR schema).

CACHE1 is the fact table: EVENTS. It is a partitioned cache. It has an
affinityKey on USER ID.
CACHE2 is the user table: USERS. It is a partitioned cache. It has an
affinityKey on USER ID as well.
CACHE3 is the document table: DOCS. It is a replicated cache.

I also have the following config:

Every event from the event table has one USER ID and one DOCUMENT ID.
All the columns are indexed.
I run the query with setLocal(true) and setEnforceJoinOrder(true).



I would like to do the following:

1. SELECT *
2. FROM EVENTS

3. INNER JOIN USERS ON EVENTS.USER_ID = USERS.USER_ID
4. INNER JOIN DOCS  ON EVENTS.DOC_ID   = DOCS.DOC_ID

5. WHERE   DOCS.COL1 = 'some filter'
6. AND   USERS.COL2 = 'some other filter'


Here is what I observe:

When I run the query without line 5 (filter on documents), it is instant.
When I run the query with line 5 (both filters), it is 20x slower (even
though those filters are on indexed columns). The EXPLAIN from the logs
indicates lots of scans.

If I run the query as so (syntax is not exact but the idea is there):

SELECT *
FROM (

   SELECT USERS.*, DOCS.*

   FROM EVENTS

INNER JOIN USERS ON EVENTS.USER_ID = USERS.USER_ID
INNER JOIN DOCS  ON EVENTS.DOC_ID   = DOCS.DOC_ID

   WHERE   DOCS.COL1 = 'some filter'

)

WHERE USERS.COL2 = 'some other filter'

I have the expected performances again.


It seems that the index on COL2 is being ignored when doing two joins and a
filter on each table.
What do you think about it?


Thank you very much!
Kind regards
Emmanuel










--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: very fast loading of very big table

2021-02-19 Thread Taras Ledkov

Hi Vladimir,

Did you try to use the SQL command 'COPY FROM ' via thin JDBC?
This command uses 'IgniteDataStreamer' to write data into the cluster and 
parses the CSV on the server node.


PS. AFAIK IgniteDataStreamer is one of the fastest ways to load data.
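A hedged sketch of the COPY approach over the thin JDBC driver (the host, table, columns, and CSV path are illustrative assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BulkLoadCsv {
    public static void main(String[] args) throws Exception {
        // Assumes a thin JDBC endpoint on localhost:10800 and an
        // existing table CITY(ID, NAME).
        try (Connection conn =
                 DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement stmt = conn.createStatement()) {
            // COPY streams the CSV into the cluster via IgniteDataStreamer.
            stmt.executeUpdate(
                "COPY FROM '/data/city.csv' INTO CITY (ID, NAME) FORMAT CSV");
        }
    }
}
```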


Hi Denis,

Data space is 3.7 GB according to the MSSQL table properties

Vladimir

9:47, Feb 19, 2021, Denis Magda :

Hello Vladimir,

Good to hear from you! How much is that in gigabytes?

-
Denis


On Thu, Feb 18, 2021 at 10:06 PM mailto:vtcher...@gmail.com>> wrote:

In Sep 2020 I published a paper about Loading Large Datasets
into Apache Ignite by Using a Key-Value API (English [1] and
Russian [2] versions). The approach described works in
production, but shows unacceptable performance for very large
tables.

The story continues, and yesterday I finished the proof of
concept for very fast loading of a very big table. A
partitioned MSSQL table of about 295 million rows was loaded by
a 4-node Ignite cluster in 3 min 35 sec. Each node
executed its own SQL queries in parallel and then distributed
the loaded values across the other cluster nodes.

That result will probably be of interest to the community.

Regards,
Vladimir Chernyi

[1]

https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api

[2] https://m.habr.com/ru/post/526708/



--
Sent from the Yandex.Mail mobile app


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Ignite Client Node OOM Issue

2020-11-13 Thread Taras Ledkov

Hi,

An SQL query may be a potential cause of the OOM.

Please check that:
1. Simple scan+filter queries (e.g. SELECT * FROM  WHERE ) 
that may produce big results run in lazy mode (SqlFieldsQuery#setLazy);
2. Queries that require non-indexed sorting or group aggregates (where 
lazy mode isn't applied) don't produce big result sets.
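A minimal sketch of enabling lazy mode for a potentially large result set (the cache, table, and filter are illustrative assumptions):

```java
import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class LazyQueryExample {
    static void runLazy(IgniteCache<?, ?> cache) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT * FROM person WHERE age > ?").setArgs(30);

        // Lazy mode streams result pages instead of materializing the
        // whole result set in heap, reducing OOM risk on big scans.
        qry.setLazy(true);

        for (List<?> row : cache.query(qry))
            process(row);
    }

    static void process(List<?> row) { /* consume one row */ }
}
```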


On 06.11.2020 17:07, Ravi Makwana wrote:

Hi,

*1) What load profile do you have? *
Ans: We have 2 clusters, each having 2 nodes. One cluster has 
approx 15 GB of data (replicated) and the second cluster has approx 5 GB 
of data (partitioned) with an eviction policy.

*2) Do you use SQL queries?*
Ans: Yes, We are using.
*3) Is it possible to share your client node configuration?*
Ans: Yes, I have attached below.

Thanks,


On Fri, 6 Nov 2020 at 18:50, Vladimir Pligin <mailto:vova199...@yandex.ru>> wrote:


What load profile do you have? Do you use SQL queries? Is it
possible to
share your client node configuration?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Ignite 2.8.0. Heap mem issue

2020-03-31 Thread Taras Ledkov

Hi,

Thanks a lot for your reproducer.
It was not entirely clear to me, but very useful.

I've reproduced and identified the issue.
A new ticket [1] has been created and will be fixed ASAP.

For now you can try a workaround:
- use constant values in the INSERT command;
- insert several rows in one query,
   e.g.: INSERT INTO  VALUES (), ()


[1]. https://issues.apache.org/jira/browse/IGNITE-12848
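A sketch of the multi-row form of the workaround (the table and values are illustrative):

```sql
-- One INSERT with several constant-value rows, instead of many
-- single-row parameterized INSERTs.
INSERT INTO person (id, name) VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Carol');
```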

On 18.03.2020 17:07, dbutkovic wrote:

Hi Alex,

I did another test and collected all the logs, GC logs, heap memory dump, and
a few screenshots.

All files are in a zip file. The file is too big for upload; please download it
from the Jumbo mail link.

https://jumboiskon.tportal.hr/download/eeab9848-2494-4ab7-a2cb-88766db0fafa

Thanks, Dren



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Ignite memory leaks in 2.8.0

2020-03-24 Thread Taras Ledkov

Hi, Andrey

Hmm. There is ConnectionManager#cleanupConnections to close connections 
of terminated threads
(it runs periodically with a default timeout of 2000 ms).

So, if a detached connection is recycled after use and returned into 
#threadConns, it should be closed after the owner thread is terminated.

Take a look at the test: H2ConnectionLeaksSelfTest#testConnectionLeaks

But as I wrote: the thread-local logic of the connection manager is a mess, 
hard to understand, and promises many troubles.

I think we have to change it.

On 23.03.2020 23:00, Andrey Davydov wrote:
It seems a detached connection NEVER becomes attached to any thread other 
than the one it was born in, because the borrow method always returns the 
object related to the caller thread. I.e. all detached connections born in 
a joined thread are never collectable.


A possible reproduce scenario: start a separate thread; run in this 
thread some logic that creates a detached connection; finish and join the 
thread; remove the link to the thread; repeat.


пн, 23 мар. 2020 г., 15:49 Taras Ledkov <mailto:tled...@gridgain.com>>:


Hi,

Thanks for your investigation.
The root cause is clear. What use case is causing the leak?

I've created an issue to remove the messy ThreadLocal logic from
ConnectionManager. [1]
We've done it in the GG Community Edition and it works OK.

[1]. https://issues.apache.org/jira/browse/IGNITE-12804

On 21.03.2020 22:50, Andrey Davydov wrote:

A simple diagnostic utility I use to detect these problems:

import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.internal.GridComponent;
import org.apache.ignite.internal.IgniteKernal;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class IgniteWeakRefTracker {

    private static final Logger LOGGER =
        LogManager.getLogger(IgniteWeakRefTracker.class);

    private final String clazz;
    private final String testName;
    private final String name;
    private final WeakReference<Ignite> innerRef;
    private final List<WeakReference<GridComponent>> componentRefs =
        new ArrayList<>(128);

    private static final LinkedList<IgniteWeakRefTracker> refs =
        new LinkedList<>();

    private IgniteWeakRefTracker(String testName, Ignite ignite) {
        this.clazz = ignite.getClass().getCanonicalName();
        this.innerRef = new WeakReference<>(ignite);
        this.name = ignite.name();
        this.testName = testName;

        if (ignite instanceof IgniteKernal) {
            IgniteKernal ik = (IgniteKernal) ignite;
            List<GridComponent> components = ik.context().components();
            for (GridComponent c : components) {
                componentRefs.add(new WeakReference<>(c));
            }
        }
    }

    public static void register(String testName, Ignite ignite) {
        refs.add(new IgniteWeakRefTracker(testName, ignite));
    }

    public static void trimCollectedRefs() {

        List<IgniteWeakRefTracker> toRemove = new ArrayList<>();

        for (IgniteWeakRefTracker ref : refs) {
            if (ref.isIgniteCollected()) {
                LOGGER.info("Collected ignite: ignite {} from
test {}", ref.getIgniteName(), ref.getTestName());
                toRemove.add(ref);
                if (ref.igniteComponentsNonCollectedCount() != 0) {
                    throw new IllegalStateException("Non
collected components for collected ignite.");
                }
            } else {
                LOGGER.warn("Leaked ignite: ignite {} from test
{}", ref.getIgniteName(), ref.getTestName());
            }
        }

        refs.removeAll(toRemove);

        LOGGER.info("Leaked ignites count:  {}", refs.size());

    }

    public static int getLeakedSize() {
        return refs.size();
    }

    public boolean isIgniteCollected() {
        return innerRef.get() == null;
    }

    public int igniteComponentsNonCollectedCount() {
        int res = 0;

        for (WeakReference<GridComponent> cr : componentRefs) {
            GridComponent gridComponent = cr.get();
            if (gridComponent != null) {
                LOGGER.warn("Uncollected component: {}",
gridComponent.getClass().getSimpleName());
                res++;
            }
        }

        return res;
    }

    public String getClazz() {
        return clazz;
    }

    public String getTestName() {
        return testName;
    }

    public String getIgniteName() {
        return name;
    }

}


On Fri, Mar 20, 2020 at 11:51 PM Andrey Davydov
mailto:andrey.d

Re: Ignite memory leaks in 2.8.0

2020-03-23 Thread Taras Ledkov
 <- table - class:
java.lang.ThreadLocal$ThreadLocalMap, value:
java.lang.ThreadLocal$ThreadLocalMap$Entry[] #21

  <- threadLocals (thread object) - class:
java.lang.Thread, value: java.lang.ThreadLocal$ThreadLocalMap #2

A reference to IgniteKernal leaks into a ThreadLocal variable, so when we
start/stop many instances of Ignite in the same JVM during testing, we
get many stopped "zombie" Ignites in the ThreadLocal context of the main
test thread, which causes OutOfMemory after some dozens of tests.

Andrey.


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Query timeout

2020-03-23 Thread Taras Ledkov

Hi,

Hm.. It works for me.
Looks at the Ignite test: 
JdbcThinStatementTimeoutSelfTest#testQueryTimeoutRetrival [1].


Please share your simple reproducer or test.

[1]. 
https://github.com/apache/ignite/blob/9f19e0160b1ca43a27908849ad46c65ebd8689f1/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinStatementTimeoutSelfTest.java#L212


On 23.03.2020 14:36, breathem wrote:

I tried jdbc:ignite:thin://192.168.1.138:10800;queryTimeout=1 and
jdbc:ignite:thin://192.168.1.138:10800?queryTimeout=1, both.
The query is still executed for more than 1 sec without an exception.
Is this parameter supported in 2.8.0?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Query timeout

2020-03-23 Thread Taras Ledkov

Hi,

queryTimeout=N

is in seconds, in accordance with the JDBC specification: 
java.sql.Statement#setQueryTimeout
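A hedged sketch of both ways to bound query execution time over thin JDBC (host/port and SQL are illustrative assumptions):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class QueryTimeoutExample {
    public static void main(String[] args) throws Exception {
        // 1) URL property: applies to statements on this connection (seconds).
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:thin://127.0.0.1:10800?queryTimeout=1");
             Statement stmt = conn.createStatement()) {
            // 2) Per-statement, standard JDBC API (also seconds).
            stmt.setQueryTimeout(1);
            stmt.execute("SELECT COUNT(*) FROM person");
        }
    }
}
```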


On 23.03.2020 12:47, breathem wrote:

Hi all,
Is there any way to set query timeout parameter in JDBC url?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: java.lang.IllegalMonitorStateException: attempt to unlock read lock, not locked by current thread

2020-03-18 Thread Taras Ledkov

Hi,

Looks like the issue is related to 
https://issues.apache.org/jira/browse/IGNITE-12800.

I have not fixed it in Ignite yet.

Hmm. I guess the error happens when you select from a REPLICATED cache. 
Am I right?
For now I recommend not using 'lazy' mode via the JDBC client for local 
queries or for queries over replicated caches

(because such a query is transformed into a local query).

I suppose we'll fix it in 2.8.1.

On 17.03.2020 4:32, yangjiajun wrote:

Hello. Thanks for your reply.

Actually I am using the HikariCP connection pool. I think my threads can't use
the same connection at the same time; the threads can only reuse the same
connection at different times. My code works well in Ignite 2.7, and it also
works when I set lazy=false in Ignite 2.8.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: ERROR: h2 Unsupported connection setting "MULTI_THREADED"

2020-02-20 Thread Taras Ledkov

Hi,

Ignite uses H2 version 1.4.197 (see [1]).


[1]. https://github.com/apache/ignite/blob/master/parent/pom.xml#L74

On 20.02.2020 4:36, Andrew Munn wrote:
I'm building/running my client app with Gradle and I'm seeing this 
error. Am I overriding the Ignite H2 fork with the real H2 or 
something? It appears I have the latest H2:


[.gradle]$ find ./ -name *h2*
./caches/modules-2/metadata-2.82/descriptors/com.h2database
./caches/modules-2/metadata-2.82/descriptors/com.h2database/h2
./caches/modules-2/files-2.1/com.h2database
./caches/modules-2/files-2.1/com.h2database/h2
./caches/modules-2/files-2.1/com.h2database/h2/1.4.200/6178ecda6e9fea8739a3708729efbffd88be43e3/h2-1.4.200.pom
./caches/modules-2/files-2.1/com.h2database/h2/1.4.200/f7533fe7cb8e99c87a43d325a77b4b678ad9031a/h2-1.4.200.jar



2020-02-19 19:59:28.229 ERROR 102356 --- [           main] 
o.a.i.internal.IgniteKernal%dev-cluster  : Exception during start 
processors, node will be stopped and close connections
org.apache.ignite.internal.processors.query.IgniteSQLException: Failed 
to initialize system DB connection: 
jdbc:h2:mem:b52dce26-ba01-4051-9130-e087e19fab4f;LOCK_MODE=3;MULTI_THREADED=1;DB_CLOSE_ON_EXIT=FALSE;DEFAULT_LOCK_TIMEOUT=1;FUNCTIONS_IN_SCHEMA=true;OPTIMIZE_REUSE_RESULTS=0;QUERY_CACHE_SIZE=0;MAX_OPERATION_MEMORY=0;BATCH_JOINS=1;ROW_FACTORY="org.apache.ignite.internal.processors.query.h2.opt.GridH2PlainRowFactory";DEFAULT_TABLE_ENGINE=org.apache.ignite.internal.processors.query.h2.opt.GridH2DefaultTableEngine
        at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.systemConnection(IgniteH2Indexing.java:434) 
~[ignite-indexing-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSystemStatement(IgniteH2Indexing.java:699) 
~[ignite-indexing-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.createSchema0(IgniteH2Indexing.java:646) 
~[ignite-indexing-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.start(IgniteH2Indexing.java:3257) 
~[ignite-indexing-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.start(GridQueryProcessor.java:248) 
~[ignite-core-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1700) 
~[ignite-core-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1017) 
~[ignite-core-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038) 
~[ignite-core-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730) 
~[ignite-core-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158) 
~[ignite-core-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1076) 
~[ignite-core-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:962) 
~[ignite-core-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:861) 
~[ignite-core-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:731) 
~[ignite-core-2.7.6.jar:2.7.6]
        at 
org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:700) 
~[ignite-core-2.7.6.jar:2.7.6]
        at org.apache.ignite.Ignition.start(Ignition.java:348) 
~[ignite-core-2.7.6.jar:2.7.6]
        at 
com.centiva.ig.etl.loader.LoaderApplication.main(LoaderApplication.java:14) 
~[main/:na]
Caused by: org.h2.jdbc.JdbcSQLNonTransientConnectionException: 
Unsupported connection setting "MULTI_THREADED" [90113-200]
        at 
org.h2.message.DbException.getJdbcSQLException(DbException.java:622) 
~[h2-1.4.200.jar:1.4.200]


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: JDBC Thin Client does not return

2020-02-14 Thread Taras Ledkov

Hi,

I have not seen a thread dump from the node with order 3: 
'CAA9E3DB-266B-4040-B059-19BD281D5821'

(corresponding to the log ignite-caa9e3db.0.log), nor from the other nodes.
It would help a lot if the thread dumps were created while the query hangs.
Server node + client app thread dumps are the best info.

But JDBC was connected to this node (judging by the nodes' logs).

The described case is not clear to me.

But I see a lot of strange log messages at the node with order 3:
[...][WARNING][grid-nio-worker-client-listener-0-#30][ClientListenerProcessor] 
Closing NIO session because of unhandled exception [cls=class 
o.a.i.i.util.nio.GridNioException, msg=Operation timed out]


[...][SEVERE][grid-nio-worker-client-listener-0-#30][ClientListenerProcessor] 
Closing NIO session because of unhandled exception.

class org.apache.ignite.IgniteCheckedException: Invalid handshake message

[...][WARNING][grid-nio-worker-tcp-comm-2-#26][TcpCommunicationSpi] 
Unknown connection detected (is some other software connecting to this 
Ignite port?) [rmtAddr=/x.x.x.x]


It looks like:
- some network clients/services are configured incorrectly, or
- someone tried to hack you. Didn't they? %)

On 12.02.2020 8:31, pg31 wrote:

Hi

Thank You for the reply Denis.

Please find the logs attached. The query was still running
Create_IDX_Stuck.zip
<http://apache-ignite-users.70518.x6.nabble.com/file/t2770/Create_IDX_Stuck.zip>



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Ignite performance - sudden drop after upgrade from 2.3.0

2019-10-02 Thread Taras Ledkov

Hi,

The root cause of the performance drop looks like the change of the default 
number of atomic backups [1] in 2.4 and later.
Please try to use IgniteAtomicSequence instead of IgniteAtomicLong 
to generate IDs.


[1]. org.apache.ignite.configuration.AtomicConfiguration#DFLT_BACKUPS
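The difference matters because every IgniteAtomicLong increment is a cluster-wide update (and, with backups enabled, hits the backup nodes too), while IgniteAtomicSequence reserves a whole range of IDs per cluster round-trip and hands them out locally. A plain-Java sketch of that range-reservation idea; this is not the actual Ignite implementation, and the `GlobalCounter` stand-in for the cluster-wide counter is hypothetical:

```java
import java.util.concurrent.atomic.AtomicLong;

/** Stand-in for a cluster-wide counter: each reserve() models one network round-trip. */
class GlobalCounter {
    private final AtomicLong value = new AtomicLong();
    long roundTrips;

    long reserve(long count) {          // returns the start of the reserved range
        roundTrips++;
        return value.getAndAdd(count);
    }
}

/** Sequence that reserves IDs in batches, like IgniteAtomicSequence with a reserve size. */
class BatchingSequence {
    private final GlobalCounter global;
    private final long reserveSize;
    private long next, limit;

    BatchingSequence(GlobalCounter global, long reserveSize) {
        this.global = global;
        this.reserveSize = reserveSize;
    }

    synchronized long incrementAndGet() {
        if (next == limit) {            // local range exhausted: one round-trip
            next = global.reserve(reserveSize);
            limit = next + reserveSize;
        }
        return next++;
    }
}

public class SequenceDemo {
    public static void main(String[] args) {
        GlobalCounter global = new GlobalCounter();
        BatchingSequence seq = new BatchingSequence(global, 1000);

        for (int i = 0; i < 10_000; i++)
            seq.incrementAndGet();

        // 10,000 IDs cost only 10 "round-trips" instead of 10,000.
        System.out.println(global.roundTrips); // prints 10
    }
}
```

This is why the sequence is largely insensitive to the backup count: the per-ID cost is local, and only the occasional range reservation pays the replication price.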

18.09.2019 11:37, Oleg Yurkivskyy (Luxoft) writes:


Hi

Does anyone have any idea why there is such a drop in performance 
between ignite versions?


_

This communication is intended only for the addressee(s) and may 
contain confidential information. We do not waive any confidentiality 
by misdelivery. If you receive this communication in error, any use, 
dissemination, printing or copying is strictly prohibited; please 
destroy all electronic and paper copies and notify the sender immediately.



--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Get all cache entries

2019-09-16 Thread Taras Ledkov

Hi,


Please take a look at IgniteCache#query.
You can use the ScanQuery example [1].

[1]. 
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheQueryExample.java#L121



16.09.2019 18:04, Nikita Kuzin writes:

Hello!

What is more preferable way to get all elements from cache? (Something 
like IgniteStreamer, but other direction)


Thank you

_
Best regards, Nikita Kuzin
Lead Software Developer

International IT Distribution

e-mail: nku...@iitdgroup.ru <mailto:nku...@iitdgroup.ru>
tel.: 84995021375 ext. 320
mobile: 79260948887
115114, Moscow, Derbenevskaya st., 20-27


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: BLOB with JDBC Driver

2019-03-06 Thread Taras Ledkov

Hi,

But the SQL type BLOB isn't supported by Ignite.
Please use the BINARY SQL type instead.
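A sketch of the workaround; the table and column names here are made up for illustration:

```sql
-- BLOB is not accepted by Ignite DDL; declare the column as BINARY instead:
CREATE TABLE files (id LONG PRIMARY KEY, content BINARY);
```

The value is then written as a plain byte array, e.g. with the standard JDBC PreparedStatement#setBytes call over the thin driver.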

06.03.2019 18:40, Taras Ledkov writes:

Hi,

JDBC Blob is supported by the JDBC v2 driver (the thick driver based on 
the Ignite client node).

The thin JDBC driver doesn't support Blob yet.

06.03.2019 13:50, KR Kumar writes:
Hi - I trying out JDBC driver with ignite SQL tables. How do i insert 
a blob

into my cache table through jdbc?

Thanx and Regards,
KR Kumar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: BLOB with JDBC Driver

2019-03-06 Thread Taras Ledkov

Hi,

JDBC Blob is supported by the JDBC v2 driver (the thick driver based on 
the Ignite client node).

The thin JDBC driver doesn't support Blob yet.

06.03.2019 13:50, KR Kumar writes:

Hi - I trying out JDBC driver with ignite SQL tables. How do i insert a blob
into my cache table through jdbc?

Thanx and Regards,
KR Kumar



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Data streamer has been closed.

2019-02-20 Thread Taras Ledkov

Hi,

Workaround: use ordered streaming:
SET STREAMING ON ORDERED

There is a bug on the Ignite server in non-ordered mode.
The fix will be in master soon.
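A minimal thin-JDBC session sketch of the workaround; the table and values are illustrative:

```sql
-- Switch the connection into ordered streaming mode:
SET STREAMING ON ORDERED;

-- Bulk inserts on this connection are now routed through the data streamer:
INSERT INTO person (id, name) VALUES (1, 'a');
INSERT INTO person (id, name) VALUES (2, 'b');

-- Turning streaming off flushes any remaining buffered rows and closes the streamer:
SET STREAMING OFF;
```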

20.01.2019 14:11, ilya.kasnacheev writes:

Hello!

I have filed an issue https://issues.apache.org/jira/browse/IGNITE-10991

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: SQLFieldsQuery timeout is not working

2019-01-15 Thread Taras Ledkov
Looks like your query scans more than 4K rows to produce the 200-row result 
set.


15.01.2019 17:02, garima.j writes:

Hi,

The number of rows in the table are 300k. In the SQL query, I specify the
limit as 10.

If I increase the limit to 200, it throws QueryCancelledException and times
out. Is timeout dependent on the resultset size as well?

Also, is there any way through which I can customize H2 timeout scanned row
count (instead of 4k rows).



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: SQLFieldsQuery timeout is not working

2019-01-15 Thread Taras Ledkov

Hi,

The timeout doesn't depend on the result set size, but it is only checked 
once per 4K scanned rows.


15.01.2019 17:02, garima.j writes:

Hi,

The number of rows in the table are 300k. In the SQL query, I specify the
limit as 10.

If I increase the limit to 200, it throws QueryCancelledException and times
out. Is timeout dependent on the resultset size as well?

Also, is there any way through which I can customize H2 timeout scanned row
count (instead of 4k rows).



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: SQLFieldsQuery timeout is not working

2019-01-15 Thread Taras Ledkov

Hi,

How many rows does the result set contain? How many rows are scanned to 
produce the result?

Ignite uses H2 as the SQL frontend.
H2 checks the timeout after every 4K scanned rows.
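The cadence matters: with a 4K-row check interval, a query that finishes before the first check never observes the timeout at all, however small it is. A plain-Java sketch of that check-every-N-rows pattern (the simulated per-row cost is illustrative, not H2's actual code):

```java
public class TimeoutCheckSketch {
    static final int CHECK_INTERVAL = 4096; // check the deadline once per this many rows

    /** Scans `totalRows`, checking the deadline only every CHECK_INTERVAL rows. */
    static long rowsScannedBeforeCancel(int totalRows, long timeoutMillis, long millisPerRow) {
        long elapsed = 0;
        for (int row = 1; row <= totalRows; row++) {
            elapsed += millisPerRow;          // simulated work per row
            if (row % CHECK_INTERVAL == 0 && elapsed > timeoutMillis)
                return row;                   // query cancelled at this check
        }
        return totalRows;                     // finished before any check fired
    }

    public static void main(String[] args) {
        // 5 ms timeout, 1 ms per row: the query is "late" after 5 rows,
        // but the first check only happens at row 4096.
        System.out.println(rowsScannedBeforeCancel(3000, 5, 1));   // 3000: never checked
        System.out.println(rowsScannedBeforeCancel(10_000, 5, 1)); // 4096: cancelled at first check
    }
}
```

This matches the behaviour reported in the thread: a 10-row LIMIT finishing under the scan threshold returns normally in 168 ms despite a 5 ms timeout, while a larger scan crosses a check point and throws QueryCancelledException.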

15.01.2019 14:18, garima.j writes:

Hello,

I'm using the below code to execute a SQL fields query :

SqlFieldsQuery qry = new
SqlFieldsQuery(jfsIgniteSQLFilter.getSQLQuery()).setTimeout(timeout,TimeUnit.MILLISECONDS);
List listFromCache = cache.query(qry).getAll();

The query doesn't timeout at all. My timeout is 5 milliseconds and the data
is retrieved in 168 ms without timing out.

Please let me know what am I missing.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Sample configuration for one cache [name=KSDATA] and use it with JDBC (client do not find schema)

2018-12-06 Thread Taras Ledkov
ib.database.connectors.jdbc.CxObjJDBC.ExecuteDML(CxObjJDBC.java:238)
        at com.gfi.rt.lib.database.connectors.jdbc.CxObjJDBC.TableIndexCreate(CxObjJDBC.java:647)
        at com.gfi.rt.lib.database.connectors.CxTable.addIndex(CxTable.java:81)
        at com.gfi.rt.lib.database.connectors.CxTable.addIndexes(CxTable.java:96)
        at com.gfi.rt.lib.database.connectors.CxTable.addIndexes(CxTable.java:104)
        at com.gfi.rt.bin.database.dbbench.BenchmarkMain.launch(BenchmarkMain.java:270)
        at com.gfi.rt.bin.database.dbbench.BenchmarkMain.<init>(BenchmarkMain.java:100)
        at com.gfi.rt.bin.database.dbbench.BenchmarkMain.main(BenchmarkMain.java:42)


...

And so on ☹

Any full running java JDBC example will be welcome, or best my XML 
correction 😊


Cordialement,

—
NOTE: n/a
—
Gfi Informatique
Philippe Cerou
Architect & System Expert
GFI Production / Toulouse
philippe.cerou @gfi.fr

—

1 Rond-point du Général Eisenhower, 31400 Toulouse

Tél. : +33 (0)5.62.85.11.55
Mob. : +33 (0)6.03.56.48.62
*www.gfi.world* <http://www.gfi.world/>

—

Facebook <https://www.facebook.com/gfiinformatique>Twitter 
<https://twitter.com/gfiinformatique>Instagram 
<https://www.instagram.com/gfiinformatique/>LinkedIn 
<https://www.linkedin.com/company/gfi-informatique>YouTube 
<https://www.youtube.com/user/GFIinformatique>

—


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: custom plugin question - jdbc client

2018-08-28 Thread Taras Ledkov

Hi,

You are absolutely right.
In case you use the thin JDBC driver (recommended):
1. You have to define an SSLContext factory for the client connector on the 
Ignite node [1].

2. And set up an SSL socket factory for the Ignite thin JDBC driver [2].

If you are going to use the JDBCv2 driver, please keep in mind that it starts 
an Ignite client node to connect to the Ignite cluster; please read the 
documentation [3].


[1] 
org.apache.ignite.configuration.ClientConnectorConfiguration#setSslContextFactory 
(https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/ClientConnectorConfiguration.html)
[2] See `sslFactory` property: 
https://apacheignite-sql.readme.io/docs/jdbc-driver#jdbc-thin-driver

[3] https://apacheignite-sql.readme.io/docs/jdbc-client-driver
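Putting [1] and [2] together, a minimal server-side sketch; the keystore paths and passwords are placeholders, and the exact property set may differ between Ignite versions:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="clientConnectorConfiguration">
        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
            <!-- Enable SSL for thin clients (JDBC/ODBC) and point at a context factory. -->
            <property name="sslEnabled" value="true"/>
            <property name="sslContextFactory">
                <bean class="org.apache.ignite.ssl.SslContextFactory">
                    <property name="keyStoreFilePath" value="/path/to/server.jks"/>
                    <property name="keyStorePassword" value="changeit"/>
                    <property name="trustStoreFilePath" value="/path/to/trust.jks"/>
                    <property name="trustStorePassword" value="changeit"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```

On the client side, the `sslFactory` connection property described in [2] names the factory class the thin JDBC driver should use to create its SSL sockets.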

On 28.08.2018 12:39, wt wrote:

i have finally managed to get a plugin working for a white list on ignite
2.6. I am now going to start working on an authorization for users
connecting to the cluster.

How can i get clients pass through a kerberos ticket to the cluster? I think
i need to override the authorization context class but that would mean that
i need to do it both on the server and the clients for odbc\jdbc etc.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Basic Subquery Failure

2018-03-23 Thread Taras Ledkov

Hi,

Thanks a lot.
The problem has already been fixed. Take a look: 
https://issues.apache.org/jira/browse/IGNITE-7879



On 21.03.2018 3:12, besquared wrote:

Sorry about that I put them as raw text but it looks like it didn't go
through.. I'll just try without markup this time

CREATE TABLE people (
id LONG PRIMARY KEY,
name VARCHAR
);

SELECT people.name,
COUNT(1) AS rowcount
   FROM people
   JOIN ( SELECT DISTINCT name FROM people ) sub
  ON people.name = sub.name
   GROUP BY people.name

! Causing: javax.cache.CacheException: Failed to run map query
remotely.Failed to execute map query on the node:
c738645b-7f7a-4e39-a283-e49cb990e93d, class
org.apache.ignite.IgniteCheckedException:Failed to execute SQL query. Column
"__Z0.NAME" must be in the GROUP BY list; SQL statement:
! SELECT
! COUNT(1) __C0_0,
! __Z0.NAME __C0_1
! FROM PUBLIC.PEOPLE __Z0 [90016-195]

EXPLAIN plan is

"SELECT
 COUNT(1) AS __C0_0,
 __Z0.NAME AS __C0_1
FROM PUBLIC.PEOPLE __Z0
 /* PUBLIC.PEOPLE.__SCAN_ */"
"SELECT DISTINCT
 __Z1.NAME AS __C1_0
FROM PUBLIC.PEOPLE __Z1
 /* PUBLIC.PEOPLE.__SCAN_ */
ORDER BY 1"
"SELECT
 __Z0__NAME AS NAME,
 __X0__ROWCOUNT AS ROWCOUNT
FROM (
 SELECT
 CAST(SUM(__C0_0) AS BIGINT) AS __X0__ROWCOUNT,
 __C0_1 AS __Z0__NAME
 FROM PUBLIC.__T0
 ORDER BY 2
) __Z3
 /* SELECT
 CAST(SUM(__C0_0) AS BIGINT) AS __X0__ROWCOUNT,
 __C0_1 AS __Z0__NAME
 FROM PUBLIC.__T0
 /++ PUBLIC."merge_scan" ++/
 ORDER BY 2
  */
INNER JOIN (
 SELECT DISTINCT
 __C1_0 AS NAME
 FROM PUBLIC.__T1
 ORDER BY 1
) SUB__Z2
 /* SELECT DISTINCT
 __C1_0 AS NAME
 FROM PUBLIC.__T1
 /++ PUBLIC."merge_sorted": __C1_0 IS ?1 ++/
 WHERE __C1_0 IS ?1
 ORDER BY 1
 /++ index sorted ++/: NAME = __Z0__NAME
  */
 ON 1=1
WHERE __Z0__NAME = SUB__Z2.NAME
GROUP BY __Z0__NAME"




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Index not getting created

2017-12-04 Thread Taras Ledkov

Hi,

I only see the "cache not found" error in server.log. Something is wrong.

Is it possible to provide a test as a standalone Java class or a standalone
GitHub project so that I can run it and reproduce the problem?

On 30.11.2017 20:34, Naveen Kumar wrote:

Hi


Here is the node logs captured with -v option.


[22:56:41,291][SEVERE][client-connector-#618%IgnitePOC%][JdbcRequestHandler]
Failed to execute SQL query [reqId=0, req=JdbcQueryExecuteRequest
[schemaName=PUBLIC, pageSize=1024, maxRows=0, sqlQry=CREATE INDEX
idx_customer_accountId ON "Customer".CUSTOMER (ACCOUNT_ID_LIST),
args=[], stmtType=ANY_STATEMENT_TYPE]]

class org.apache.ignite.internal.processors.query.IgniteSQLException:
Cache doesn't exist: Customer

 at 
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.convert(DdlStatementsProcessor.java:343)

 at 
org.apache.ignite.internal.processors.query.h2.ddl.DdlStatementsProcessor.runDdlStatement(DdlStatementsProcessor.java:287)

 at 
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1466)

 at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1966)

 at 
org.apache.ignite.internal.processors.query.GridQueryProcessor$6.applyx(GridQueryProcessor.java:1962)

 at 
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)

 at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2445)

 at 
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFieldsNoCache(GridQueryProcessor.java:1971)

 at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeQuery(JdbcRequestHandler.java:305)

 at 
org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:164)

 at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:137)

 at 
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:39)

 at 
org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)

 at 
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)

 at 
org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)

 at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)

 at 
org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)

 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)

 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)

 at java.lang.Thread.run(Thread.java:748)



Select query works fine

0: jdbc:ignite:thin://127.0.0.1> select ACCOUNT_ID_LIST from
"Customer".CUSTOMER where ACCOUNT_ID_LIST ='A10001';

++

|ACCOUNT_ID_LIST |

++

| A10001 |

++

1 row selected (1.342 seconds)


Create index query failed with the below error

0: jdbc:ignite:thin://127.0.0.1> CREATE INDEX idx_customer_accountId
ON "Customer".CUSTOMER (ACCOUNT_ID_LIST);

Error: Cache doesn't exist: Customer (state=5,code=0)

java.sql.SQLException: Cache doesn't exist: Customer

 at 
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)

 at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)

 at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)

 at sqlline.Commands.execute(Commands.java:823)

 at sqlline.Commands.sql(Commands.java:733)

 at sqlline.SqlLine.dispatch(SqlLine.java:795)

 at sqlline.SqlLine.begin(SqlLine.java:668)

 at sqlline.SqlLine.start(SqlLine.java:373)

 at sqlline.SqlLine.main(SqlLine.java:265)


selectc query works fine even after issuing the create index query
which is failed

0: jdbc:ignite:thin://127.0.0.1> select ACCOUNT_ID_LIST from
"Customer".CUSTOMER where ACCOUNT_ID_LIST ='A10001';

++

|ACCOUNT_ID_LIST |

++

| A10001 |

++

1 row selected (1.641 seconds)

0: jdbc:ignite:thin://127.0.0.1>

On Thu, Nov 30, 2017 at 9:04 PM, Taras Ledkov  wrote:

Hi,

I cannot reproduce the issue with described steps.
Please check that the cache wasn't destroyed on

Re: Index not getting created

2017-11-30 Thread Taras Ledkov

Hi,

I cannot reproduce the issue with the described steps.
Please check that the cache wasn't destroyed on the server,

i.e. please execute the SELECT query again after the failed CREATE INDEX.


On 30.11.2017 11:45, Naveen wrote:

Has anyone got a chance to look into into this issue where I am trying to
create an index, but its throwing an error saying cache does not exist

0: jdbc:ignite:thin://127.0.0.1>  select ACCOUNT_ID_LIST from
"Customer".CUSTOMER where ACCOUNT_ID_LIST ='A10001';
++
|ACCOUNT_ID_LIST |
++
| A10001 |
++
1 row selected (2.078 seconds)

**0: jdbc:ignite:thin://127.0.0.1> CREATE INDEX idx_customer_accountId ON
"Customer".CUSTOMER (ACCOUNT_ID_LIST);*
*Error: Cache doesn't exist: Customer (state=5,code=0)
java.sql.SQLException: Cache doesn't exist: Customer
 at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
 at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
 at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
 at sqlline.Commands.execute(Commands.java:823)
 at sqlline.Commands.sql(Commands.java:733)
 at sqlline.SqlLine.dispatch(SqlLine.java:795)
 at sqlline.SqlLine.begin(SqlLine.java:668)
 at sqlline.SqlLine.start(SqlLine.java:373)
 at sqlline.SqlLine.main(SqlLine.java:265)
0: jdbc:ignite:thin://127.0.0.1>




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: JdbcQueryExecuteResult cast error

2017-11-27 Thread Taras Ledkov

Hi,

Another simple check:
How many threads do you use for update & select? In case your application 
is multi-threaded, be sure that a separate JDBC Connection is used for each 
thread. The Ignite JDBC API is not thread-safe.


Otherwise, please share the smallest reproducer / steps to reproduce.


On 27.11.2017 18:37, Denis Mekhanikov wrote:
Could you attach queries, that you are trying to execute and full 
error message?


Also make sure, that JDBC driver version is also 2.3

Denis

сб, 25 нояб. 2017 г. в 12:39, Lucky <mailto:wanxing...@163.com>>:


Hi
    The ignite version is 2.3
When I update something by cache.query(new SQLFieldsQuery(sql)),
and then I execute query to get new records with jdbc thin mode,
It's got an error:
Caused by: java.lang.ClassCastException:
org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryExecuteResult
cannot be cast to
org.apache.ignite.internal.processors.odbc.jdbc.JdbcQueryMetadataResult
at

org.apache.ignite.internal.jdbc.thin.JdbcThinResultSet.meta(JdbcThinResultSet.java:1877)
at

org.apache.ignite.internal.jdbc.thin.JdbcThinResultSet.getMetaData(JdbcThinResultSet.java:714)



What's  suggestion?
    Thanks.




--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Failed to run reduce query locally

2017-08-28 Thread Taras Ledkov
ng.java:1278)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer(IgniteH2Indexing.java:1253)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:813)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1493)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:94)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$9.iterator(IgniteH2Indexing.java:1534)
~[ignite-indexing-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:94)
~[ignite-core-2.0.0.jar:2.0.0]
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:113)
~[ignite-core-2.0.0.jar:2.0.0]
... 181 more


It's very strange the query fails only when exactly one record is returned
in result set



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Failed-to-run-reduce-query-locally-tp16403.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Passing parameters to IgniteBiPredicate

2017-07-06 Thread Taras Ledkov

Hi,

Sure, please take a look at the example:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheQueryExample.java#L155
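The key point in that example is that the lambda simply captures a local variable, so the "parameter" travels with the serialized predicate. The same closure shape, sketched here with a plain java.util.function.BiPredicate over a local map (the Person class is a minimal stand-in for illustration, not Ignite's example class):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiPredicate;

public class ScanPredicateSketch {
    static class Person {
        final String name;
        final double salary;
        Person(String name, double salary) { this.name = name; this.salary = salary; }
        double getSalary() { return salary; }
    }

    public static void main(String[] args) {
        Map<Integer, Person> cache = new HashMap<>();
        cache.put(1, new Person("alice", 500));
        cache.put(2, new Person("bob", 2000));

        // The "parameter" is just a captured local variable. It must be
        // effectively final, and in Ignite the lambda must also be
        // serializable so it can be shipped to the remote nodes.
        double sal = 1000;

        // Same closure shape as: new ScanQuery<>((k, p) -> p.getSalary() > sal)
        BiPredicate<Integer, Person> filter = (k, p) -> p.getSalary() > sal;

        cache.forEach((k, p) -> {
            if (filter.test(k, p))
                System.out.println(p.name); // prints "bob"
        });
    }
}
```

When a ScanQuery "fails silently", a common cause is that the captured object or the enclosing class is not serializable, or the class is not available on the server nodes; a named class implementing IgniteBiPredicate with a constructor parameter must satisfy the same two requirements.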


On 06.07.2017 11:13, Guilherme Melo wrote:
Hello, I am having problems submitting a query to the service from a 
client. Using the example from the documentation:


IgniteCache cache = ignite.cache("mycache");
int sal = 100;
// Find only persons earning more than 1,000.
try (QueryCursor cursor = cache.query(new ScanQuery((k, p) -> p.getSalary() > sal))) {
    for (Person p : cursor)
        System.out.println(p.toString());
}


It fails silently. I have also tried creating a class that implements 
the IgniteBiPredicate that takes the value as a parameter on the 
constructor.

Has anyone had experience pushing a scan query with parameters?
Thanks !


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Fetch column names in sql query results

2017-06-27 Thread Taras Ledkov

Hi,

Let's clarify:
do you use IgniteCache#query(SqlFieldsQuery), which returns a 
FieldsQueryCursor, and does FieldsQueryCursor#getColumnsCount() return zero?


On 27.06.2017 9:03, Megha Mittal wrote:

Hi,

I am using Ignite-2.0.0 and facing problem while using sql query. I need to
fetch column names while querying but only a list is returned using
SqlFieldsQuery with no column names. Also, I cannot use SqlQuery as I need
to add some aggregate columns, case clauses etc in select clause as well .
Please tell me is it possible to fetch column names while querying Ignite.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Fetch-column-names-in-sql-query-results-tp14089.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Hadoop + IGFS + Hive+Tez

2017-06-23 Thread Taras Ledkov

Hi,

I don't understand how you configured the Hadoop job and where the Ignite 
instances are started (on all Hadoop hosts?).

Shared memory is the default and preferred type of connection to Ignite.
In this case the Ignite nodes must be started on the Hadoop nodes.

If you started Ignite on other hosts, the IGFS URL must be specified with 
the host/port.


Please take a look:
https://apacheignite.readme.io/v1.2/docs/file-system#section-file-system-uri


On 21.06.2017 18:28, Mimmo Celano wrote:

Hi, thanks for reply.

Ignite configuration is a default configuration in 
config/hadoop/default-configuration.xml i don't change anything. 
Hadoop configuration is a cluster o 3 node, it work properly withouth 
ignite. The problem is in a connection between the client and the 2 
server. I don't change anything in ignite file configuration, the only 
things i do is the code who i posted.
I agree with you that is a configuration problem but i don't know how 
to configure it. The 2 server node are slave in hadoop configuration, 
but i don't think it's a problem. I don't find another ignite logs 
other that posted... I don't use tez. You think that i use another 
configuration for ignite instead the default for hadoop? I want just 
save all word of wordcount processing in a grid cache between 2 node 
server. There's no log of the task because the application crash 
before task..


The node are connected via lan. Thanks

2017-06-21 16:59 GMT+02:00 Taras Ledkov <mailto:tled...@gridgain.com>>:


Hi,

Please provide:
- ignite configuration;
- ignite logs;
- hadoop configurations;
- logs of the task.

I don't have tez experience but the cause of the problem looks
like Ignite is not configured properly.


On 21.06.2017 8:39, ishan-jain wrote:

Hi all, I have configured hive to run over IGFS for hadoop file system. I am
trying to install tez on it but am not able to configure tez-site.xml and
bashrc file properly.
I have downloaded a binary tez bin to my system (0.8.4)
I Put the tez tar ball in hdfs dire /user/tez
I pointed my tez.lib.uri to this path
I have configured mapred-site.xml to use yarn-tez
I have hive already configured to run over igfs.
But when i execute the mapred task it says
Failed to connect to shared memory endpoint .
I am using TCP as ipc endpoint configuration and have enabled ipc endpoint
property.



--
View this message in 
context:http://apache-ignite-users.70518.x6.nabble.com/Hadoop-IGFS-Hive-Tez-tp14012.html

<http://apache-ignite-users.70518.x6.nabble.com/Hadoop-IGFS-Hive-Tez-tp14012.html>Sent
from the Apache Ignite Users mailing list archive at Nabble.com.


-- 
Taras Ledkov

Mail-To:tled...@gridgain.com <mailto:tled...@gridgain.com>




--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Hadoop + IGFS + Hive+Tez

2017-06-21 Thread Taras Ledkov

Hi,

Please provide:
- ignite configuration;
- ignite logs;
- hadoop configurations;
- logs of the task.

I don't have Tez experience, but the cause of the problem looks like Ignite 
not being configured properly.



On 21.06.2017 8:39, ishan-jain wrote:

Hi all, I have configured hive to run over IGFS for hadoop file system. I am
trying to install tez on it but am not able to configure tez-site.xml and
bashrc file properly.
I have downloaded a binary tez bin to my system (0.8.4)
I Put the tez tar ball in hdfs dire /user/tez
I pointed my tez.lib.uri to this path
I have configured mapred-site.xml to use yarn-tez
I have hive already configured to run over igfs.
But when i execute the mapred task it says
Failed to connect to shared memory endpoint .
I am using TCP as ipc endpoint configuration and have enabled ipc endpoint
property.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Hadoop-IGFS-Hive-Tez-tp14012.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Affinity Issue(Collocate compute) in Ignite JDBC Query

2017-06-14 Thread Taras Ledkov

Hi,

I think you mean queries with a condition on the affinity columns.
In this case, see:
https://issues.apache.org/jira/browse/IGNITE-4509


On 14.06.2017 9:15, sandeepbellary wrote:

Hi,
Thanks for the reply.I understand that query will be split into a number of
subqueries for final reduce in PARTITIONED CACHE, but if I issue a query for
a single record(which ideally should be present in one node i.e.,
NON-REPLICATED cache), then the above behaviour is hampering the performance
of the application.Is there any way to restrict that behaviour?


Regards,
Sandeep




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Affinity-Issue-Collocate-compute-in-Ignite-JDBC-Query-tp13338p13685.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Same Affinity For Same Key On All Caches

2017-03-15 Thread Taras Ledkov

Folks,

I worked on issue https://issues.apache.org/jira/browse/IGNITE-3018 that 
is related to performance of Rendezvous AF.


But the Wang/Jenkins integer hash distribution is worse than MD5's. So I 
tried to use a simple partition balancer, close to the Fair AF, for the 
Rendezvous AF.

Take a look at the heatmaps of the distributions at the issue, e.g.:
- Comparison of the current Rendezvous AF and the new Rendezvous AF based on 
Wang/Jenkins hash: 
https://issues.apache.org/jira/secure/attachment/12858701/004.png
- Comparison of the current Rendezvous AF and the new Rendezvous AF based on 
Wang/Jenkins hash with the partition balancer: 
https://issues.apache.org/jira/secure/attachment/12858690/balanced.004.png


When the balancer is enabled, the distribution of partitions across nodes is 
close to even, but in this case there is no guarantee that a partition 
doesn't move from one node to another when a node leaves the topology.
It is not guaranteed, but we try to minimize it because a sorted array of 
nodes is used (like in the pure Rendezvous AF).


I think we can use the new fast Rendezvous AF with a 'useBalancer' flag 
instead of the Fair AF.


On 03.03.2017 1:56, Denis Magda wrote:
What??? Unbelievable. It sounds like a design flaw to me. Any ideas 
how to fix?


—
Denis

On Mar 2, 2017, at 2:43 PM, Valentin Kulichenko 
<mailto:valentin.kuliche...@gmail.com>> wrote:


Adding back the dev list.

Folks,

Are there any opinions on the problem discussed here? Do we really 
need FairAffinityFunction if it can't guarantee cross-cache collocation?


-Val

On Thu, Mar 2, 2017 at 2:41 PM, vkulichenko 
<mailto:valentin.kuliche...@gmail.com>> wrote:


Hi Alex,

I see your point. Can you please outline its advantages vs rendezvous
function?

In my view issue discussed here makes it pretty much useless in vast
majority of use cases, and very error-prone in all others.

-Val



--
View this message in context:

http://apache-ignite-users.70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-tp10829p11006.html

<http://apache-ignite-users.70518.x6.nabble.com/Same-Affinity-For-Same-Key-On-All-Caches-tp10829p11006.html>
Sent from the Apache Ignite Users mailing list archive at
    Nabble.com <http://Nabble.com>.






--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Detecting terminal condition for group of items in Ignite cache.

2016-12-16 Thread Taras Ledkov
If the performance of a fixed rate is appropriate for you, I think your
solution makes sense because it is very simple and robust.

Also, you can use IgniteScheduler (
https://ignite.apache.org/releases/mobile/org/apache/ignite/IgniteScheduler.html)
instead of the Spring scheduler.

On Wed, Dec 14, 2016 at 9:22 PM, begineer  wrote:

> Thanks for reply first of all. Well I had this in back of my mind but it
> will
> be like duplicating the data which we already have in other cache(trades
> cache which I am currently querying).
>
> So other way I can think of is using spring scheduler with 1 minute
> fix-rate
> and check if any item moved to SUCCESS state, if yes, postpone execution by
> one minute(something like fix-delay but this works after executing the job,
> but my requirement is  it should delay the execution itself.)
>
> If no more items moved to final state within mentioned time, we execute the
> scheduler.
>
> Any suggestions in this approach please or any better one?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Detecting-terminal-condition-for-group-
> of-items-in-Ignite-cache-tp9526p9539.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Detecting terminal condition for group of items in Ignite cache.

2016-12-14 Thread Taras Ledkov
Hi,

What do you think about an atomic counter of not-yet-successful trades, e.g.
an atomic cache with the company name as the key and an Integer as the
counter? You can increment the counter when a trade is added, decrement it
in the listener of the continuous query, and do something when the counter
reaches zero.
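A plain-Java sketch of that counter logic; ConcurrentHashMap stands in for the atomic cache here, so this only illustrates the bookkeeping — in Ignite you would use atomic cache operations so the updates are cluster-wide and the notification fires exactly once:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class PendingTradeCounter {
    // company name -> number of trades not yet in SUCCESS state
    private final Map<String, AtomicInteger> pending = new ConcurrentHashMap<>();

    /** Call when a new (not yet successful) trade is added for a company. */
    public void onTradeAdded(String company) {
        pending.computeIfAbsent(company, c -> new AtomicInteger()).incrementAndGet();
    }

    /** Call from the continuous-query listener when a trade reaches SUCCESS. */
    public void onTradeSucceeded(String company, Runnable notification) {
        // Terminal condition: the counter dropping to zero means every trade
        // for this company has succeeded, so notify exactly once.
        if (pending.get(company).decrementAndGet() == 0)
            notification.run();
    }

    public static void main(String[] args) {
        PendingTradeCounter counter = new PendingTradeCounter();
        counter.onTradeAdded("acme");
        counter.onTradeAdded("acme");
        counter.onTradeSucceeded("acme", () -> System.out.println("notified"));
        counter.onTradeSucceeded("acme", () -> System.out.println("notified"));
        // prints "notified" exactly once, after the second success
    }
}
```

Compared with the original approach, this replaces the two full cache scans per SUCCESS event with a single atomic decrement, and the "decrement hit zero" test is what prevents the duplicate notifications seen when several trades finish in parallel.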

-- 

Taras Ledkov
Mail-To: tled...@gridgain.com


On Wed, Dec 14, 2016 at 2:02 PM, begineer  wrote:

> Hi,
> My sample application processes trades for different companies stored in
> Ignite cache. When all trades for particular company reaches SUCCESS stage,
> an automatic notification should be triggered and some other
> system/application will react to it. To do this, When ever any trade
> reaches
> SUCCESS stage, I detect it using Continuous query, I am comparing total
> size
> of trades for particular company in cache and the trades for that company
> which are in SUCCESS stage. If they are equal, trigger a notification else
> wait.
> There are two drawbacks with this approach.
> 1. I have to query through all the trades for that company twice every time
> trade reach SUCCESS stage which is really really bad.
> 2. notification can be triggered multiple times if multiple threads
> processing trades in parallel in SUCCESS stage.(i.e lets say last 5 items
> reach SUCCESS stage together, so 5 emails will be sent since continuous
> query will run local listener 5 times)
>
> Is there a better way to do same task. I am hoping it is. My current
> approach cannot work in real time application.
>
> Below is my code.
>
>
>
> public class TerminalEventsUsingContQuery {
>     IgniteCache<Integer, Trade> cache;
>
>     public static void main(String[] args) {
>         new TerminalEventsUsingContQuery().test();
>     }
>
>     private void test() {
>         Ignite ignite = Ignition.start("examples/config/example-ignite.xml");
>         CacheConfiguration<Integer, Trade> config = new CacheConfiguration<>("TradesCache");
>
>         cache = ignite.getOrCreateCache(config);
>         ContinuousQuery<Integer, Trade> query = new ContinuousQuery<>();
>         query.setLocalListener(events -> events.forEach(e -> process(e.getValue())));
>
>         query.setRemoteFilterFactory(factoryOf(e ->
>             TradeStatus.SUCCESS.equals(e.getValue().getStatus())));
>         query.setInitialQuery(new ScanQuery<Integer, Trade>((k, v) ->
>             TradeStatus.SUCCESS.equals(v.getStatus())));
>         buildData();
>         QueryCursor<Cache.Entry<Integer, Trade>> cursor = cache.query(query);
>         cursor.forEach(entry -> process(entry.getValue()));
>         Trade t9 = new Trade(9, TradeStatus.SUCCESS, "type1", 100);
>         cache.put(t9.getId(), t9);
>     }
>
>     private void process(Trade trade) {
>         List<Cache.Entry<Integer, Trade>> totalperRef = cache
>             .query(new ScanQuery<Integer, Trade>((k, v) ->
>                 v.getRef() == trade.getRef())).getAll();
>
>         List<Cache.Entry<Integer, Trade>> totalSuccessForRef = cache.query(
>             new ScanQuery<Integer, Trade>((k, v) -> v.getRef() == trade.getRef() &&
>                 TradeStatus.SUCCESS.equals(v.getStatus()))).getAll();
>
>         if (totalperRef.size() == totalSuccessForRef.size()) {
>             System.out.println("Terminal condition reached. Notify the handler for: "
>                 + trade.getRef());
>         } else {
>             System.out.println("Terminal condition not reached yet. Currently "
>                 + "processing Trade: " + trade.getId());
>         }
>     }
>
>     private void buildData() {
>         Trade t1 = new Trade(1, TradeStatus.SUCCESS, "type1", 100);
>         Trade t2 = new Trade(2, TradeStatus.FAILED, "type1", 101);
>         Trade t3 = new Trade(3, TradeStatus.EXPIRED, "type1", 102);
>         Trade t4 = new Trade(4, TradeStatus.SUCCESS, "type1", 100);
>         Trade t5 = new Trade(5, TradeStatus.CHANGED, "type1", 103);
>         Trade t6 = new Trade(6, TradeStatus.SUCCESS, "type1", 100);
>         Trade t7 = new Trade(7, TradeStatus.CHANGED, "type1", 103);
>         Trade t8 = new Trade(8, TradeStatus.SUCCESS, "type1", 101);
>         cache.put(t1.getId(), t1);
>         cache.put(t2.getId(), t2);
>         cache.put(t3.getId(), t3);
>         cache.put(t4.getId(), t4);
>         cache.put(t5.getId(), t5);
>         cache.put(t6.getId(), t6);
>         cache.put(t7.getId(), t7);
> cac

Re: Not able to join cluster with Zookeeper based IP finder

2016-12-05 Thread Taras Ledkov
Please provide the logs from all 3 nodes.
Also please provide the FULL logs if possible:

[15:05:05] Quiet mode.
[15:05:05]   ^-- To see **FULL** console log here add
-DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}



On Mon, Dec 5, 2016 at 6:22 PM, ghughal  wrote:

> Yes, issue is only reproducible when nodes are started simultaneously. We
> are
> using marathon (on mesos) to start apps so we don't have control over when
> node starts. Marathon almost always starts multiple instance at same time.
>
> I'm attaching logs for both nodes as well as entries from ZooKeeper. Please
> note, we have one client node. So for one of the node you will see 2 hosts
> and for other node you'll see only 1 host in cluster. Ideally, we should be
> seeing 3 hosts for both nodes.
>
> node1.log
> 
> node2.log
> 
> zookeeper_entries.txt
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Not-able-to-join-cluster-with-
> Zookeeper-based-IP-finder-tp9311p9399.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Problem with ReentrantLocks on shutdown of one node in cluster

2016-12-05 Thread Taras Ledkov
Hi,

Thanks a lot for the reproducible scenario and for the test.
I've created an issue to track it:
https://issues.apache.org/jira/browse/IGNITE-4369.

On Wed, Nov 30, 2016 at 7:35 PM, vladiisy  wrote:

> Hi,
>
> I have a problem with my application using reentrant locks: they seem to
> become corrupt after one node leaves my cluster. The threads holding locks
> hang in GridCacheLockImpl.unlock forever:
>
> ">>JUnit-Test-Worker-0" #160 prio=5 os_prio=0 tid=0x20d57800
> nid=0x21f4 runnable [0x2e28e000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Thread.yield(Native Method)
> at
> org.apache.ignite.internal.processors.datastructures.
> GridCacheLockImpl$Sync.tryRelease(GridCacheLockImpl.java:469)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.release(
> AbstractQueuedSynchronizer.java:1261)
> at
> org.apache.ignite.internal.processors.datastructures.
> GridCacheLockImpl.unlock(GridCacheLockImpl.java:1296)
> at
> com.iisy.solvatio.core.cluster.ignite.IgniteTests$
> Worker.run(IgniteTests.java:156)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 
>
> I have a JUnit test, IgniteTests.java, which shows the problem.
>
> Is it a bug in ignite or my mistake?
>
> Great thanks in advance,
> Vladimir
>
> Here my JUnit
> 
> package ignite.cluster.tests;
>
> import java.util.ArrayList;
> import java.util.Arrays;
> import java.util.Collection;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
> import java.util.concurrent.CountDownLatch;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.ThreadFactory;
> import java.util.concurrent.TimeUnit;
>
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteLock;
> import org.apache.ignite.IgniteSystemProperties;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cluster.ClusterNode;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.events.EventType;
> import org.apache.ignite.logger.slf4j.Slf4jLogger;
> import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
> import
> org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
> import org.junit.After;
> import org.junit.Assert;
> import org.junit.Before;
> import org.junit.Test;
>
> import com.google.common.util.concurrent.ThreadFactoryBuilder;
>
> public class IgniteTests {
>
> private Ignite[] clusterNodes;
>
> @Before
> public void start() throws Exception {
> int numNodes = 2;
> this.clusterNodes = new Ignite[numNodes];
> for (int i = 0; i < numNodes; i++) {
> this.clusterNodes[i] = startClusterNode(i);
> }
> boolean ready = waitClusterHasNodes(this.clusterNodes[0],
> numNodes);
> if (!ready) {
> throw new IllegalStateException("Cluster not ready after 15
> seconds");
> }
> }
>
> public boolean waitClusterHasNodes(final Ignite node, final int
> numNodes) throws InterruptedException {
> boolean ready = false;
> for (int i = 0; i < 15; i++) {
> Collection<ClusterNode> nodes0 = node.cluster().nodes();
> if (nodes0.size() == numNodes) {
> ready = true;
> break;
> }
> Thread.sleep(1000);
> }
> return ready;
> }
>
> private Ignite startClusterNode(final int nodeIndex) throws Exception {
> String nodeName = "node" + nodeIndex;
> List<String> addresses = Arrays.asList("127.0.0.1:48500..48502");
>
> System.setProperty(IgniteSystemProperties.IGNITE_UPDATE_NOTIFIER,
> Boolean.toString(false));
> IgniteConfiguration config = new IgniteConfiguration();
>
> config.setClassLoader(this.getClass().getClassLoader());
> config.setPeerClassLoadingEnabled(false);
> config.setGridLogger(new Slf4jLogger());
>
> config.setGridName(nodeName);
> config.setConsistentId(nodeName);
>
> config.setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_PUT);
>
> Map<String, Object> userAttributes = new HashMap<>();
> userAttributes.put("clusterName", "JUnit-Cluster");
> config.setUserAttributes(userAttributes);
>
> config.setMetricsLogFrequency(0);
>
> TcpDiscoverySpi spi = new TcpDiscoverySpi();
> TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
> ipFinder.setAddresses(addresses);
> spi.setIpFinder(ipFinder);
> config.setDiscoverySpi(spi);
> config.setClientMode(false);
>
> Ignite ignite = Ignition.start(config);
> return ignite;
> }
>
> @After
> public void stop() {
> if (this.clusterNodes != null) {
> for (int i = 0; i < this.clusterNodes.length; i++) {
> shutdown(i);

Re: Not able to join cluster with Zookeeper based IP finder

2016-12-05 Thread Taras Ledkov
Hi,

So, is the case reproduced only when the nodes are started simultaneously?
Could you add a delay between instance starts?

Looks like an issue. Please provide the logs.



On Thu, Dec 1, 2016 at 1:29 AM, ghughal  wrote:

> We are seeing intermittent issue where ignite node is not able to join the
> cluster for our specific configuration. Our ignite instances are part of
> spring boot application and it’s running inside docker container. These
> docker container gets deployed by marathon framework on mesos. When we try
> to start 2 instances, marathon starts both instance at about same time and
> it seems Ignite is getting into some race condition.
>
> Since we are running inside docker container, we are using
> TcpDiscoveryZookeeperIpFinder with AddressResolver. Here’s code for that:
>
>
>
> It seems each node starts at same time and checks with zookeeper if there
> are any other nodes in cluster. It doesn’t find any (in most cases) and
> creates separate cluster. If we destroy one of the instances and start it
> again it joins the cluster and everything works fine.
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Not-able-to-join-cluster-with-
> Zookeeper-based-IP-finder-tp9311.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite - FileNotFoundException

2016-10-13 Thread Taras Ledkov

I guess 64K-128K must be enough.
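To see from inside the JVM how close a node is to that limit, one can poll the OS MX bean. A hedged sketch (the class name is illustrative; `com.sun.management.UnixOperatingSystemMXBean` is a JDK-specific interface available on Linux/macOS JDKs):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdLimitCheck {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            long open = unix.getOpenFileDescriptorCount();
            long max = unix.getMaxFileDescriptorCount();
            System.out.println("open fds: " + open + " / limit: " + max);
            // Flag the node well before "Too many open files" can occur.
            if (open > max * 0.8)
                System.err.println("WARNING: over 80% of the fd limit is in use");
        } else {
            System.out.println("fd counters not exposed on this platform");
        }
    }
}
```

Polling this periodically (or checking `ulimit -n` for the Ignite process) makes it easy to confirm whether the data load is actually approaching the descriptor limit before raising it.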


On 13.10.2016 13:04, Anil wrote:

What is the recommended file descriptor limit for ignite data load ?

Thanks.

On 13 October 2016 at 15:16, Taras Ledkov <tled...@gridgain.com> wrote:


Please check the file descriptors OS limits.


On 13.10.2016 12:36, Anil wrote:


When loading huge data into Ignite I see the following
exception. My configuration includes off-heap as 0 and swap
storage set to true.

 org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to process directory: /tmp/ignite/work/ipc/shmem
java.io.FileNotFoundException:
/tmp/ignite/work/ipc/shmem/lock.file (Too many open files)
at java.io.RandomAccessFile.open0(Native Method)
at
java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at
java.io.RandomAccessFile.(RandomAccessFile.java:243)
at

org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint$GcWorker.cleanupResources(IpcSharedMemoryServerEndpoint.java:608)
at

org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint$GcWorker.body(IpcSharedMemoryServerEndpoint.java:565)
at

org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)


Looks like it is due to high load.

How can we avoid this exception? thanks.

Thanks.



    -- 
    Taras Ledkov
    Mail-To: tled...@gridgain.com




--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Ignite - FileNotFoundException

2016-10-13 Thread Taras Ledkov

Please check the file descriptors OS limits.


On 13.10.2016 12:36, Anil wrote:


When loading huge data into Ignite I see the following exception.
My configuration includes off-heap as 0 and swap storage set to true.


 org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to process directory: /tmp/ignite/work/ipc/shmem
java.io.FileNotFoundException: /tmp/ignite/work/ipc/shmem/lock.file 
(Too many open files)

at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at 
org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint$GcWorker.cleanupResources(IpcSharedMemoryServerEndpoint.java:608)
at 
org.apache.ignite.internal.util.ipc.shmem.IpcSharedMemoryServerEndpoint$GcWorker.body(IpcSharedMemoryServerEndpoint.java:565)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)

at java.lang.Thread.run(Thread.java:745)


Looks like it is due to high load.

How can we avoid this exception? thanks.

Thanks.




--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Loading Hbase data into Ignite

2016-10-12 Thread Taras Ledkov

Hi,

FailoverSpi is used to handle job failures.

The AlwaysFailoverSpi implementation is used by default. It tries to
resubmit a failed job up to 'maximumFailoverAttempts' (default 5) times.
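If the default retry budget doesn't fit, the attempt count can be tuned in the node configuration. A hedged Spring XML sketch (bean class and property name as in the AlwaysFailoverSpi API; adapt it to your own config file):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="failoverSpi">
        <bean class="org.apache.ignite.spi.failover.always.AlwaysFailoverSpi">
            <!-- Resubmit a failed job to another node at most 5 times (the default). -->
            <property name="maximumFailoverAttempts" value="5"/>
        </bean>
    </property>
</bean>
```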


On 12.10.2016 13:09, Anil wrote:

HI,

Following is the approach to load HBase data into Ignite:

1. Create Cluster wide singleton distributed custom service
2. Get all region(s) information in the init() method of your custom 
service
3. Broadcast region(s) using ignite.compute().call() in execute() 
method of your custom service

4. Scan a particular region and load the cache

Note : Need to handle node failure during cache load as distributed 
service is deployed on some other node.



How is an intermediate failure of a broadcast job handled in Ignite
compute()? Is it rescheduled or ignored? Please clarify.


Please let us know if you see any anti-patterns in terms of Ignite usage.

Thanks.






On 11 October 2016 at 20:49, Anil <anilk...@gmail.com> wrote:


Thank you Vladislav and Andrey. I will look at the document and
give a try.

Thanks again.

On 11 October 2016 at 20:47, Andrey Gura <ag...@apache.org> wrote:

Hi,

HBase regions don't map to Ignite nodes due to architectural
differences. Each HBase region contains rows in some range of
keys sorted lexicographically, while the distribution of keys
in Ignite depends on the affinity function and the key hash code. Also,
how would you remap regions to nodes if a region is split?

Of course you can get the node ID in the cluster for a given key, but
because HBase keeps rows sorted lexicographically by key, you
would have to perform a full scan of the HBase table. So the simplest
way to parallelize data loading from HBase to Ignite is to
concurrently scan the regions and stream all rows to one or more
DataStreamers.


On Tue, Oct 11, 2016 at 4:11 PM, Anil <anilk...@gmail.com> wrote:

HI,

we have around 18 M records in hbase which needs to be
loaded into ignite cluster.

i was looking at

http://apacheignite.gridgain.org/v1.7/docs/data-loading

https://github.com/apache/ignite/tree/master/examples

is there any approach where each ignite node loads the
data of one hbase region ?

Do you have any recommendations ?

Thanks.






--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Getting Error [grid-timeout-worker] when running join query on Single Ignite Node

2016-09-29 Thread Taras Ledkov
)
^-- Node [id=6caab193, name=null]
^-- H/N/C [hosts=1, nodes=2, CPUs=16]
^-- CPU [cur=0.03%, avg=3.31%, GC=0%]
^-- Heap [used=1219MB, free=70.23%, comm=4095MB]
^-- Public thread pool [active=0, idle=32, qSize=0]
^-- System thread pool [active=0, idle=32, qSize=0]
^-- Outbound messages queue [size=0]
[09:49:04,879][INFO][grid-timeout-worker-#81%null%][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=6caab193, name=null]
^-- H/N/C [hosts=1, nodes=2, CPUs=16]
^-- CPU [cur=0%, avg=3.23%, GC=0%]
^-- Heap [used=1222MB, free=70.14%, comm=4095MB]
^-- Public thread pool [active=0, idle=32, qSize=0]
^-- System thread pool [active=0, idle=32, qSize=0]
^-- Outbound messages queue [size=0]
[09:50:04,883][INFO][grid-timeout-worker-#81%null%][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=6caab193, name=null]
^-- H/N/C [hosts=1, nodes=2, CPUs=16]
^-- CPU [cur=0.03%, avg=3.17%, GC=0%]
^-- Heap [used=1227MB, free=70.04%, comm=4095MB]
^-- Public thread pool [active=0, idle=32, qSize=0]
^-- System thread pool [active=0, idle=32, qSize=0]
^-- Outbound messages queue [size=0]
[09:51:04,882][INFO][grid-timeout-worker-#81%null%][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=6caab193, name=null]
^-- H/N/C [hosts=1, nodes=2, CPUs=16]


Here is what my join query (generated by a custom engine which works
perfectly with Postgres on the same tables and joins) looks like:


SELECT  DISTINCT  m1 AS m1,m1_TYP AS m1_TYP FROM (SELECT entry AS 
a5,QS5.a1 AS a1,QS5.a4 AS a4,QS5.m1 AS m1,m1_TYP AS m1_TYP
 FROM "TABLE1".TABLE1 AS T,(SELECT entity AS m1,typ AS m1_typ,elem AS 
a5,QS4.a1 AS a1,QS4.a4 AS a4
 FROM "TABLE2".TABLE2 AS T,(SELECT entry AS a2,QS3.a1 AS a1,QS3.a4 AS 
a4,QS3.m1 AS m1,m1_TYP AS m1_TYP
 FROM "TABLE1".TABLE1 AS T,(SELECT entry AS a4,QS2.a1 AS a1,QS2.a2 AS 
a2,QS2.m1 AS m1,m1_TYP AS m1_TYP
 FROM "TABLE1".TABLE1 AS T,(SELECT entry AS a3,QS1.a1 AS a1,QS1.a2 AS 
a2,QS1.a4 AS a4,QS1.m1 AS m1,m1_TYP AS m1_TYP
 FROM "TABLE1".TABLE1 AS T,(SELECT a1 AS a1,m1 AS m1,m1_TYP AS 
m1_TYP,COALESCE(S3.elem,val3) AS a3,COALESCE(S4.elem,val4) AS 
a4,COALESCE(S2.elem,val2) AS a2
 FROM (SELECT entry AS a1,T.val2 AS m1,T.typ2 AS m1_TYP,T.val6 AS 
VAL3,T.val6 AS VAL4,T.val8 AS VAL2

 FROM "TABLE1".TABLE1 AS T,(SELECT elem AS a1
 FROM "TABLE2".TABLE2 AS T
 WHERE entity = '3' AND typ = 5001
  AND(prop = '1oh~#some_prop1')) AS QS0 WHERE  entry = QS0.a1
  AND   (T.prop0 = '4xm~#type' AND T.prop8 = '1oh~#some_prop2' AND 
T.prop6 = '1oh~#some_prop3' AND T.prop6 = '1oh~#some_prop3' AND 
T.prop2 = '1oh~#is_atom_of')
  AND  T.val0 = '7a~') AS Q1 LEFT OUTER JOIN "TABLE3".TABLE3 AS S3 ON 
 Q1.VAL3 = S3.list_id LEFT OUTER JOIN "TABLE3".TABLE3 AS S4 ON 
 Q1.VAL4 = S4.list_id LEFT OUTER JOIN "TABLE3".TABLE3 AS S2 ON 
 Q1.VAL2 = S2.list_id

   WHERE   (  (a1  <>  COALESCE(S4.elem,val4)
) )) AS QS1 WHERE entry = QS1.a3
  AND(T.prop0 = '4xm~#type' AND T.prop5 = '1oh~#some_prop1' AND 
T.prop8 = '1oh~#some_prop4')
  AND  T.val0 = '562~' AND T.val5 = '1' AND T.val8 = '6o7~') AS QS2 
WHERE entry = QS2.a4

  AND(T.prop0 = '4xm~#type' AND T.prop5 = '1oh~#some_prop1')
  AND  T.val0 = '7a~' AND T.val5 = '1') AS QS3 WHERE entry = QS3.a2
  AND(T.prop0 = '4xm~#type' AND T.prop5 = '1oh~#some_prop1')
  AND  T.val0 = '562~' AND T.val5 = '1') AS QS4 WHERE entity = QS4.m1 
AND typ = QS4.m1_TYP

  AND(prop = '1oh~#isome_prop5')) AS QS5 WHERE entry = QS5.a5
  AND(T.prop0 = '4xm~#type' AND T.prop5 = '1oh~#some_prop1')
  AND  T.val0 = '1eg~' AND T.val5 = '0') AS QS6 LIMIT 100


PS: I've been able to run a similar query with a join and that doesn't
produce the above error logs. Please help me with this.







--
Thanks & Regards

Manish Mishra
Software Consultant,
Knoldus Software, LLP



--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Out of order updates for CacheEntryListeners and multi-cache transactions

2016-09-26 Thread Taras Ledkov
There is no way to guarantee the order of updates to different keys, even
within a single cache.

The order is guaranteed only for a single key (create / update).
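When a consumer must nevertheless see cross-cache updates in write order, a common workaround is to stamp each value with a writer-assigned sequence number and reorder on the listener side. A minimal single-JVM sketch of that reordering buffer (all class and method names are illustrative, not Ignite API; it assumes gap-free sequence numbers starting at 1):

```java
import java.util.PriorityQueue;
import java.util.function.Consumer;

public class SequenceReorderer<T> {
    // Events carry a writer-assigned, gap-free sequence number.
    static final class Event<T> {
        final long seq;
        final T payload;
        Event(long seq, T payload) { this.seq = seq; this.payload = payload; }
    }

    private final PriorityQueue<Event<T>> buffer =
        new PriorityQueue<>((a, b) -> Long.compare(a.seq, b.seq));
    private long nextSeq = 1;          // next sequence number we may release
    private final Consumer<T> sink;    // downstream handler, sees events in order

    public SequenceReorderer(Consumer<T> sink) { this.sink = sink; }

    // Listener callbacks (possibly out of order across caches) funnel in here.
    public synchronized void onEvent(long seq, T payload) {
        buffer.add(new Event<>(seq, payload));
        while (!buffer.isEmpty() && buffer.peek().seq == nextSeq) {
            sink.accept(buffer.poll().payload);
            nextSeq++;
        }
    }

    public static void main(String[] args) {
        StringBuilder out = new StringBuilder();
        SequenceReorderer<String> r = new SequenceReorderer<>(out::append);
        r.onEvent(2, "z2");   // arrives first, held back
        r.onEvent(1, "y1");   // releases both entries, in order
        System.out.println(out); // prints "y1z2"
    }
}
```

Each cache's listener would feed its events into one shared reorderer, so node B processes them in the order node A committed them rather than the order they arrived.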

On 21.09.2016 19:07, ross.anderson wrote:

Hi,

So with a simple setup:
Two nodes, A and B
Two TRANSACTIONAL caches y and z, both 

On node B I register a CacheEntryCreatedListener to cache y and to cache z
which just logs directly out on the same thread.

On node A I:
Start a transaction
Insert the value '1', '1' to cache y, and '2', '2' to cache z
Commit the transaction

On node B I receive '2', '2' for cache z and then '1', '1' for cache y -
i.e. in the reverse order to how I inserted them in the caches.

Is there any way to guarantee order of updates across multiple caches?

Thanks,
Ross



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Out-of-order-updates-for-CacheEntryListeners-and-multi-cache-transactions-tp7864.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: ignite memory issues:Urgent in production

2016-09-20 Thread Taras Ledkov

Let's separate the issues.

1. Clients affect the displayed total grid heap.
I don't think this is the cause of the server failures.

2. Did the server nodes shut down, or did they crash/fail?
If a node failed, please attach the log of the failed server. Please
attach the full log (from the server node start) if the log is not huge.


On 20.09.2016 12:52, percent620 wrote:

Can anyone help me to fix this issue as this issue happens in our production
env?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-memory-issues-Urgent-in-production-tp7817p7842.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Optimizing Local Cache Iteration

2016-09-19 Thread Taras Ledkov

Hi, Jaime

AFAIK there is no way to hook into rebalancing now.

Have you tried the CacheMemoryMode.OFFHEAP_VALUES mode? I understand this
option isn't directly related to the serialization time, but it may affect
performance in the case of big value objects.
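For reference, a hedged sketch of switching a cache to this mode in the Spring XML configuration of the Ignite 1.x era (the cache name is illustrative; `memoryMode` per the CacheConfiguration API of that version):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- Keep keys on-heap, values off-heap (Ignite 1.x memory mode). -->
    <property name="memoryMode" value="OFFHEAP_VALUES"/>
</bean>
```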

On 18.09.2016 6:44, jaime spicciati wrote:

All,
I put together a simple ignite application to iterate over all cache 
entries using broadcast() and scanQuery() (I am currently evaluating 
the two approaches). The goal is to iterate over all of the cache 
values local to the ignite instance as fast as possible.


The data I am storing in the grid is relatively large, 10k of data for 
each cache value and the keys are just strings. My initial benchmarks 
are decent, I am able to iterate over 133k entries/second per ignite 
instance. If I store just the keys and not the large cache values I 
can iterate over the keys at a rate of around 1.8 million 
entries/second (getting as close to this performance is my goal)


The compromise I have found is to store the 10k of data via java 
unsafe() calls offheap, and annotate the field with transient 
(avoiding serialization). This approach is giving me around 1.4 
million entries /second which is orders of magnitude faster than the 
133k when the large data was serialized.


I believe the unsafe() approach will work but will break down if the 
Ignite framework attempts to rebalance which in turn will start 
copying the data around the cluster. If I go down this road are there 
hooks anywhere to deserialize the offheap data before it is shipped to 
another node during a rebalance? Or am I barking up the wrong tree on 
this one entirely?


I have done all of the typical optimizations such as turning off 
copyOnRead, reducing backups, setting a large heap, etc.


Thanks



--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Re: Increase Ignite instances can't increase the speed of compute

2016-09-12 Thread Taras Ledkov
Hi,

How many MatchingJobs do you submit?


On Tue, Sep 13, 2016 at 12:29 PM, 胡永亮/Bob  wrote:

> Hello,  Vladislav 
>
> The following is some code.
>
> ...
> IgniteCompute compute = ignite.compute();//.withAsync();
>
> compute.run(new MatchingJob(m_mapReadyDataPara));
> }
>
> private static class MatchingJob implements IgniteRunnable{
>
> private Map> m_mapReadyData;
> private IgniteCache>> mapMatchingData;
> *//This is a cache in Ignite cluster.*
> ...
>
> public void run() {
> ...
> Iterator>> entryKeyIterator1 = m_
> mapReadyData.entrySet().iterator();*//**m_mapReadyData is the input
> data, its size is 5000 for every job now.*
> Map>> local_writeCache =
>  new HashMap>>();
> ...
> *//Then the job read detail data from m_mapReadyData, and compute.*
> while (entryKeyIterator1.hasNext()) {
> Entry> eKey1 =
> entryKeyIterator1.next();
> String carKey = eKey1.getKey();
> Map value1 = eKey1.getValue();
>
> *//local node cache*
> Map> existMapbaselink = local_
> mapMatchingData.get(carKey);
> if(existMapbaselink == null){
> existMapbaselink = mapMatchingData.get(carKey); * //Read
> data to compute with it from Ignite cache. This data's size is 154M for *
> PARTITIONED *mode.*
> if(existMapbaselink != null)
> local_mapMatchingData.putIfAbsent(
> carKey, existMapbaselink);
> }
>
>* //some compute logic code*
>
> mapbaselink = local_writeCache.get(carKey);
> if(mapbaselink == null){
> mapbaselink = new TreeMap>();
> }
> mapbaselink.put(stdtime, ListBaseLink);
> local_writeCache.put(carKey, mapbaselink);
> }
>
> *//batch to write data into Ignite.*
> Iterator>>> it = local_
> writeCache.entrySet().iterator();
> while(it.hasNext()){
> Entry>>
> entry = it.next();
> String carKey = entry.getKey();
> final Map> value = entry.getValue();
>
> if(!mapMatchingData.containsKey(carKey)){
> mapMatchingData.put(carKey, value);
> }else{
> mapMatchingData.invoke(carKey,
>  new EntryProcessor>, Void>() {
>   @Override
>   public Void process(MutableEntry<
> String, Map>> entry, Object... args) {
>   Map> map =
> entry.getValue();
>   map.putAll(value);
>   entry.setValue(map);
>   return null;
>   }
>   });
> }
>
> }
>
>
> --
> bob
>
>
> *From:* Vladislav Pyatkov 
> *Date:* 2016-09-12 18:37
> *To:* user@ignite.apache.org
> *Subject:* Re: Increase Ignite instances can't increase the speed of
> compute
> Hello,
>
> I don't understand, what do you try to measure, without code.
> Size of calculation task, size of data moved into network have importance.
>
> Could you please provide code example?
>
> On Mon, Sep 12, 2016 at 12:33 PM, 胡永亮/Bob  wrote:
>
>> Hi, everyone:
>>
>> I am using Ignite for computing and cache.
>>
>> I use the same input data and the same compute logic.
> >> When my Ignite cluster has 2 nodes on 2 machines, the total cost time
> >> is 38s.
> >>
> >> But when I increase the Ignite cluster to 3 nodes on 3 machines, the
> >> cost time is 32s/51s/41s;
> >> with 4 instances on 4 machines, the cost time is 32s/40s.
> >>
> >>  The compute speed doesn't get faster; what may the reason be?
>>
>> Thanks.
>>
>> Bob
>>
>> 
>> ---
>> Confidentiality Notice: The information contained in this e-mail and any
>> accompanying attachment(s)
>> is intended only for the use of the intended recipient and may be
>> confidential and/or privileged of
>> Neusoft Corporation, its subsidiaries and/or its affiliates. If any
>> reader of this communication is
>> not the intended recipient, unauthorized use, forwarding, printing,
>> storing, disclosure or copying
>> is strictly prohibited, and may be unlawful.If you have received this
>> communication in error,please
>> immediately notify the sender by return e-mail, and delete the original
>> message and all copies from
>> your system. Thank you.
>> 
>> ---
>>
>
>
>
> --
> Vladislav Pyatkov
>
>

Re: Finding Key to Node Mapping

2016-09-10 Thread Taras Ledkov
Hi,

Please provide any additional details about your case if possible.

In general the answer is no, because the mapping is based on the cluster
topology. Affinity functions use information about node IDs and node order.


On Sat, Sep 10, 2016 at 5:29 PM, Alper Tekinalp  wrote:

> Hi all.
>
> It might be a silly question but is it possible to find cache key to node
> mapping without starting an ignite instance (server or client)? I use
> FairAffinityFucntion as affinity function?
>
>
> Regards.
>
> --
>
> *Alper Tekinalp*
>
>
> *Software Developer*
>
>
> *Evam ​Stream Analytics*
>
> *Atatürk Mah.Turgut Özal Bulvarı Gardenya 1 Plaza 42/B K:4 Ataşehir /
> İSTANBUL*
>
> *Tlf : +90216 688 45 46 Fax : +90216 688 45 47 Gsm: +90 536 222 76 01*
> *www.evam.com <http://www.evam.com/>*
>

-- 

Taras Ledkov
Mail-To: tled...@gridgain.com


Re: partitions' balance among ignite nodes

2016-08-26 Thread Taras Ledkov

Hi, bluehu.

Please take a look at the issue 
https://issues.apache.org/jira/browse/IGNITE-3018.


I created partition distribution diagrams when I worked on the task.

Heatmaps are attached to the issue.

There are diagrams for 3, 64, 100, 128, 200, ..., 600 nodes.

Pay attention to the left diagram (MD5 title). This is the current 
implementation available in production.


Horizontally: type of node (primary, backup 0, backup 1);
Vertically: all nodes from the topology;
Z-order: count of partitions.


On 26.08.2016 4:45, bluehu wrote:

I have tested the partitions' balance among Ignite nodes, parts=1024, backup=1.

I used 32 Ignite nodes to test and found that one node has 50 primary
partitions (max) but another has only 21 primary partitions (min); in
addition, one node has 83 primary+backup partitions (max) but another has only
52 primary+backup partitions (min).

So it looks like bad partition balance among Ignite nodes. Do you have a
report on partition balance? And any suggestions if I want to deploy more
Ignite nodes?





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/partitions-balance-among-ignite-nodes-tp7327.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Ignite Compute - Rebalance Scenario

2016-08-16 Thread Taras Ledkov

Hi,

An affinity job doesn't block rebalancing. We reserve the partition to prevent 
data from being removed from the node while the job is running.
So the topology changes and partitions are moved to other nodes, but the data 
stays available locally while the affinity job runs. Magic!
Please modify your test like below and make sure the data is available 
for a local query.


@IgniteInstanceResource
private Ignite ig;
...
public void run() {
...
while (true) {
assert ig.cache(CACHE_NAME).localPeek('a', null).equals('a');

On 16.08.2016 11:32, Kamal C wrote:

Vladislav,

Partitions are moved to another node while executing the job.

Val,

I've tried with the latest nighty build. Still, it has the same behavior.
To reproduce the issue, I've used `while(true)` inside the IgniteRunnable
task. You can reproduce it with the below gist[1].

[1]: https://gist.github.com/Kamal15/0a4066de152b8ebc856fc264f7b4037d

Regards,
Kamal C

On Sat, Aug 13, 2016 at 12:15 AM, vkulichenko
<valentin.kuliche...@gmail.com> wrote:


Note that this was changed recently [1] and the change was not
released in
1.7. Kamal, can you try the nightly build [2] and check if it
works as you
expect there?

[1] https://issues.apache.org/jira/browse/IGNITE-2310
[2]

https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/




--
View this message in context:

http://apache-ignite-users.70518.x6.nabble.com/Ignite-Compute-Rebalance-Scenario-tp7004p7021.html

Sent from the Apache Ignite Users mailing list archive at Nabble.com.




--
Taras Ledkov
Mail-To: tled...@gridgain.com