Re: RE: Transactional Reader Writer Locks

2022-01-06 Thread Alexei Scherbakov
Gurmehar,
"I need to understand what we mean by LOCK: is this lock acquired on the
entire cache or on the record we are trying to update?
Please clarify." - Ignite uses record-level locking.

Jay,
I would avoid entire-cache locks for performance reasons, but if one is
really necessary, a lock-object cache seems the best solution.
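Alexei's lock-object-cache suggestion can be sketched roughly as follows. This is a minimal illustration, not code from the thread: the cache name, lock key, and wiring are invented, and it relies on the standard transactional Key-Value API, where a get() inside a PESSIMISTIC transaction holds the entry lock until commit.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

public class LockObjectCacheSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Dedicated transactional cache whose entries exist only to be locked.
            CacheConfiguration<String, Boolean> lockCfg =
                new CacheConfiguration<String, Boolean>("lockObjects")
                    .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
            IgniteCache<String, Boolean> locks = ignite.getOrCreateCache(lockCfg);
            locks.putIfAbsent("ENTIRE_CACHE_LOCK", Boolean.TRUE);

            IgniteCache<Integer, String> data = ignite.getOrCreateCache("data");

            // In a PESSIMISTIC tx, the get() locks the lock entry until commit,
            // so all tx blocks that read the same lock key run one at a time.
            try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
                locks.get("ENTIRE_CACHE_LOCK"); // acquires the "entire cache" lock
                data.put(1, "value");           // effectively cache-wide exclusive section
                tx.commit();
            }
        }
    }
}
```

This gets mutual exclusion at the cost of serializing every transaction that reads the lock entry, which is exactly the performance concern Alexei raises.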



Tue, 28 Dec 2021 at 13:39, :

> Hi Thomas,
>
>
>
> thanks for your feedback.
>
>
>
> We originally discarded that option since it was our understanding that
> this would not work as desired in a transactional context.
>
>
>
> Maybe we were wrong. Will look into it again. Thanks!
>
>
>
> Jay
>
>
>
>
>
>
>
> *From:* don.tequ...@gmx.de 
> *Sent:* Tuesday, 28 December 2021 10:59
> *To:* user@ignite.apache.org
> *Subject:* Re: RE: Transactional Reader Writer Locks
>
>
>
> Instead of creating a cache with lock objects, wouldn’t it be easier to use
> a semaphore for each cache where you want to achieve strong reader-writer
> consistency?
>
>
>
> https://ignite.apache.org/docs/latest/data-structures/semaphore
>
>
>
> Then every time before reading/writing you acquire the semaphore first.
>
>
>
> I guess this semaphore does essentially what you’re doing manually.
>
>
>
> Regards
>
> Thomas.
>
>
>
>
>
>
>
> On 28.12.21 at 10:41, jay.et...@gmx.de wrote:
>
> And if anyone knows a different way to achieve entire-cache-locks in
> Optimistic Serializable transactions: We always appreciate the help.
>
>
>
> Jay
>
>
>
>
>
> *From:* Gurmehar Kalra 
> *Sent:* Monday, 27 December 2021 07:31
> *To:* user@ignite.apache.org; alexey.scherbak...@gmail.com
> *Subject:* RE: Transactional Reader Writer Locks
>
>
>
> Hi,
>
> I need to understand what we mean by LOCK: is this lock acquired on the
> entire cache or on the record we are trying to update?
> Please clarify.
>
>
>
> Regards,
>
> Gurmehar Singh
>
>
>
> *From:* Alexei Scherbakov 
> *Sent:* 16 December 2021 22:40
> *To:* user 
> *Subject:* Re: Transactional Reader Writer Locks
>
>
>
>
> Hi.
>
>
>
> You can try OPTIMISTIC SERIALIZABLE isolation; it might give better
> throughput in contended scenarios.
>
> But this is not the same as a RW lock, because a tx can be invalidated at
> commit time if a lock conflict is detected.
>
> No RW lock of any kind is planned, AFAIK.
>
>
>
> Tue, 7 Dec 2021 at 23:22, :
>
> Dear all,
>
>
>
> we’ve been running in circles with Ignite for so long now. Can anyone
> please help? All our attempts to custom-build a Reader Writer Lock
> (/Re-entrant Lock) for use inside transactions have failed.
>
>
>
> Background:
>
> - Multi-node setup
>
> - Very high throughput mixed read/write cache access
>
> - Key-Value API using transactional caches
>
> - Strong consistency absolute requirement
>
> - Transactional context required for guarantees and fault-tolerance
>
>
>
> Using Pessimistic Repeatable-Read transactions gives strong consistency
> but kills performance if there’s a large number of operations on the same
> cache entry (and they tend to introduce performance penalties in
> entire-cache operations and difficulties in cross-cache locking as well).
> All other transaction modes violate the strong-consistency requirement, as
> far as we understand and have been able to test so far.
>
>
>
> In other distributed environments we use reader-writer locks to gain both
> strong consistency and high performance with mixed workloads. In Ignite,
> however, explicit locks cannot be used inside transactions; the
> documentation clearly states so (
> https://ignite.apache.org/docs/latest/distributed-locks),
> and trying to custom-build a reader-writer lock for use inside transactions
> we always end up concluding that this may not be achievable when there are
> multiple ways to implicitly acquire locks but none to release them.
>
>
>
> Are we out of luck here or
>
> - did we

Re: Transactional Reader Writer Locks

2021-12-21 Thread Alexei Scherbakov
*Note, however, that until the commit happens you can still read a partial
transaction state, so the transaction logic must protect against it.*
This means you shouldn't pass a partial read result outside the transaction
scope (like writing it to external storage) until the commit has happened.

*So this basically means that in this mode all reads of an entity are
sequential, right?*
No, reads can be done without blocking using OPTIMISTIC SERIALIZABLE mode.
This is the big difference from PESSIMISTIC REPEATABLE_READ mode.
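The non-blocking reads described here still require the caller to retry, since conflicts only surface at commit time. A minimal sketch of that retry pattern using the public transaction API (the method and cache names are invented for illustration):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionOptimisticException;

import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

public class OptimisticRetrySketch {
    // Reads do not block; a conflicting commit elsewhere surfaces as
    // TransactionOptimisticException when tx.commit() runs.
    static void incrementWithRetry(Ignite ignite, IgniteCache<Integer, Long> cache, int key) {
        while (true) {
            try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                Long val = cache.get(key);                    // non-blocking read
                cache.put(key, (val == null ? 0L : val) + 1);
                tx.commit();                                  // conflict check happens here
                return;
            }
            catch (TransactionOptimisticException e) {
                // Another tx touched the same entry between read and commit; retry.
            }
        }
    }
}
```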

*Any hints on how to solve this?*
I would suggest avoiding high contention on the same key. Ignite currently
uses lock based concurrency control and is not well suited for such a
scenario.

Mon, 20 Dec 2021 at 17:27, :

> Actually, we get even worse performance with Optimistic Serializable
> transactions. Reason may be:
>
>
>
> “[..] Another important point to note here is that a transaction fails
> even if an entry was read without being modified [..],since the value of
> the entry could be important to the logic within the initiated transaction.
> ”
>
>
>
> So this basically means that in this mode all reads of an entity are
> sequential, right? If so, it cannot help with performance when there are a
> lot of reads and a need for strong consistency.
>
>
>
>
>
>   Guys: Ignite is such an impressive product with so many awesome
> features, but effectively we have all our data in memory - and it is just
> dead-slow...
>
>
>
>
>
> Any hints on how to solve this?
>
>
>
> Appreciate it.
>
>
>
> Jay
>
>
>
>
>
>
>
> *From:* jay.et...@gmx.de 
> *Sent:* Friday, 17 December 2021 08:44
> *To:* user@ignite.apache.org
> *Subject:* RE: Transactional Reader Writer Locks
>
>
>
> Hi Alexei
>
>
>
> Thanks for your note. We have actually considered optimistic transactions
> before, but to be honest we’re not entirely sure how to address this:
>
>
>
> Wiki: “When using OPTIMISTIC transactions, full read consistency can be
> achieved by disallowing potential conflicts between reads. This behavior is
> provided by OPTIMISTIC SERIALIZABLE mode.* =>* *Note, however, that until
> the commit happens you can still read a partial transaction state, so the
> transaction logic must protect against it. <=*”
>
>
>
> What would transaction logic that protects against partial transaction
> state look like in OPTIMISTIC SERIALIZABLE mode?
>
>
>
> Jay
>
>
>
>
>
> *From:* Alexei Scherbakov 
> *Sent:* Thursday, 16 December 2021 18:10
> *To:* user 
> *Subject:* Re: Transactional Reader Writer Locks
>
>
>
> Hi.
>
>
>
> You can try OPTIMISTIC SERIALIZABLE isolation; it might give better
> throughput in contended scenarios.
>
> But this is not the same as a RW lock, because a tx can be invalidated at
> commit time if a lock conflict is detected.
>
> No RW lock of any kind is planned, AFAIK.
>
>
>
> Tue, 7 Dec 2021 at 23:22, :
>
> Dear all,
>
>
>
> we’ve been running in circles with Ignite for so long now. Can anyone
> please help? All our attempts to custom-build a Reader Writer Lock
> (/Re-entrant Lock) for use inside transactions have failed.
>
>
>
> Background:
>
> - Multi-node setup
>
> - Very high throughput mixed read/write cache access
>
> - Key-Value API using transactional caches
>
> - Strong consistency absolute requirement
>
> - Transactional context required for guarantees and fault-tolerance
>
>
>
> Using Pessimistic Repeatable-Read transactions gives strong consistency
> but kills performance if there’s a large number of operations on the same
> cache entry (and they tend to introduce performance penalties in
> entire-cache operations and difficulties in cross-cache locking as well).
> All other transaction modes violate the strong-consistency requirement, as
> far as we understand and have been able to test so far.
>
>
>
> In other distributed environments we use reader-writer locks to gain both
> strong consistency and high performance with mixed workloads. In Ignite,
> however, explicit locks cannot be used inside transactions; the
> documentation clearly states so (
> https://ignite.apache.org/docs/latest/distributed-locks), and trying to
> custom-build a reader-writer lock for use inside transactions we always end
> up concluding that this may not be achievable when there are multiple ways
> to implicitly acquire locks but none to release them.
>
>
>
> Are we out of luck here or
>
> - did we miss something?
>
> - are there workarounds you know of?
>
> - are there plans to implement transactional re-entrant locks in future
> releases?
>
>
>
> Jay
>
>
>
>
>
>
>
>
> --
>
>
> Best regards,
>
> Alexei Scherbakov
>


-- 

Best regards,
Alexei Scherbakov


Re: Transactional Reader Writer Locks

2021-12-16 Thread Alexei Scherbakov
Hi.

You can try OPTIMISTIC SERIALIZABLE isolation; it might give better
throughput in contended scenarios.
But this is not the same as a RW lock, because a tx can be invalidated at
commit time if a lock conflict is detected.
No RW lock of any kind is planned, AFAIK.

Tue, 7 Dec 2021 at 23:22, :

> Dear all,
>
>
>
> we’ve been running in circles with Ignite for so long now. Can anyone
> please help? All our attempts to custom-build a Reader Writer Lock
> (/Re-entrant Lock) for use inside transactions have failed.
>
>
>
> Background:
>
> - Multi-node setup
>
> - Very high throughput mixed read/write cache access
>
> - Key-Value API using transactional caches
>
> - Strong consistency absolute requirement
>
> - Transactional context required for guarantees and fault-tolerance
>
>
>
> Using Pessimistic Repeatable-Read transactions gives strong consistency
> but kills performance if there’s a large number of operations on the same
> cache entry (and they tend to introduce performance penalties in
> entire-cache operations and difficulties in cross-cache locking as well).
> All other transaction modes violate the strong-consistency requirement, as
> far as we understand and have been able to test so far.
>
>
>
> In other distributed environments we use reader-writer locks to gain both
> strong consistency and high performance with mixed workloads. In Ignite,
> however, explicit locks cannot be used inside transactions; the
> documentation clearly states so (
> https://ignite.apache.org/docs/latest/distributed-locks), and trying to
> custom-build a reader-writer lock for use inside transactions we always end
> up concluding that this may not be achievable when there are multiple ways
> to implicitly acquire locks but none to release them.
>
>
>
> Are we out of luck here or
>
> - did we miss something?
>
> - are there workarounds you know of?
>
> - are there plans to implement transactional re-entrant locks in future
> releases?
>
>
>
> Jay
>
>
>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: NodeOrder in GridCacheVersion

2020-02-27 Thread Alexei Scherbakov
Hi Prasad.

nodeOrder is a local counter used for update ordering. The version is
incremented when a lock is acquired for an enlisted tx entry.

Are you sure there is no concurrent transaction on this replicated cache
that runs for a significant time before committing?
How long did you retry after getting the optimistic exception?
Do you have a stable topology (no concurrently failing nodes) when this
happens?
Do you have the on-heap cache enabled?

Thu, 27 Feb 2020 at 16:06, Prasad Bhalerao :

> Hi Alexey,
>
> Key value is not getting changed concurrently,  I am sure about it. The
> cache for which I am getting the exception is kind of bootstrap data and it
> changes very rarely. I have added retry logic in my code, and it failed
> every time with the same error.
>
> Every time it fails in GridDhtTxPrepareFuture.checkReadConflict ->
> GridCacheEntryEx.checkSerializableReadVersion, and I think it fails
> due to a change in the value of nodeOrder. This is what I observed while
> debugging the method.
> This happens intermittently.
>
> I got following values while inspecting GridCacheVersion object on
> different nodes.
>
> Cache : Addons (Node 2)
> serReadVer of entry read inside Transaction: GridCacheVersion
> [topVer=194120123, order=4, nodeOrder=2]
> version on node3: GridCacheVersion [topVer=194120123, order=4, nodeOrder
> =1]
>
> Cache : Subscription  (Node 3)
> serReadVer of entry read inside Transaction:  GridCacheVersion
> [topVer=194120123, order=1, nodeOrder=2]
> version on node2:  GridCacheVersion [topVer=194120123, order=1, nodeOrder
> =10]
>
> Can you please answer following questions?
> 1) The significance of the nodeOrder w.r.t Grid and cache?
> 2) When does it change?
> 3) How it is important w.r.t. transaction?
> 4) Inside transaction I am reading and modifying Replicated as well as
> Partitioned cache. What I observed is this fails for Replicated cache. As
> workaround, I have moved the code which reads Replicated cache out of
> transaction block. Is it allowed to read and modify both replicated and
> Partitioned cache i.e. use both Replicated and Partitioned?
>
> Thanks,
> Prasad
>
> On Thu, Feb 27, 2020 at 6:01 PM Alexey Goncharuk <
> alexey.goncha...@gmail.com> wrote:
>
>> Prasad,
>>
>> Since optimistic transactions do not acquire key locks until prepare
>> phase, it is possible that the key value is concurrently changed before the
>> prepare commences. Optimistic exceptions is thrown exactly in this case and
>> suggest a user that they should retry the transaction.
>>
>> Consider the following example:
>> Thread 1: Start tx 1, Read (key1) -> val1
>> Thread 2: Start tx 2, Read (key1) -> val1
>>
>> Thread 1: Write (key1, val2)
>> Thread 1: Commit
>>
>> Thread 2: Write (key1, val3)
>> Thread 2: Commit *Optimistic exception is thrown here since current value
>> of key1 is not val1 anymore*
>>
>> When optimistic transactions are used, a user is expected to have retry
>> logic. Alternatively, a pessimistic repeatable_read transaction can be used
>> (one should remember that in pessimistic mode locks are acquired on first
>> key access and released only on transaction commit).
>>
>> Hope this helps,
>> --AG
>>
>

-- 

Best regards,
Alexei Scherbakov


Re: Adding a binary object to two caches fails with FULL_SYNC write mode configured for the replicated cache

2016-07-19 Thread Alexei Scherbakov
Hi,

It's also not working with PRIMARY_SYNC if you do gets.

I've created an issue: https://issues.apache.org/jira/browse/IGNITE-3505

You may watch it to be notified of progress.

Thanks for participating in Apache Ignite's community.

2016-07-19 8:05 GMT+03:00 pragmaticbigdata :

> Please see my comments below
>
> > Currently you have to make a copy of BinaryObject for each cache
> operation
> > because it's not immutable and internally caches some information for
> > performance reasons.
>
> Isn't the BinaryObject  not bound
> <
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-client-reads-old-metadata-even-after-cache-is-destroyed-and-recreated-tp5800p5823.html
> >
> to any cache?
> Also note that adding the same binary object to two caches works if the
> synchronization mode of the replicated cache is PRIMARY_SYNC and not
> FULL_SYNC. Why would this be working?
>
> > Do you have a real case where you need to put a lot of binary object keys
> > to multiple caches?
>
> I was trying to simulate a workaround for the  IGNITE-1897
> <https://issues.apache.org/jira/browse/IGNITE-1897>   where I maintain a
> replicated cache for all the entries that are added/updated in a
> transaction. Hence I am adding the same key/value pair to two caches.
>
> > BTW, if you are using a BinaryObject key with only a single standard
> > Java type, it's simpler to just use that type as the cache key.
>
> No I do have multiple fields as part of the BinaryObject. Its just for
> reproducing the issue I added one field
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Adding-a-binary-object-to-two-caches-fails-with-FULL-SYNC-write-mode-configured-for-the-replicated-ce-tp6343p6366.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Adding a binary object to two caches fails with FULL_SYNC write mode configured for the replicated cache

2016-07-18 Thread Alexei Scherbakov
Hi,

Currently you have to make a copy of the BinaryObject for each cache
operation, because it is not immutable and internally caches some
information for performance reasons.

Do you have a real case where you need to put a lot of binary object keys
into multiple caches?

BTW, if you are using a BinaryObject key with only a single standard Java
type, it's simpler to just use that type as the cache key.
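The copy-per-operation workaround described above can be sketched like this, assuming the public BinaryObject.toBuilder() API (the surrounding helper method is invented for illustration):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

public class BinaryCopySketch {
    // Put independent copies into the second cache, since a BinaryObject
    // instance is mutable and caches internal state per cache operation.
    static void putToBothCaches(IgniteCache<BinaryObject, BinaryObject> cache1,
                                IgniteCache<BinaryObject, BinaryObject> cache2,
                                BinaryObject key, BinaryObject value) {
        cache1.put(key, value);
        cache2.put(key.toBuilder().build(), value.toBuilder().build()); // fresh copies
    }
}
```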

2016-07-18 16:07 GMT+03:00 pragmaticbigdata :

> I am using ignite version 1.6. In my use case I have two caches with the
> below configuration
>
> CacheConfiguration cfg1 = new
> CacheConfiguration<>("Cache 1");
> cfg1.setCacheMode(CacheMode.PARTITIONED);
> cfg1.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>
> IgniteCache cache1 =
> ignite.getOrCreateCache(cfg1).withKeepBinary();
>
> CacheConfiguration cfg2 = new
> CacheConfiguration<>("Cache 2");
> cfg2.setCacheMode(CacheMode.REPLICATED);
>
> cfg2.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> //using the default PRIMARY_SYNC write synchronization works fine
> IgniteCache cache2 =
> ignite.getOrCreateCache(cfg2);
>
>
> When adding a BinaryObject to the second cache, Ignite *fails when calling
> cache2.put()*. The code to add data to the cache is
>
> BinaryObjectBuilder keyBuilder =
> ignite.binary().builder("keyType")
> .setField("F1", "V1").hashCode("V1".hashCode());
>
> BinaryObjectBuilder valueBuilder =
> ignite.binary().builder("valueType)
> .setField("F2", "V2")
> .setField("F3", "V3");
>
> BinaryObject key = keyBuilder.build();
> BinaryObject value = valueBuilder.build();
>
> cache1.put(key, value);
> cache2.put(key, value);
>
> If FULL_SYNC write synchronization is turned off (default PRIMARY_SYNC),
> the
> write works fine. Also if a copy of the BinaryObject is made before adding
> to cache2, the put method succeeds. Can someone have a look and let me know
> what could be missing?
>
> The exception is as below.
>
> java.lang.AssertionError: Affinity partition is out of range [part=667,
> partitions=512]
> at
>
> org.apache.ignite.internal.processors.affinity.GridAffinityAssignment.get(GridAffinityAssignment.java:149)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.nodes(GridDhtPartitionTopologyImpl.java:827)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapKey(GridNearAtomicUpdateFuture.java:1031)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapUpdate(GridNearAtomicUpdateFuture.java:867)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.map(GridNearAtomicUpdateFuture.java:689)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:544)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:202)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$22.apply(GridDhtAtomicCache.java:1007)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$22.apply(GridDhtAtomicCache.java:1005)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:703)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAsync0(GridDhtAtomicCache.java:1005)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:475)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAsync(GridCacheAdapter.java:2506)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put(GridDhtAtomicCache.java:452)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2180)
>     at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1165)
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Adding-a-binary-object-to-two-caches-fails-with-FULL-SYNC-write-mode-configured-for-the-replicated-ce-tp6343.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Any plans to enhance support for subselects on partitioned caches?

2016-07-18 Thread Alexei Scherbakov
Hi Cristi,

There is work in progress on supporting distributed joins without requiring
data collocation, but it will work only for top-level joins.

That means you'll need to rewrite a query containing a subselect into an
equivalent query with a join.
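As an illustration of such a rewrite (the table and column names are invented, and the setDistributedJoins flag assumes a later Ignite version where non-collocated join support shipped):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class SubselectToJoinSketch {
    static void query(IgniteCache<?, ?> cache) {
        // Instead of: SELECT p.* FROM Person p
        //             WHERE p.cityId IN (SELECT c.id FROM City c WHERE c.name = ?)
        // use an equivalent top-level join:
        SqlFieldsQuery joined = new SqlFieldsQuery(
            "SELECT p.* FROM Person p JOIN City c ON p.cityId = c.id WHERE c.name = ?")
            .setArgs("London")
            .setDistributedJoins(true); // non-collocated join support
        cache.query(joined).getAll();
    }
}
```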



2016-07-18 17:01 GMT+03:00 Cristi C :

> Hello,
>
> I see that subselects on partitioned caches don't return the correct
> results in the current version (the subselect is run only on the partition
> on the current node) [1]. Are there any plans to enhance this support in
> the near future?
>
> Thanks,
>Cristi
>
> [1]
>
> http://apache-ignite-users.70518.x6.nabble.com/Does-Ignite-support-nested-SQL-Queries-td1714.html#d1445954868220-448
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Any-plans-to-enhance-support-for-subselects-on-partitioned-caches-tp6344.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Store-by-reference

2016-07-18 Thread Alexei Scherbakov
Hi Andi,

Ignite doesn't support storing values by reference without serializing
them first, because of its distributed nature.

So setting the storeByValue property has no effect.

To avoid deserialization on every read you should use:

CacheConfiguration.setCopyOnRead(false)
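For example (a minimal sketch; the cache name and types are illustrative, and it assumes callers treat the cached values as immutable, since they receive the cached instance itself):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.configuration.CacheConfiguration;

public class CopyOnReadSketch {
    static IgniteCache<Integer, String> smallHotCache(Ignite ignite) {
        CacheConfiguration<Integer, String> cfg =
            new CacheConfiguration<Integer, String>("hotCache")
                // Return the cached instance instead of a copy on each read.
                // Safe only if callers never mutate the returned values.
                .setCopyOnRead(false);
        return ignite.getOrCreateCache(cfg);
    }
}
```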

2016-07-18 11:20 GMT+03:00 AHof :

> Hi,
>
> is there any way in Ignite, to store values by reference instead of always
> serializing cached values directly on insertion?
>
> When setting CacheConfiguration.storeByValue(false) and creating a new
> cache, nothing happens. I looked a bit through the JCache documentation and
> it does not say what should happen when trying to activate the optional
> store-by-reference feature when it is not implemented by the current cache
> provider.
>
> We need store-by-reference mostly because of performance reasons. Quite
> small caches with immutable objects that are accessed very often. Always
> deserializing the values has a significant performance impact on the
> application.
>
> Kind Regards
>
> -Andi
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Store-by-reference-tp6341.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Cache Partitioned Mode

2016-07-08 Thread Alexei Scherbakov
Hi,


*1. If I have 2 nodes and do two puts on an existing empty cache, is the
data divided across the nodes (one entry per node)?*

Data partitioning is governed by the affinity function, which works on the
cache key [1].

So the answer is: it may or may not be, depending on the key.


*2. Can I manually choose the node on which I want to do a put, and how?*

Yes, you can. Use the @AffinityKeyMapped annotation on some field of the
key, use AffinityKey as the cache key, or implement AffinityKeyMapper [2].

[1]
https://apacheignite.readme.io/docs/affinity-collocation#affinity-function
[2] https://apacheignite.readme.io/docs/affinity-collocation
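A sketch of the annotation approach (the class and field names are invented): keys sharing the same affinity-key field value map to the same partition, so you choose the target node indirectly through the key.

```java
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class OrderKey {
    private final long orderId;

    // All OrderKey instances with the same customerId map to the same
    // partition, and therefore to the same primary node.
    @AffinityKeyMapped
    private final long customerId;

    public OrderKey(long orderId, long customerId) {
        this.orderId = orderId;
        this.customerId = customerId;
    }
}
```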


2016-07-08 13:25 GMT+03:00 daniel07 :

> Hi
> As far as I understood from following (on case of PARTITIONED mode)
>  http://apacheignite.gridgain.org/docs/cache-modes#partitioned-mode
>  my data is divided equally into partitions .
> Now questions
> 1. If I have 2 nodes and do two puts on an existing empty cache, is the
> data divided across the nodes (one entry per node)?
> 2. Can I manually choose the node on which I want to do a put, and how?
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Cache-Partitioned-Mode-tp6172.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: RE: RE: ignite group indexing not work problem

2016-07-08 Thread Alexei Scherbakov
Hi, Kevin.

1. I've created an issue, https://issues.apache.org/jira/browse/IGNITE-3453,
for this and started a discussion on the dev list.
You can watch the issue to be immediately notified of task progress.

As a workaround, try to avoid common fields in index definitions.

2. How do you measure performance?

3. You should run the query several times before measuring the result, to
allow the JVM to warm up. This is common benchmarking practice.

2016-07-08 5:19 GMT+03:00 Zhengqingzheng :

> Hi Alexei,
>
> I have tried to use only one group of index, and it works for me. However,
> there are 3 issues:
>
> 1.   I still need to define 3 different groups of indexes to speed up all
> possible queries. But in this case, only one group index is allowed. How do
> I define the other group indexes without problems?
>
> 2.   The SQL query time is about 16 ms on the local machine. That seems
> too slow, as I have tested other SQL queries on the local machine, and
> their query time is about 1 to 2 ms.
>
> 3.   It seems that no matter whether I use get or SQL query methods, the
> first query is always slower than the rest.
>
>
>
>
>
> And the explain output for my SQL with only one group index is listed as
> follows (I call the SQL query twice; the first run shows the explain info):
>
>
>
> Time:78;  result1:[[SELECT
>
> GUID AS __C0
>
> FROM "com.huawei.soa.ignite.model.IgniteMetaIndex_Cache".IGNITEMETAINDEX
>
> /*
> "com.huawei.soa.ignite.model.IgniteMetaIndex_Cache"."objId_fieldNum_stringValue":
> STRINGVALUE = ?3
>
> AND OBJID = ?1
>
> AND FIELDNUM = ?2
>
>  */
>
> WHERE (STRINGVALUE = ?3)
>
> AND ((OBJID = ?1)
>
> AND (FIELDNUM = ?2))], [SELECT
>
> __C0 AS GUID
>
> FROM PUBLIC.__T0
>
> /* "com.huawei.soa.ignite.model.IgniteMetaIndex_Cache"."merge_scan"
> */]]
>
> ---
>
> result size:1
>
> ---
>
> Time used in index cache query: 16ms
>
> Time:16;  result2:[50116926]
>
>
>
>
>
> Best regards,
>
> Kevin
>
>
>
> *From:* Alexei Scherbakov [mailto:alexey.scherbak...@gmail.com]
> *Sent:* 7 July 2016 0:59
> *To:* user
> *Subject:* Re: RE: ignite group indexing not work problem
>
>
>
> Hi, Kevin.
>
>
>
> Have you defined other composite indexes including the objid and fieldnum
> fields?
>
> If yes, try to disable them and check whether the query picks up the
> correct index.
>
>
>
>
>
> 2016-07-05 15:45 GMT+03:00 Zhengqingzheng :
>
> Hi Alexei,
>
> Indexing not work again. This time I print the query plan:
>
>
>
> [SELECT
>
> GUID AS __C0
>
> FROM "com.huawei.soa.ignite.model.IgniteMetaIndex_Cache".IGNITEMETAINDEX
>
> /*
> "com.huawei.soa.ignite.model.IgniteMetaIndex_Cache"."objId_fieldNum_numValue":
> OBJID = ?1
>
> AND FIELDNUM = ?2
>
>  */
>
> WHERE (STRINGVALUE = ?3)
>
> AND ((OBJID = ?1)
>
> AND (FIELDNUM = ?2))], [SELECT
>
> __C0 AS GUID
>
> FROM PUBLIC.__T0
>
> /* "com.huawei.soa.ignite.model.IgniteMetaIndex_Cache"."merge_scan" */]
>
>
>
> It seems that the group index is broken: I can see that only objId and
> fieldnum are used to check the index, although I defined three columns
> (objid, fieldnum and stringvalue).
>
> And the order is incorrect.
>
> My query is :
>
> String qryStr = " explain select guid from IgniteMetaIndex where  objid=?
> and fieldnum=? and stringvalue=? ";
>
> …
>
> fieldQry.setArgs( "a1162", 7 ,"18835473249");
>
>
>
> hope you can help me.
>
>
>
>
>
> Best regards,
>
> Kevin
>
> *From:* Alexei Scherbakov [mailto:alexey.scherbak...@gmail.com]
> *Sent:* 29 June 2016 0:30
> *To:* user
> *Subject:* Re: ignite group indexing not work problem
>
>
>
> Hi,
>
>
>
> I've tried the provided sample and found that instead of using the
> oId_fNum_num index, the H2 engine prefers oId_fNum_date,
>
> thus preventing the condition on the num field from using the index.
>
>
>
> I think it's incorrect behavior.
>
>
>
> Could you disable oId_fNum_date, execute the query again, and provide me
> with the query plan and execution time?
>
>
>
> You can get query plan from build-in H2 console. Read more about how to
> setup console here [1]
>
>
>
> [1] https://apacheignite.readme.io/docs/sql-queries#using-h2-debug-console
>
>
>
>
>
>
>
>
>
> 2016-06-28 6:08 GMT+03:00 Zhengqingzheng :
>
> Hi there,
>
>

Re: Count distinct not working?

2016-07-07 Thread Alexei Scherbakov
Cristi,

I was able to reproduce it as well and haven't heard of anything similar.

Created an issue: https://issues.apache.org/jira/browse/IGNITE-3448

You can watch it to be notified when it is fixed.

One possible workaround is to run the query
SELECT 1 FROM T1 GROUP BY C1 and count the result-set items.
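Sketched with the Java API (the cache wiring is assumed; the table and column names come from the thread):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class CountDistinctWorkaround {
    static long countDistinctC1(IgniteCache<?, ?> cache) {
        // GROUP BY is split correctly across nodes, so the number of
        // returned groups equals the number of distinct C1 values.
        return cache.query(new SqlFieldsQuery("SELECT 1 FROM T1 GROUP BY C1"))
                    .getAll()
                    .size();
    }
}
```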


2016-07-07 15:44 GMT+03:00 Cristi C :

> Hey Alexei,
> Thanks for the quick response.
> Yes, I confirmed that the result is incorrect. The result of the query
> with count is different from the number of elements returned for the same
> query without count (and the latter query is split correctly into
> map/reduce queries; it performs distinct in both phases).
>
> Thanks,
>Cristi
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Count-distinct-not-working-tp6144p6147.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Count distinct not working?

2016-07-07 Thread Alexei Scherbakov
Hi,

Have you confirmed incorrect query results, or is it just your assumption
based on the query plan?

2016-07-07 14:33 GMT+03:00 Cristi C :

> Hello,
> I was looking into how Ignite performs SQL queries across multiple nodes
> and
> I ran into a possible issue. It looks to me like COUNT DISTINCT is not
> working properly. I haven't found anything in Jira or in the mailing list
> about this so I decided to ask here first.
> My code is something like:
> QueryCursor> results = cache.query(new SqlFieldsQuery("SELECT
> COUNT(DISTINCT C1) from T1"));
> results.getAll();
>
> This is split into the following map and reduce queries:
> SELECT COUNT(DISTINCT REGION) __C0 FROM "S".T1
> SELECT CAST(SUM(__C0) AS BIGINT) __C0 FROM PUBLIC.__T0
> As I see it, for this to be correct, the queries should be something like:
> SELECT DISTINCT REGION __C0 FROM "S".T1
> SELECT COUNT(DISTINCT __C0) __C0 FROM PUBLIC.__T0
>
> Is this a known issue? (Has it been reported and I just missed it?)
> Do you know of any other cases where queries across a distributed cache
> would not be split into the correct map/reduce queries?
>
> Thanks,
>Cristi
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Count-distinct-not-working-tp6144.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Handling external updates to persistence store

2016-07-07 Thread Alexei Scherbakov
Hi,

To force Ignite to reload a value from the CacheStore, it's enough to
remove the corresponding key.
Commit the changes to the database and then remove the corresponding keys;
that should do the job.

But the better and more reliable way would be, in my opinion, to do all
writes to the store through Ignite only.
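In code the invalidation step could look roughly like this (the key set is a placeholder). One caveat to verify for your version: in a write-through cache, remove() also calls through to the CacheWriter, so clearAll(), which evicts entries without touching the store, may be the safer call here.

```java
import java.util.Set;

import org.apache.ignite.IgniteCache;

public class ExternalUpdateSketch {
    // After an external application commits changes to the RDBMS, evicting
    // the affected keys makes the next read go through the read-through
    // CacheStore and fetch the fresh values.
    static void invalidate(IgniteCache<Long, Object> cache, Set<Long> changedKeys) {
        cache.clearAll(changedKeys); // evicts without invoking the CacheWriter
    }
}
```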






2016-07-07 13:49 GMT+03:00 pragmaticbigdata :

> We are using ignite version 1.6. Our cache is configured as write through
> for
> data consistency reasons. What are the different ways to update the ignite
> cache when the rdbms store is updated by other applications?
> One option would be write a hook that calls the data load api's of ignite
> (CacheStore or IgniteDataStreamer ?) to update the ignite cache. In this
> case would ignite again try to update the rdbms store since the cache is
> configured for write through? If so, is there a way to avoid it?
>
> Thanks
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Handling-external-updates-to-persistence-store-tp6143.html
>



-- 

Best regards,
Alexei Scherbakov


Re: 答复: ignite group indexing not work problem

2016-07-06 Thread Alexei Scherbakov
Hi, Kevin.

Have you defined other composite indexes that include the objid and fieldnum fields?
If so, try disabling them and check whether the query picks up the correct index.


2016-07-05 15:45 GMT+03:00 Zhengqingzheng :

> Hi Alexei,
>
> Indexing not work again. This time I print the query plan:
>
>
>
> [SELECT
>
> GUID AS __C0
>
> FROM "com.huawei.soa.ignite.model.IgniteMetaIndex_Cache".IGNITEMETAINDEX
>
> /*
> "com.huawei.soa.ignite.model.IgniteMetaIndex_Cache"."objId_fieldNum_numValue":
> OBJID = ?1
>
> AND FIELDNUM = ?2
>
>  */
>
> WHERE (STRINGVALUE = ?3)
>
> AND ((OBJID = ?1)
>
> AND (FIELDNUM = ?2))], [SELECT
>
> __C0 AS GUID
>
> FROM PUBLIC.__T0
>
> /* "com.huawei.soa.ignite.model.IgniteMetaIndex_Cache"."merge_scan" */]
>
>
>
> It seems that the group index is broken, as I can see only objId and
> fieldnum are used to check the index, however, I defined three columns:
> objid, fieldnum and stringvalue
>
> And the order is incorrect.
>
> My query is :
>
> String qryStr = " explain select guid from IgniteMetaIndex where  objid=?
> and fieldnum=? and stringvalue=? ";
>
> …
>
> fieldQry.setArgs( "a1162", 7 ,"18835473249");
>
>
>
> hope you can help me.
>
>
>
>
>
> Best regards,
>
> Kevin
>
> *发件人:* Alexei Scherbakov [mailto:alexey.scherbak...@gmail.com]
> *发送时间:* 2016年6月29日 0:30
> *收件人:* user
> *主题:* Re: ignite group indexing not work problem
>
>
>
> Hi,
>
>
>
> I've tried the provided sample and found that instead of using the
> oId_fNum_num index, the H2 engine prefers oId_fNum_date,
>
> thus preventing the condition on the num field from using an index.
>
>
>
> I think it's incorrect behavior.
>
>
>
> Could you disable oId_fNum_date, execute the query again and provide me
> with the query plan and execution time ?
>
>
>
> You can get query plan from build-in H2 console. Read more about how to
> setup console here [1]
>
>
>
> [1] https://apacheignite.readme.io/docs/sql-queries#using-h2-debug-console
>
>
>
>
>
>
>
>
>
> 2016-06-28 6:08 GMT+03:00 Zhengqingzheng :
>
> Hi there,
>
> My  ignite in-memory sql query is very slow. Anyone can help me to figure
> out what was wrong?
>
>
>
> I am using group indexing to speed up in-memory sql queries. I notice that
> my sql query took 2274ms (data set size: 10Million, return result:1).
>
>
>
> My query is executed as:
>
> String qryStr = "select * from UniqueField where oid= ? and fnum= ? and
> num= ?";
>
>
>
> String oId="a343";
>
> int fNum = 3;
>
> BigDecimal num = new BigDecimal("51002982136");
>
>
>
> IgniteCache cache =
> igniteMetaUtils.getIgniteCache(IgniteMetaCacheType.UNIQUE_INDEX);  // to
> get selected cache ,which has been created in some other place
>
>
>
> SqlQuery qry = new SqlQuery(UniqueField.class, qryStr);
>
> qry.setArgs(oId, fNum, num);
>
> long start = System.currentTimeMillis();
>
> List result= cache.query(qry).getAll();
>
> long end = System.currentTimeMillis();
>
> System.out.println("Time used in query :"+ (end-start)+"ms");
>
>
>
> And the result shows: Time used in query :2274ms
>
>
>
> I have set group indexes, and the model is defined as:
>
> import java.io.Serializable;
>
> import java.math.BigDecimal;
>
> import java.util.Date;
>
>
>
> import org.apache.ignite.cache.query.annotations.QuerySqlField;
>
>
>
> public class UniqueField implements Serializable
>
> {
>
>
>
> @QuerySqlField
>
> private String orgId;
>
>
>
> @QuerySqlField(
>
> orderedGroups={
>
> @QuerySqlField.Group(
>
> name="oId_fNum_ msg ", order=1, descending = true),
>
> @QuerySqlField.Group(
>
> name="oId_fNum_ num ", order=1, descending =
> true),
>
> @QuerySqlField.Group(
>
> name="oId_fNum_ date ", order=1, descending = true)
>
>
>
> })
>
> private String oId;
>
>
>
> @QuerySqlField(index=true)
>
> private String gId;
>
>
>
>  @QuerySqlField(
>
> orderedGroups={
>
> @QuerySqlField.Group(
>
> name="oId_fNum_ msg ", order=2, descending = true),
>
> @QuerySqlField.Group(
>
>

Re: Iterating through a BinaryObject cache fails

2016-07-05 Thread Alexei Scherbakov
Hi,

This is because changes are not reflected in the cache without an explicit put.

You must use a put operation to correctly update the cache entry.
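
A sketch of the point above, in the binary-object style used elsewhere in
this thread (the cache name and field name are illustrative):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

public class UpdateBinaryEntry {
    static void addField(IgniteCache<Long, BinaryObject> cache, long key) {
        BinaryObject current = cache.get(key);

        // Building a modified copy does not change the cache by itself.
        BinaryObject updated = current.toBuilder()
            .setField("newField", 0)
            .build();

        // Only the explicit put makes the change visible to the cache
        // (and to the SQL engine).
        cache.put(key, updated);
    }
}
```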


2016-07-04 14:46 GMT+03:00 pragmaticbigdata :

> Ok. Thanks for sharing the internals.
>
> By specifying the withKeepBinary flag, I was able to iterate through the
> cache and add or drop fields from a BinaryObject at runtime. This was the
> original purpose of iterating through the cache.
>
> The changes (of adding and/or dropping a field) made to the cache are not
> reflecting on h2 debug console. I am able to query the newly added field
> but
> I cannot see the changes on h2.
>
> Can you suspect what could be the issue ?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Iterating-through-a-BinaryObject-cache-fails-tp6038p6076.html
>



-- 

Best regards,
Alexei Scherbakov


Re: Error while loading data into cache with BinaryObject as key field

2016-07-05 Thread Alexei Scherbakov
Hi,

1. Yes, Ignite works that way. The value type name is used as the table name
when querying either key or value fields.

2. I'll try to give a more detailed answer.

The query engine is also responsible for indexing.

When data is saved, the query engine first decides which fields to index
using the information provided in QueryEntity.

Second, the field values from the previous step are extracted from the
BinaryObject and used for updating the indexes.

Yes, every table has a _key_PK index based on the cache key.
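
A configuration-only sketch of how the query engine learns which fields to
index (the type and field names here are assumptions, not from the thread):

```java
import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class QueryEntityConfig {
    static CacheConfiguration<Object, Object> personCacheConfig() {
        // Key type name, value type name (the value type becomes the table).
        QueryEntity entity = new QueryEntity("PersonKey", "Person");

        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("id", "java.lang.Long");
        fields.put("name", "java.lang.String");
        entity.setFields(fields);

        // "name" gets a secondary index; the _key_PK index on the cache
        // key is created implicitly for every table.
        entity.setIndexes(Collections.singletonList(new QueryIndex("name")));

        CacheConfiguration<Object, Object> ccfg =
            new CacheConfiguration<>("personCache");
        ccfg.setQueryEntities(Collections.singletonList(entity));
        return ccfg;
    }
}
```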


2016-07-05 15:08 GMT+03:00 pragmaticbigdata :

> Any comments here?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Error-while-loading-data-into-cache-with-BinaryObject-as-key-field-tp6014p6105.html
>



-- 

Best regards,
Alexei Scherbakov


Re: Iterating through a BinaryObject cache fails

2016-07-04 Thread Alexei Scherbakov
Hi,

You should always use withKeepBinary if you plan to work with a cache
containing a BinaryObject key or value (including queries). The flag is used
internally to prevent deserialization from the binary representation to an
actual class instance.

So the answers are:

1. Yes, if you want to load binary objects into a cache.
2. Ignite will try to deserialize.
3. Because without withKeepBinary, Ignite has to deserialize the key/value
on each iteration.
4. Internally Ignite stores all data as binary objects and only
deserializes on demand, e.g. on a get operation.
5. No, the storeKeepBinary property is only related to the CacheStore
implementation.

I recommend reading [1] to learn more about BinaryObjects

[1] https://apacheignite.readme.io/docs/binary-marshaller
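
The difference between the two views can be sketched as follows (the cache
and class names are illustrative):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

public class KeepBinaryViews {
    static String readName(IgniteCache<Integer, Person> typed, int key) {
        // Deserializing view: get() materializes a Person instance,
        // so the Person class must be on the local classpath.
        Person p = typed.get(key); // full deserialization happens here

        // Binary view over the same data: no deserialization happens,
        // fields are read straight from the binary representation.
        IgniteCache<Integer, BinaryObject> binary = typed.withKeepBinary();
        BinaryObject bo = binary.get(key);
        return bo.field("name");
    }

    static class Person {
        String name;
    }
}
```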



2016-07-03 15:17 GMT+03:00 pragmaticbigdata :

> Ok. The test case worked after I specified "withKeepBinary()" when fetching
> the cache instance before iterating through it.
>
> I have following questions in order to understand to make sure I understand
> the internals of how it works.
>
> 1. Should I specify withKeepBinary() when creating the cache instance for
> loading it?
> 2. I didn't follow the reason why I should specify withKeepBinary() when
> fetching the cache for querying since the original cache instance itself is
> of the type IgniteCache. Does ignite try to
> deserialize it when I don't specify withKeepBinary()? If so, do you mean
> the
> deserialization of the field & type names (since they are converted to hash
> values) and the values?
> 3. Why does iterating through the cache fail when I don't specify
> withKeepBinary()? Shouldn't iterating through the cache succeed
> irrespective
> of whether the cache is serialized or deserialized? I am specifically
> talking about the case when the cache is of type BinaryObject.
> 4. Does ignite maintain two copies of the cache internally - one serialized
> and another deserialized and return the appropriate one based on the
> withKeepBinary flag?
> 5. If I specify config.setStoreKeepBinary(true); at the time of cache
> creation I would not have to specify withKeepBinary() every time I retrieve
> the cache instance. Is my understanding right?
>
>
> Thanks!
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Iterating-through-a-BinaryObject-cache-fails-tp6038p6061.html
>



-- 

Best regards,
Alexei Scherbakov


Re: Error while loading data into cache with BinaryObject as key field

2016-07-04 Thread Alexei Scherbakov
Hi,

1. I see nothing strange here. Please explain in more detail.

2. Because the query engine uses the binary type descriptor to update SQL
indices for incoming data.


2016-07-03 20:16 GMT+03:00 pragmaticbigdata :

> Thanks for pinpointing the issue. The issue got resolved after setting a
> custom key type. The data got loaded into the cache.
>
> 1. The strange part is that even if I set different key and value types the
> below query executes correctly
>
> SqlQuery query = new
> SqlQuery<>(table.getCacheValueType(), "F1 = 'ABC' AND F2 = 'XYZ'");
> try(QueryCursor> cursor
> =
> cache.query(query)) {
> logger.info("No of entries : {}", cursor.getAll().size());
>
> }
>
> Here F1 is a field from the key object while F2 is a field from the value
> object. The cache is of type IgniteCache
>
> 2. Even though the previous data load got resolved on setting the correct
> keytype of the querytype, I didn't follow why does the query engine kick in
> when I am try to load the data using the data streamer api? How does
> deserialization come in, isn't it suppose to just do serialization when
> loading the data into the cache?
>
> Kindly let me your inputs. It will help in understanding whats happening
> behind the scenes.
>
> Thanks.
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Error-while-loading-data-into-cache-with-BinaryObject-as-key-field-tp6014p6062.html
>



-- 

Best regards,
Alexei Scherbakov


Re: Iterating through a BinaryObject cache fails

2016-07-01 Thread Alexei Scherbakov
Hi,

Is this question somehow related to your previous question, entitled "Error
while loading data into cache with BinaryObject as key field"?

I see the same exception there.

How do you fill the cache with data?



2016-07-01 11:39 GMT+03:00 pragmaticbigdata :

> I am using ignite version 1.6 and I have a replicated cache of BinaryObject values pre-loaded. In order to try out the dynamic structure change
> ability with BinaryObjects, I tried iterating through the cache with
> different approaches. All of them fail with an error
>
> Caused by: class org.apache.ignite.IgniteCheckedException: Class definition
> was not found at marshaller cache and local file.
>
> Iterator Approach
>
> IgniteCache cache =
> ignite.getOrCreateCache(cacheName);
>
> Iterator> iterator =
> cache.iterator();
> while (iterator.hasNext()) {
>
> }
>
>
> ForEach
>
> cache.forEach(entry -> {
> BinaryObjectBuilder builder = entry.getValue().toBuilder();
> builder.setField(fieldToBeAdded, 0);
> updatedCache.put(entry.getKey(), builder.build());
> });
>
> cache.putAll(updatedCache);
>
>
> Query Approach
>
> SqlQuery query = new SqlQuery BinaryObject>(table.getCacheValueType(), "CUSTOMER <> 'X'");
> try(QueryCursor> cursor =
> cache.query(query)) {
> logger.info("No of entries : {}", cursor.getAll().size());
>
> }
>
> The error trace is  here <http://pastebin.com/AV6kd2Ka>
>
> If I tried to query specific objects I am able to do it with the query
> approach but iterating through all the elements of the cache is failing.
>
> What am I missing?
>
> Thanks
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Iterating-through-a-BinaryObject-cache-fails-tp6038.html
>



-- 

Best regards,
Alexei Scherbakov


Re: SQL Query on REST APIs

2016-06-30 Thread Alexei Scherbakov
Hi, Francesco.

Sniffing web console traffic won't help you, because the agent connects
directly to the web console server.

Do you have any antivirus or firewall software? If so, try disabling it.

You could also try to sniff what happens when you send a query request
to the REST server.

2016-06-29 9:29 GMT+03:00 Francesco :

> Hi Alexei,
>
> I should be now properly subscribed.
>
> About your answer:
>
> 1) My cache has about 3000 entries (3 thousand).
>
> 2) I've not tried the client mode. Meanwhile I've tried the H2 console and,
> from remote, it doesn't work. But using it on a local machine it works. (My
> program is running on an AWS server, so I've configured H2 server to allow
> external connections).
>
> 3) There are no errors in the server log.
>
>
> Anyway I have a new:
>
> I've updated my cache configuration with 1 backup level per-node, and I've
> accidentally noticed that now I can perform sql queries from gridgain
> webconsole properly.
> But I'm still not able to do this using rest APIs.
>
> Maybe is crazy, but I'm thinking I could analyse with a packet sniffer my
> network traffic to see exactly what request is sent from webconsole to my
> ignite cluster (or to the webagent, I don't know how do the webconsole
> requests work).
>
> Do you think it could be usefull?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/SQL-Query-on-REST-APIs-tp4815p5985.html
>



-- 

Best regards,
Alexei Scherbakov


Re: Error while loading data into cache with BinaryObject as key field

2016-06-30 Thread Alexei Scherbakov
Hi,

1. The type name can be the actual class name if you plan to work with the
cache value as a Java object.
2. Avoid names reserved as SQL keywords.
3. Put the key columns into the key binary object. To fetch everything
use a SqlFieldsQuery like:
select _key, _val from ...
or use SqlQuery
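
A sketch of option 3 with SqlFieldsQuery and the implicit _key/_val columns
(the table and field names are illustrative):

```java
import java.util.List;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class KeyValQuery {
    static void printAll(IgniteCache<?, ?> cache) {
        SqlFieldsQuery qry =
            new SqlFieldsQuery("select _key, _val from Person where name = ?")
                .setArgs("Alice");

        try (QueryCursor<List<?>> cursor = cache.query(qry)) {
            for (List<?> row : cursor) {
                Object key = row.get(0); // the whole key object
                Object val = row.get(1); // the whole value object
                System.out.println(key + " -> " + val);
            }
        }
    }
}
```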

2016-06-30 16:11 GMT+03:00 pragmaticbigdata :

> I realized that the error disappears if I hard code the typeName while
> constructing the BinaryObjectBuilder.
>   ignite.binary().builder("ConstantString");
>
> Few questions based on this
> 1. What is the significance of the typeName except the fact that it is used
> while querying?
> 2. What are the guidelines for specifying typeNames?
> 3. In my use case I have few columns that are keyColumns while others are
> not (I am loading a table into ignite). Do I need to have all column values
> part of the value object in order to have the ability to execute a single
> sql query to fetch them? Is there a better way to build the cache object in
> order to have a single query to fetch all the column values of the object
> (that are part of the cache key and value)?
>
> Thanks!
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Error-while-loading-data-into-cache-with-BinaryObject-as-key-field-tp6014p6019.html
>



-- 

Best regards,
Alexei Scherbakov


Re: Ignite work directory usage?

2016-06-29 Thread Alexei Scherbakov
Hi,

Ignite uses the work directory as a relative directory for internal write
activities, for example logging.

Every Ignite node (server or client) has its own work directory,
independent of other nodes.

Generally you should not change it without a reason.

Was this helpful?

Maybe you have a more specific problem related to the topic title?


2016-06-28 3:40 GMT+03:00 bintisepaha :

> Hi,
>
> Could someone explain the use of work directory? How does it work for
> client
> and server?
> Do they need to have access to the same directory?
>
> There is not much documentation on it.
>
> Thanks,
> Binti
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-work-directory-usage-tp5936.html
>



-- 

Best regards,
Alexei Scherbakov


Re: ignite group indexing not work problem

2016-06-28 Thread Alexei Scherbakov
> }
>
>
>
> public void setOId(String oId)
>
> {
>
> this.oId = oId;
>
> }
>
>
>
> public String getGid()
>
>     {
>
> return gId;
>
> }
>
>
>
> public void setGuid(String gId)
>
> {
>
> this.gId = gId;
>
> }
>
>
>
> public int getFNum()
>
> {
>
> return fNum;
>
> }
>
>
>
> public void setFNum(int fNum)
>
> {
>
> this.fNum = fNum;
>
> }
>
>
>
> public String getMsg()
>
> {
>
> return msg;
>
> }
>
>
>
> public void setMsg(String msg)
>
> {
>
> this.msg = msg;
>
> }
>
>
>
> public BigDecimal getNum()
>
> {
>
> return num;
>
> }
>
>
>
> public void setNum(BigDecimal num)
>
> {
>
> this.num = num;
>
> }
>
>
>
> public Date getDate()
>
> {
>
> return date;
>
> }
>
>
>
> public void setDate(Date date)
>
> {
>
> this.date = date;
>
> }
>
>
>
> }
>
>
>
>
>
>
>
>
>



-- 

Best regards,
Alexei Scherbakov


Re: SQL Query on REST APIs

2016-06-28 Thread Alexei Scherbakov
Hi, Francesco.

Please properly subscribe to the user mailing list so community members can
see your questions as soon as possible and provide answers quicker. All you
need to do is send an email to user-subscr...@ignite.apache.org and follow
the simple instructions in the reply.

How many entries do you have in the cache?
Have you tried to run the query in client mode or in the H2 console? [1]
Are there any errors in the server log?

[1] https://apacheignite.readme.io/docs/sql-queries#using-h2-debug-console




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SQL-Query-on-REST-APIs-tp4815p5964.html


Re: How to call loadCache before node is started ?

2016-06-24 Thread Alexei Scherbakov
Hi, Kristian.

It looks like an issue which was recently fixed.
Try setting CacheRebalanceMode.ASYNC.
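
A configuration sketch of that suggestion (the cache name and types are
illustrative):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class AsyncRebalanceConfig {
    static CacheConfiguration<String, String> replicatedAsync(String name) {
        CacheConfiguration<String, String> ccfg = new CacheConfiguration<>(name);
        ccfg.setCacheMode(CacheMode.REPLICATED);
        // Rebalance in the background instead of blocking cache start.
        ccfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
        return ccfg;
    }
}
```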



2016-06-22 14:54 GMT+03:00 Kristian Rosenvold :

> If I fill the newly created Replicated cache with IgniteDataStreamer,
> replication works as expected. If I use loadCache, the result is
> arbitrary on other nodes. This has to be a bug... ??
>
>
> Complete miniature easy-to-run test project at
> https://github.com/krosenvold/ignite-loadCache-failure-demo, should be
> reproducable in 5 minutes or less :)
>
> Kristian
>
>
> 2016-06-22 9:03 GMT+02:00 Kristian Rosenvold :
> > I have created a testcase that quite consistently reproduces this
> > problem on my mac;
> > https://gist.github.com/krosenvold/fa20521ad121a0cbb4c6ed6be91452e5
> >
> > If you start two processes with this main method, and stagger startup
> > of the second process by about 10 seconds, the second process will
> > almost never achieve any kind of consistent cache. This just does not
> > make any sense with a cache that is replicated ?!?
> >
> > Kristian
> >
> >
> > 2016-06-21 9:28 GMT+02:00 Kristian Rosenvold :
> >> 2016-06-20 10:27 GMT+02:00 Alexei Scherbakov <
> alexey.scherbak...@gmail.com>:
> >>> Hi,
> >>>
> >>> You should not rely on cache.size method for checking data consistency.
> >>> cache.size skips keys which is not currently loaded by read-through
> >>> behavior.
> >>
> >> Sorry for not mentioning that this is CacheMode.REPLICATED and
> >> CacheRebalanceMode.SYNC. I really do expect the sizes to be fairly
> >> consistent, or is there some other mechanism I should be using to get
> >> a consistent view of the cache ?
> >>
> >> Kristian
>



-- 

Best regards,
Alexei Scherbakov


Re: schema import example ClassNotFoundException: org.apache.ignite.schema.H2DataSourceFactory

2016-06-23 Thread Alexei Scherbakov
Hi,

I was not able to reproduce your problem using simple code like:
public static void main(String[] args) throws ClassNotFoundException,
SQLException {

// Register JDBC driver.
Class.forName("org.apache.ignite.IgniteJdbcDriver");

Connection conn =
DriverManager.getConnection("jdbc:ignite:cfg://cache=PersonCache@config/ignite-jdbc.xml");

PreparedStatement stmt = conn.prepareStatement("select firstName,
lastName from Person");

//stmt.setInt(1, 30);

ResultSet rs = stmt.executeQuery();

while (rs.next()) {
String firstName = rs.getString("firstName");
System.out.println(firstName);
}
}

I suppose you somehow created a dependency in your code on the
schema-import demo.
Working with the JDBC driver requires only the basic dependencies to be on
the classpath.



2016-06-19 14:18 GMT+03:00 xoraxax :

> In order to reproduce it you can run the schema-import example.
> Then, let's say, you can download JasperServer and add IgniteJdbcDatasource
> to the context.xml.
> In web UI you add jndi DataSource and test connection. It starts the ignite
> node and throws the exception unless I manually copy ignite-examples.jar to
> the tomcat lib folder.
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/schema-import-example-ClassNotFoundException-org-apache-ignite-schema-H2DataSourceFactory-tp5629p5741.html
>



-- 

Best regards,
Alexei Scherbakov


Re: SQL Queries - propagate "new" CacheConfiguration.queryEntities over the cluster on an already started cache

2016-06-23 Thread Alexei Scherbakov
Hi,

Please properly subscribe to the user list. All you need to do is send an
email to user-subscr...@ignite.apache.org and follow the simple
instructions in the reply.

Unfortunately, DDL is not currently supported by Ignite, but it may appear
in a future release.

Probably someone on the dev list knows more about that.





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SQL-Queries-propagate-new-CacheConfiguration-queryEntities-over-the-cluster-on-an-already-started-cae-tp5802p5833.html


Re: Multiple column filtering and sorting in Apache Ignite

2016-06-22 Thread Alexei Scherbakov
Mrinal,

You must create a group index for a query like:
order by field1 asc, field2 desc

You can define such an index using annotations or QueryEntity [1]

I prefer QueryEntity for defining complex indexes.

Don't forget to check actual index usage by issuing an EXPLAIN command from
the H2 console or from code.

[1]
https://apacheignite.readme.io/docs/sql-queries#configuring-sql-indexes-using-queryentity
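
A sketch of such a group index defined via QueryEntity, matching the
ascending/descending combination above (the field names come from the
example ordering; everything else is assumed):

```java
import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;

public class SortIndexConfig {
    static QueryEntity withSortIndex(QueryEntity entity) {
        // One composite index covering "order by field1 asc, field2 desc".
        LinkedHashMap<String, Boolean> idxFields = new LinkedHashMap<>();
        idxFields.put("field1", true);  // true  = ascending
        idxFields.put("field2", false); // false = descending

        QueryIndex sortIdx = new QueryIndex();
        sortIdx.setFields(idxFields);

        entity.setIndexes(Collections.singletonList(sortIdx));
        return entity;
    }
}
```

Verify the index is actually picked up with an EXPLAIN of the ordered query.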



2016-06-22 13:08 GMT+03:00 mrinalkamboj :

> Hello Alexei,
>
> This operation is via user input, we don't directly control it. It may not
> be efficient, but still user can do the operation.
>
> Thanks,
>
> Mrinal
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Multiple-column-filtering-and-sorting-in-Apache-Ignite-tp5783p5788.html
>



-- 

Best regards,
Alexei Scherbakov


Re: Multiple column filtering and sorting in Apache Ignite

2016-06-22 Thread Alexei Scherbakov
Hi,

I suppose OrderId is a unique field, right?

If so, what's the point of sorting on both OrderId and OrderName?


2016-06-22 10:53 GMT+03:00 mrinalkamboj :

> Following is my Poco class, my query include a set of filters on few
> columns,
> which is facilitated using the QuerySqlField attribute, further there's
> multiple column sorting, for which the fields are indexed, now my
> understanding is indexing has a role only in sorting, but in this case user
> can ask for Ascending or Descending and the combination of columns like
> "OrderId Asc,OrderName desc"
>
> - What kind of index can facilitate this kind of query ?
> - Can a field be indexed for both ascending and descending ?
> - Are the group indexes required in this case  ?
>
> In my view it might not be practical to create index for all the
>
> public class OrderEntity
> {
> [QuerySqlField(IsIndexed = true)]
> public int OrderId { get; set; }
>
> [QuerySqlField(IsIndexed = true)]
> public string OrderName { get; set; }
>
> [QuerySqlField(IsIndexed = true)]
> public DateTime OrderDateTime { get; set; }
>
> [QuerySqlField(IsIndexed = true)]
> public double OrderValue { get; set; }
>
> [QuerySqlField(IsIndexed = true)]
> public string OrderAddress { get; set; }
> }
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Multiple-column-filtering-and-sorting-in-Apache-Ignite-tp5783.html
>



-- 

Best regards,
Alexei Scherbakov


Re: SqlQuery and removing/evict items from the cache

2016-06-20 Thread Alexei Scherbakov
Yes.

2016-06-20 15:33 GMT+03:00 zshamrock :

> When the items are removed from the cache explicitly or due to the eviction
> or expiration policies, does Ignite adjust the number of entries in the
> in-memory H2 database, so to keep its size in sync with the actual items in
> the cache?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/SqlQuery-and-removing-evict-items-from-the-cache-tp5754.html
>



-- 

Best regards,
Alexei Scherbakov


Re: Behind cache ScanQuery

2016-06-20 Thread Alexei Scherbakov
It does a full scan without using indexes.
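
For example, a ScanQuery visits every entry and applies its predicate
locally, never consulting the H2 indexes (the class and field names are
illustrative):

```java
import javax.cache.Cache;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class FullScanExample {
    static void scan(IgniteCache<Integer, Person> cache) {
        // The predicate runs against every cache entry on every node.
        ScanQuery<Integer, Person> qry =
            new ScanQuery<>((key, person) -> person.age > 30);

        try (QueryCursor<Cache.Entry<Integer, Person>> cur = cache.query(qry)) {
            cur.forEach(e -> System.out.println(e.getKey()));
        }
    }

    static class Person {
        int age;
    }
}
```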

2016-06-20 15:31 GMT+03:00 zshamrock :

> Hi, how the ScanQuery is implemented? Does it, as it says, scans all the
> entries in the cache, so if the there are a lot of entries in the cache
> this
> operation could take a while to complete. Correct? And H2 is not used for
> ScanQuery, as it is not SqlQuery, correct?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Behind-cache-ScanQuery-tp5753.html
>



-- 

Best regards,
Alexei Scherbakov


Re: How deploy Ignite workers in a Spark cluster

2016-06-20 Thread Alexei Scherbakov
Hi,

Have you tried providing the Spark master URL in the SparkConf instance?

I'm not a big expert on Spark, so you'd better follow the Spark docs for
troubleshooting Spark configuration problems.




2016-06-15 23:12 GMT+03:00 Paolo Di Tommaso :

> Hi,
>
> I'm using a local spark cluster made up one master and one worker.
>
> Using the version of the script that exception is not raised. But it is
> confusing me even more because that application run is not reported in the
> Spark console. It looks like it is running on the master node. Does it make
> sense? You can find the output produced at this link
> <http://pastebin.com/hAN0iWr6>.
>
> My goal is to deploy an Ignite worker in *each* Spark node available in
> the Spark cluster, deploy an hybrid application based on Spark+Ignite and
> shutdown the Ignite workers on completion.
>
> What is supposed to be the best approach to implement that.
>
>
> Thanks,
> Paolo
>
>
> On Wed, Jun 15, 2016 at 6:13 PM, Alexei Scherbakov <
> alexey.scherbak...@gmail.com> wrote:
>
>> Your understanding is correct.
>>
>> How many nodes do you have?
>>
>> Please provide full logs from the started Ignite instances.
>>
>>
>>
>> 2016-06-15 18:34 GMT+03:00 Paolo Di Tommaso :
>>
>>> OK, using `ic.close(false)` instead of `ic.close(true)` that exception
>>> is not reported.
>>>
>>> However I'm a bit confused. The close argument is named
>>> `shutdownIgniteOnWorkers` so I was thinking that is required to set it true
>>> to shutdown the Ignite daemon when the app is terminated.
>>>
>>> How it is supposed to be used that flag?
>>>
>>>
>>> Cheers,
>>> Paolo
>>>
>>>
>>> On Wed, Jun 15, 2016 at 5:06 PM, Paolo Di Tommaso <
>>> paolo.ditomm...@gmail.com> wrote:
>>>
>>>> The version is 1.6.0#20160518-sha1:0b22c45b and the following is the
>>>> script I'm using.
>>>>
>>>>
>>>> https://github.com/pditommaso/gspark/blob/master/src/main/groovy/org/apache/ignite/examples/JavaIgniteSimpleApp.java
>>>>
>>>>
>>>>
>>>> Cheers, p
>>>>
>>>>
>>>> On Wed, Jun 15, 2016 at 5:00 PM, Alexei Scherbakov <
>>>> alexey.scherbak...@gmail.com> wrote:
>>>>
>>>>> I don't think it's OK.
>>>>>
>>>>> Which Ingite's version do you use?
>>>>>
>>>>> 2016-06-15 15:35 GMT+03:00 Paolo Di Tommaso >>>> >:
>>>>>
>>>>>> Great, now it works! Thanks a lot.
>>>>>>
>>>>>>
>>>>>> I have only a NPE during the application shutdown (you can find the
>>>>>> stack trace at this link <http://pastebin.com/y0EM7qXU>). Is this
>>>>>> normal? and in any case is there a way to avoid it?
>>>>>>
>>>>>>
>>>>>> Cheers,
>>>>>> Paolo
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Jun 15, 2016 at 1:25 PM, Alexei Scherbakov <
>>>>>> alexey.scherbak...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> To automatically start Ignite nodes you must pass false parameter to
>>>>>>> 3-d IgniteContext argument like:
>>>>>>>
>>>>>>> // java
>>>>>>> SparcContext sc = ...
>>>>>>> new JavaIgniteContext<>(sc, new IgniteConfigProvider(), false);;
>>>>>>>
>>>>>>> or
>>>>>>>
>>>>>>> // scala
>>>>>>> SparcContext sc = ...
>>>>>>> new IgniteContext[String, String](sc,() ⇒ configurationClo(), false)
>>>>>>>
>>>>>>> 2016-06-15 13:31 GMT+03:00 Paolo Di Tommaso <
>>>>>>> paolo.ditomm...@gmail.com>:
>>>>>>>
>>>>>>>> Hi all,
>>>>>>>>
>>>>>>>> I'm struggling deploying an Ignite application in a Spark (local)
>>>>>>>> cluster using the Embedded deploying described at this link
>>>>>>>> <https://apacheignite-fs.readme.io/docs/installation-deployment#embedded-deployment>.
>>>>>>>>
>>>>>>>>
>>>>>>>> The documentation seems suggesting that Ignite workers are
>>>>>>>> automatically instantiated at runtime when submitting the Ignite app.
>>>>>>>>
>>>>>>>> Could you please confirm that this is the expected behaviour?
>>>>>>>>
>>>>>>>>
>>>>>>>> In my tests, when the application starts it simply hangs,
>>>>>>>> reporting this warning message:
>>>>>>>>
>>>>>>>> WARN  org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi  - Failed
>>>>>>>> to connect to any address from IP finder (will retry to join topology 
>>>>>>>> every
>>>>>>>> 2 secs): [/192.168.1.36:47500, /192.168.99.1:47500]
>>>>>>>>
>>>>>>>> It looks like there are not ignite daemons to which connect to.
>>>>>>>> Also inspecting the Spark worker log I'm unable to find any message
>>>>>>>> produced by Ignite. I'm expecting instead to find the log messages 
>>>>>>>> produced
>>>>>>>> by the ignite daemon startup.
>>>>>>>>
>>>>>>>>
>>>>>>>> Any idea what's wrong?
>>>>>>>>
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>> Paolo
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>>
>>>>>>> Best regards,
>>>>>>> Alexei Scherbakov
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Best regards,
>>>>> Alexei Scherbakov
>>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>>
>> Best regards,
>> Alexei Scherbakov
>>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: How to call loadCache before node is started ?

2016-06-20 Thread Alexei Scherbakov
Hi,

You should not rely on the cache.size method for checking data consistency.
cache.size skips keys which are not currently loaded due to the
read-through behavior.



2016-06-18 15:14 GMT+03:00 Kristian Rosenvold :

> 2016-06-18 13:02 GMT+02:00 Alexei Scherbakov  >:
> > You should be safe calling loadCache just after getOrCreate.
>
> I am testing various disaster recovery scenarios here, and I'm not
> entirely convinced this is the case.
> Our system starts 6 replicated caches, the script below outputs the
> size from each node:
>
> $ cacheStatus
> 1663837 1528370 1038204 1663837 1528370 1663759
> 1663837 1528370 1038204 95089 1528370 1663759
>
> $ cacheStatus
> 1663882 1528410 1038204 1663882 1528410 1663804
> 1663882 1528410 1038204 1663882 1101911 1663804
>
> $ cacheStatus
> 1663893 1528420 1038208 1663893 1528420 1663815
> 1634199 1528420 1038208 1663893 1528420 1663815
>
> Both nodes run the same code, getOrCreate followed by a conditional
> load. In all of the above 3 cases, the cache that is being loaded on
> the first node at the moment the second node starts does not
> synchronize properly; I can in fact control which cache gets corrupted
> by delaying the start of node 2 by X seconds. The second node does not
> enter the "load" block, presumably because it has received data from
> the first during getOrCreate.
>
> I'm using ignite-core-1.7.0-20160616.230251-11.jar.
>
> We're not using any kind of staggered startup, which might be a
> sort-of workaround.
>
> Kristian
>



-- 

Best regards,
Alexei Scherbakov


Re: How to call loadCache before node is started ?

2016-06-18 Thread Alexei Scherbakov
Hi, Kristian.

I haven't heard of anything like atomic loading of a cache.

You should be safe calling loadCache just after getOrCreate.

Every node will start loading data, skipping keys that do not belong to it.

There is no such thing as partial replication; the data is always consistent.

To improve the efficiency of the initial load you may use partition-aware
loading as described here [1].

Another approach is to use a DataStreamer on a dedicated data-loading node
and to mark some state (for example, a field in the database) when loading
is completed.

Cache users must wait until the field is set to true.

You can also use Ignite's distributed locking for that purpose [2]

[1]
https://apacheignite.readme.io/docs/data-loading#section-partition-aware-data-loading
[2] https://apacheignite.readme.io/docs/distributed-locks
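As a rough illustration of the DataStreamer-plus-lock approach: a dedicated
loader node streams the data while holding a distributed lock, and consumer
nodes acquire the same lock before first use. This is a sketch only — the
lock name "initialLoad", the cache name "personCache", and the loading loop
are assumptions, not part of the original answer:

```java
Ignite ignite = Ignition.start();

// The loader holds a distributed lock for the duration of the initial load.
IgniteLock loadLock = ignite.reentrantLock("initialLoad", true, false, true);
loadLock.lock();
try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("personCache")) {
    for (int i = 0; i < 1_000_000; i++)
        streamer.addData(i, "value-" + i);  // batched, partition-aware writes
} finally {
    loadLock.unlock();  // consumers blocked on the lock may now proceed
}
```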


2016-06-18 11:55 GMT+03:00 Kristian Rosenvold :

> Our current node startup logic includes a simple heuristic like this:
>
> final IgniteCache cache = ignite.getOrCreateCache(configuration);
> if (cache.localSize(CachePeekMode.ALL) == 0) {
>LOGGER.info("Empty cache or No-one else around in the ignite cloud,
> loading cache {} from database", configuration.getName() );
>cache.loadCache((k, v) -> true, Integer.MAX_VALUE);
> }
>
> The problem with this logic is that that cache is started after
> getOrCreateCache, and other cluster members coming up while the first
> node is starting can get partial replication. How can I make loadCache
> happen during getOrCreateCache to ensure atomic startup/loading of the
> cache ?
>
> Kristian
>



-- 

Best regards,
Alexei Scherbakov


Re: schema import example ClassNotFoundException: org.apache.ignite.schema.H2DataSourceFactory

2016-06-17 Thread Alexei Scherbakov
Hi,

Please provide the source code that throws the exception you mentioned,
so I can reproduce it.

As for the meaning of peerClassLoading: the flag only applies to task
execution via the IgniteCompute API.
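For reference, peer class loading is enabled in the Spring XML configuration
as sketched below. Note that, as said above, it only covers compute tasks, so
classes such as H2DataSourceFactory referenced by a cache store factory still
need to be on every node's classpath:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Ship task classes to remote nodes automatically (IgniteCompute only). -->
    <property name="peerClassLoadingEnabled" value="true"/>
</bean>
```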


2016-06-17 9:42 GMT+03:00 xoraxax :

> I am not quite sure what you mean.
> I followed Readme file from schema-import example which is in the ignite
> examples folder. I haven't change anything.
> I run Demo class. - Cache has been populated from database.
> Now, let's say, I want to access the cache using JDBC driver. The node
> starts, though it throws the exception.
> I have to put ignite-examples.jar to the classpath in order to get it to
> work.
> But as I understand, if peerClassLoading is turned on, it has to distribute
> my classes with no need to copy them manually.
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/schema-import-example-ClassNotFoundException-org-apache-ignite-schema-H2DataSourceFactory-tp5629p5705.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: schema import example ClassNotFoundException: org.apache.ignite.schema.H2DataSourceFactory

2016-06-16 Thread Alexei Scherbakov
Hi

I suppose you are trying to start Ignite with the generated POJOs?
If so, please attach the code that produces this output.

2016-06-14 15:36 GMT+03:00 xoraxax :

> I am trying to run schema import example and it throws me this exception on
> remote nodes.
> peerClassLoadingEnabled is set to True though it doesn't work anyway.
> Nodes work on the same PC. Two of them is started from command line, third
> is started from eclipse. They can see each other.
>
> The exception is:
> class org.apache.ignite.IgniteCheckedException: Failed to find class with
> given class loader for unmarshalling (make sure same versions of all
> classes
> are available on all nodes or enable peer-class-loading):
> sun.misc.Launcher$AppClassLoader@764c12b6
> Caused by: java.lang.ClassNotFoundException:
> org.apache.ignite.schema.H2DataSourceFactory
>
> How do you enable Classloading properly?
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/schema-import-example-ClassNotFoundException-org-apache-ignite-schema-H2DataSourceFactory-tp5629.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: SQL queries with BinaryObject

2016-06-16 Thread Alexei Scherbakov
Hi,

It's possible to do SQL queries with binary objects directly.
You need to set every field using the builder.

A fully working example is below:

public static void main(String[] args) throws IgniteException {
    Ignite start = Ignition.start("examples/config/example-ignite.xml");
    CacheConfiguration<Integer, BinaryObject> cfg = new CacheConfiguration<>();
    cfg.setQueryEntities(new ArrayList<QueryEntity>() {{
        QueryEntity e = new QueryEntity();
        e.setKeyType("java.lang.Integer");
        e.setValueType("BinaryTest");
        e.setFields(new LinkedHashMap<String, String>() {{
            put("name", "java.lang.String");
        }});
        add(e);
    }});
    IgniteCache<Integer, BinaryObject> cache =
        start.getOrCreateCache(cfg).withKeepBinary();
    BinaryObjectBuilder builder = start.binary().builder("BinaryTest");
    builder.setField("name", "Test");
    cache.put(1, builder.build());

    QueryCursor<List<?>> query = cache.query(
        new SqlFieldsQuery("select name from BinaryTest"));
    System.out.println(query.getAll());
}


2016-06-15 10:50 GMT+03:00 Eduardo Julian :

> Hi, everyone.
>
> I found out about the BinaryObject functionality for storing and
> manipulating data on Ignite (
> http://apacheignite.gridgain.org/docs/binary-marshaller).
>
> I decided to adopt it, in order to reduce the hassle during
> data-migrations.
>
> However, when I try to do SQL queries on a cache that contains data
> created by building BinaryObjects, I get errors saying that the columns I'm
> querying are missing.
>
> For example:
> Caused by: org.h2.jdbc.JdbcSQLException: Column "IS_ACTIVE" not found; SQL
> statement:
> SELECT "Cache".BinaryObject._key, "Cache".BinaryObject._val FROM
> "Cache".BinaryObject WHERE is_active = true
>
> I imagined that it was because I had migrated from using Java classes, to
> directly using BinaryObject, so I set up a QueryEntity with
> key java.lang.String, and value org.apache.ignite.binary.BinaryObject, and
> added query-fields for all of the fields I'll be using (including
> is_active), and yet, I still get the same error.
>
> I looked up on Google, but I've only found examples of ScanQuery with
> BinaryObject, but no examples of SqlQuery with BinaryObject (
> https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheQueryExample.java
> ).
>
> So I want to ask if it's at all possible to do SQL queries with
> BinaryObject, and if so, what might I be missing.
>
> Oh, and I forgot to mention that I'm using setStoreKeepBinary
> and withKeepBinary, so I don't think the problem might be with that.
>
> Thanks in advance for the help!
>
> --
>
> Eduardo Julian
> *Chief Science Officer*
> Mobile.1.809.330.4184
> Skype.  eejp1991
> www.bontix.com
>
>
>
>
> This electronic mail  and its attachments may contain confidential or
> priviledged information intended only for the address and may be protected
> by law; they should not be distributed, used or copied without
> authorization. If you have received this email in error,please notify
> immediately at 1(809) 330-4184 or send an email to our address:
> eduardo...@bontix.com  the sender and delete this message
> and its attachments. As emails may be altered, Bontix is not liable for
> messages that have been modified, changed or falsified. Thank you
>
> Este correo electrónico y sus archivos adjuntos pueden contener
> información confidencial o privilegiada destinado únicamente para la
> dirección y puede ser protegido por la ley y no deben ser distribuidos,
> utilizados o copiados sin autorización. Si usted ha recibido este mensaje
> por error, por favor notifique inmediatamente al 1(809) 330-4184 o envíe
> un correo electrónico a nuestra dirección: eduardo...@bontix.com
>  al remitente y borre este mensaje y sus anexos. Como
> correos electrónicos pueden ser alterado, Bontix no se hace responsable de
> los mensajes que se han modificado, cambiado o falsificados. Gracias
>
>


-- 

Best regards,
Alexei Scherbakov


Re: How deploy Ignite workers in a Spark cluster

2016-06-15 Thread Alexei Scherbakov
Your understanding is correct.

How many nodes do you have?

Please provide full logs from the started Ignite instances.



2016-06-15 18:34 GMT+03:00 Paolo Di Tommaso :

> OK, using `ic.close(false)` instead of `ic.close(true)` that exception is
> not reported.
>
> However I'm a bit confused. The close argument is named
> `shutdownIgniteOnWorkers` so I was thinking that is required to set it true
> to shutdown the Ignite daemon when the app is terminated.
>
> How it is supposed to be used that flag?
>
>
> Cheers,
> Paolo
>
>
> On Wed, Jun 15, 2016 at 5:06 PM, Paolo Di Tommaso <
> paolo.ditomm...@gmail.com> wrote:
>
>> The version is 1.6.0#20160518-sha1:0b22c45b and the following is the
>> script I'm using.
>>
>>
>> https://github.com/pditommaso/gspark/blob/master/src/main/groovy/org/apache/ignite/examples/JavaIgniteSimpleApp.java
>>
>>
>>
>> Cheers, p
>>
>>
>> On Wed, Jun 15, 2016 at 5:00 PM, Alexei Scherbakov <
>> alexey.scherbak...@gmail.com> wrote:
>>
>>> I don't think it's OK.
>>>
>>> Which Ingite's version do you use?
>>>
>>> 2016-06-15 15:35 GMT+03:00 Paolo Di Tommaso :
>>>
>>>> Great, now it works! Thanks a lot.
>>>>
>>>>
>>>> I have only a NPE during the application shutdown (you can find the
>>>> stack trace at this link <http://pastebin.com/y0EM7qXU>). Is this
>>>> normal? and in any case is there a way to avoid it?
>>>>
>>>>
>>>> Cheers,
>>>> Paolo
>>>>
>>>>
>>>>
>>>> On Wed, Jun 15, 2016 at 1:25 PM, Alexei Scherbakov <
>>>> alexey.scherbak...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> To automatically start Ignite nodes you must pass false parameter to
>>>>> 3-d IgniteContext argument like:
>>>>>
>>>>> // java
>>>>> SparcContext sc = ...
>>>>> new JavaIgniteContext<>(sc, new IgniteConfigProvider(), false);;
>>>>>
>>>>> or
>>>>>
>>>>> // scala
>>>>> SparcContext sc = ...
>>>>> new IgniteContext[String, String](sc,() ⇒ configurationClo(), false)
>>>>>
>>>>> 2016-06-15 13:31 GMT+03:00 Paolo Di Tommaso >>>> >:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I'm struggling deploying an Ignite application in a Spark (local)
>>>>>> cluster using the Embedded deploying described at this link
>>>>>> <https://apacheignite-fs.readme.io/docs/installation-deployment#embedded-deployment>.
>>>>>>
>>>>>>
>>>>>> The documentation seems suggesting that Ignite workers are
>>>>>> automatically instantiated at runtime when submitting the Ignite app.
>>>>>>
>>>>>> Could you please confirm that this is the expected behaviour?
>>>>>>
>>>>>>
>>>>>> In my tests the when the application starts it simply hangs,
>>>>>> reporting this warning message:
>>>>>>
>>>>>> WARN  org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi  - Failed
>>>>>> to connect to any address from IP finder (will retry to join topology 
>>>>>> every
>>>>>> 2 secs): [/192.168.1.36:47500, /192.168.99.1:47500]
>>>>>>
>>>>>> It looks like there are not ignite daemons to which connect to. Also
>>>>>> inspecting the Spark worker log I'm unable to find any message produced 
>>>>>> by
>>>>>> Ignite. I'm expecting instead to find the log messages produced by the
>>>>>> ignite daemon startup.
>>>>>>
>>>>>>
>>>>>> Any idea what's wrong?
>>>>>>
>>>>>>
>>>>>> Cheers,
>>>>>> Paolo
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Best regards,
>>>>> Alexei Scherbakov
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> Best regards,
>>> Alexei Scherbakov
>>>
>>
>>
>


-- 

Best regards,
Alexei Scherbakov


Re: How deploy Ignite workers in a Spark cluster

2016-06-15 Thread Alexei Scherbakov
This example (slightly modified) works fine for me.
https://gist.github.com/ascherbakoff/7c19da8ba568076221598d9d97e96b77

In the logs I see a problem with multicast IP discovery.

Check that multicast is enabled on your machine, or, better, use static IP
discovery [1]

[1]
https://apacheignite.readme.io/docs/cluster-config#static-ip-based-discovery
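A minimal static IP finder configuration might look like the sketch below
(the address is taken from the warning message in this thread; adjust host
and port range to your environment):

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                <property name="addresses">
                    <list>
                        <!-- Host and discovery port range of a known node. -->
                        <value>192.168.1.36:47500..47509</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>
</property>
```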

2016-06-15 18:06 GMT+03:00 Paolo Di Tommaso :

> The version is 1.6.0#20160518-sha1:0b22c45b and the following is the
> script I'm using.
>
>
> https://github.com/pditommaso/gspark/blob/master/src/main/groovy/org/apache/ignite/examples/JavaIgniteSimpleApp.java
>
>
>
> Cheers, p
>
>
> On Wed, Jun 15, 2016 at 5:00 PM, Alexei Scherbakov <
> alexey.scherbak...@gmail.com> wrote:
>
>> I don't think it's OK.
>>
>> Which Ingite's version do you use?
>>
>> 2016-06-15 15:35 GMT+03:00 Paolo Di Tommaso :
>>
>>> Great, now it works! Thanks a lot.
>>>
>>>
>>> I have only a NPE during the application shutdown (you can find the
>>> stack trace at this link <http://pastebin.com/y0EM7qXU>). Is this
>>> normal? and in any case is there a way to avoid it?
>>>
>>>
>>> Cheers,
>>> Paolo
>>>
>>>
>>>
>>> On Wed, Jun 15, 2016 at 1:25 PM, Alexei Scherbakov <
>>> alexey.scherbak...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> To automatically start Ignite nodes you must pass false parameter to
>>>> 3-d IgniteContext argument like:
>>>>
>>>> // java
>>>> SparcContext sc = ...
>>>> new JavaIgniteContext<>(sc, new IgniteConfigProvider(), false);;
>>>>
>>>> or
>>>>
>>>> // scala
>>>> SparcContext sc = ...
>>>> new IgniteContext[String, String](sc,() ⇒ configurationClo(), false)
>>>>
>>>> 2016-06-15 13:31 GMT+03:00 Paolo Di Tommaso 
>>>> :
>>>>
>>>>> Hi all,
>>>>>
>>>>> I'm struggling deploying an Ignite application in a Spark (local)
>>>>> cluster using the Embedded deploying described at this link
>>>>> <https://apacheignite-fs.readme.io/docs/installation-deployment#embedded-deployment>.
>>>>>
>>>>>
>>>>> The documentation seems suggesting that Ignite workers are
>>>>> automatically instantiated at runtime when submitting the Ignite app.
>>>>>
>>>>> Could you please confirm that this is the expected behaviour?
>>>>>
>>>>>
>>>>> In my tests the when the application starts it simply hangs, reporting
>>>>> this warning message:
>>>>>
>>>>> WARN  org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi  - Failed to
>>>>> connect to any address from IP finder (will retry to join topology every 2
>>>>> secs): [/192.168.1.36:47500, /192.168.99.1:47500]
>>>>>
>>>>> It looks like there are not ignite daemons to which connect to. Also
>>>>> inspecting the Spark worker log I'm unable to find any message produced by
>>>>> Ignite. I'm expecting instead to find the log messages produced by the
>>>>> ignite daemon startup.
>>>>>
>>>>>
>>>>> Any idea what's wrong?
>>>>>
>>>>>
>>>>> Cheers,
>>>>> Paolo
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Best regards,
>>>> Alexei Scherbakov
>>>>
>>>
>>>
>>
>>
>> --
>>
>> Best regards,
>> Alexei Scherbakov
>>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: How deploy Ignite workers in a Spark cluster

2016-06-15 Thread Alexei Scherbakov
I don't think it's OK.

Which Ignite version do you use?

2016-06-15 15:35 GMT+03:00 Paolo Di Tommaso :

> Great, now it works! Thanks a lot.
>
>
> I have only a NPE during the application shutdown (you can find the stack
> trace at this link <http://pastebin.com/y0EM7qXU>). Is this normal? and
> in any case is there a way to avoid it?
>
>
> Cheers,
> Paolo
>
>
>
> On Wed, Jun 15, 2016 at 1:25 PM, Alexei Scherbakov <
> alexey.scherbak...@gmail.com> wrote:
>
>> Hi,
>>
>> To automatically start Ignite nodes you must pass false parameter to 3-d
>> IgniteContext argument like:
>>
>> // java
>> SparcContext sc = ...
>> new JavaIgniteContext<>(sc, new IgniteConfigProvider(), false);;
>>
>> or
>>
>> // scala
>> SparcContext sc = ...
>> new IgniteContext[String, String](sc,() ⇒ configurationClo(), false)
>>
>> 2016-06-15 13:31 GMT+03:00 Paolo Di Tommaso :
>>
>>> Hi all,
>>>
>>> I'm struggling deploying an Ignite application in a Spark (local)
>>> cluster using the Embedded deploying described at this link
>>> <https://apacheignite-fs.readme.io/docs/installation-deployment#embedded-deployment>.
>>>
>>>
>>> The documentation seems suggesting that Ignite workers are automatically
>>> instantiated at runtime when submitting the Ignite app.
>>>
>>> Could you please confirm that this is the expected behaviour?
>>>
>>>
>>> In my tests the when the application starts it simply hangs, reporting
>>> this warning message:
>>>
>>> WARN  org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi  - Failed to
>>> connect to any address from IP finder (will retry to join topology every 2
>>> secs): [/192.168.1.36:47500, /192.168.99.1:47500]
>>>
>>> It looks like there are not ignite daemons to which connect to. Also
>>> inspecting the Spark worker log I'm unable to find any message produced by
>>> Ignite. I'm expecting instead to find the log messages produced by the
>>> ignite daemon startup.
>>>
>>>
>>> Any idea what's wrong?
>>>
>>>
>>> Cheers,
>>> Paolo
>>>
>>>
>>
>>
>> --
>>
>> Best regards,
>> Alexei Scherbakov
>>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: How deploy Ignite workers in a Spark cluster

2016-06-15 Thread Alexei Scherbakov
Hi,

To start Ignite nodes automatically, you must pass false as the third
IgniteContext argument, like this:

// java
SparkContext sc = ...
new JavaIgniteContext<>(sc, new IgniteConfigProvider(), false);

or

// scala
val sc: SparkContext = ...
new IgniteContext[String, String](sc, () ⇒ configurationClo(), false)

2016-06-15 13:31 GMT+03:00 Paolo Di Tommaso :

> Hi all,
>
> I'm struggling deploying an Ignite application in a Spark (local) cluster
> using the Embedded deploying described at this link
> <https://apacheignite-fs.readme.io/docs/installation-deployment#embedded-deployment>.
>
>
> The documentation seems suggesting that Ignite workers are automatically
> instantiated at runtime when submitting the Ignite app.
>
> Could you please confirm that this is the expected behaviour?
>
>
> In my tests the when the application starts it simply hangs, reporting
> this warning message:
>
> WARN  org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi  - Failed to
> connect to any address from IP finder (will retry to join topology every 2
> secs): [/192.168.1.36:47500, /192.168.99.1:47500]
>
> It looks like there are not ignite daemons to which connect to. Also
> inspecting the Spark worker log I'm unable to find any message produced by
> Ignite. I'm expecting instead to find the log messages produced by the
> ignite daemon startup.
>
>
> Any idea what's wrong?
>
>
> Cheers,
> Paolo
>
>


-- 

Best regards,
Alexei Scherbakov


Re: How can i get igniteCache data with timeout?

2016-06-15 Thread Alexei Scherbakov
Hi,

You can use Ignite's withAsync feature.

Basic usage:

IgniteCache<Integer, String> cache = ignite.cache(null).withAsync();
cache.get(key);
cache.<String>future().get(timeout);

By default there is a limit on the maximum number of concurrent async
operations, to prevent system overload, which is defined by
CacheConfiguration.setMaxConcurrentAsyncOperations.
More info here [1]

[1] https://apacheignite.readme.io/docs/async-support


2016-06-15 6:41 GMT+03:00 Jone Zhang :

> I'm looking forward to use ignite as below:
>
> IgniteCache icache = ...
>
> icache .get(key1, timeout)
> ...
> icache .get(keyn, timeout)
>
> Unfortunately it is no way.
> How can i get igniteCache data with timeout?
>
> Best Wishes.
> Thanks.
>



-- 

Best regards,
Alexei Scherbakov


Re: Simulating Graph Dependencies With Ignite

2016-06-10 Thread Alexei Scherbakov
1. An affinity function is always present.
By default it is RendezvousAffinityFunction with 1024 partitions.

2. An OutOfMemoryError is only possible when you are running out of free
heap space and the GC was not able to reclaim enough memory on an allocation
request.

Make sure you tune the GC properly, as described here [1]

[1] https://apacheignite.readme.io/docs/jvm-and-system-tuning
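For illustration, the tuning guide linked above suggests a fixed heap plus a
low-pause collector; the flags below are in that spirit, but the sizes are
illustrative only and not a recommendation for any particular workload:

```
# Fixed heap size plus low-pause collector settings (illustrative values)
-server -Xms10g -Xmx10g
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:+UseTLAB
```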


2016-06-10 14:40 GMT+03:00 pragmaticbigdata :

> Thanks Alexei for your inputs.
>
> 1. How does the EntryProcessor detect which node does the data reside given
> the key? I question it because I have configured I have PARTITIONED cache
> for which I haven't set any affinity function. It is partitioning the cache
> based on the hash function of the cached object. I didn't not follow how
> does ignite detect the partition just given the cache key?
>
> 2. I doubt that the EntryProcessor code is failing because memory is
> insufficient. I pulled out the node statistics before starting the test and
> verified that there is sufficient heap space available. Please find the
> statistics as below.
>
> visor> node
> Select node from:
>
> +===+
> | # |   Node ID8(@), IP| Node Type | Up Time  | CPUs | CPU Load
> | Free Heap |
>
> +===+
> | 0 | 9B989E4C(@n0),  | Server| 01:59:48 | 2| 0.33 %   | 89.00
> %   |
> | 1 | 1ED58F00(@n2),   | Server| 01:59:12 | 4| 0.17 %   |
> 77.00
> %   |
> | 2 | 56214422(@n3),   | Server| 01:57:42 | 2| 0.33 %   |
> 92.00
> %   |
> | 3 | A57219B6(@n1),   | Server| 01:57:25 | 4| 0.50 %   |
> 87.00
> %   |
>
>
> Could you detail on how do EntryProcessor's work? If it was suppose to
> execute on the node where the data resides why does crash or take
> exponentially more time than it would take with executing any affinity
> code?
>
> Thanks.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Simulating-Graph-Dependencies-With-Ignite-tp5282p5572.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Simulating Graph Dependencies With Ignite

2016-06-10 Thread Alexei Scherbakov
Hi, Amit.

1. Usually we talk about affinity collocation in the context of multiple
caches keeping related data on the same node.
There is nothing wrong in your understanding of how the EntryProcessor works;
it's just a special case.

2. Try to increase heap size.

3. Yes.

4. invoke operations are atomic by default. Either the value is updated or it is not.

2016-06-08 21:00 GMT+03:00 pragmaticbigdata :

> 1. The code attempts to fetch the cache entry, update it and return an
> attribute of that cache entry. Assuming it would be faster to perform this
> operation on the node where the data resides, I was trying out affinity
> collocation. Kindly correct me if my assumption is wrong.
>
> 2. I added the if check as you suggested and the code executes successfully
> if the cache is small. When I preload the cache with 1 million entries, one
> of the nodes in the cluster crashes with a "java.lang.OutOfMemoryError: GC
> overhead limit exceeded" error and after that the node on which the main
> thread was running also crashes. The logs for the node which crashes with
> OOME are  ignite-d5c3ec0c.log
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/n5538/ignite-d5c3ec0c.log
> >
> shared.
>
>
> 3. "I don't see any "affinity code" in the provided sample."
>
> I didn't follow what you meant to say here. Is my understanding of affinity
> from point 1 correct?
>
> 4. The code that updates the cache needs to be executed in a transaction. I
> was planning to add transactions as a second step in my POC after compute
> collocation works.
>
> Kindly let me know your inputs.
>
> Thanks.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Simulating-Graph-Dependencies-With-Ignite-tp5282p5538.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: "Select count(*) " returns incorrect records for distributed cache.

2016-06-10 Thread Alexei Scherbakov
Hi Rupinder

This is possible if you ran the query in local mode
(query.setLocal(true)).
Check whether that is the case.
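For illustration, the sketch below shows the setting in question ("Person" is
a placeholder value type, not from the original thread):

```java
SqlFieldsQuery qry = new SqlFieldsQuery("select count(*) from Person");

// Distributed execution (the default): the count covers all partitions.
qry.setLocal(false);

// Local execution: only this node's own partitions are counted, which on a
// 3-node PARTITIONED cache yields roughly 1/3 of the rows.
// qry.setLocal(true);
```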



2016-06-09 11:24 GMT+03:00 Rupinder Virk :

> Hi,
>
> We are using Ignite-1.5.0-final version. On a 3 node cluster we created a
> cache containing 10K records with *PARTITIONED * and *REPLICATED * mode.
>
> Why the following query returns *1/3rd records* in PARTITIONED  mode
> and *3x records* in REPLICATED  mode.
>
> SqlFieldsQuery qry11 =  new SqlFieldsQuery("select count(*) from  \"" +
> TABLE_CACHE_NAME + "\".Table t ");
>
>
> Thanks,
> Rupinder Virk
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Select-count-returns-incorrect-records-for-distributed-cache-tp5546.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Ignite : Slow Client

2016-06-07 Thread Alexei Scherbakov
1. If a client is disconnected, its event queue is dropped (except for some
internal discovery events).
2. Which query? Event queue is per connection.
3. No.
4. No.

You can read more about slow clients here [1]

[1]
https://apacheignite.readme.io/docs/clients-vs-servers#managing-slow-clients
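The slow-client limit described there is configured on the communication SPI,
for example (the queue size 1000 is an illustrative value):

```java
TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
// Disconnect clients whose outbound message queue grows beyond this size.
commSpi.setSlowClientQueueLimit(1000);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setCommunicationSpi(commSpi);
```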

2016-06-07 15:21 GMT+03:00 M Singh :

> Yes Alexie, I should have mentioned that (my bad).
>
>
> On Tuesday, June 7, 2016 5:17 AM, Alexei Scherbakov <
> alexey.scherbak...@gmail.com> wrote:
>
>
> Hi,
>
> Do you mean Ignite's Event delivery described here [1] ?
>
> [1] https://apacheignite.readme.io/docs/events
>
> 2016-06-06 13:21 GMT+03:00 M Singh :
>
> Hi:
>
> I have a few questions about slow clients:
>
> 1. If a slow client is disconnected, what happens to it's event queue ?
> 2. If there are multiple client using same query - do they share the same
> queue and if so, does each client get all the events or are the events
> shared across all clients of that queue (just like a jms queue) ?
> 3. When a slow client reconnects - does it get events from it's previous
> queue ?
> 4. If #3 is true, then during the time slow client is disconnected, is
> it's queue still gathering events while the client is trying to reconnect ?
>
> Thanks
>
>
>
>
> --
>
> Best regards,
> Alexei Scherbakov
>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: Ignite : Slow Client

2016-06-07 Thread Alexei Scherbakov
Hi,

Do you mean Ignite's Event delivery described here [1] ?

[1] https://apacheignite.readme.io/docs/events

2016-06-06 13:21 GMT+03:00 M Singh :

> Hi:
>
> I have a few questions about slow clients:
>
> 1. If a slow client is disconnected, what happens to it's event queue ?
> 2. If there are multiple client using same query - do they share the same
> queue and if so, does each client get all the events or are the events
> shared across all clients of that queue (just like a jms queue) ?
> 3. When a slow client reconnects - does it get events from it's previous
> queue ?
> 4. If #3 is true, then during the time slow client is disconnected, is
> it's queue still gathering events while the client is trying to reconnect ?
>
> Thanks
>



-- 

Best regards,
Alexei Scherbakov


Re: Ignite faster with startup order

2016-06-07 Thread Alexei Scherbakov
Hi

I already answered you on Gitter.

You should profile the application to understand the source of the slowdown.

I doubt it is Ignite-related.

2016-06-06 16:08 GMT+03:00 amitpa :

> Hi,
>
> I have an application which embeds a Apache Ignite instance in a TCP
> server.
> I have another process which starts another ignite instance.
>
> All clients request to the TCP server, which starts an Ignite Transactions
> does some inserts.
>
> I have observed a peculiar thing, when I start the TCP process first, then
> the Ignite main the application is 80% faster to do the puts.
>
> The other way round, makes the application 80% slower?
>
> Why is this so?
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-faster-with-startup-order-tp5459.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Simulating Graph Dependencies With Ignite

2016-06-07 Thread Alexei Scherbakov
Hi, Amit.

I think the object size is OK for performance.

As for the NPE, I think you should check whether the entry exists, in case
you passed a key that is not contained in the cache:

if (mutableEntry.exists()) {
   mutableEntry.setValue(...)
}

Check the javadoc for igniteCache.invoke about surrogate entries.

I don't see any "affinity code" in the provided sample.
Read here about affinity [1]

BTW, if you need to load many entries into the cache and don't require
transactions, you should use the DataStreamer API [2]

[1] https://apacheignite.readme.io/docs/affinity-collocation
[2] https://apacheignite.readme.io/docs/data-streamers
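Putting the pieces together, an invoke call with the existence check might
look like the sketch below. The Person type and its setActive/getName
accessors are assumptions standing in for the poster's own value class:

```java
String name = cache.invoke(key, (MutableEntry<Integer, Person> entry, Object... args) -> {
    if (!entry.exists())
        return null;            // surrogate entry: the key is not in the cache
    Person p = entry.getValue();
    p.setActive(true);
    entry.setValue(p);          // write the updated value back to the cache
    return p.getName();
});
```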


2016-06-07 12:56 GMT+03:00 pragmaticbigdata :

> I tuned the application by batching the cache updates and making the query
> use the index. Wasn't able to make affinity calls work.
>
> Alexei, can you please provide your inputs on the affinity code?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Simulating-Graph-Dependencies-With-Ignite-tp5282p5478.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: cluster node attribute based authorization

2016-06-07 Thread Alexei Scherbakov
Hi,

Subscribe to

EventType.EVT_NODE_FAILED
EventType.EVT_NODE_LEFT

events [1]

[1] https://apacheignite.readme.io/docs/events
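A local listener for these events might look like the sketch below; the
token-refresh call is a hypothetical placeholder for your own JWT re-issue
logic:

```java
ignite.events().localListen(evt -> {
    DiscoveryEvent discoEvt = (DiscoveryEvent)evt;
    // refreshJwtToken(discoEvt.eventNode()); // hypothetical re-issue hook
    System.out.println("Node left or failed: " + discoEvt.eventNode().id());
    return true; // keep listening for further events
}, EventType.EVT_NODE_FAILED, EventType.EVT_NODE_LEFT);
```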

2016-06-06 20:18 GMT+03:00 Anand Kumar Sankaran 
:

> All
>
>
>
> I implemented a ClusterNode.userAttributes() based authorization
> mechanism.  I used a JWT Token to establish trust (passed via the
> userAttribute). This works fine.
>
>
>
> The problem I have is that the JWT Token expires shortly after initial
> use.  Now, if the node leaves the cluster and joins it again, the JWT Token
> in that node would be invalid.
>
>
>
> How should I fix this?  Is there a callback I can implement when a node
> leaves a cluster that I can use to create a new JWT token and attach to it?
>
>
>
> Any guidance would be appreciated.
>
>
>
> --
>
> anand
>



-- 

Best regards,
Alexei Scherbakov


Re: Self Join Query As An Alternative To IN clause

2016-06-07 Thread Alexei Scherbakov
Hi

Because the H2 engine works this way.

Look carefully into the documentation [1]

[1]
https://apacheignite.readme.io/docs/sql-queries#performance-and-usability-considerations

2016-06-07 7:37 GMT+03:00 pragmaticbigdata :

> Great. The query worked now and it is 50% faster than the in clause query.
> Could you detail on the internals of why passing the object array directly
> didn't work and how did an [] inside an [] worked out?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Self-Join-Query-As-An-Alternative-To-IN-clause-tp5448p5473.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Running an infinite job? (use case inside) or alternatives

2016-06-06 Thread Alexei Scherbakov
Hi,

What about the following solution:

Create a cache, IgniteCache, where the key is a growing integer.
Assign keys using IgniteAtomicSequence [1].
Listen for cache put events.
When a put is done and the event's group id is "next", process all entries
from the cache where id < event.key.

[1] https://apacheignite.readme.io/docs/id-generator
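A sketch of this scheme is below. The AppEvent type, its isGroupComplete
flag, the cache and sequence names, and the processUpTo helper are all
assumptions, not part of the original suggestion:

```java
IgniteAtomicSequence seq = ignite.atomicSequence("eventSeq", 0, true);
IgniteCache<Long, AppEvent> events = ignite.cache("events");

// Producer side: assign a monotonically growing key to each event.
events.put(seq.incrementAndGet(), newEvent);

// Consumer side: react to puts; on a group-completion marker, process
// everything accumulated below that key.
ContinuousQuery<Long, AppEvent> qry = new ContinuousQuery<>();
qry.setLocalListener(evts -> {
    for (CacheEntryEvent<? extends Long, ? extends AppEvent> e : evts)
        if (e.getValue().isGroupComplete())
            processUpTo(e.getKey());  // hypothetical: handle all id < key
});
events.query(qry);
```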

2016-06-05 15:19 GMT+03:00 zshamrock :

> Are there features in Ignite which would support running an infinite (while
> the cluster is up and running) job? For example, continuously reading
> values
> from the distributed queue? So to implement producer/consumer pattern,
> where
> there could be multiple producers, but I want to limit number of consumers,
> ideally per specific key/group or if it is not possible, just to have one
> consumer per queue.
>
> If I asynchronously submit affinity ignite job with `queue.affinityRun()`
> what is the implication of the this job never to finish? Will it consume
> the
> thread from the ExecutorService thread pool on the running node forever
> then?
>
> To give a better a context, this is the problem I am trying to solve (maybe
> there are even other approaches to  solve it, and I am looking into the
> completely wrong direction?):
> - there are application events coming periodically (based on the
> application
> state changes)
> - I have to accumulate these events until the block of the events is
> "complete" (completion is defined by an application rule), as until the
> group is complete nothing can be done/processed
> - when the group is complete I have to process all of the events in the
> group (as one complete chunk), while still accepting new events coming for
> now another "incomplete" group
> - and repeat since the beginning
>
> So, far I came with the following solution:
> - collect and keep all the events in the distributed IgniteQueue
> - when the application notifies the completion of the group, I trigger
> `queue.affinityRun()` (as I have to do a peek before removing the event
> from
> the queue, so I want to run the execution logic on the node where the queue
> is stored, they are small and run in collocated mode, and so peek will not
> do an unnecessary network call)
> [the reason for a peek, is that even if I receive the application event of
> the group completion, due to the way events are stored (in the queue), I
> don't know where the group ends, only where it starts (head of the queue),
> but looking into the event itself, I can detect whether it is still from
> the
> same group, or already from a new incomplete group, this is why I have to
> do
> peek, as if I do poll/take first then I have to the put the element back
> into the head of the queue (which obviously is not possible, as it is a
> queue and not a dequeue), then I have to store this element/event somewhere
> else, and on the next job submitted start with this stored event as a
> "head"
> of the queue, and only then switch back to the real queue. As I don't want
> this extra complexity, I am ready to pay a price for an extra peek before
> the take]
> - implement custom CollisionSpi which will understand whether there is
> already a running job for the given queue, and if so, keeps the newly
> submitted job in the waiting list
> [here again due to the fact how events are stored (in the queue) I don't
> allow multiple jobs running against same queue at the same time, as taking
> the element from the middle of one group already processing group is
> obviously an error, so I have to limit (to 1) the number of parallel jobs
> against the given queue]
> - it also requires to submit a new ignite job (distributed closure) on the
> queue every time the application triggers/generates a completion group
> event, which requires/should schedule a queue processing (also see above on
> the overall number of the simultaneous jobs)
>
> I thought about other alternative solutions, but all of them turned out to
> be more complex, and involve more moving parts (as for example, for the
> distributed queue Ignite manages atomicity, and consistency, with other
> approaches I have to do it all manually, which I just want to minimize) and
> more logic to maintain and ensure correctness.
>
> Is there any other suitable alternative for the problem described above?
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Running-an-infinite-job-use-case-inside-or-alternatives-tp5430.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-06-06 Thread Alexei Scherbakov
Of course you should.

But keep in mind that currently Ignite downloads the full result set to the
client node to produce the final result set if the query is executed in
distributed mode (setLocal(false)) and the result set is very large.

You should consider adding something like a LIMIT clause to the query.
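A sketch of capping the result set with a LIMIT clause via SqlFieldsQuery; the cache name, table, and column names here are illustrative assumptions, and the query assumes matching query entities are configured on the cache:

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class LimitedQuery {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, Object> cache = ignite.getOrCreateCache("Activity");

            // LIMIT caps the rows shipped back to the client; the page size
            // bounds how many rows travel per network round trip.
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT activityId FROM Activity WHERE kernelId IS NULL LIMIT 100")
                .setPageSize(100);

            for (List<?> row : cache.query(qry))
                System.out.println(row.get(0));
        }
    }
}
```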





2016-06-03 9:28 GMT+03:00 jan.swaelens :

> Sure I can give that a try, so I start multiple nodes and define a
> collocation key on the tables being joined together (from the many object
> to
> the single cardinality ones)?
>
> The data is being pulled to populate a work list for users, sort of a jira
> issue list for example. So the data goes to the client app eventually.
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p5398.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Self Join Query As An Alternative To IN clause

2016-06-06 Thread Alexei Scherbakov
a:1338)
> at
> org.h2.jdbc.JdbcPreparedStatement.setObject(JdbcPreparedStatement.java:451)
>     at
>
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.bindObject(IgniteH2Indexing.java:502)
> ... 45 more
>
> What could be missing?
>
> Thanks.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Self-Join-Query-As-An-Alternative-To-IN-clause-tp5448.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-06-02 Thread Alexei Scherbakov
No, I don't think you would get a speedup for this particular query, because
it produces a big result set which has to be downloaded to the client over
the network.
But you can split the task between multiple nodes, run every job locally,
and get the result faster.
To achieve that you can use affinity collocation [1].
What are you planning to do with the query result?

[1] https://apacheignite.readme.io/docs/affinity-collocation
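The "run every job locally" idea can be sketched with affinityRun, which executes a closure on the node that owns the data for a given key (the cache name "work" and key 42 are illustrative assumptions):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class LocalAffinityQuery {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache("work");
            cache.put(42L, "payload");

            // Run the closure on the node that owns the partition for key 42,
            // so the local peek involves no network hop.
            ignite.compute().affinityRun("work", 42L, () -> {
                String val = cache.localPeek(42L);
                System.out.println("Processed locally: " + val);
            });
        }
    }
}
```

Splitting a large query into per-node local jobs like this moves the computation to the data instead of pulling the full result set to one client.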



2016-06-02 9:23 GMT+03:00 jan.swaelens :

> I see what you are saying, so if I would start lets say 4 nodes on the
> machine instead of 1 I should get a drastic increase of response time? I'd
> be happy to give that a try, anything special I need to cater for based on
> the case we are running here?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p5368.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Simulating Graph Dependencies With Ignite

2016-06-02 Thread Alexei Scherbakov
1. What's the memory size of ProductDetail?

2. Possibly the coding error. Share the code, I'll take a look.

2016-06-02 13:55 GMT+03:00 pragmaticbigdata :

> Thanks Alexei for the responses
>
> 1. Ok I will try out the GC settings and off heap memory usage.
>
> I have a cache of IgniteCache where ProductDetails
> is my custom model. I have implemented custom logic using directed acyclic
> graphs.
>
> 2. I tried executing it with cache.invokeAll. The first run failed with an
> NPE where the code that is to be executed on remotely on the node where the
> cache entry resides got the cache entry null. Wonder what could be wrong?
> Also doesn't  EntryProcessor
> <http://apacheignite.gridgain.org/docs/affinity-collocation>  slow
> performance wise when compared to the affinity.call() method since it takes
> locks on the keys before executing the custom code.
>
> Thanks.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Simulating-Graph-Dependencies-With-Ignite-tp5282p5378.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: About benchmark and scalability testing

2016-06-02 Thread Alexei Scherbakov
Hi,

Some benchmarks are available on GridGain site:

http://www.gridgain.com/resources/benchmarks/ignite-vs-hazelcast-benchmarks/

I don't think Ignite can be fully tested with SQL-related benchmarks, because
Ignite currently supports only SELECT queries.

Therefore, I recommend measuring Ignite's performance on your actual
use case.

Start from reading Ignite's documentation [1].

[1] http://apacheignite.readme.io


2016-06-02 11:34 GMT+03:00 hu...@neusoft.com :

> Hi,
>
> Is there somebody making the performance testing for Ignite?
> Who used the Benchmark Factory™ or BenchmarkSQL which supports TPC-C
> to test Ignite?
>
> I want to know how does Ignite compare with mysql or oracle in OLTP.
>
> Or, where can I find the related article?
>
> Any guidance about this would be much appreciated - thanks!
>
> --
>
> Bob
>
>
> ---
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
>
> -------
>



-- 

Best regards,
Alexei Scherbakov


Re: Simulating Graph Dependencies With Ignite

2016-06-02 Thread Alexei Scherbakov
Hi Amit.

1) Performance degradation may be caused by long GC pauses.
Please check [1] for hints on how to set up GC settings properly.
If you plan to have very large caches, I recommend using OFFHEAP_TIERED
mode [2].

I assume you take an approach like IgniteCache keyed by row.
How many cells do you have in a single row?

2) Try to use cache.invokeAll on all keys at once.

[1] https://apacheignite.readme.io/docs/jvm-and-system-tuning
[2] https://apacheignite.readme.io/docs/off-heap-memory
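The invokeAll suggestion can be sketched like this; the cache name, keys, and increment logic are illustrative assumptions, not the poster's actual update:

```java
import java.util.Map;
import java.util.Set;
import javax.cache.processor.EntryProcessorResult;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheEntryProcessor;

public class BatchUpdate {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, Integer> cache = ignite.getOrCreateCache("products");
            cache.put("a", 1);
            cache.put("b", 2);

            // One batched call: each entry is mutated on the node that owns
            // it, instead of one round trip per key.
            Map<String, EntryProcessorResult<Integer>> res = cache.invokeAll(
                Set.of("a", "b"),
                (CacheEntryProcessor<String, Integer, Integer>) (entry, args2) -> {
                    Integer old = entry.getValue();
                    entry.setValue(old == null ? 0 : old + 1);
                    return old;
                });

            res.forEach((k, r) -> System.out.println(k + " was " + r.get()));
        }
    }
}
```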

2016-06-02 9:26 GMT+03:00 pragmaticbigdata :

> Hi Alexei, I was able to implement this custom logic based on your guidance
> below. Thanks for that. I do experience a couple of performance issues.
>
> 1. With a caching having 1 million entries, updating 11k entities 5 times
> (cached entities are updated multiple times in the application) took 1 min.
> The cluster configuration includes 5 server nodes with 16 cpu cores and 15
> GB RAM. The cache is a partitioned cache with 0 backups. Can these timings
> be improved?
>
> 2. I tried using the compute colocation feature by having a affinityRun()
> but that seems to be degrading the performance otherwise. Making an
> affinity
> call through affinityRun() method by passing a Callable resulted into a
> poor
> performance compared to the default execution. The code does a localPeek
> and
> updates that cache entry along with returning one of the properties from
> the
> cached object. Would you have inputs on what could be the problem? The code
> looks like below
>
> for(String key : keyValues) {
>
> outputValues.add(ignite.compute().affinityCall(productCache.getName(), key,
> () -> updateCacheEntry(key, productCache.localPeek(key;
> }
>
> Kindly let me know your suggestions.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Simulating-Graph-Dependencies-With-Ignite-tp5282p5369.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: listening for events on transactions

2016-06-01 Thread Alexei Scherbakov
You can easily implement such a thing as part of the transaction logic.
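One way to implement it "as part of the transaction logic" is to notify the external system only after a successful commit. This is a sketch under the assumption of a TRANSACTIONAL cache; the notifier stands in for any external publisher (e.g. a Kafka producer supplied by the caller):

```java
import java.util.function.Consumer;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

public class NotifyingTx {
    // Runs the cache update in a transaction and, only after a successful
    // commit, hands a change description to the supplied notifier.
    static void updateAndNotify(Ignite ignite, IgniteCache<String, String> cache,
                                String key, String val, Consumer<String> notifier) {
        try (Transaction tx = ignite.transactions().txStart()) {
            cache.put(key, val);
            tx.commit();
        }
        // Reached only if the commit succeeded; otherwise the exception
        // propagates and no notification is sent.
        notifier.accept("cache=" + cache.getName() + " key=" + key);
    }
}
```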


2016-06-01 14:52 GMT+03:00 limabean :

> Hi Alexi,
>
> Thank you for the clarification.
>
> My final goal is to notify other processes not related to the grid
> application about changes to the data caches.  For example, it would be
> nice
> to have a Kafka publisher registered as a transaction listener and then
> when
> it gets transaction events, publish this information to a Kafka topic.
>
> Ignite is good at pulling data in, but it needs to be equally good at
> sharing data to work well in my environment.
>
> What do you think ?
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/listening-for-events-on-transactions-tp5346p5357.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-06-01 Thread Alexei Scherbakov
RACCOUNTROLE0
> /* "Activity"."Activityuseraccountrole_idx": ACTIVITY_ID =
> ACTIVITY0.ACTIVITY_ID */
> ON ACTIVITYUSERACCOUNTROLE0.ACTIVITY_ID = ACTIVITY0.ACTIVITY_ID
> /* scanCount: 1148898 */
> LEFT OUTER JOIN "Activity".ACTIVITYHISTORYUSERACCOUNT
> ACTIVITYHISTORYUSERACCOUNT0
> /* "Activity"."Activityhistoryuseraccount_idx": ACTIVITYHISTORY_ID =
> ACTIVITYHISTORY0.ACTIVITYHISTORY_ID */
> ON ACTIVITYHISTORYUSERACCOUNT0.ACTIVITYHISTORY_ID =
> ACTIVITYHISTORY0.ACTIVITYHISTORY_ID
> /* scanCount: 1060391 */
> WHERE ((NOT (ACTIVITYHISTORY0.ACTIVITYSTATE_ENUMID IN(37, 30, 463, 33,
> 464)))
> AND ((ACTIVITY0.KERNEL_ID IS NULL)
> AND (ACTIVITY0.REALIZATION_ID IS NULL)))
> AND ((ACTIVITYHISTORYUSERACCOUNT0.USERACCOUNT_ID = 600301)
> OR ((ACTIVITYUSERACCOUNTROLE0.USERACCOUNTROLE_ID IN(1, 3))
> AND ((ACTIVITY0.REMOVEFROMWORKLIST = 0)
> OR (ACTIVITYHISTORYUSERACCOUNT0.USERACCOUNT_ID IS NULL/
>
> Hmm so yeah looks like we are seeing strange numbers in the explain, even
> though the starting data and queries are the same.
>
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p5355.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: listening for events on transactions

2016-06-01 Thread Alexei Scherbakov
Hi,

I think the documentation is not very precise on the transactions topic.
Currently there is no way to listen for transaction events.
The only exception is the CacheStoreSessionListener interface, which works in
conjunction with a CacheStore.
I suppose you could implement a "fake" CacheStore and track transaction
activity in the corresponding methods.
What's your final goal?
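A sketch of such a "fake" CacheStore: a no-op store whose only purpose is to observe the write/delete calls Ignite issues on transaction commit. It assumes write-through is enabled on the cache; the class name and log lines are illustrative:

```java
import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class AuditingStore extends CacheStoreAdapter<Object, Object> {
    @Override public Object load(Object key) {
        return null; // nothing to load: this store persists nothing
    }

    @Override public void write(Cache.Entry<?, ?> entry) {
        // Called for entries written as part of a committed transaction
        // (or atomic put) when write-through is enabled.
        System.out.println("committed write: " + entry.getKey());
    }

    @Override public void delete(Object key) {
        System.out.println("committed delete: " + key);
    }
}
```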


2016-05-31 22:49 GMT+03:00 limabean :

> Hi,
>
> The Ignite documentation implies that listeners can be set on transactions:
>
> "IgniteTransactions interface contains functionality for starting and
> completing transactions, as well as subscribing listeners or getting
> metrics."
> http://apacheignite.gridgain.org/v1.6/docs/transactions#ignitetransactions
>
>
> However, when I look at the interfaces/classes, it is not clear to me how
> to
> set transaction listeners.
> The behavior I am looking for is the following:
>
> I'd like to make a call like this:
> IgniteTransactionSubsystem.registerListener(myListener...)
>
> When a transaction commits, I would like to have the listener notified
> (callback) with the full contents of the transaction, even if it spanned
> multiple cachesor at a minimum provide the cache keys and cache names
> that were updated in a transaction.
>
> I looked at the CacheJdbcStoreSessionListener example and saw this piece of
> code in the sample persistence code:
> // Configure JDBC session listener.
> cacheCfg.setCacheStoreSessionListenerFactories(new
> Factory() {
> @Override public CacheStoreSessionListener create() {
> CacheJdbcStoreSessionListener lsnr = new
> CacheJdbcStoreSessionListener();
>
>
>
> lsnr.setDataSource(JdbcConnectionPool.create("jdbc:h2:tcp://localhost/mem:ExampleDb",
> "sa", ""));
>
> return lsnr;
> }
> });
>
> but it wasn't clear to me how these were supposed to work if they were the
> correct way to listen for transaction.commits.
>
> Continuous queries appear to offer event capability, but it isn't clear how
> they work with transactions.
>
> Thanks for the clarifications/help on how to do transaction listeners.
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/listening-for-events-on-transactions-tp5346.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-05-31 Thread Alexei Scherbakov
From your plan I see you have different data sets in Oracle and Ignite:
the activity scan in Oracle results in 117k scans, in Ignite 120k;
activityuseraccountrole in Oracle results in 144k, in Ignite 1147k;
etc.

Please run the performance test on exactly the same dataset.

2016-05-31 9:27 GMT+03:00 jan.swaelens :

> Hi,
>
> Please fine the explain plan attached.
>
> explainplan.png
> <http://apache-ignite-users.70518.x6.nabble.com/file/n5338/explainplan.png
> >
>
> br
> jan
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p5338.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: How to update Ignite config in Production Environment

2016-05-30 Thread Alexei Scherbakov
1. Yes, for the open source Ignite version I see no other way.
2. Use a node predicate [1] [2] to create the cache on the desired nodes.
After that, run a simple program that reads from the first cache and writes
to the second.
Alternatively, you can back the data up on disk or into a database first.

3. I don't know if such an example exists.

[1] CacheConfiguration cfg = new CacheConfiguration<>();
    cfg.setNodeFilter(predicate);

    // start cache dynamically

[2] https://apacheignite.readme.io/docs/cluster-groups
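The node-filter idea in [1] can be fleshed out as follows. This is a sketch; the cache name "migrated" and the "IGNITE_NODE_ROLE" user attribute are illustrative assumptions:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class FilteredCache {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, String> cfg =
                new CacheConfiguration<>("migrated");

            // Deploy the cache only on server nodes whose configuration sets
            // the user attribute IGNITE_NODE_ROLE=data (attribute name is
            // an assumption for this example).
            cfg.setNodeFilter(node ->
                "data".equals(node.attribute("IGNITE_NODE_ROLE")));

            // Start the cache dynamically with the filter applied.
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);
        }
    }
}
```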


2016-05-30 18:25 GMT+03:00 张鹏鹏 :

> Do you mean I need a separate program to manage the cache?How can I backed
> up data to other topology node?Where can I find an example?
>
>
> Thanks
>
> 2016-05-30 22:28 GMT+08:00 Alexei Scherbakov  >:
>
>> Hi,
>>
>> 1. You can destroy cache (IgniteCache.destroy) and recreate it
>> dynamically with new configuration.
>> Data must be backed up somewhere (for example, in another cache) until the
>> process is finished.
>> Don't forget to update Ignite's startup configurations on all server
>> nodes or you will lose changes on restart.
>>
>> 2. Ignite validates cluster configuration on the node join. If
>> configuration of the node is not compatible with the current
>> it will not be allowed to join topology.
>>
>> 2016-05-30 15:45 GMT+03:00 张鹏鹏 :
>>
>>>  Hi,I have some questions about using Ignite in the  production
>>> environment.
>>>
>>> 1、I have 3 Ignite nodes as Server,My Java application uses Ignite as
>>> client.
>>>  Now,I just use Ignite as JCache implements.
>>>
>>>  When I want to update the Cache config,like adding indexs,What's
>>> the best way to do it?
>>>
>>>  I don't want to lost data in the server,So,I replace server@1's
>>> config,then restart it.I must wait data rebalancing finish.Then I do the
>>> same to the server@2 and so on!
>>>
>>>  I want to do it automatic.But How can I get
>>> the rebalancing finishing event in Linux console?
>>> And Is it the only way to update the config If I don't want to lose the
>>> cache data?
>>>
>>>
>>>
>>> 2、If the server config is different,which one is valid?Is it the last
>>> started one?
>>>
>>>  Today,I restarted Ignite server by mistake.I used old config to
>>> restart one node in the Clusters.
>>> The scenes is:
>>> server@1  -- old config  --restart
>>> server@2  -- old config
>>> server@3  -- new config
>>>
>>> server@1 and server@2 are using old config,server@3 is using new
>>> config.
>>>   I restart server@1 by mistake.
>>>
>>>
>>>  Then My application appeard
>>> "Cannot find metadata for object with compact footer: 1236746791"
>>>  exception.
>>>
>>>  The server occured exceptions too,and the cache couldn't use anymore.
>>>
>>>  Finally,I killed all the application used Ignite client,then update all
>>> the Ignite config  and restart all the Ignite Server.
>>>
>>>
>>> I don't know why I must kill all the Ignite client so I can restart the
>>> Ignite server  correctly.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>>
>> Best regards,
>> Alexei Scherbakov
>>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: Affinity Key Mapping

2016-05-30 Thread Alexei Scherbakov
I have another idea.
Try to define the country cache as ATOMIC.
It shouldn't block on reads then.
Of course, it can't be used with transactional semantics.
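A sketch of configuring such a cache; the cache name and sample data are illustrative assumptions:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class AtomicReferenceData {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<String, String> cfg =
                new CacheConfiguration<>("country");

            // ATOMIC caches skip transactional locking entirely, so reads of
            // hot reference entries never queue behind writers.
            cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);

            IgniteCache<String, String> countries = ignite.getOrCreateCache(cfg);
            countries.put("US", "United States");
            System.out.println(countries.get("US"));
        }
    }
}
```

The trade-off stated above applies: reads and writes on this cache cannot participate in transactions spanning other caches.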

2016-05-30 18:50 GMT+03:00 amitpa :

> Alexei,
>
> yes I understand with Ignite Optmistic means lock acquisition on write for
> READCOMMITED.
>
> With PESSIMISTIC it would be on read.
>
> However consider a cache like Country (contrived example )...many people
> may
> live in USA. But if someone tries to do a transaction for a entry on write
> all will lock on the same entry.
>
> This is a contrived example, and I can ofcourse find multiple ways to
> circumvent this particular scenario.
>
> But for some cases this can not be done.
>
> And if there is any cache config which says a cache is read only that
> should
> be fast enough.
>
> However I wil try your work around.
>
> Many thanks for your replies, it is really helpful.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Affinity-Key-Mapping-tp5212p5327.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: A question about ignite-indexing and H2

2016-05-30 Thread Alexei Scherbakov
Hi,

AFAIK, you cannot use an H2 1.4.x version with Ignite.
If you want to use another version of H2 in your own project,
you can try loading it separately with a different classloader.
Search the web for details.

2016-05-30 9:20 GMT+03:00 王庆 :

> Hi,I am a developer from china .My English leaves much to be desired.
> Please ingore these syntax errors.When I use the spatial index module in
> ignite1.6.0 , I found it depends 1.3.175 version of the H2, but I need to
> use the 1.4.X h2 version. 
>   com.h2database
>   h2
>   1.3.175
>   compile
> This method
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing # start
> will call org.h2.constant.SysProperties and org.h2.util.Utils, in front of
> the class in the 1.3.176 version of the above have been It does not exist,
> the latter class is missing serializer
> variables.  if (SysProperties.serializeJavaObject) {
>
> U.warn(log, "Serialization of Java objects in H2 was enabled.");
>
> SysProperties.serializeJavaObject = false;
> }
>
> if (Utils.serializer != null)
>
> U.warn(log, "Custom H2 serialization is already configured, will 
> override.");
>
> Utils.serializer = h2Serializer();
> Is there any way to solve it? I am looking forward to your replay.Thank
> you.




-- 

Best regards,
Alexei Scherbakov


Re: Affinity Key Mapping

2016-05-30 Thread Alexei Scherbakov
Ignite's lock acquisition depends on the transaction isolation level.
AFAIK, READ_COMMITTED and OPTIMISTIC modes do not lock entries on read.
Ignite has no notion of a read-only cache, and I doubt it will have one in
the future, but you can try to ask on the dev list.

As a workaround, you can create a local ConcurrentHashMap and populate it
with the static data using a broadcast job [1] on cluster startup.

[1] See the javadoc for ignite.compute().broadcast().
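The workaround above can be sketched like this; the map contents and the timing (running the broadcast right after startup) are illustrative assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class StaticDataBroadcast {
    // Node-local read-only reference data: lookups take no locks and make
    // no network calls, since each JVM holds its own copy.
    static final Map<String, String> COUNTRIES = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Run once on every node in the cluster; each node populates
            // its own local map.
            ignite.compute().broadcast(() -> {
                COUNTRIES.put("US", "United States");
                COUNTRIES.put("DE", "Germany");
            });
        }
    }
}
```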

2016-05-30 17:52 GMT+03:00 amitpa :

> Ignore my latest question . This seems to work fine.
>
> I have one kind of feature request though.
>
> It seems with transaction, Ignite always tries to get the lock for the keys
> accessed.
> However some caches can be read only. Multiple transactions can access the
> same key, but they will never  modify it.
>
> Is there any way to mark a cache as read only and hence exempt for
> transaction lock acquiring, this I believe is a common case for things like
> reference data.  This will also help in throughput for most applications
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Affinity-Key-Mapping-tp5212p5321.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-05-30 Thread Alexei Scherbakov
cDriverJob.execute(GridCacheQueryJdbcTask.java:240)
> at
>
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
> at
>
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6455)
> at
>
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
> at
>
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.cache.CacheException: Failed to execute map query on the
> node: f256b204-cb49-4f35-8adb-7b4b575bac77, class
> org.apache.ignite.IgniteCheckedException:Failed to execute SQL query.
> at
>
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.fail(GridReduceQueryExecutor.java:257)
> at
>
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onFail(GridReduceQueryExecutor.java:247)
> at
>
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onMessage(GridReduceQueryExecutor.java:228)
> at
>
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.sendError(GridMapQueryExecutor.java:525)
> at
>
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest(GridMapQueryExecutor.java:501)
> at
>
> org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onMessage(GridMapQueryExecutor.java:184)
> at
>
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.send(GridReduceQueryExecutor.java:1065)
> at
>
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:572)
> ... 11 more
> /
>
> Not sure you need to spend time in this though, for me the UNION path is
> not
> feasible anyhow but in case you like to go deeper on these errors in terms
> of the product I'll be happy to run additional tests.
>
> br
> jan
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p5311.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: How to update Ignite config in Production Environment

2016-05-30 Thread Alexei Scherbakov
Hi,

1. You can destroy the cache (IgniteCache.destroy) and recreate it dynamically
with a new configuration.
The data must be backed up somewhere (for example, in another cache) until the
process is finished.
Don't forget to update Ignite's startup configurations on all server nodes,
or you will lose the changes on restart.

2. Ignite validates the cluster configuration on node join. If the
configuration of a joining node is not compatible with the current one,
it will not be allowed to join the topology.

2016-05-30 15:45 GMT+03:00 张鹏鹏 :

>  Hi,I have some questions about using Ignite in the  production
> environment.
>
> 1、I have 3 Ignite nodes as Server,My Java application uses Ignite as
> client.
>  Now,I just use Ignite as JCache implements.
>
>  When I want to update the Cache config,like adding indexs,What's the
> best way to do it?
>
>  I don't want to lost data in the server,So,I replace server@1's
> config,then restart it.I must wait data rebalancing finish.Then I do the
> same to the server@2 and so on!
>
>  I want to do it automatic.But How can I get the rebalancing finishing
> event in Linux console?
> And Is it the only way to update the config If I don't want to lose the
> cache data?
>
>
>
> 2、If the server config is different,which one is valid?Is it the last
> started one?
>
>  Today,I restarted Ignite server by mistake.I used old config to
> restart one node in the Clusters.
> The scenes is:
> server@1  -- old config  --restart
> server@2  -- old config
> server@3  -- new config
>
> server@1 and server@2 are using old config,server@3 is using new
> config.
>   I restart server@1 by mistake.
>
>
>  Then My application appeard
> "Cannot find metadata for object with compact footer: 1236746791"
>  exception.
>
>  The server occured exceptions too,and the cache couldn't use anymore.
>
>  Finally,I killed all the application used Ignite client,then update all
> the Ignite config  and restart all the Ignite Server.
>
>
> I don't know why I must kill all the Ignite client so I can restart the
> Ignite server  correctly.
>
>
>
>
>
>
>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: Issue of ignite run in docker

2016-05-30 Thread Alexei Scherbakov
' to 0)
> [23:54:35]
> [23:54:35] To start Console Management & Monitoring run
> ignitevisorcmd.{sh|bat}
> [23:54:35]
> [23:54:35] Ignite node started OK (id=5bb10d6a)
> [23:54:35] Topology snapshot [ver=1, servers=1, clients=0, CPUs=2,
> heap=1.0GB]
>
> Ignite0 and Ignite1 does not communicate to each other normally.
>
> # docker -H :4000 exec -it ignite1 bash
> root@8578c039e823:/opt/ignite# ping
> ignite0
>
> PING ignite0 (10.0.1.2): 56 data bytes
> 64 bytes from 10.0.1.2: icmp_seq=0 ttl=64 time=0.367 ms
> 64 bytes from 10.0.1.2: icmp_seq=1 ttl=64 time=0.364 ms
> ^C--- ignite0 ping statistics ---
> 2 packets transmitted, 2 packets received, 0% packet loss
> round-trip min/avg/max/stddev = 0.364/0.365/0.367/0.000 ms
> root@8578c039e823:/opt/ignite# ping ignite1
> PING ignite1 (10.0.1.3): 56 data bytes
> 64 bytes from 10.0.1.3: icmp_seq=0 ttl=64 time=0.061 ms
> 64 bytes from 10.0.1.3: icmp_seq=1 ttl=64 time=0.053 ms
> ^C--- ignite1 ping statistics ---
> 2 packets transmitted, 2 packets received, 0% packet loss
> round-trip min/avg/max/stddev = 0.053/0.057/0.061/0.000 ms
> root@8578c039e823:/opt/ignite#  open || echo Port is closed
> Port is open
> root@8578c039e823:/opt/ignite#  open || echo Port is closed
> Port is open
> root@8578c039e823:/opt/ignite#  open || echo Port is closed
> bash: connect: Connection refused
> bash: /dev/tcp/ignite1/: Connection refused
> Port is closed
>
> Test port accessibility and it seems work find. But Ignote0 and Ignite1 is
> not clustering well.
> Anybody help?
>
> Thanks a lot.
>
> 
> Dockerfile
> 
> FROM base/java8
>
> RUN set -x \
>  && apt-get update \
>  && apt-get -y install zip
>
> # Ignite version
> ENV IGNITE_VERSION 1.6.0
> COPY files/apache-ignite-fabric-${IGNITE_VERSION}-bin.zip /opt/ignite.zip
> # Ignite home
> ENV IGNITE_HOME /opt/ignite
> RUN set -x\
>  && cd /opt \
>  && unzip ignite.zip \
>  && mv /opt/apache-ignite-fabric-${IGNITE_VERSION}-bin ${IGNITE_HOME} \
>  && rm ignite.zip
> COPY files/ignite.xml $IGNITE_HOME
> WORKDIR ${IGNITE_HOME}
> ENTRYPOINT ["bin/ignite.sh"]
> CMD ignite.xml
>
> 
> ignite.xml
> 
> <?xml version="1.0" encoding="UTF-8"?>
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:security="http://www.springframework.org/schema/security"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xmlns:context="http://www.springframework.org/schema/context"
>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>            http://www.springframework.org/schema/beans/spring-beans-4.1.xsd
>            http://www.springframework.org/schema/context
>            http://www.springframework.org/schema/context/spring-context-4.1.xsd
>            http://www.springframework.org/schema/security
>            http://www.springframework.org/schema/security/spring-security-4.0.xsd">
>
>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>         <property name="cacheConfiguration">
>             <list>
>                 <bean class="org.apache.ignite.configuration.CacheConfiguration">
>                     <property name="name" value="partitioned" />
>                     <property name="cacheMode" value="PARTITIONED" />
>                 </bean>
>             </list>
>         </property>
>
>         <property name="discoverySpi">
>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>                 <property name="ipFinder">
>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>                         <property name="addresses">
>                             <list>
>                                 <value>10.0.1.2:47500..47509</value>
>                                 <value>10.0.1.3:47500..47509</value>
>                             </list>
>                         </property>
>                     </bean>
>                 </property>
>             </bean>
>         </property>
>     </bean>
> </beans>
>
> --
> scsi
>



-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-05-30 Thread Alexei Scherbakov
How much time does it take to iterate all results for that query ?

My test shows it takes about 9 seconds to fully iterate the result set on a
warmed-up cache.
Is this OK for you?

Regarding the optimization of the initial complex query: as I already said, it
should be split into 3 sub-queries to avoid OR conditions.
Then you can execute them using UNION ALL or as 3 separate queries.

My attempt of splitting:

SELECT * FROM "activity".activity activity0

LEFT OUTER JOIN "activityuseraccountrole".activityuseraccountrole
activityuseraccountrole0
ON activityuseraccountrole0.activityId = activity0.activityId

LEFT OUTER JOIN "activityhistory".activityhistory activityhistory0
ON activityhistory0.activityhistoryId = activity0.lastactivityId
AND NOT activityhistory0.activitystateEnumid IN (37, 30, 463, 33, 464)

LEFT OUTER JOIN
"activityhistoryuseraccount".activityhistoryuseraccount
activityhistoryuseraccount0
ON activityhistoryuseraccount0.activityHistoryId =
activityhistory0.activityhistoryId
AND activityuseraccountrole0.useraccountroleId IN (1, 3)

WHERE activity0.kernelId IS NULL
AND activity0.realizationId IS NULL
AND activity0.removefromworklist = 0

UNION ALL

SELECT * FROM "activity".activity activity0
LEFT OUTER JOIN "activityuseraccountrole".activityuseraccountrole
activityuseraccountrole0
ON activityuseraccountrole0.activityId = activity0.activityId

LEFT OUTER JOIN "activityhistory".activityhistory activityhistory0
ON activityhistory0.activityhistoryId = activity0.lastactivityId
AND NOT activityhistory0.activitystateEnumid IN (37, 30, 463, 33, 464)

LEFT OUTER JOIN
"activityhistoryuseraccount".activityhistoryuseraccount
activityhistoryuseraccount0
ON activityhistoryuseraccount0.activityHistoryId =
activityhistory0.activityhistoryId
AND activityuseraccountrole0.useraccountroleId IN (1, 3)

WHERE activity0.kernelId IS NULL
AND activity0.realizationId IS NULL
AND activity0.removefromworklist = 0
AND activityhistoryuseraccount0.userAccountId IS NULL

UNION ALL

SELECT * FROM "activity".activity activity0
LEFT OUTER JOIN
"activityhistoryuseraccount".activityhistoryuseraccount
activityhistoryuseraccount0
ON activityhistoryuseraccount0.activityHistoryId = activity0.activityId
AND activityhistoryuseraccount0.UserAccountId = 600301

LEFT OUTER JOIN "activityuseraccountrole".activityuseraccountrole
activityuseraccountrole0
ON activityuseraccountrole0.activityId = activity0.activityId
AND activityuseraccountrole0.useraccountroleId IN (1, 3)

LEFT OUTER JOIN "activityhistory".activityhistory activityhistory0
ON activityhistory0.activityhistoryId = activity0.lastactivityId
AND NOT activityhistory0.activitystateEnumid IN (37, 30, 463, 33, 464)

WHERE activity0.kernelId IS NULL
AND activity0.realizationId IS NULL


Re: Affinity Key Mapping

2016-05-28 Thread Alexei Scherbakov
1. Increasing the number of partitions might improve the throughput of an Ignite
cluster, because more partitions mean less contention on cache locks.
It's application-dependent.

2. Affinity mapping will improve performance if puts are invoked from an
affinity node (using affinityCall [1]), because puts need not be
transferred over the wire (except for backups).

3. If you have unexpected performance problems with Ignite, you can share
your code and someone from the community may give you a hint.

[1]
https://apacheignite.readme.io/docs/affinity-collocation#collocating-compute-with-data
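Regarding point 1: the partition count is configured through the cache's
affinity function. A hedged Spring XML sketch — the cache name and the 1024
value are illustrative; RendezvousAffinityFunction and its partitions
property are the standard Ignite API for this:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache" />
    <property name="affinity">
        <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
            <!-- More partitions means less contention, at the cost of more
                 per-partition bookkeeping. 1024 is an illustrative value. -->
            <property name="partitions" value="1024" />
        </bean>
    </property>
</bean>
```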

2016-05-28 12:03 GMT+03:00 amitpa :

> Does Affinity Key imporve performance of multiple cache puts in the same
> transaction?
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Affinity-Key-Mapping-tp5212p5290.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Simulating Graph Dependencies With Ignite

2016-05-28 Thread Alexei Scherbakov
1. I don't understand "Table1.Column1 depends on Table2.Column2".
In Excel one cell depends on another, not a column or row.
You can define a table per cache as follows:

IgniteCache table1 = ... // key is the row number of the table

and have a cell dependency cache as described earlier: IgniteCache deps = ...

Ignite fully supports ACID transactions [1], so keeping data in sync is not
a problem.
The formula should be stored inside the Cell.

2. I already proposed a solution for resolving cell dependencies.

3. Ignite supports collocated processing. Refer to [2] for details.

[1] https://apacheignite.readme.io/docs/transactions
[2] https://apacheignite.readme.io/docs/affinity-collocation




2016-05-27 20:56 GMT+03:00 Amit Shah :

> Thanks for the replies.
>
> 1. With IgniteCache I guess you have data dependencies in
> mind. By data dependencies I mean the Cell instance would contain actual
> data. Maintaining a graph of data dependencies like Excel does would not be
> practical because our use case could have millions on rows in a table and
> there could be many such tables. Keeping the graph in sync with the data
> updates would be one of the challenges. Creating and maintaining the graph
> would be another challenge. Hence I mentioned about having a graph of
> metadata in my initial post. For e.g. the graph could look like
> Table1.Column1 depends on Table2.Column2. The formula needs to be stored
> somewhere, somehow?
>
> 2. The other challenge would be determining the next set of rows to be
> updated i.e. assume that 5 rows of table 1 cause an update on another 50
> rows of table 2. The row keys of table 1 of those 5 rows determine which
> rows of table 2 need to be updated. How do we handle this in ignite
> efficiently?
>
> 3. How can I take the advantage of co-located processing with Ignite?
> Assuming that the subsequent table to be updated is on the same node as the
> previous it would be a good optimization to ship the update query on that
> node.
>
> Thank you,
> Amit.
>
>
> On Fri, May 27, 2016 at 9:08 PM, Alexei Scherbakov <
> alexey.scherbak...@gmail.com> wrote:
>
>> Of course you can store result of recalculation in the temp variable if
>> it's needed for next recalculation.
>>
>>
>> 2016-05-27 18:34 GMT+03:00 Alexei Scherbakov <
>> alexey.scherbak...@gmail.com>:
>>
>>> Hi,
>>>
>>> You can store cell dependencies as Ignite's data grid of
>>>
>>> IgniteCache cells = ...
>>>
>>> where relation between key and value is interpreted like: value depends
>>> on key.
>>>
>>> When the cell is updated you do the following:
>>>
>>> Cell cell = updated;
>>> do {
>>>  recalculate(cell);
>>> } while( (cell = cells.get(cell)) != null);
>>>
>>> Did it help ?
>>>
>>>
>>>
>>>
>>>
>>> 2016-05-27 16:49 GMT+03:00 pragmaticbigdata :
>>>
>>>> Hello,
>>>>
>>>> I have started exploring apache ignite by following the introductory
>>>> videos.
>>>> It looks quite promising and I wanted to understand if it will be well
>>>> suited for the use case I brief out below. If so, I would be glad to
>>>> hear
>>>> out on how could I approach it
>>>>
>>>> The use case is
>>>>
>>>> We are trying to implement cell level dependencies within multiple
>>>> tables.
>>>> It's similar to the functionality excel offers just that in our case it
>>>> could span across multiple tables. Imagine multiple cells in an excel
>>>> worksheet having interdependent formula's where updating a value in one
>>>> cell
>>>> causes another cell value to change and that cell update causes another
>>>> cell
>>>> value to update and so on and so forth. It is kind of graph of
>>>> dependencies
>>>> that determines what is the next row cell that needs to be updated.
>>>>
>>>> With apache ignite, what api's and data structures could I use to
>>>> maintain a
>>>> graph of inter-dependencies between multiple tables? Note that these
>>>> would
>>>> be metadata dependencies and not data.
>>>>
>>>> I would appreciate to get inputs on this.
>>>>
>>>> Thanks,
>>>> Amit
>>>>
>>>>
>>>>
>>>> --
>>>> View this message in context:
>>>> http://apache-ignite-users.70518.x6.nabble.com/Simulating-Graph-Dependencies-With-Ignite-tp5282.html
>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>>
>>>
>>>
>>>
>>> --
>>>
>>> Best regards,
>>> Alexei Scherbakov
>>>
>>
>>
>>
>> --
>>
>> Best regards,
>> Alexei Scherbakov
>>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: Simulating Graph Dependencies With Ignite

2016-05-27 Thread Alexei Scherbakov
Of course you can store the result of a recalculation in a temp variable if
it's needed for the next recalculation.


2016-05-27 18:34 GMT+03:00 Alexei Scherbakov :

> Hi,
>
> You can store cell dependencies as Ignite's data grid of
>
> IgniteCache cells = ...
>
> where relation between key and value is interpreted like: value depends on
> key.
>
> When the cell is updated you do the following:
>
> Cell cell = updated;
> do {
>  recalculate(cell);
> } while( (cell = cells.get(cell)) != null);
>
> Did it help ?
>
>
>
>
>
> 2016-05-27 16:49 GMT+03:00 pragmaticbigdata :
>
>> Hello,
>>
>> I have started exploring apache ignite by following the introductory
>> videos.
>> It looks quite promising and I wanted to understand if it will be well
>> suited for the use case I brief out below. If so, I would be glad to hear
>> out on how could I approach it
>>
>> The use case is
>>
>> We are trying to implement cell level dependencies within multiple tables.
>> It's similar to the functionality excel offers just that in our case it
>> could span across multiple tables. Imagine multiple cells in an excel
>> worksheet having interdependent formula's where updating a value in one
>> cell
>> causes another cell value to change and that cell update causes another
>> cell
>> value to update and so on and so forth. It is kind of graph of
>> dependencies
>> that determines what is the next row cell that needs to be updated.
>>
>> With apache ignite, what api's and data structures could I use to
>> maintain a
>> graph of inter-dependencies between multiple tables? Note that these would
>> be metadata dependencies and not data.
>>
>> I would appreciate to get inputs on this.
>>
>> Thanks,
>> Amit
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/Simulating-Graph-Dependencies-With-Ignite-tp5282.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
>
> Best regards,
> Alexei Scherbakov
>



-- 

Best regards,
Alexei Scherbakov


Re: Simulating Graph Dependencies With Ignite

2016-05-27 Thread Alexei Scherbakov
Hi,

You can store cell dependencies in an Ignite data grid cache:

IgniteCache<Cell, Cell> cells = ...

where the relation between key and value is interpreted as: the value depends
on the key.

When the cell is updated you do the following:

Cell cell = updated;
do {
 recalculate(cell);
} while( (cell = cells.get(cell)) != null);

Did it help ?
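To make the loop above concrete, here is a runnable plain-Java sketch; a
HashMap stands in for the IgniteCache, cells are modeled as plain Strings, and
recalculation is reduced to counting steps — all simplifications for
illustration, not the actual Ignite API:

```java
import java.util.HashMap;
import java.util.Map;

public class DependencyWalk {
    // deps maps key -> the cell that depends on it ("value depends on key").
    // Walks the chain from the updated cell, "recalculating" each dependent.
    // Note: a cyclic dependency (circular formula) would loop forever here,
    // just as Excel must detect and reject circular references.
    static int recalculateChain(Map<String, String> deps, String updated) {
        int recalculated = 0;
        String cell = updated;
        do {
            recalculated++;          // recalculate(cell) would go here
            cell = deps.get(cell);   // follow to the dependent cell, if any
        } while (cell != null);
        return recalculated;
    }

    public static void main(String[] args) {
        Map<String, String> deps = new HashMap<>();
        deps.put("A1", "B1");  // B1 depends on A1
        deps.put("B1", "C1");  // C1 depends on B1
        System.out.println(recalculateChain(deps, "A1")); // prints 3
    }
}
```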





2016-05-27 16:49 GMT+03:00 pragmaticbigdata :

> Hello,
>
> I have started exploring apache ignite by following the introductory
> videos.
> It looks quite promising and I wanted to understand if it will be well
> suited for the use case I brief out below. If so, I would be glad to hear
> out on how could I approach it
>
> The use case is
>
> We are trying to implement cell level dependencies within multiple tables.
> It's similar to the functionality excel offers just that in our case it
> could span across multiple tables. Imagine multiple cells in an excel
> worksheet having interdependent formula's where updating a value in one
> cell
> causes another cell value to change and that cell update causes another
> cell
> value to update and so on and so forth. It is kind of graph of dependencies
> that determines what is the next row cell that needs to be updated.
>
> With apache ignite, what api's and data structures could I use to maintain
> a
> graph of inter-dependencies between multiple tables? Note that these would
> be metadata dependencies and not data.
>
> I would appreciate to get inputs on this.
>
> Thanks,
> Amit
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Simulating-Graph-Dependencies-With-Ignite-tp5282.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-05-27 Thread Alexei Scherbakov
I have exactly the same indexes.
Do you see the created indexes in the H2 console under the corresponding tables?

2016-05-27 16:39 GMT+03:00 jan.swaelens :

> Okey so we can indeed conclude that the indexes are not registered. This is
> the output:
>
> /SELECT
> ACTIVITY0.ACTIVITY_ID,
> ACTIVITYUSERACCOUNTROLE0.ACTIVITY_ID
> FROM "Activity".ACTIVITY ACTIVITY0
> /* "Activity".ACTIVITY.__SCAN_ */
> LEFT OUTER JOIN "Activity".ACTIVITYUSERACCOUNTROLE ACTIVITYUSERACCOUNTROLE0
> /* "Activity".ACTIVITYUSERACCOUNTROLE.__SCAN_ */
> ON ACTIVITY0.ACTIVITY_ID = ACTIVITYUSERACCOUNTROLE0.ACTIVITY_ID
> (1 row, 5 ms)/
>
> So lets take a step back, in the 'Activityuseraccountrole' I have this:
>
> //** Value for activityId. */
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(name =
> "Activityuseraccountrole_idx", order = 0)})
> private long activityId;
>
> /** Value for useraccountroleId. */
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(name =
> "Activityuseraccountrole_idx", order = 1)})
> private long useraccountroleId;/
>
> in the 'Activity' I have this:
>
> /@QuerySqlField(orderedGroups={@QuerySqlField.Group(name =
> "Activity_idx", order = 0)})
> private long activityId;
>
> /** Value for realizationId. */
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(name =
> "Activity_cond_idx", order = 1)})
> private Long realizationId;
>
> /** Value for kernelId. */
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(name =
> "Activity_cond_idx", order = 0)})
> private Long kernelId;
>
> /** Value for removefromworklist. */
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(name =
> "Activity_cond_idx", order = 2)})
> private boolean removefromworklist;/
>
> Does this match with what you have sent me?
>
> If so, maybe they are not applied by how I load up the cache?
>
> I have this code:
>
> /CacheConfiguration activityCacheCfg =
> CacheConfig.cache("Activity", new SofIgniteDSStoreFactory Activity>());
> cfg.setCacheConfiguration(activityCacheCfg);
> /
>
> The CacheConfig also defines the fields ... and not the indexes we add from
> annotations - is this potentially clashing?
>
> best regards
> jan
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p5281.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-05-27 Thread Alexei Scherbakov
EXPLAIN SELECT activity0._VAL ,  activityuseraccountrole0 ._VAL FROM
"activity".ACTIVITY activity0 LEFT OUTER JOIN
"activityuseraccountrole".ACTIVITYUSERACCOUNTROLE
activityuseraccountrole0
ON activity0.activityId=activityuseraccountrole0.activityId;

will show query execution plan without actually executing it.
Look for strings like
 /* "activityuseraccountrole"."Activityuseraccountrole_idx": ACTIVITYID =
ACTIVITY0.ACTIVITYID */


2016-05-27 15:48 GMT+03:00 jan.swaelens :

> Hello,
>
> Yes indeed, I am using those classes. Can I verify in the H2 console that
> the indexes are there as expected?
>
> br
> jan
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p5279.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: MArshaller question

2016-05-27 Thread Alexei Scherbakov
Hi,

The BinaryMarshaller (which is used by default) is optimal.

2016-05-27 12:11 GMT+03:00 amitpa :

> Sorry for polluting the mailing list with sometimes useless questions.
>
> Which is the recommended marshaller for ignite now:-
>
> Binary Marshaller, JDkMarshaller or OptimizedMarshaller purely from
> performance standpoints
>
>
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/MArshaller-question-tp5273.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-05-27 Thread Alexei Scherbakov
Hi,

The query doesn't use the index.
Did you correctly apply my changes to the indexes in the model?
Here is my output using similar cardinalities (100k activities, 1M roles):

Query:
EXPLAIN  ANALYZE SELECT activity0._VAL ,  activityuseraccountrole0 ._VAL
FROM "activity".ACTIVITY activity0 LEFT OUTER JOIN
"activityuseraccountrole".ACTIVITYUSERACCOUNTROLE activityuseraccountrole0
ON activity0.activityId=activityuseraccountrole0.activityId;

Output:
SELECT
ACTIVITY0._VAL,
ACTIVITYUSERACCOUNTROLE0._VAL
FROM "activity".ACTIVITY ACTIVITY0
/* "activity".ACTIVITY.__SCAN_ */
/* scanCount: 11 */
LEFT OUTER JOIN "activityuseraccountrole".ACTIVITYUSERACCOUNTROLE
ACTIVITYUSERACCOUNTROLE0
/* "activityuseraccountrole"."Activityuseraccountrole_idx": ACTIVITYID
= ACTIVITY0.ACTIVITYID */
ON ACTIVITY0.ACTIVITYID = ACTIVITYUSERACCOUNTROLE0.ACTIVITYID
/* scanCount: 110 */

(1 row, 1190 ms)


2016-05-27 9:23 GMT+03:00 jan.swaelens :

> Hello,
>
> This one has been running for 10 minutes now without producing results - so
> rather the join.
>
> /EXPLAIN ANALYZE SELECT DISTINCT * FROM "Activity".activity activity0
> LEFT OUTER JOIN "Activity".activityuseraccountrole activityuseraccountrole0
> ON activity0.activity_Id=activityuseraccountrole0.activity_Id/
>
> This one works fine though:
>
> /EXPLAIN ANALYZE SELECT DISTINCT * FROM "Activity".activity activity0,
> "Activity".activityuseraccountrole activityuseraccountrole0
> WHERE activity0.activity_Id=activityuseraccountrole0.activity_Id/
>
> /SELECT DISTINCT
> ACTIVITY0._KEY,
> ACTIVITY0._VAL,
> ACTIVITY0.ACTIVITY_ID,
> ACTIVITY0.TIMESTAMP,
> ACTIVITY0.CONTAINER_ID,
> ACTIVITY0.ACTIVITYTYPE_ID,
> ACTIVITY0.REALIZATION_ID,
> ACTIVITY0.KERNEL_ID,
> ACTIVITY0.PREDECESSORTYPE_ENUMID,
> ACTIVITY0.SUCCESSORTYPE_ENUMID,
> ACTIVITY0.DURATIONUNIT_ENUMID,
> ACTIVITY0.NAME,
> ACTIVITY0.NAME_MLID,
> ACTIVITY0.DESCRIPTION,
> ACTIVITY0.DESCRIPTION_MLID,
> ACTIVITY0.DURATION,
> ACTIVITY0.REQUIRED,
> ACTIVITY0.ESTIMSTARTDATE,
> ACTIVITY0.ESTIMSTARTHOUR,
> ACTIVITY0.ESTIMENDHOUR,
> ACTIVITY0.ESTIMENDDATE,
> ACTIVITY0.REMOVEFROMWORKLIST,
> ACTIVITY0.SEQUENCENR,
> ACTIVITY0.SESSION_ID,
> ACTIVITY0.LASTACTIVITY_ID,
> ACTIVITY0.SYSREPOPERATION_ID,
> ACTIVITY0.LIFECYCLEREPORTING,
> ACTIVITY0.DUEDATE,
> ACTIVITY0.PRIORITY_ENUMID,
> ACTIVITY0.NOTIFY,
> ACTIVITYUSERACCOUNTROLE0._KEY,
> ACTIVITYUSERACCOUNTROLE0._VAL,
> ACTIVITYUSERACCOUNTROLE0.ACTIVITY_ID,
> ACTIVITYUSERACCOUNTROLE0.USERACCOUNTROLE_ID
> FROM "Activity".ACTIVITYUSERACCOUNTROLE ACTIVITYUSERACCOUNTROLE0
> /* "Activity".ACTIVITYUSERACCOUNTROLE.__SCAN_ */
> /* scanCount: 1027840 */
> INNER JOIN "Activity".ACTIVITY ACTIVITY0
> /* "Activity".PK_ACTIVITY: ACTIVITY_ID =
> ACTIVITYUSERACCOUNTROLE0.ACTIVITY_ID */
> ON 1=1
> /* scanCount: 2055678 */
> WHERE ACTIVITY0.ACTIVITY_ID = ACTIVITYUSERACCOUNTROLE0.ACTIVITY_ID/
>
> So looks like the LEFT OUTER is the culprit, or at least one of them.
>
> br
> jan
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p5267.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-05-26 Thread Alexei Scherbakov
I'll try to reproduce your case tomorrow using these cardinalities.
Meanwhile, try to execute the following query.

EXPLAIN ANALYZE SELECT DISTINCT * FROM "Activity".activity activity0
LEFT OUTER JOIN "Activity".activityuseraccountrole activityuseraccountrole0
ON activity0.activity_Id=activityuseraccountrole0.activity_Id

I suspect the IN condition is the culprit.

2016-05-26 17:26 GMT+03:00 jan.swaelens :

> And indeed, I am running with these options:
>
> /MEM_ARGS="-XX:PermSize=256m -Xms1g -Xmx8g -XX:+UseParNewGC
> -XX:+UseConcMarkSweepGC -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m
> -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60
> -XX:+DisableExplicitGC -DIGNITE_H2_DEBUG_CONSOLE=true"/
>
> thanks
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p5240.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-26 Thread Alexei Scherbakov
You should measure performance on real-life cases and see if it's
enough for you.
Ignite performs well in both modes.
If you really want to use ONHEAP_TIERED, you must tune the GC and heap size, as
described here [1].
Make sure you have enough memory for your dataset.
The goal is to avoid long GC pauses.

[1] https://apacheignite.readme.io/docs/jvm-and-system-tuning

2016-05-26 19:40 GMT+03:00 Tomek W :

> Ok, I will try it. However, Why OFF_HEAP_TIERED ?  It seem to be not fast
> as ON HEAP
>
> 2016-05-26 18:32 GMT+02:00 Alexei Scherbakov  >:
>
>> We are talking about count(*) query performance, right ?
>> WriteBehind is for writing to CacheStore in the async mode.
>>
>> If yes, do the following:
>>
>> 1) Set OFFHEAP_TIERED mode and reduce max heap memory on example to 4Gb.
>> 2) Update to Ignite 1.6
>> 3) Measure query performance. Run the query several times and use average
>> value as the estimation.
>> 4) If it's not as expected, show me GC logs.
>>
>>
>>
>> 2016-05-26 18:28 GMT+03:00 Tomek W :
>>
>>> No, I am using ON_HEAP_TIERED.
>>>
>>> Maybe WriteBehind should be turned on ?
>>> My App do exactly one thing:  initialize hot loading.
>>>
>>> When it comes to JDBC client, I did show fragment of code in previous
>>> post.
>>>
>>> 2016-05-26 16:15 GMT+02:00 Alexei Scherbakov <
>>> alexey.scherbak...@gmail.com>:
>>>
>>>> I see long pauses in your GC log (> 3 seconds)
>>>> This means your app have high pressure on the heap.
>>>> It's hard to tell why without knowing what your app is doing.
>>>>
>>>> Are you using OFFHEAP_TIERED?
>>>> If yes, try to reduce sqlOnheapRowCacheSize value.
>>>>
>>>>
>>>>
>>>>
>>>> 2016-05-26 14:57 GMT+03:00 Tomek W :
>>>>
>>>>> Ok,
>>>>> i am going to add new machines to ignite cluster. Firstly, please look
>>>>> at my gc file log - previous message.
>>>>>
>>>>> 2016-05-26 13:39 GMT+02:00 Alexei Scherbakov <
>>>>> alexey.scherbak...@gmail.com>:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> The initial question was about setSqlOnheapRowCacheSize and I think
>>>>>> now it is clear how to improve SQL performance using with parameter.
>>>>>>
>>>>>> If you dissatisfied with the Ignite performance, I suggest you to
>>>>>> start a new thread on this,
>>>>>> providing detailed info about your performance test like
>>>>>> cluster configuration, server GC settings, and test sources.
>>>>>>
>>>>>> As already mentioned, Ignite SQL engine(H2) has the same(or slightly)
>>>>>> less performance when Postresql.
>>>>>> Ignite really starts to shine when used as distributed data grid
>>>>>> having large amount of data in memory on several nodes.
>>>>>>
>>>>>> SELECT count(*) from table is not very good test query.
>>>>>> Postgres may have the result cached, whereas Ignite always do the
>>>>>> full table traversal.
>>>>>> Recently I implemented an improvement for this case.
>>>>>> See https://issues.apache.org/jira/browse/IGNITE-2751 for details.
>>>>>>
>>>>>> I strongly recommend to test Ignite performance on the real case.
>>>>>> Dont' forget to configure GC properly [1]
>>>>>>
>>>>>> [1] https://apacheignite.readme.io/docs/jvm-and-system-tuning
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> 2016-05-26 2:09 GMT+03:00 Tomek W :
>>>>>>
>>>>>>> | Also it would be interesting to see result of
>>>>>>> | SELECT count(*) from the query above in both cases.
>>>>>>> (number of rows = 2 798 685)
>>>>>>> SELECT count(*) FROM postgresTable;
>>>>>>>  456 ms
>>>>>>> SELECT count(*) FROM postgresTable;
>>>>>>> 314 ms
>>>>>>>
>>>>>>> SELECT count(*) FROM igniteTable;
>>>>>>> 9746 ms
>>>>>>> SELECT count(*) FROM igniteTable;
>>>>>>> 9664 ms
>>>>>>>
>>>>>>>
>>

Re: Connect H2 Console to a running cluster

2016-05-26 Thread Alexei Scherbakov
Hi,

Have you tried [1] ?

[1] https://apacheignite.readme.io/docs/sql-queries#using-h2-debug-console

2016-05-26 18:07 GMT+03:00 edwardkblk :

> Is there a way to find out url, user name, password to connect H2 Console
> to
> already running cluster?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Connect-H2-Console-to-a-running-cluster-tp5242.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-26 Thread Alexei Scherbakov
We are talking about count(*) query performance, right?
WriteBehind is for writing to the CacheStore in async mode.

If yes, do the following:

1) Set OFFHEAP_TIERED mode and reduce the max heap memory, for example to 4 GB.
2) Update to Ignite 1.6.
3) Measure query performance. Run the query several times and use the average
value as the estimate.
4) If it's not as expected, show me the GC logs.



2016-05-26 18:28 GMT+03:00 Tomek W :

> No, I am using ON_HEAP_TIERED.
>
> Maybe WriteBehind should be turned on ?
> My App do exactly one thing:  initialize hot loading.
>
> When it comes to JDBC client, I did show fragment of code in previous post.
>
> 2016-05-26 16:15 GMT+02:00 Alexei Scherbakov  >:
>
>> I see long pauses in your GC log (> 3 seconds)
>> This means your app have high pressure on the heap.
>> It's hard to tell why without knowing what your app is doing.
>>
>> Are you using OFFHEAP_TIERED?
>> If yes, try to reduce sqlOnheapRowCacheSize value.
>>
>>
>>
>>
>> 2016-05-26 14:57 GMT+03:00 Tomek W :
>>
>>> Ok,
>>> i am going to add new machines to ignite cluster. Firstly, please look
>>> at my gc file log - previous message.
>>>
>>> 2016-05-26 13:39 GMT+02:00 Alexei Scherbakov <
>>> alexey.scherbak...@gmail.com>:
>>>
>>>> Hi,
>>>>
>>>> The initial question was about setSqlOnheapRowCacheSize and I think
>>>> now it is clear how to improve SQL performance using with parameter.
>>>>
>>>> If you dissatisfied with the Ignite performance, I suggest you to start
>>>> a new thread on this,
>>>> providing detailed info about your performance test like
>>>> cluster configuration, server GC settings, and test sources.
>>>>
>>>> As already mentioned, Ignite SQL engine(H2) has the same(or slightly)
>>>> less performance when Postresql.
>>>> Ignite really starts to shine when used as distributed data grid having
>>>> large amount of data in memory on several nodes.
>>>>
>>>> SELECT count(*) from table is not very good test query.
>>>> Postgres may have the result cached, whereas Ignite always do the full
>>>> table traversal.
>>>> Recently I implemented an improvement for this case.
>>>> See https://issues.apache.org/jira/browse/IGNITE-2751 for details.
>>>>
>>>> I strongly recommend to test Ignite performance on the real case.
>>>> Dont' forget to configure GC properly [1]
>>>>
>>>> [1] https://apacheignite.readme.io/docs/jvm-and-system-tuning
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> 2016-05-26 2:09 GMT+03:00 Tomek W :
>>>>
>>>>> | Also it would be interesting to see result of
>>>>> | SELECT count(*) from the query above in both cases.
>>>>> (number of rows = 2 798 685)
>>>>> SELECT count(*) FROM postgresTable;
>>>>>  456 ms
>>>>> SELECT count(*) FROM postgresTable;
>>>>> 314 ms
>>>>>
>>>>> SELECT count(*) FROM igniteTable;
>>>>> 9746 ms
>>>>> SELECT count(*) FROM igniteTable;
>>>>> 9664 ms
>>>>>
>>>>>
>>>>> Code of Jdbc Drvier (the same code for Ignite and postgresql - url
>>>>> connection is given from command line):
>>>>> http://pastebin.com/mYDSjziN
>>>>> My start sh file:
>>>>> http://pastebin.com/VmRM2sPQ
>>>>>
>>>>> My gc log file (following hint Magda):
>>>>> (file generated during hot loading and query via JDBC).
>>>>> http://pastebin.com/XicnNczV
>>>>>
>>>>>
>>>>> If you would like to see something else let me know.
>>>>>
>>>>> PS How to launch H2 debug console ? I followed docs, but it doesn't
>>>>> help.
>>>>> I set enviroment variable:
>>>>> echo $IGNITE_H2_DEBUG_CONSOLE
>>>>> true
>>>>> now, ./ignite.sh conf.xml
>>>>>
>>>>> sudo netstat -tulpn | grep 61214
>>>>> No opened ports.
>>>>>
>>>>> BTW, during starting ignite it give me information:
>>>>> [01:03:02]  Performance suggestions for grid 'turbines_table_cluster'
>>>>> (fix if possible)
>>>>> [01:03:02] To disable, set
>>>>> -DIGNITE

Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-26 Thread Alexei Scherbakov
I see long pauses in your GC log (> 3 seconds).
This means your app has high pressure on the heap.
It's hard to tell why without knowing what your app is doing.

Are you using OFFHEAP_TIERED?
If yes, try reducing the sqlOnheapRowCacheSize value.
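For reference, this knob lives on the cache configuration; a hedged Spring XML
sketch — the cache name and the reduced 1024 value are illustrative
assumptions (Ignite 1.x's default for sqlOnheapRowCacheSize is larger):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="igniteTable" />
    <!-- A smaller on-heap row cache trades some SQL read speed for
         lower heap pressure when running in OFFHEAP_TIERED mode. -->
    <property name="sqlOnheapRowCacheSize" value="1024" />
</bean>
```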




2016-05-26 14:57 GMT+03:00 Tomek W :

> Ok,
> i am going to add new machines to ignite cluster. Firstly, please look at
> my gc file log - previous message.
>
> 2016-05-26 13:39 GMT+02:00 Alexei Scherbakov  >:
>
>> Hi,
>>
>> The initial question was about setSqlOnheapRowCacheSize and I think
>> now it is clear how to improve SQL performance using with parameter.
>>
>> If you dissatisfied with the Ignite performance, I suggest you to start a
>> new thread on this,
>> providing detailed info about your performance test like
>> cluster configuration, server GC settings, and test sources.
>>
>> As already mentioned, Ignite SQL engine(H2) has the same(or slightly)
>> less performance when Postresql.
>> Ignite really starts to shine when used as distributed data grid having
>> large amount of data in memory on several nodes.
>>
>> SELECT count(*) from table is not very good test query.
>> Postgres may have the result cached, whereas Ignite always do the full
>> table traversal.
>> Recently I implemented an improvement for this case.
>> See https://issues.apache.org/jira/browse/IGNITE-2751 for details.
>>
>> I strongly recommend to test Ignite performance on the real case.
>> Dont' forget to configure GC properly [1]
>>
>> [1] https://apacheignite.readme.io/docs/jvm-and-system-tuning
>>
>>
>>
>>
>>
>>
>> 2016-05-26 2:09 GMT+03:00 Tomek W :
>>
>>> | Also it would be interesting to see result of
>>> | SELECT count(*) from the query above in both cases.
>>> (number of rows = 2 798 685)
>>> SELECT count(*) FROM postgresTable;
>>>  456 ms
>>> SELECT count(*) FROM postgresTable;
>>> 314 ms
>>>
>>> SELECT count(*) FROM igniteTable;
>>> 9746 ms
>>> SELECT count(*) FROM igniteTable;
>>> 9664 ms
>>>
>>>
>>> Code of Jdbc Drvier (the same code for Ignite and postgresql - url
>>> connection is given from command line):
>>> http://pastebin.com/mYDSjziN
>>> My start sh file:
>>> http://pastebin.com/VmRM2sPQ
>>>
>>> My gc log file (following hint Magda):
>>> (file generated during hot loading and query via JDBC).
>>> http://pastebin.com/XicnNczV
>>>
>>>
>>> If you would like to see something else let me know.
>>>
>>> PS: How do I launch the H2 debug console? I followed the docs, but it
>>> doesn't help.
>>> I set the environment variable:
>>> echo $IGNITE_H2_DEBUG_CONSOLE
>>> true
>>> now, ./ignite.sh conf.xml
>>>
>>> sudo netstat -tulpn | grep 61214
>>> No opened ports.
>>>
>>> BTW, while starting, Ignite gives me this information:
>>> [01:03:02]  Performance suggestions for grid 'turbines_table_cluster'
>>> (fix if possible)
>>> [01:03:02] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
>>> [01:03:02]   ^-- Disable grid events (remove 'includeEventTypes' from
>>> configuration)
>>> [01:03:02]   ^-- Enable ATOMIC mode if not using transactions (set
>>> 'atomicityMode' to ATOMIC)
>>> [01:03:02]   ^-- Enable write-behind to persistent store (set
>>> 'writeBehindEnabled' to true)
>>>
>>>
>>> 2016-05-25 12:23 GMT+02:00 Alexei Scherbakov <
>>> alexey.scherbak...@gmail.com>:
>>>
>>>> For postgres test I mean initial jdbc query and result set traversal.
>>>> For Ignite I mean sql query and iterator traversal.
>>>> Also it would be interesting to see result of
>>>> *SELECT count(*) from the query above in both cases.*
>>>>
>>>> 2016-05-25 12:00 GMT+03:00 Tomek W :
>>>>
>>>>> [image: Obraz w treści 1]
>>>>>
>>>>> What code do you mean ? JDBC client ?
>>>>>
>>>>> 2016-05-25 10:25 GMT+02:00 Alexei Scherbakov <
>>>>> alexey.scherbak...@gmail.com>:
>>>>>
>>>>>> What's the batch size for postgresql ?
>>>>>> What's the size of one entry ?
>>>>>> Could you provide the test code for both postgres and Ignite (just
>>>>>> the query + read with the time estimation) ?
>>>>>>

Re: Unexpected performance issue with SQL query followed by error

2016-05-26 Thread Alexei Scherbakov
How much data do you have in both tables?

30 minutes is too long even for a full scan in memory.

Please run:

EXPLAIN ANALYZE SELECT DISTINCT * FROM "Activity".activity activity0

EXPLAIN ANALYZE SELECT DISTINCT * FROM "Activity".activityuseraccountrole
activityuseraccountrole0
WHERE activityuseraccountrole0.useraccountrole_Id IN (1, 3)

Didn't you forget to configure GC properly, as discussed earlier in this
thread?
Do you have any activity on the cache while running the query?


2016-05-25 16:48 GMT+03:00 jan.swaelens :

> Thanks,
>
> Added the code and executed the following:
>
> EXPLAIN ANALYZE SELECT DISTINCT * FROM "Activity".activity activity0
> LEFT OUTER JOIN "Activity".activityuseraccountrole activityuseraccountrole0
> ON activityuseraccountrole0.activity_Id = activity0.activity_Id
> AND activityuseraccountrole0.useraccountrole_Id IN (1, 3)
>
> This also keeps running forever - well, at least for 30 minutes, after
> which I give up. Should I wait it out, or does it make sense to add a clause
> to limit the data?
>
> best regards
> jan
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p5194.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-26 Thread Alexei Scherbakov
Hi,

The initial question was about setSqlOnheapRowCacheSize, and I think
it is now clear how to improve SQL performance using this parameter.

If you are dissatisfied with Ignite's performance, I suggest you start a
new thread on this,
providing detailed info about your performance test, such as the
cluster configuration, server GC settings, and test sources.

As already mentioned, Ignite's SQL engine (H2) has the same (or slightly
lower) performance than PostgreSQL.
Ignite really starts to shine when used as a distributed data grid holding
a large amount of data in memory on several nodes.

SELECT count(*) from a table is not a very good test query.
Postgres may have the result cached, whereas Ignite always does a full
table traversal.
Recently I implemented an improvement for this case.
See https://issues.apache.org/jira/browse/IGNITE-2751 for details.

I strongly recommend testing Ignite performance on a real use case.
Don't forget to configure GC properly [1]

[1] https://apacheignite.readme.io/docs/jvm-and-system-tuning
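The GC advice in [1] can be wired in through the launcher script's JVM_OPTS variable; a minimal sketch (the flag set mirrors the CMS settings quoted later in this thread, and the 10G heap is an assumption to adjust for your box):

```shell
# Sketch: CMS-based GC options for an Ignite server node.
# Heap and new-gen sizes are example values - tune them for your host.
JVM_OPTS="-server -Xms10G -Xmx10G \
  -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+UseTLAB \
  -XX:NewSize=128m -XX:MaxNewSize=128m \
  -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024 \
  -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60 \
  -XX:+DisableExplicitGC"
export JVM_OPTS
# ignite.sh picks JVM_OPTS up when launching the node:
#   ./ignite.sh conf.xml
```

The same variable works for client JVMs started via the launcher script.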






2016-05-26 2:09 GMT+03:00 Tomek W :

> | Also it would be interesting to see result of
> | SELECT count(*) from the query above in both cases.
> (number of rows = 2 798 685)
> SELECT count(*) FROM postgresTable;
>  456 ms
> SELECT count(*) FROM postgresTable;
> 314 ms
>
> SELECT count(*) FROM igniteTable;
> 9746 ms
> SELECT count(*) FROM igniteTable;
> 9664 ms
>
>
> Code of the JDBC driver (the same code for Ignite and PostgreSQL - the
> connection URL is given on the command line):
> http://pastebin.com/mYDSjziN
> My start sh file:
> http://pastebin.com/VmRM2sPQ
>
> My GC log file (following Magda's hint):
> (file generated during hot loading and querying via JDBC)
> http://pastebin.com/XicnNczV
>
>
> If you would like to see something else let me know.
>
> PS: How do I launch the H2 debug console? I followed the docs, but it
> doesn't help.
> I set the environment variable:
> echo $IGNITE_H2_DEBUG_CONSOLE
> true
> now, ./ignite.sh conf.xml
>
> sudo netstat -tulpn | grep 61214
> No opened ports.
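One possible cause of the console not starting (an assumption, not verified against this setup): the flag may need to reach the node's JVM as a system property rather than only as an exported shell variable. A sketch:

```shell
# Pass the flag as a JVM system property via the launcher's JVM_OPTS,
# so it is visible inside the node's JVM regardless of env inheritance.
JVM_OPTS="-DIGNITE_H2_DEBUG_CONSOLE=true"
export JVM_OPTS
# then start the node as usual:
#   ./ignite.sh conf.xml
# and look for the H2 console port in the node's startup output.
```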
>
> BTW, while starting, Ignite gives me this information:
> [01:03:02]  Performance suggestions for grid 'turbines_table_cluster' (fix
> if possible)
> [01:03:02] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
> [01:03:02]   ^-- Disable grid events (remove 'includeEventTypes' from
> configuration)
> [01:03:02]   ^-- Enable ATOMIC mode if not using transactions (set
> 'atomicityMode' to ATOMIC)
> [01:03:02]   ^-- Enable write-behind to persistent store (set
> 'writeBehindEnabled' to true)
>
>
> 2016-05-25 12:23 GMT+02:00 Alexei Scherbakov  >:
>
>> For postgres test I mean initial jdbc query and result set traversal.
>> For Ignite I mean sql query and iterator traversal.
>> Also it would be interesting to see result of
>> *SELECT count(*) from the query above in both cases.*
>>
>> 2016-05-25 12:00 GMT+03:00 Tomek W :
>>
>>> [image: Obraz w treści 1]
>>>
>>> What code do you mean ? JDBC client ?
>>>
>>> 2016-05-25 10:25 GMT+02:00 Alexei Scherbakov <
>>> alexey.scherbak...@gmail.com>:
>>>
>>>> What's the batch size for postgresql ?
>>>> What's the size of one entry ?
>>>> Could you provide the test code for both postgres and Ignite (just the
>>>> query + read with the time estimation) ?
>>>>
>>>> 2016-05-25 11:13 GMT+03:00 Tomek W :
>>>>
>>>>> | How many entries are downloaded to the client in both cases?
>>>>> 3000 000
>>>>>
>>>>> | Do the both queries involve network I/O ?
>>>>> No, I have only one local server (for testing purposes).
>>>>>
>>>>>
>>>>> 2016-05-25 9:59 GMT+02:00 Alexei Scherbakov <
>>>>> alexey.scherbak...@gmail.com>:
>>>>>
>>>>>> SELECT * is not really a good test query.
>>>>>> It's result can be affected not only by engine performance.
>>>>>>
>>>>>> How many entries are downloaded to the client in both cases?
>>>>>> Do the both queries involve network I/O ?
>>>>>>
>>>>>> 2016-05-25 7:58 GMT+03:00 Denis Magda :
>>>>>>
>>>>>>> In general Ignite is designed to be used in a distributed
>>>>>>> environment when gigabytes or terabytes of dataset is spread across many
>>>>>>> cluster nodes and SQL queries executed across the cluster should be 
>>>>>>> faster
>>>>>>> since resources of all the machines will be used

Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-25 Thread Alexei Scherbakov
For the Postgres test I mean the initial JDBC query and the result set
traversal.
For Ignite I mean the SQL query and the iterator traversal.
Also it would be interesting to see result of
*SELECT count(*) from the query above in both cases.*
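To make the two measurements comparable, both the query call and the full traversal should sit inside the same timed region; a small helper sketch (QueryTimer is a made-up name, and the workload in main is a stand-in for the real Postgres/Ignite code):

```java
import java.util.function.Supplier;

// Sketch: time one action (e.g. "execute the JDBC query and walk the whole
// ResultSet", or "run the SQL query and drain the iterator") in milliseconds.
final class QueryTimer {
    static <T> long timeMillis(Supplier<T> action) {
        long start = System.nanoTime();
        action.get();                       // run the query + traversal here
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Stand-in workload; replace with the real query execution/traversal.
        long elapsed = timeMillis(() -> {
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
            return sum;
        });
        System.out.println("elapsed ms: " + elapsed);
    }
}
```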

2016-05-25 12:00 GMT+03:00 Tomek W :

> [image: Obraz w treści 1]
>
> What code do you mean ? JDBC client ?
>
> 2016-05-25 10:25 GMT+02:00 Alexei Scherbakov  >:
>
>> What's the batch size for postgresql ?
>> What's the size of one entry ?
>> Could you provide the test code for both postgres and Ignite (just the
>> query + read with the time estimation) ?
>>
>> 2016-05-25 11:13 GMT+03:00 Tomek W :
>>
>>> | How many entries are downloaded to the client in both cases?
>>> 3000 000
>>>
>>> | Do the both queries involve network I/O ?
>>> No, I have only one local server (for testing purposes).
>>>
>>>
>>> 2016-05-25 9:59 GMT+02:00 Alexei Scherbakov <
>>> alexey.scherbak...@gmail.com>:
>>>
>>>> SELECT * is not really a good test query.
>>>> Its result can be affected by more than just engine performance.
>>>>
>>>> How many entries are downloaded to the client in both cases?
>>>> Do both queries involve network I/O?
>>>>
>>>> 2016-05-25 7:58 GMT+03:00 Denis Magda :
>>>>
>>>>> In general, Ignite is designed to be used in a distributed environment
>>>>> where gigabytes or terabytes of data are spread across many cluster
>>>>> nodes; SQL queries executed across the cluster should be faster, since
>>>>> the resources of all the machines are used and, as a result, a query
>>>>> completes quicker. In your scenario you have only a single cluster node,
>>>>> so you are in fact comparing the performance of PostgreSQL and H2 (the
>>>>> engine used by Ignite SQL). Ignite SQL may work slightly slower in that
>>>>> setup, but this is not Ignite's intended usage scenario.
>>>>>
>>>>> However if you try to create a cluster of several nodes running on
>>>>> different physical machines, pre-load gigabytes of data there and compare
>>>>> Ignite SQL and PostgresSQL you should see performance improvements on
>>>>> Ignite side.
>>>>>
>>>>> In any case, taking into account the advice above, do the following:
>>>>> - execute an “EXPLAIN” query to see that the index is chosen properly [1];
>>>>> - H2 console will allow you to see how fast a query is presently
>>>>> executed on a single node removing several Ignite layers [2];
>>>>> - check if you have any GC pauses during query execution since it can
>>>>> affect execution time [3]
>>>>>
>>>>> Also share the objects you use as keys and values.
>>>>>
>>>>> [1] https://apacheignite.readme.io/docs/sql-queries#using-explain
>>>>> [2]
>>>>> https://apacheignite.readme.io/docs/sql-queries#using-h2-debug-console
>>>>> [3]
>>>>> https://apacheignite.readme.io/v1.6/docs/jvm-and-system-tuning#section-detailed-garbage-collection-stats
>>>>>
>>>>> —
>>>>> Denis
>>>>>
>>>>> On May 25, 2016, at 3:23 AM, Tomek W  wrote:
>>>>>
>>>>>
>>>>> +--------------------------+------+-----------+----------+-------------+------+----------------------------+
>>>>> | Node ID8(@), IP          | CPUs | Heap Used | CPU Load | Up Time     | Size | Hi/Mi/Rd/Wr                |
>>>>> +--------------------------+------+-----------+----------+-------------+------+----------------------------+
>>>>> | 0F0AAF99(@n0), 127.0.0.1 | 8    | 54.50 %   | 3.23 %   | 00:13:13:49 | 300  | Hi: 0, Mi: 0, Rd: 0, Wr: 0 |
>>>>> +--------------------------+------+-----------+----------+-------------+------+----------------------------+
>>>>>
>>>>>
>>>>> I followed your hints. Actually, the client doesn't require as much
>>>>> memory as before - thanks for that.
>>>>>
>>>>>
>>>>> As for the server configuration, I also followed your hints;
>>>>> results:
>>>>>
>>>>> Querying is done by a JDBC client.  In both Ignite and PostgreSQL I
>>>>> have a single index on column A.
>>>>>
>>>>> Ignite: SELECT * FROM table WHERE A > 1345 takes 6s.
>>>>> Postgres: SELECT * FROM table WHERE A > 1345 takes 4s.
>>>>>
>>>>> As you can see, Postgres is still better than Ignite.  Here are the
>>>>> significant fragments of my configuration:
>>>>> http://pastebin.com/EQC4JPWR
>>>>>
>>>>> And the server XML file:
>>>>> http://pastebin.com/enR9h5J4
>>>>>
>>>>>
>>>>> Please consider why PostgreSQL is still better.
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Best regards,
>>>> Alexei Scherbakov
>>>>
>>>
>>>
>>
>>
>> --
>>
>> Best regards,
>> Alexei Scherbakov
>>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: Working Directly with Binary objects

2016-05-25 Thread Alexei Scherbakov
Hi,

You can marshal and unmarshal a BinaryObject using the marshaller like any
other type.
BTW, what are you trying to achieve?
If you store cache values as byte arrays, you need to deserialize them on
each data access to do something useful.
And how did you conclude that it is "a lot more compact"?


2016-05-24 19:47 GMT+03:00 guill-melo :

> Hello Alexei,
> Thank you very much for the reply, that really helped me.
>
> Just a quick follow-up: how can I create a BinaryObject from a byte[]? I am
> playing with a BinaryCacheStore and I want to persist the objects in the
> serialized BinaryObject format, as it's a lot more compact.
>
> Thanks again.
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Working-Directly-with-Binary-objects-tp5131p5152.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-25 Thread Alexei Scherbakov
What's the batch size for postgresql ?
What's the size of one entry ?
Could you provide the test code for both postgres and Ignite (just the
query + read with the time estimation) ?

2016-05-25 11:13 GMT+03:00 Tomek W :

> | How many entries are downloaded to the client in both cases?
> 3000 000
>
> | Do the both queries involve network I/O ?
> No, I have only one local server (for testing purposes).
>
>
> 2016-05-25 9:59 GMT+02:00 Alexei Scherbakov 
> :
>
>> SELECT * is not really a good test query.
>> Its result can be affected by more than just engine performance.
>>
>> How many entries are downloaded to the client in both cases?
>> Do both queries involve network I/O?
>>
>> 2016-05-25 7:58 GMT+03:00 Denis Magda :
>>
>>> In general, Ignite is designed to be used in a distributed environment
>>> where gigabytes or terabytes of data are spread across many cluster
>>> nodes; SQL queries executed across the cluster should be faster, since
>>> the resources of all the machines are used and, as a result, a query
>>> completes quicker. In your scenario you have only a single cluster node,
>>> so you are in fact comparing the performance of PostgreSQL and H2 (the
>>> engine used by Ignite SQL). Ignite SQL may work slightly slower in that
>>> setup, but this is not Ignite's intended usage scenario.
>>>
>>> However if you try to create a cluster of several nodes running on
>>> different physical machines, pre-load gigabytes of data there and compare
>>> Ignite SQL and PostgresSQL you should see performance improvements on
>>> Ignite side.
>>>
>>> In any case, taking into account the advice above, do the following:
>>> - execute an “EXPLAIN” query to see that the index is chosen properly [1];
>>> - H2 console will allow you to see how fast a query is presently
>>> executed on a single node removing several Ignite layers [2];
>>> - check if you have any GC pauses during query execution since it can
>>> affect execution time [3]
>>>
>>> Also share the objects you use as keys and values.
>>>
>>> [1] https://apacheignite.readme.io/docs/sql-queries#using-explain
>>> [2]
>>> https://apacheignite.readme.io/docs/sql-queries#using-h2-debug-console
>>> [3]
>>> https://apacheignite.readme.io/v1.6/docs/jvm-and-system-tuning#section-detailed-garbage-collection-stats
>>>
>>> —
>>> Denis
>>>
>>> On May 25, 2016, at 3:23 AM, Tomek W  wrote:
>>>
>>>
>>> +--------------------------+------+-----------+----------+-------------+------+----------------------------+
>>> | Node ID8(@), IP          | CPUs | Heap Used | CPU Load | Up Time     | Size | Hi/Mi/Rd/Wr                |
>>> +--------------------------+------+-----------+----------+-------------+------+----------------------------+
>>> | 0F0AAF99(@n0), 127.0.0.1 | 8    | 54.50 %   | 3.23 %   | 00:13:13:49 | 300  | Hi: 0, Mi: 0, Rd: 0, Wr: 0 |
>>> +--------------------------+------+-----------+----------+-------------+------+----------------------------+
>>>
>>>
>>> I followed your hints. Actually, the client doesn't require as much
>>> memory as before - thanks for that.
>>>
>>>
>>> As for the server configuration, I also followed your hints;
>>> results:
>>>
>>> Querying is done by a JDBC client.  In both Ignite and PostgreSQL I have
>>> a single index on column A.
>>>
>>> Ignite: SELECT * FROM table WHERE A > 1345 takes 6s.
>>> Postgres: SELECT * FROM table WHERE A > 1345 takes 4s.
>>>
>>> As you can see, Postgres is still better than Ignite.  Here are the
>>> significant fragments of my configuration:
>>> http://pastebin.com/EQC4JPWR
>>>
>>> And the server XML file:
>>> http://pastebin.com/enR9h5J4
>>>
>>>
>>> Please consider why PostgreSQL is still better.
>>>
>>>
>>>
>>
>>
>> --
>>
>> Best regards,
>> Alexei Scherbakov
>>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-25 Thread Alexei Scherbakov
SELECT * is not really a good test query.
Its result can be affected by more than just engine performance.

How many entries are downloaded to the client in both cases?
Do both queries involve network I/O?

2016-05-25 7:58 GMT+03:00 Denis Magda :

> In general, Ignite is designed to be used in a distributed environment
> where gigabytes or terabytes of data are spread across many cluster
> nodes; SQL queries executed across the cluster should be faster, since
> the resources of all the machines are used and, as a result, a query
> completes quicker. In your scenario you have only a single cluster node,
> so you are in fact comparing the performance of PostgreSQL and H2 (the
> engine used by Ignite SQL). Ignite SQL may work slightly slower in that
> setup, but this is not Ignite's intended usage scenario.
>
> However if you try to create a cluster of several nodes running on
> different physical machines, pre-load gigabytes of data there and compare
> Ignite SQL and PostgresSQL you should see performance improvements on
> Ignite side.
>
> In any case, taking into account the advice above, do the following:
> - execute an “EXPLAIN” query to see that the index is chosen properly [1];
> - H2 console will allow you to see how fast a query is presently executed
> on a single node removing several Ignite layers [2];
> - check if you have any GC pauses during query execution since it can
> affect execution time [3]
>
> Also share the objects you use as keys and values.
>
> [1] https://apacheignite.readme.io/docs/sql-queries#using-explain
> [2] https://apacheignite.readme.io/docs/sql-queries#using-h2-debug-console
> [3]
> https://apacheignite.readme.io/v1.6/docs/jvm-and-system-tuning#section-detailed-garbage-collection-stats
>
> —
> Denis
>
> On May 25, 2016, at 3:23 AM, Tomek W  wrote:
>
>
> +--------------------------+------+-----------+----------+-------------+------+----------------------------+
> | Node ID8(@), IP          | CPUs | Heap Used | CPU Load | Up Time     | Size | Hi/Mi/Rd/Wr                |
> +--------------------------+------+-----------+----------+-------------+------+----------------------------+
> | 0F0AAF99(@n0), 127.0.0.1 | 8    | 54.50 %   | 3.23 %   | 00:13:13:49 | 300  | Hi: 0, Mi: 0, Rd: 0, Wr: 0 |
> +--------------------------+------+-----------+----------+-------------+------+----------------------------+
>
>
> I followed your hints. Actually, the client doesn't require as much
> memory as before - thanks for that.
>
>
> As for the server configuration, I also followed your hints;
> results:
>
> Querying is done by a JDBC client.  In both Ignite and PostgreSQL I have
> a single index on column A.
>
> Ignite: SELECT * FROM table WHERE A > 1345 takes 6s.
> Postgres: SELECT * FROM table WHERE A > 1345 takes 4s.
>
> As you can see, Postgres is still better than Ignite.  Here are the
> significant fragments of my configuration:
> http://pastebin.com/EQC4JPWR
>
> And the server XML file:
> http://pastebin.com/enR9h5J4
>
>
> Please consider why PostgreSQL is still better.
>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-24 Thread Alexei Scherbakov
First, you have a near cache configured, and it consumes memory.
Second, you defined copyOnRead=true, so every cache get returns a copy of
the value, increasing memory usage.
Try removing the NearCacheConfiguration bean and setting copyOnRead=false,
and see if that helps with the client memory problem.
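A minimal sketch of the suggested change (the cache name is a placeholder; copyOnRead is a standard CacheConfiguration property) - no NearCacheConfiguration bean, and copy-on-read disabled:

```xml
<!-- Sketch: cache configured without a near cache and with copyOnRead=false.
     "myCache" is a placeholder name. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- return the stored value itself instead of a per-get copy -->
    <property name="copyOnRead" value="false"/>
    <!-- note: no nearConfiguration property, so no client-side near cache -->
</bean>
```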

2016-05-24 17:56 GMT+03:00 Tomek W :

> firstly,
> look at the fragment of my client config file that bothers me:
>
> <bean class="org.apache.ignite.configuration.NearCacheConfiguration">
> ...
> </bean>
>
> <bean class="org.apache.ignite.configuration.CacheConfiguration">
> ...
> </bean>
>
>
> 2016-05-24 15:22 GMT+02:00 Alexei Scherbakov  >:
>
>> Have you configured a near cache on the client?
>> Do you buffer data somewhere?
>> Share the details of the loading process.
>>
>>
>> 2016-05-24 16:07 GMT+03:00 Tomek W :
>>
>>> Thanks for your answer.
>>>
>>>
>>> I didn't measure it. Simply, I was getting a GC overflow error, so I
>>> successively increased it.
>>> How do I measure it? Why does the client require so much memory?
>>>
>>> 2016-05-24 14:28 GMT+02:00 Alexei Scherbakov <
>>> alexey.scherbak...@gmail.com>:
>>>
>>>> Just use this:
>>>>
>>>> -server -Xms10G -Xmx10G -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
>>>> -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m
>>>> -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024
>>>> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60
>>>> -XX:+DisableExplicitGC
>>>>
>>>> How do you measure client memory usage ?
>>>>
>>>> 2016-05-24 15:04 GMT+03:00 Tomek W :
>>>>
>>>>> Sorry,
>>>>> I made a mistake - I wanted to say that I am going to use ON_HEAP.
>>>>> Can you suggest more details about tuning?
>>>>> I have a client (which runs the hot loading of data from PostgreSQL) and
>>>>> a server node (which keeps the cache data).
>>>>> Now the client also requires ~4 GB. Why? After all, it doesn't keep
>>>>> data, it only runs the hot loading.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> 2016-05-24 13:44 GMT+02:00 Alexei Scherbakov <
>>>>> alexey.scherbak...@gmail.com>:
>>>>>
>>>>>> Try starting with some larger number if the default value is too low
>>>>>> for you.
>>>>>> For example, set it to 5 and see if the performance is OK.
>>>>>> If not, increase it to 10, etc.
>>>>>> I can't help you further without knowing your data access patterns.
>>>>>>
>>>>>> BTW, for 10G heap it is probably better to use ONHEAP_TIERED, as Val
>>>>>> suggested.
>>>>>> Don't forget to tune GC as described here:
>>>>>>
>>>>>>
>>>>>> https://apacheignite.readme.io/docs/jvm-and-system-tuning#jvm-tuning-for-clusters-with-on_heap-caches
>>>>>>
>>>>>>
>>>>>> 2016-05-23 22:05 GMT+03:00 Tomek W :
>>>>>>
>>>>>>> Ok, I am going to use OFF_HEAP.
>>>>>>>
>>>>>>>
>>>>>>> On each node I am going to use about 10 GB.  (My ram is 16GB).
>>>>>>> Can you help me adjust configuration for this aim ?
>>>>>>> It is very important for me.
>>>>>>> Aim: extremely fast SQL queries.
>>>>>>>
>>>>>>>
>>>>>>> 2016-05-23 18:13 GMT+02:00 Alexei Scherbakov <
>>>>>>> alexey.scherbak...@gmail.com>:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Generally speaking, setting setSqlOnheapRowCacheSize to a larger
>>>>>>>> value increases
>>>>>>>> SQL performance in OFFHEAP_TIERED mode, but also means more work for
>>>>>>>> GC,
>>>>>>>> so it should be used with care.
>>>>>>>>
>>>>>>>> The value should be set to the size of your application's
>>>>>>>> working(frequently accessed) data set.
>>>>>>>>
>>>>>>>> 2016-05-23 13:07 GMT+03:00 vkulichenko <
>>>>>>>> valentin.kuliche...@gmail.com>:
>>>>>>>>
>>>>>>>>> Are you using offheap? What is your data size?
>>>>>>>>>
>>>>>>>>> Generally, I would recommend using on-heap with SQL queries if
>>>>>>>>> this is
>>>>>>>>> possible (unless you have very big data sets and want to avoid
>>>>>>>>> having
>>>>>>>>> large heap sizes). If you still have to use offheap, you can try
>>>>>>>>> playing
>>>>>>>>> with this parameter and see what performance you get with
>>>>>>>>> different values.
>>>>>>>>> The optimal value depends on a particular application.
>>>>>>>>>
>>>>>>>>> -Val
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> View this message in context:
>>>>>>>>> http://apache-ignite-users.70518.x6.nabble.com/off-heap-indexes-setSqlOnheapRowCacheSize-how-does-it-improve-efficiency-tp5070p5092.html
>>>>>>>>> Sent from the Apache Ignite Users mailing list archive at
>>>>>>>>> Nabble.com.
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>>
>>>>>>>> Best regards,
>>>>>>>> Alexei Scherbakov
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Best regards,
>>>>>> Alexei Scherbakov
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Best regards,
>>>> Alexei Scherbakov
>>>>
>>>
>>>
>>
>>
>> --
>>
>> Best regards,
>> Alexei Scherbakov
>>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-24 Thread Alexei Scherbakov
Have you configured a near cache on the client?
Do you buffer data somewhere?
Share the details of the loading process.


2016-05-24 16:07 GMT+03:00 Tomek W :

> Thanks for your answer.
>
>
> I didn't measure it. Simply, I was getting a GC overflow error, so I
> successively increased it.
> How do I measure it? Why does the client require so much memory?
>
> 2016-05-24 14:28 GMT+02:00 Alexei Scherbakov  >:
>
>> Just use this:
>>
>> -server -Xms10G -Xmx10G -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
>> -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m
>> -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024
>> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60
>> -XX:+DisableExplicitGC
>>
>> How do you measure client memory usage ?
>>
>> 2016-05-24 15:04 GMT+03:00 Tomek W :
>>
>>> Sorry,
>>> I made a mistake - I wanted to say that I am going to use ON_HEAP.
>>> Can you suggest more details about tuning?
>>> I have a client (which runs the hot loading of data from PostgreSQL) and
>>> a server node (which keeps the cache data).
>>> Now the client also requires ~4 GB. Why? After all, it doesn't keep
>>> data, it only runs the hot loading.
>>>
>>>
>>>
>>>
>>>
>>> 2016-05-24 13:44 GMT+02:00 Alexei Scherbakov <
>>> alexey.scherbak...@gmail.com>:
>>>
>>>> Try starting with some larger number if the default value is too low
>>>> for you.
>>>> For example, set it to 5 and see if the performance is OK.
>>>> If not, increase it to 10, etc.
>>>> I can't help you further without knowing your data access patterns.
>>>>
>>>> BTW, for 10G heap it is probably better to use ONHEAP_TIERED, as Val
>>>> suggested.
>>>> Don't forget to tune GC as described here:
>>>>
>>>>
>>>> https://apacheignite.readme.io/docs/jvm-and-system-tuning#jvm-tuning-for-clusters-with-on_heap-caches
>>>>
>>>>
>>>> 2016-05-23 22:05 GMT+03:00 Tomek W :
>>>>
>>>>> Ok, I am going to use OFF_HEAP.
>>>>>
>>>>>
>>>>> On each node I am going to use about 10 GB.  (My ram is 16GB).
>>>>> Can you help me adjust configuration for this aim ?
>>>>> It is very important for me.
>>>>> Aim: extremely fast SQL queries.
>>>>>
>>>>>
>>>>> 2016-05-23 18:13 GMT+02:00 Alexei Scherbakov <
>>>>> alexey.scherbak...@gmail.com>:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Generally speaking, setting setSqlOnheapRowCacheSize to a larger
>>>>>> value increases
>>>>>> SQL performance in OFFHEAP_TIERED mode, but also means more work for
>>>>>> GC,
>>>>>> so it should be used with care.
>>>>>>
>>>>>> The value should be set to the size of your application's
>>>>>> working(frequently accessed) data set.
>>>>>>
>>>>>> 2016-05-23 13:07 GMT+03:00 vkulichenko >>>>> >:
>>>>>>
>>>>>>> Are you using offheap? What is your data size?
>>>>>>>
>>>>>>> Generally, I would recommend using on-heap with SQL queries if this is
>>>>>>> possible (unless you have very big data sets and want to avoid
>>>>>>> having
>>>>>>> large heap sizes). If you still have to use offheap, you can try
>>>>>>> playing
>>>>>>> with this parameter and see what performance you get with different
>>>>>>> values.
>>>>>>> The optimal value depends on a particular application.
>>>>>>>
>>>>>>> -Val
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> View this message in context:
>>>>>>> http://apache-ignite-users.70518.x6.nabble.com/off-heap-indexes-setSqlOnheapRowCacheSize-how-does-it-improve-efficiency-tp5070p5092.html
>>>>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Best regards,
>>>>>> Alexei Scherbakov
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Best regards,
>>>> Alexei Scherbakov
>>>>
>>>
>>>
>>
>>
>> --
>>
>> Best regards,
>> Alexei Scherbakov
>>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-05-24 Thread Alexei Scherbakov
Hi,

I've looked closer at the query.
First, it should be split into 3 sub-queries.
Each sub-query must be optimized separately, and the results joined by UNION ALL.
Let's start with the first query.
I've provided updated indexes for your model in the attachment.
Please start the server with the H2 console enabled [1] and run the following
query there after the data is loaded:

EXPLAIN ANALYZE SELECT DISTINCT * FROM activity activity0
LEFT OUTER JOIN "activityuseraccountrole".activityuseraccountrole
activityuseraccountrole0
ON activityuseraccountrole0.activityId = activity0.activityId
AND activityuseraccountrole0.useraccountroleId IN (1, 3)

LEFT OUTER JOIN "activityhistory".activityhistory activityhistory0
ON activityhistory0.activityhistoryId = activity0.lastactivityId
AND activityhistory0.activitystateEnumid NOT IN (37, 30, 463, 33, 464)

LEFT OUTER JOIN
"activityhistoryuseraccount".activityhistoryuseraccount
activityhistoryuseraccount0
ON activityhistoryuseraccount0.activityHistoryId =
activityhistory0.activityhistoryId

WHERE activity0.kernelId IS NULL
AND activity0.realizationId IS NULL
AND activity0.removefromworklist = 0

Grab the output and send it to me.

[1] https://apacheignite.readme.io/docs/sql-queries#using-h2-debug-console

2016-05-23 16:36 GMT+03:00 Alexei Scherbakov :

> Hi,
>
> In the current state of the SQL engine, there is a very high probability
> that queries must be modified to run efficiently with Ignite.
> I'll look into your data soon. Was very busy last week.
>
> 2016-05-17 17:37 GMT+03:00 jan.swaelens :
>
>> Hello,
>>
>> Please find the attached pojo instances with group index annotations.
>> JoinPerfPojos.zip
>> <
>> http://apache-ignite-users.70518.x6.nabble.com/file/n4993/JoinPerfPojos.zip
>> >
>>
>> Yes I understand that there are probably better ways to retrieve the data,
>> but the point of my exercise is to see how we can slide in an in memory
>> solution without actually impacting the implementation of the business
>> logic
>> as coded today.
>>
>> Thanks for your insights!
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p4993.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
>
> Best regards,
> Alexei Scherbakov
>



-- 

Best regards,
Alexei Scherbakov


Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-24 Thread Alexei Scherbakov
Just use this:

-server -Xms10G -Xmx10G -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m
-XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024
-XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60
-XX:+DisableExplicitGC

How do you measure client memory usage ?

2016-05-24 15:04 GMT+03:00 Tomek W :

> Sorry,
> I made a mistake - I wanted to say that I am going to use ON_HEAP.
> Can you suggest more details about tuning?
> I have a client (which runs the hot loading of data from PostgreSQL) and a
> server node (which keeps the cache data).
> Now the client also requires ~4 GB. Why? After all, it doesn't keep
> data, it only runs the hot loading.
>
>
>
>
>
> 2016-05-24 13:44 GMT+02:00 Alexei Scherbakov  >:
>
>> Try starting with some larger number if the default value is too low
>> for you.
>> For example, set it to 5 and see if the performance is OK.
>> If not, increase it to 10, etc.
>> I can't help you further without knowing your data access patterns.
>>
>> BTW, for 10G heap it is probably better to use ONHEAP_TIERED, as Val
>> suggested.
>> Don't forget to tune GC as described here:
>>
>>
>> https://apacheignite.readme.io/docs/jvm-and-system-tuning#jvm-tuning-for-clusters-with-on_heap-caches
>>
>>
>> 2016-05-23 22:05 GMT+03:00 Tomek W :
>>
>>> Ok, I am going to use OFF_HEAP.
>>>
>>>
>>> On each node I am going to use about 10 GB. (My RAM is 16 GB.)
>>> Can you help me adjust the configuration for this aim?
>>> It is very important for me.
>>> Aim: extremely fast SQL queries.
>>>
>>>
>>> 2016-05-23 18:13 GMT+02:00 Alexei Scherbakov <
>>> alexey.scherbak...@gmail.com>:
>>>
>>>> Hi,
>>>>
>>>> Generally speaking, setting setSqlOnheapRowCacheSize to a larger value
>>>> increases SQL performance in OFFHEAP_TIERED mode, but also means more
>>>> work for the GC, so it should be used with care.
>>>>
>>>> The value should be set to the size of your application's
>>>> working (frequently accessed) data set.
>>>>
>>>> 2016-05-23 13:07 GMT+03:00 vkulichenko :
>>>>
>>>>> Are you using offheap? What is your data size?
>>>>>
>>>>> Generally, I would recommend using on-heap with SQL queries if this is
>>>>> possible (unless you have very big data sets and want to avoid having
>>>>> large heap sizes). If you still have to use offheap, you can try
>>>>> playing
>>>>> with this parameter and see what performance you get with different
>>>>> values.
>>>>> The optimal value depends on a particular application.
>>>>>
>>>>> -Val
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> View this message in context:
>>>>> http://apache-ignite-users.70518.x6.nabble.com/off-heap-indexes-setSqlOnheapRowCacheSize-how-does-it-improve-efficiency-tp5070p5092.html
>>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Best regards,
>>>> Alexei Scherbakov
>>>>
>>>
>>>
>>
>>
>> --
>>
>> Best regards,
>> Alexei Scherbakov
>>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: Working Directly with Binary objects

2016-05-24 Thread Alexei Scherbakov
Hi,

try something like:

String dir = System.getProperty("java.io.tmpdir");

IgniteUtils.setWorkDirectory(dir, null);

IgniteConfiguration iCfg = new IgniteConfiguration();
BinaryConfiguration bCfg = new BinaryConfiguration();
iCfg.setBinaryConfiguration(bCfg);

BinaryContext ctx = new BinaryContext(BinaryCachingMetadataHandler.create(),
    iCfg, new NullLogger());

BinaryMarshaller marsh = new BinaryMarshaller();
marsh.setContext(new MarshallerContextImpl(null));
IgniteUtils.invoke(BinaryMarshaller.class, marsh, "setBinaryContext",
    ctx, iCfg);

byte[] bytes = marsh.marshal(new HashMap<String, String>() {{
    put("1", "1");
}});

For the second question:

((BinaryObjectImpl)bo).array()



2016-05-24 0:05 GMT+03:00 guill-melo :

> Hello All,
> I have just recently started using Ignite, and I am really enjoying it. I am
> trying to profile the size of some objects and compare it with other types
> of serialization. Is it possible to use the serialization without starting
> an Ignite process? The only way I could do it was:
>
>IgniteConfiguration cfg = new IgniteConfiguration();
> Ignite ignite = Ignition.start(cfg);
> Entity entity = Entity
> .build()
> .withId("12345")
> .withField("value field")
> .withOtherField(33)
> .build();
> BinaryObject binaryObject = ignite.binary().toBinary(entity);
>
> Also, is there a utility to convert the BinaryObject to byte[] or should I
> just treat it like any other object?
>
> Thanks !!
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Working-Directly-with-Binary-objects-tp5131.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-24 Thread Alexei Scherbakov
Try to start with some larger number if the default value is too low for you.
For example, set it to 5 and see if the performance is OK.
If not, increase it to 10, and so on.
I can't help you further without knowing your data access patterns.

BTW, for 10G heap it is probably better to use ONHEAP_TIERED, as Val
suggested.
Don't forget to tune GC as described here:

https://apacheignite.readme.io/docs/jvm-and-system-tuning#jvm-tuning-for-clusters-with-on_heap-caches


2016-05-23 22:05 GMT+03:00 Tomek W :

> Ok, I am going to use OFF_HEAP.
>
>
> On each node I am going to use about 10 GB. (My RAM is 16 GB.)
> Can you help me adjust the configuration for this aim?
> It is very important for me.
> Aim: extremely fast SQL queries.
>
>
> 2016-05-23 18:13 GMT+02:00 Alexei Scherbakov  >:
>
>> Hi,
>>
>> Generally speaking, setting setSqlOnheapRowCacheSize to a larger value
>> increases SQL performance in OFFHEAP_TIERED mode, but also means more
>> work for the GC, so it should be used with care.
>>
>> The value should be set to the size of your application's
>> working (frequently accessed) data set.
>>
>> 2016-05-23 13:07 GMT+03:00 vkulichenko :
>>
>>> Are you using offheap? What is your data size?
>>>
>>> Generally, I would recommend using on-heap with SQL queries if this is
>>> possible (unless you have very big data sets and want to avoid having
>>> large heap sizes). If you still have to use offheap, you can try playing
>>> with this parameter and see what performance you get with different
>>> values.
>>> The optimal value depends on a particular application.
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-ignite-users.70518.x6.nabble.com/off-heap-indexes-setSqlOnheapRowCacheSize-how-does-it-improve-efficiency-tp5070p5092.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>>
>> --
>>
>> Best regards,
>> Alexei Scherbakov
>>
>
>


-- 

Best regards,
Alexei Scherbakov


Re: off heap indexes - setSqlOnheapRowCacheSize - how does it improve efficiency ?

2016-05-23 Thread Alexei Scherbakov
Hi,

Generally speaking, setting setSqlOnheapRowCacheSize to a larger value
increases SQL performance in OFFHEAP_TIERED mode, but also means more work
for the GC, so it should be used with care.

The value should be set to the size of your application's
working (frequently accessed) data set.

2016-05-23 13:07 GMT+03:00 vkulichenko :

> Are you using offheap? What is your data size?
>
> Generally, I would recommend using on-heap with SQL queries if this is
> possible (unless you have very big data sets and want to avoid having
> large heap sizes). If you still have to use offheap, you can try playing
> with this parameter and see what performance you get with different values.
> The optimal value depends on a particular application.
>
> -Val
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/off-heap-indexes-setSqlOnheapRowCacheSize-how-does-it-improve-efficiency-tp5070p5092.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Unexpected performance issue with SQL query followed by error

2016-05-23 Thread Alexei Scherbakov
Hi,

In the current state of the SQL engine there is a very high probability that
queries will need to be modified for efficient use with Ignite.
I'll look into your data soon; I was very busy last week.

2016-05-17 17:37 GMT+03:00 jan.swaelens :

> Hello,
>
> Please find the attached pojo instances with group index annotations.
> JoinPerfPojos.zip
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/n4993/JoinPerfPojos.zip
> >
>
> Yes I understand that there are probably better ways to retrieve the data,
> but the point of my exercise is to see how we can slide in an in memory
> solution without actually impacting the implementation of the business
> logic
> as coded today.
>
> Thanks for your insights!
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p4993.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Problem about cache data size

2016-05-18 Thread Alexei Scherbakov
Hi,

Non-heap memory values are not related to Ignite off-heap usage.
They report only the memory used by Java native byte buffers.
If you need to estimate memory usage before data loading, please refer to
[1].
To estimate total memory usage after all data has been loaded you can use,
for example, the top command.
The actual used memory is the value in the RES column.

[1] https://apacheignite.readme.io/docs/capacity-planning
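As a complement to top, here is a rough heap-diff sketch (my own, not from the thread) for estimating per-entry on-heap cost: measure used heap before and after loading a known number of entries. The byte[] values here are only a stand-in for real cache entries.

```java
import java.util.ArrayList;
import java.util.List;

public class FootprintEstimate {
    // Best-effort used-heap reading; System.gc() is only a hint to the JVM,
    // so the numbers are approximate.
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeap();

        // Stand-in for loading cache entries: 10,000 values of ~1 KB each.
        List<byte[]> data = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            data.add(new byte[1024]);
        }

        long after = usedHeap();
        System.out.println("approx bytes per entry: "
            + (after - before) / data.size());
    }
}
```

This only captures on-heap cost; for OFFHEAP_TIERED caches the capacity-planning guide above remains the better reference.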


2016-05-18 11:59 GMT+03:00 ght230 :

> I am trying to estimate cache data size when setting CacheMode.PARTITIONED
> with different numbers of backups.
> I confirmed the cache data size by ignitevisorcmd and found the result is a
> bit strange.
> Following is my cache configures.
> CacheCfg.setCacheMode(CacheMode.PARTITIONED);
> CacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
> CacheCfg.setOffHeapMaxMemory(0);
> CacheCfg.setSwapEnabled(false);
>
> case 1 : CacheCfg.setBackups(0);
> Non-heap memory initialized 23m
> Non-heap memory used 58m
> Non-heap memory committed 88m
> Non-heap memory maximum 130m
>
> case 2 : CacheCfg.setBackups(1);
> Non-heap memory initialized 23m
> Non-heap memory used 58m
> Non-heap memory committed 88m
> Non-heap memory maximum 130m
>
> case 3 : CacheCfg.setBackups(2);
> Non-heap memory initialized 23m
> Non-heap memory used 58m
> Non-heap memory committed 88m
> Non-heap memory maximum 130m
>
> I have 2 questions:
> 1. How to estimate the memory used for the cache data?
> According to "initialized ", "used", "committed" or "maximum"?
> 2. Why does the memory used look the same with different numbers of backups?
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Problem-about-cache-data-size-tp5016.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov


Re: Using SQL to query Object field stored in Cache ?

2016-05-18 Thread Alexei Scherbakov
Hi,

Currently the Ignite SQL engine does not support the Object type in query
conditions.
It doesn't know how to compare an arbitrary type (which Object can hold)
with, for example, Integer.

Possible workarounds for that would be:
1) Use different fields for different types, like
stringValue, intValue, etc.
This allows you to use conditions like:
AttributeCache.propertyName = 'height'
and AttributeCache.intValue > 182

2) Use user defined SQL functions like:

public class ParamsComparator {
    @QuerySqlFunction
    public static boolean compareLongParam(Object param, Long arg) {
        return param instanceof Long && ((Long) param).equals(arg);
    }

    @QuerySqlFunction
    public static boolean compareStringParam(Object param, String arg) {
        return param instanceof String && ((String) param).equals(arg);
    }
}


See the fully working example in the attachment. Note that the function
code must be present on all cluster nodes for this to work.

I recommend the first approach, because it's more elegant and allows
you to use indexes on the *Value fields.


Did this help?
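For completeness, a hypothetical sketch of how the query text for the second workaround might look. The table and column names are assumptions based on this thread, and in a real cluster the cache configuration would also need setSqlFunctionClasses(ParamsComparator.class):

```java
public class UdfQueryExample {
    public static void main(String[] args) {
        // Hypothetical query text calling the user-defined SQL function;
        // in real code you would pass it to new SqlFieldsQuery(sql).
        String sql =
            "select p.personId from PersonCache p, AttributeCache a "
            + "where p.personId = a.personId "
            + "and a.attributeName = 'height' "
            + "and compareLongParam(a.attributeValue, 182)";
        System.out.println(sql);
    }
}
```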




2016-05-17 20:56 GMT+03:00 David Robinson :

> With Ignite 1.5.0:
>
> I have two caches.
>
> Cache 1 stores a Person object like this:
>
> personCache.put(id, PersonObj1);
>
> The Person class has only a single field in it declared like this:
>
> @QuerySqlField(index = true)
> private int personId;
>
> Cache 2 stores a Person Attribute object like this:
> AttributeCache.put(id, PersonAttributeObj1);
>
> The Attribute class has 3 fields in it:
>
> @QuerySqlField(index = true)
> private int personId;
>
> @QuerySqlField(index = false)
> private String attributeName;
>
> @QuerySqlField(index = false)
> private Object attributeValue;
>
> A PersonAttribute value can be any object type - for example, if 
> attributeName is "height", then
>
> attributeValue could be a Float: 182.88
>
> If attributeName is "haircolor", then attributeValue could be a String: 
> "brown".
>
> I need to be able to write a SQL join query between the Person and Attribute 
> caches and find all
>
> of the people with height > 182.
>
> When I try to use a SQL join query...something like below (it doesn't matter 
> if the 182 is set
>
> as a attribute or hard coded in the query)
>
>
> SqlFieldsQuery sql = new SqlFieldsQuery(
>     "select PersonCache.personId "
>     + "from \"" + personCacheName + "\".PersonCache, "
>     + "\"" + attributeCacheName + "\".AttributeCache "
>     + "where PersonCache.personId = AttributeCache.personId "
>     + "and AttributeCache.propertyName = 'height' "
>     + "and AttributeCache.value > 182");
>
> I received the following exception from the Ignite Server:
>
> Caused by: class org.apache.ignite.binary.BinaryObjectException: Invalid flag 
> value: -128
>   at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1632)
>   at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:292)
>   at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal(BinaryMarshaller.java:112)
>   at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$5.deserialize(IgniteH2Indexing.java:1491)
>   at org.h2.util.Utils.deserialize(Utils.java:392)
>
> If value > 182 is taken out of the query, it runs fine.
>
> Ignite does not appear to know how to deserialize an "Object" field correctly 
> to perform a comparison in SQL.
> What is the recommended Ignite way to store Object types like this and be 
> able to compare/query them
> in Ignite SQL ? I do not know ahead of time if something will be a Long or 
> Integer or String, etc.
>
> Thank you,
>
>


-- 

Best regards,
Alexei Scherbakov


ObjectQuery.java
Description: Binary data


ParamsComparator.java
Description: Binary data


Re: Unexpected performance issue with SQL query followed by error

2016-05-16 Thread Alexei Scherbakov
Hi,

I think your query should be rewritten for correct index usage.
Ignite has some known pitfalls concerning index usage [1].
Also, join order is sensitive to index configuration.
First, specify joins using indexes.

Could you provide your business model pojos and current indexes
configuration?

[1]
https://apacheignite.readme.io/docs/sql-queries#performance-and-usability-considerations

2016-05-09 14:23 GMT+03:00 jan.swaelens :

> I tried to defined the grouped indexes but I see no result on the explain
> plan nor on the actual run. This must mean that either I am doing something
> wrong or that something is going wrong.
>
> First attempt I annotated the fields on the objects, for example an extract
> of the Activity pojo:
>
> /** Value for activityId. */
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(name =
> "Activity_idx", order = 0, descending = true)})
> private long activityId;
>
> /** Value for realizationId. */
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(name =
> "Activity_idx", order = 1, descending = true)})
> private Long realizationId;
>
> /** Value for kernelId. */
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(name =
> "Activity_idx", order = 2, descending = true)})
> private Long kernelId;
>
> /** Value for removefromworklist. */
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(name =
> "Activity_idx", order = 3, descending = true)})
> private boolean removefromworklist;
>
> /** Value for lastactivityId. */
> @QuerySqlField(orderedGroups={@QuerySqlField.Group(name =
> "Activity_idx", order = 4, descending = true)})
> private Long lastactivityId;
>
> Since that did not work I also had an attempt to define it on the cache
> configuration (query entity):
>
> QueryIndex gIdx = new QueryIndex();
> idxs.add(gIdx);
> gIdx.setName("Activity_idx");
>
> Collection<String> gFields = new ArrayList<>();
> gFields.add("activityId");
> gFields.add("realizationId");
> gFields.add("kernelId");
> gFields.add("removefromworklist");
> gFields.add("lastactivityId");
> gIdx.setFieldNames(gFields, true);
>
>
> Could it be that the field aliases I use are causing this? Or something
> else
> I am missing?
>
> tx
>
>
>
> --
> View this message in context:
> http://apache-ignite-users.70518.x6.nabble.com/Unexpected-performance-issue-with-SQL-query-followed-by-error-tp4726p4846.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 

Best regards,
Alexei Scherbakov

