Cluster can not let more than one client to do continuous queries

2016-11-21 Thread ght230
The usual way of setting a remote filter in continuous queries is like this:

qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {
    @Override public CacheEntryEventFilter<Integer, String> create() {
        return new CacheEntryFilter();
    }
});

But I need to send an additional parameter "name" to it, so I have to write a
factory class with a parameter that implements
Factory<CacheEntryEventFilter<Integer, String>>, like the following:

public class CacheEntryEventFilterFactory implements
    Factory<CacheEntryEventFilter<Integer, String>> {

    private String name;

    public CacheEntryEventFilterFactory() {
    }

    public CacheEntryEventFilterFactory(String name) {
        this.name = name;
    }

    /*
     * @see javax.cache.configuration.Factory#create()
     */
    @Override
    public CacheEntryEventFilter<Integer, String> create() {
        return new CacheEntryNewAlarmFilter(name);
    }

    @IgniteAsyncCallback
    private class CacheEntryNewAlarmFilter implements
        CacheEntryEventFilter<Integer, String> {
        private String name;

        public CacheEntryNewAlarmFilter(String name) {
            this.name = name;
        }

        /** {@inheritDoc} */
        @Override
        public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e)
            throws CacheEntryListenerException {
            if (e.getValue().equals(name)) {
                System.out.println(">>>Remote Updated entry [key=" + e.getKey() +
                    ", val=" + e.getValue() + "], name = " + name);
                return true;
            }
            return false;
        }
    }
}

And I set the remote filter as:
qry.setRemoteFilterFactory(new CacheEntryEventFilterFactory(name));

After my modification, when one client is running a continuous query, another
client cannot join the cluster to run its own continuous query.

But when I use the anonymous
qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<Integer, String>>() {...})

more than one client can run continuous queries at the same time.





How to apply the distributed lock on all the ignite remote instances

2016-11-21 Thread Navneet Kumar
Hi,
I need to apply a distributed lock while storing some key-value pairs using
my own JobStore. I need to take the lock across the connected instances
before writing some records to a cache. A few lines of example code would be
very helpful for understanding this.
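
A minimal sketch of taking Ignite's cache-level distributed lock, assuming a
cache in TRANSACTIONAL atomicity mode (the cache name, key, and value below
are illustrative):

import java.util.concurrent.locks.Lock;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class DistributedLockSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Explicit locks require a cache with TRANSACTIONAL atomicity mode.
        IgniteCache<String, String> cache = ignite.cache("jobStoreCache");

        // The lock is cluster-wide: any other node calling lock() on the
        // same key blocks until it is released.
        Lock lock = cache.lock("job-1");

        lock.lock();
        try {
            cache.put("job-1", "some record");
        }
        finally {
            lock.unlock();
        }
    }
}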





while updating the cache, can I get notification

2016-11-21 Thread Navneet Kumar
Suppose I have the config system (today EMS) updating the cache; can I get a
notification and handle the config change dynamically?
I see the answer is yes.
The question is: can I enable these notifications selectively (only on
certain tables)?
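
A minimal sketch of selective notifications via a continuous query, assuming
each "table" maps to its own Ignite cache (configCache and the "app." key
prefix below are illustrative):

import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEventFilter;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;

public class SelectiveNotificationSketch {
    public static void listen(IgniteCache<String, String> configCache) {
        ContinuousQuery<String, String> qry = new ContinuousQuery<>();

        // Notifications are registered per cache, so querying one cache
        // already limits them to that "table".
        qry.setLocalListener(evts -> evts.forEach(e ->
            System.out.println("Changed: " + e.getKey() + " -> " + e.getValue())));

        // Optionally narrow further with a remote filter. In a real cluster
        // the filter must be serializable and available on server nodes.
        Factory<CacheEntryEventFilter<String, String>> filterFactory =
            () -> e -> e.getKey().startsWith("app.");
        qry.setRemoteFilterFactory(filterFactory);

        // Keep the returned cursor open for as long as notifications are
        // needed; closing it deregisters the listener.
        configCache.query(qry);
    }
}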






Re: Apache Spark & Ignite Integration

2016-11-21 Thread pragmaticbigdata
Thanks for the follow up.


> Data will be in sync because it's stored in Ignite cache. IgniteRDD uses
> Ignite API to update it and you can do this as well in your code. 
> 
> There is no copy of the data maintained in Spark, it's always stored in
> Ignite caches. Spark runs Ignite client(s) that can fetch the data for
> computation, but it doesn't store it. 

I think I missed clarifying what I wanted to say in my earlier comment.
When I said earlier that "I will have to discard the spark
rdd/dataset/dataframe every time the data is updated in ignite through the
Ignite API", what I also meant was that I could not cache the dataset in
Spark's memory for future transformations (using the dataset.cache() Spark
API), because if the Ignite cache gets updated simultaneously by another
user, my dataset in Spark would be stale. This happens because Spark acts as
an Ignite client and fetches the data, rather than a tighter integration in
which Spark could work with the same copy of the data on the Ignite server.

If what I have understood is true, I want to confirm that the behavior is no
different when Ignite runs in embedded mode with Spark. Kindly let me know.





Re: How does Ignite treat equal Keys?

2016-11-21 Thread rjdamore
Very good, you are correct about ConcurrentHashMap of course. I didn't
frame that statement correctly. 5 stars for the quick response; you folks are
doing great stuff!

On Nov 21, 2016 4:07 PM, "vkulichenko [via Apache Ignite Users]" wrote:

Hi,
If you insert twice for the same key, you have the latest inserted value in 
cache. So yes, the value will be overwritten and that's actually similar to 
the ConcurrentHashMap, but Ignite cache is distributed.
-Val


Re: How does Ignite treat equal Keys?

2016-11-21 Thread vkulichenko
Hi,

If you insert twice for the same key, you have the latest inserted value in
cache. So yes, the value will be overwritten and that's actually similar to
the ConcurrentHashMap, but Ignite cache is distributed.

-Val
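
A minimal sketch of the overwrite behavior described above (the cache name
and key are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class EqualKeysSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("demoCache");

        cache.put(1, "first");
        cache.put(1, "second"); // same key: the previous value is replaced

        // Prints "second" -- no per-key bucket of values is kept.
        System.out.println(cache.get(1));
    }
}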





How does Ignite treat equal Keys?

2016-11-21 Thread rjdamore
I've read the documentation on equality and the BinaryObjectMarshaller.

However, I'm still not clear on the equality of Keys.

If an insert happens where the keys are equal, will Ignite treat the
insertion as a bucket does in ConcurrentHashMap? Or, will the value for the
key simply be overwritten?

Sorry for the lack of understanding, but an answer would be greatly
appreciated. Thank you.





Re: Cluster hung after a node killed

2016-11-21 Thread javastuff....@gmail.com
The issue is resolved for me. There was a typo that caused one of the locks
to stay held; correcting the typo allows unlocking.

However, in real production a node can crash before releasing a lock, so
there must be a way for locks to time out, or locks need to be released
automatically when the owning node fails (see the tryLock() sketch below).
I have reproduced this with a simple program:
1. Node 1 - run the ExampleNodeStartup example.
2. Node 2 - run a program that creates a transactional cache and adds 100K
entries:
cfg.setCacheMode(CacheMode.PARTITIONED);
cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
cfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
cfg.setSwapEnabled(false);
cfg.setBackups(0);
3. Node 3 - run a program that takes a lock (cache.lock(key)).
4. Kill Node 3.
5. Node 4 - run a program that tries to get the cached data.

Node 4 hung. In fact, the complete cluster hung; the only solution I could
make work was to restart the whole cluster.

-Sam





Re: Swap space

2016-11-21 Thread vkulichenko
An entry will go to swap only if it's evicted from the cache [1]. If an
eviction policy is not configured, this will never happen.

[1] https://apacheignite.readme.io/docs/evictions

-Val
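
A minimal configuration sketch against the Ignite 1.x-era API used in this
thread (the cache name and maximum size are illustrative); without the
eviction policy nothing is evicted, so nothing ever reaches swap:

import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class SwapConfigSketch {
    public static CacheConfiguration<Integer, String> swapEnabledCache() {
        CacheConfiguration<Integer, String> cfg =
            new CacheConfiguration<>("swapCache");

        // Evict on-heap entries beyond 100,000, least recently used first.
        cfg.setEvictionPolicy(new LruEvictionPolicy<>(100_000));

        // Allow evicted entries to be written to swap space.
        cfg.setSwapEnabled(true);

        return cfg;
    }
}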





Re: SELECT with ORDER BY and pagination

2016-11-21 Thread Denis Magda
Sergi,

Do we already have a ticket for that optimization?

—
Denis

> On Nov 21, 2016, at 4:28 AM, Sergi Vladykin wrote:
> 
> Right now all the results will be loaded to the client (the reducer node)
> and will be re-sorted. We are already working on this issue, and hopefully
> it will be resolved soon.
> 
> Sergi
> 
> 2016-11-17 16:40 GMT+03:00 vdpyatkov:
> Hi,
> 
> Look at the method (o.a.i.cache.query.SqlFieldsQuery#setPageSize), which
> allows you to set the SQL page size.
> If you go through the cursor (the result of cache.query()) or execute
> "getAll", then you get all the result data.
> 
> The client gets data one page at a time when using the cursor (and sends a
> request for the next part of the data when needed).
> 
> [1]: https://apacheignite.readme.io/docs/sql-queries



Re: Apache Spark & Ignite Integration

2016-11-21 Thread vkulichenko
pragmaticbigdata wrote
> Yes, but the data would not be in sync when both (updates and analytics) are
> done concurrently, right? I will have to discard the spark
> rdd/dataset/dataframe every time the data is updated in ignite through the
> Ignite API. As I understand the data remains in sync only when we use the
> IgniteRDD api. Correct me if my understanding is wrong.

Data will be in sync because it's stored in Ignite cache. IgniteRDD uses
Ignite API to update it and you can do this as well in your code.

pragmaticbigdata wrote
> I have an additional question on the same topic - Even when ignite runs in
> an embedded mode with spark, the memory footprint behavior is the same as
> it is when ignite runs in standalone mode, right? i.e. when Spark fetches
> the ignite cache through the IgniteRDD api (val igniteRDD =
> igniteContext.fromCache("<cacheName>")), a copy of data is created in the
> spark worker's memory.

There is no copy of the data maintained in Spark, it's always stored in
Ignite caches. Spark runs Ignite client(s) that can fetch the data for
computation, but it doesn't store it.

-Val
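
A minimal sketch of this client-side fetch using the Java flavor of the
Spark integration, assuming the ignite-spark module's JavaIgniteContext and
JavaIgniteRDD (the cache name and types are illustrative):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.ignite.spark.JavaIgniteRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class IgniteRddSketch {
    public static void run(JavaSparkContext sc) {
        // Spark workers start Ignite clients; the data itself stays in the
        // Ignite caches on the server nodes.
        JavaIgniteContext<Integer, String> ic =
            new JavaIgniteContext<>(sc, IgniteConfiguration::new);

        JavaIgniteRDD<Integer, String> rdd = ic.fromCache("myCache");

        // Reads fetch data from Ignite on demand; writes go through the
        // Ignite API, so other Ignite clients see them immediately.
        System.out.println("Entries: " + rdd.count());
    }
}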





Re: Swap space

2016-11-21 Thread Vladislav Pyatkov
Hi Kevin,

Check your log: does any OutOfMemory exception appear in it?

On Mon, Nov 21, 2016 at 5:11 PM, Kevin Daly wrote:

> I think what he means is that no files are being created. We are doing some
> tests with Ignite and we don't see any use of swap, even when we load more
> keys than fit in physical memory.



-- 
Vladislav Pyatkov


Re: Swap space

2016-11-21 Thread Kevin Daly
I think what he means is that no files are being created. We are doing some
tests with Ignite and we don't see any use of swap, even when we load more
keys than fit in physical memory.





Re: SELECT with ORDER BY and pagination

2016-11-21 Thread akaptsan
This is very bad news for us :(
Could you please give me a reference to the JIRA ticket? I need to track it.





Re: SELECT with ORDER BY and pagination

2016-11-21 Thread Sergi Vladykin
Right now all the results will be loaded to the client (the reducer node)
and will be re-sorted. We are already working on this issue, and hopefully
it will be resolved soon.

Sergi

2016-11-17 16:40 GMT+03:00 vdpyatkov:

> Hi,
>
> Look at the method (o.a.i.cache.query.SqlFieldsQuery#setPageSize), which
> allows you to set the SQL page size.
> If you go through the cursor (the result of cache.query()) or execute
> "getAll", then you get all the result data.
>
> The client gets data one page at a time when using the cursor (and sends a
> request for the next part of the data when needed).
>
> [1]: https://apacheignite.readme.io/docs/sql-queries
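
A minimal sketch of page-by-page iteration with setPageSize (the query text
and page size are illustrative); iterating the cursor streams one page at a
time, whereas getAll() materializes the full result set on the client:

import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class PagedQuerySketch {
    public static void run(IgniteCache<Integer, String> cache) {
        SqlFieldsQuery qry =
            new SqlFieldsQuery("select _key, _val from String order by _key");

        // The client pulls result rows from the servers 100 at a time.
        qry.setPageSize(100);

        try (QueryCursor<List<?>> cursor = cache.query(qry)) {
            for (List<?> row : cursor)
                System.out.println("key=" + row.get(0) + ", val=" + row.get(1));
        }
    }
}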


Re: Memory consumption in apache ignite

2016-11-21 Thread dkarachentsev
An approximate calculation of the on-heap overhead for a custom object looks
like this:
entries = N
number of fields = flds
header size = 24
field ID size = 4
BinaryObjectImpl = 40
byte array header 16 + length 4 = 20 (at least; padding omitted)

N*(24 + flds*4 + 40 + 20) = 4*N*(21 + flds),

so for one entry with one field you'll get ~88 bytes of overhead. But take
into account that this value is for a user's custom object; for primitives or
collections it will be much smaller.

Also, in CSV the data is stored as strings, so the same value may take a
different amount of memory:
"1" in CSV is 1 byte, while an integer in memory takes 4 bytes, and
"1234567" in CSV is 7 bytes, while in memory it is still 4 bytes.

NOTE: Do not treat these calculations as precise, because there are a lot of
other objects and links associated with an entry; the result depends on the
actual entry size, no optimizations are taken into account, values will
differ on 32-bit systems, etc.

BinaryObjectImpl may cache the deserialized value to reduce the number of
deserializations, and it always holds the serialized form (a byte array).
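
A minimal sketch of the estimate above as code; the formula and its constants
come from this post, so treat the output as a rough order of magnitude only:

public class HeapOverheadSketch {
    /** Per-entry overhead: N * (24 + 4*flds + 40 + 20) = 4 * N * (21 + flds). */
    static long overheadBytes(long entries, int fields) {
        return 4L * entries * (21 + fields);
    }

    public static void main(String[] args) {
        // One entry with one field: 4 * (21 + 1) = 88 bytes, as stated above.
        System.out.println(overheadBytes(1, 1));         // 88

        // 1,000,000 entries with 5 fields each: ~104 MB of overhead on top
        // of the raw field data.
        System.out.println(overheadBytes(1_000_000, 5)); // 104000000
    }
}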





Re: Apache Spark & Ignite Integration

2016-11-21 Thread pragmaticbigdata

> use Ignite to update the data in a transactional manner and Spark for
> analytics.

Yes, but the data would not be in sync when both (updates and analytics) are
done concurrently, right? I would have to discard the Spark
rdd/dataset/dataframe every time the data is updated in Ignite through the
Ignite API. As I understand it, the data remains in sync only when we use the
IgniteRDD API. Correct me if my understanding is wrong.

I have an additional question on the same topic - even when Ignite runs in
embedded mode with Spark, is the memory footprint behavior the same as it is
when Ignite runs in standalone mode? I.e., when Spark fetches the Ignite
cache through the IgniteRDD API (val igniteRDD =
igniteContext.fromCache("<cacheName>")), a copy of the data is created in the
Spark worker's memory.

Thanks.


