Re: Exception - Method rawReader can be called only once

2019-10-10 Thread javastuff....@gmail.com
The interface does not have two sets of methods, so a second method that uses
the "raw" reader means extra code, with special programmatic changes in the
extending class to call the "raw" method from the superclass. This is how I am
using it for now, but it requires extra code and special handling in the
extended class, which feels like a workaround for traversing the class chain
and is not conceptually clean.
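
Roughly, the extra code I mean looks like the sketch below (a simplified
illustration; the Parent/Child classes and their fields are hypothetical): the
subclass never calls rawReader()/rawWriter() itself, it only overrides
protected helpers that receive the raw reader/writer obtained once at the top
of the chain.

import org.apache.ignite.binary.*;

class Parent implements Binarylizable {
    private int parentField;

    @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        writeRawTo(writer.rawWriter());     // rawWriter() obtained exactly once
    }

    @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
        readRawFrom(reader.rawReader());    // rawReader() obtained exactly once
    }

    protected void writeRawTo(BinaryRawWriter raw) {
        raw.writeInt(parentField);
    }

    protected void readRawFrom(BinaryRawReader raw) {
        parentField = raw.readInt();
    }
}

class Child extends Parent {
    private String childField;

    @Override protected void writeRawTo(BinaryRawWriter raw) {
        super.writeRawTo(raw);              // parent writes its raw fields first
        raw.writeString(childField);        // then the child appends its own
    }

    @Override protected void readRawFrom(BinaryRawReader raw) {
        super.readRawFrom(raw);             // read back in the same order
        childField = raw.readString();
    }
}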

I have raised ticket IGNITE-12280 to remove this limitation. I hope it gets
through and is fixed soon.

Thanks,
-Sam



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Exception - Method rawReader can be called only once

2019-10-04 Thread javastuff....@gmail.com
Hi Ilya,

Each object is also eligible for separate caching, so if class-level chaining
is not working/allowed, too much unnecessary code has to be written and
maintained throughout the class chains, or separate value objects have to be
created purely for caching with Ignite.
In my experience with other tools, serialization/deserialization does not have
this limitation on class chaining. I believe restricting class chaining is a
big limitation.

Thanks,
-Sam  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Exception - Method rawReader can be called only once

2019-10-03 Thread javastuff....@gmail.com
Hi Ilya, 

Thanks for your reply. I tried to program around it and no longer see the
exception, but I still need to verify whether the object is built correctly.

For a complex or deep class hierarchy this seems like a hassle and is not
maintainable. Could you please shed some light on why this limitation is
needed, and why it is not enforced for rawWriter?

Thanks, 
-Sam



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: isolated cluster configuration

2019-03-13 Thread javastuff....@gmail.com
Thank you for the response Ilya.

Using two different DB schemas is a way out here, but I was trying to see if
there is any other way to achieve this, maybe a property or configuration that
results in different table names for each isolated cluster, or table rows
carrying cluster details so the clusters stay isolated from each other.

Please let me know if this can be done without having two separate DB schemas.

Thanks,
-Sam 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: isolated cluster configuration

2019-03-12 Thread javastuff....@gmail.com
Any pointers for this?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re: TPS does not increase even though new server nodes added

2019-03-08 Thread javastuff....@gmail.com
Check out the thread dump excerpt below -

   java.lang.Thread.State: BLOCKED (on object monitor)
at java.io.PrintStream.println(PrintStream.java:805)
- waiting to lock <0x85d6d798> (a java.io.PrintStream)
at com.example.ignite.service.IgniteCacheImpl.doInTransaction(IgniteCacheImpl.java:57)
at com.example.ignite.cotroller.ItemIgniteController.cache(ItemIgniteController.java:34)

Stop writing to the output stream in the hot path; PrintStream operations such
as System.out.println() are synchronized, so all threads serialize on the
stream's monitor.
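
As a minimal illustration (not from the original thread) of why this serializes
throughput: System.out is a single shared PrintStream and println() synchronizes
on it, so every worker thread that logs per operation queues on the same monitor.

public class PrintlnContention {
    public static void main(String[] args) {
        for (int i = 0; i < 8; i++) {
            new Thread(() -> {
                for (int n = 0; n < 1_000_000; n++) {
                    // the actual cache/transaction work would go here
                    System.out.println("done " + n);   // all 8 threads block on this lock
                }
            }).start();
        }
    }
}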

Thanks,
-Sam 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


isolated cluster configuration

2019-03-08 Thread javastuff....@gmail.com
Hi,

We have a requirement for two separate cache clusters isolated from each other.
We have two separate configuration files and Java programs to initialize them.
We achieved this with TCP discovery by using non-intersecting IPs and ports for
the different clusters.

However, we need to achieve the same using DB discovery. Is there a way to
configure two separate cache clusters using DB discovery?
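
For reference, a minimal sketch of configuring DB-based discovery per cluster
(method and parameter names are illustrative, not from this thread):
TcpDiscoveryJdbcIpFinder keeps its address table in whatever schema the
supplied DataSource connects to, so giving each cluster a DataSource pointing
at its own schema keeps the clusters from discovering each other.

import javax.sql.DataSource;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.jdbc.TcpDiscoveryJdbcIpFinder;

public class DbDiscoveryConfig {
    // Each cluster passes a DataSource that connects to its own schema.
    static IgniteConfiguration clusterConfig(DataSource clusterDataSource) {
        TcpDiscoveryJdbcIpFinder ipFinder = new TcpDiscoveryJdbcIpFinder();
        ipFinder.setDataSource(clusterDataSource);   // address table lives in this schema

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);

        return new IgniteConfiguration().setDiscoverySpi(discoSpi);
    }
}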

Thanks,
Sam  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Upgrading from 1.9 .... need help

2018-11-15 Thread javastuff....@gmail.com
Hello Ilya,

Can you please elaborate on "You would need to use some other means to limit
sizes of your caches"?

Are there any guidelines or best practices for defining data regions?

Thanks,
-Sam



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Upgrading from 1.9 .... need help

2018-11-14 Thread javastuff....@gmail.com
Hi,

We are using Ignite 1.9 off-heap caches with swap space disabled. We have about
80 different caches, all defined programmatically; some have an expiry policy,
some have an eviction policy.

We are trying to upgrade to 2.6 and need help -

1. Many APIs from 1.9 are no longer supported or are deprecated in 2.6.
For example, the cache configuration no longer seems to allow defining an
EvictionPolicy, startSize, or offHeapMaxMemory, or disabling swap space. Most
of these settings are available on the data region instead; does that mean
each cache needs to be defined with a separate data region to get that kind of
control? We tried to do the same by defining a data region programmatically
and using that region in the programmatic cache definition, but it fails with
"data region not found" (a rough sketch of what we tried follows below).

2. What strategy or best practice should be used for defining data regions?
Should we use data regions to group caches based on use case, volatility, or
other aspects? In that case, how do we control the size for each type of cache?
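
For reference, a minimal sketch (region and cache names are placeholders, not
from the original post) of defining a custom data region programmatically in
Ignite 2.6 and pointing a cache at it. The region has to be registered in the
IgniteConfiguration before the node starts; referencing a region name that was
never registered is one way to end up with the "data region not found" failure
mentioned above.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DataRegionSketch {
    public static void main(String[] args) {
        DataRegionConfiguration regionCfg = new DataRegionConfiguration()
            .setName("hot_data_region")              // placeholder name
            .setInitialSize(256L * 1024 * 1024)      // 256 MB
            .setMaxSize(2L * 1024 * 1024 * 1024);    // 2 GB cap replaces offHeapMaxMemory

        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setDataRegionConfigurations(regionCfg);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            CacheConfiguration<Long, String> cacheCfg =
                new CacheConfiguration<Long, String>("someCache")
                    .setDataRegionName("hot_data_region");   // must match a registered region

            ignite.getOrCreateCache(cacheCfg);
        }
    }
}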

Thanks,
-Sam






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Cache metrics

2018-04-25 Thread javastuff....@gmail.com
Can we reset cache metrics without destroying or restarting a cache?

Thanks,
-Sam



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Slow invoke call

2018-04-23 Thread javastuff....@gmail.com
Sorry, I do not have a trace for the scan query.
We moved away from the earlier implementation; as of now it is not showing big
latencies like before.

Thank you for help.

Thanks,
-Sam





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Slow invoke call

2018-04-17 Thread javastuff....@gmail.com
Thanks Val.

I understand the array copy is a heavy operation and probably causes a lot of
memory allocation too. However, my profiler shows the complete copy-and-append
logic taking 50% of the total time of the invoke() call. Hence the question:
should invoke() take this much time, or is the concurrency needed for the
atomic operation killing it?

I have already tried putting separate entries instead of appending to a single
byte array. However, this approach needs more logic to keep the sequence and to
lock or synchronize during fetch or remove.
During a quick implementation of this new approach, I used a scan query with a
filter on the key for the fetch and remove calls. As expected, put was faster
(no entry processor, no array copy); however, I hit an issue with the scan
query. Probably one thread was iterating the scan query while another tried to
put, and that is where the scan query bails out with an exception.
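
Roughly, the chunked variant I tried looks like the sketch below (simplified;
ChunkKey and the sequence naming are illustrative): each append becomes its own
entry keyed by (streamId, sequence), and fetch uses a ScanQuery filtered on the
stream id, sorted by sequence afterwards.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Objects;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.lang.IgniteBiPredicate;

class ChunkKey implements Serializable {
    final String streamId;
    final long seq;

    ChunkKey(String streamId, long seq) { this.streamId = streamId; this.seq = seq; }

    @Override public boolean equals(Object o) {
        return o instanceof ChunkKey && ((ChunkKey) o).seq == seq
            && ((ChunkKey) o).streamId.equals(streamId);
    }

    @Override public int hashCode() { return Objects.hash(streamId, seq); }
}

class ChunkedAppend {
    static void append(Ignite ignite, IgniteCache<ChunkKey, byte[]> cache,
                       String streamId, byte[] chunk) {
        // Cluster-wide sequence keeps chunk ordering without locking a shared value.
        long seq = ignite.atomicSequence("seq-" + streamId, 0, true).incrementAndGet();
        cache.put(new ChunkKey(streamId, seq), chunk);    // plain put, no array copy
    }

    static List<byte[]> fetch(IgniteCache<ChunkKey, byte[]> cache, String streamId) {
        // The filter runs on the server nodes, so only matching entries travel back.
        IgniteBiPredicate<ChunkKey, byte[]> filter = (k, v) -> k.streamId.equals(streamId);

        List<Cache.Entry<ChunkKey, byte[]>> entries = new ArrayList<>();
        for (Cache.Entry<ChunkKey, byte[]> e : cache.query(new ScanQuery<>(filter)))
            entries.add(e);

        entries.sort(Comparator.comparingLong(e -> e.getKey().seq));  // restore append order

        List<byte[]> chunks = new ArrayList<>(entries.size());
        for (Cache.Entry<ChunkKey, byte[]> e : entries)
            chunks.add(e.getValue());
        return chunks;
    }
}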



I am going to tweak this use case further to get better results; any ideas or
input will be appreciated.

Thanks,
-Sambhav



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Slow invoke call

2018-04-05 Thread javastuff....@gmail.com
Thanks Pavel for reply.

/"When you store entries into the off-heap, any update operation requires
copying value from the off-heap to the heap." /
I am updating using remote entry processor, does that also need to copy
value to calling heap? If yes then no benefit using entry processor here, I
can fetch and put, probably with a lock.

Why does invoking remote method is taking 50% of time? Is it because of
concurrency and entry processor will execute under a lock internally? 

Coping existing and incoming bytes, must be generating a lot for GC.

Is there any better way to deal with this usecase? right now it seems slower
than DB updates.

Thanks,
-Sam



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Slow invoke call

2018-04-04 Thread javastuff....@gmail.com
Hi,

I have a use case where we store a byte array in one of our OFFHEAP ATOMIC
caches. Multiple threads keep appending bytes to it using a remote operation
(entry processor / invoke() call).

Below is the logic for the remote operation:

if (mutableEntry.exists()) {
    MyObject m = (MyObject) mutableEntry.getValue();
    byte[] bytes = new byte[m.getContent().length + newContent.getContent().length];
    System.arraycopy(m.getContent(), 0, bytes, 0, m.getContent().length);
    System.arraycopy(newContent.getContent(), 0, bytes, m.getContent().length,
        newContent.getContent().length);
    m.setContent(bytes);
    mutableEntry.setValue(m);
} else {
    mutableEntry.setValue(newContent);
}
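
For context, a hedged sketch of how logic like this sits inside an Ignite entry
processor and gets invoked (the key type and the simplification to a plain
byte[] value are assumptions, not the exact production code):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

class AppendHelper {
    static void append(IgniteCache<String, byte[]> cache, String key, byte[] newBytes) {
        // The processor executes on the node that owns the key and is applied
        // atomically to the entry, so concurrent appends do not interleave.
        cache.invoke(key, (CacheEntryProcessor<String, byte[], Void>) (entry, args) -> {
            byte[] incoming = (byte[]) args[0];
            if (entry.exists()) {
                byte[] existing = entry.getValue();
                byte[] merged = new byte[existing.length + incoming.length];
                System.arraycopy(existing, 0, merged, 0, existing.length);
                System.arraycopy(incoming, 0, merged, existing.length, incoming.length);
                entry.setValue(merged);     // the whole merged array is written back
            } else {
                entry.setValue(incoming);
            }
            return null;
        }, newBytes);
    }
}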

We tested by adding about 25MB of content.

What we have seen is -
1. In an 8-thread test, the average time for invoke() is 36 ms.
2. Of the total time taken to execute invoke(), 50% is spent in the remote
logic above. Where is the other 50% spent? Probably just invoking the method.

What can we tune here? We are considering invokeAll(); any other ideas?

If we increase this to 250MB we see very slow performance, and after hours it
runs out of memory; setOffHeapMaxMemory(0) does not help on a 32GB box.

We are using Ignite 1.9, Java 1.8, Parallel GC, and a 4GB app JVM heap.
Unfortunately, we cannot upgrade the Ignite version right now.

Thanks,
-Sam




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Event Listeners

2017-07-07 Thread javastuff....@gmail.com
No. A near cache is one of the use cases; because of the
serialization/deserialization we cannot use the Ignite near cache, as it is a
big overhead and negates the benefits of a near cache altogether.

In other use cases we need to be notified of all changes to cached data. In
one specific scenario, we have to remove all entries, and the same needs to be
notified to all nodes.

Thanks,
-Sam





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Event-Listeners-tp8306p14508.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Event Listeners

2017-07-06 Thread javastuff....@gmail.com
Cached data change notifications are used in multiple use cases. Here is one -

We have an OFFHEAP distributed Ignite cache fronted by a JVM-level local copy.
It is similar to how a near cache would work. However, the Ignite near cache
(JVM-level local copy) also gets serialized/deserialized, which adds latency
compared to direct object access. So we are not using the Ignite near cache
but simulating the same thing. To invalidate the local copy we rely on event
notifications. In some cases we need to clear all cached entries, and that is
where we use cache.clear().

There are other scenarios where change event notifications are the primary
indication of a data change, which needs to be propagated to all nodes to keep
data and processing consistent. Message passing could be another approach, but
it would not be completely consistent and tightly bound to the data change.

Can I assume cache.clear() does not generate any event notification and that I
need to move away from it? Can you please help with optimizing clear vs.
remove?

Thanks,
-Sam 

   



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Event-Listeners-tp8306p14437.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Event Listeners

2017-07-05 Thread javastuff....@gmail.com
In general, which event should I look for when cache.clear() is called? I tried
a remote listener for EVT_CACHE_OBJECT_PUT/READ/REMOVED and was unable to
capture clear() activity.

To use remove() instead of clear() as a workaround, I need to do a scan query
and then remove/removeAll, which sounds expensive. I am looking at a
PARTITIONED OFF-HEAP cache with 1K to 40K entries and 1-4 nodes in the
topology. How can I optimize this workaround? Will an entry processor help?
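
For reference, a minimal sketch of the workaround being discussed (simplified;
it loads all keys on the calling node, so it is indeed expensive for large
caches): collect the keys with a ScanQuery and remove them, so each removal
fires EVT_CACHE_OBJECT_REMOVED, which clear() was observed not to do.

import java.util.HashSet;
import java.util.Set;
import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ScanQuery;

class ClearWithEvents {
    static void clearWithEvents(IgniteCache<Object, Object> cache) {
        Set<Object> keys = new HashSet<>();
        for (Cache.Entry<Object, Object> e : cache.query(new ScanQuery<>()))
            keys.add(e.getKey());           // keys are paged in, values are discarded

        cache.removeAll(keys);              // per-entry remove events are generated
    }
}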

Appreciate your help and ideas.

Thanks,
-Sam   



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Event-Listeners-tp8306p14364.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Event Listeners

2017-07-05 Thread javastuff....@gmail.com
The onRemoved listener is not called when cache.clear() is used. Can you please
let me know which event I need to listen to for clear()?

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Event-Listeners-tp8306p14361.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Out-Of-Memory

2017-06-19 Thread javastuff....@gmail.com
Sorry, my bad. The example is also talking in terms of the number of entries.

Thanks for the help.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Out-Of-Memory-tp13829p13969.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Out-Of-Memory

2017-06-19 Thread javastuff....@gmail.com
I found a performance tuning tip with an example of startSize [1]. The example
specifies startSize in MB.

Can you also comment on what startSize would be in case offHeapMaxMemory is
set to 0 (unlimited)? Based on your previous comment I assume it is
DFLT_START_SIZE=1_500_000, a number of entries. However, the example in the
documentation put me in doubt again.

If 1_500_000 is a number of entries, then the documentation may need an update
and I definitely need to reduce it in the default configuration.
If 1_500_000 is bytes, similar to a cache size, then it comes out to roughly
1.5MB, which is too small to get an OOME with 350 cache definitions.

In order to define the correct configuration, can you please clarify again?

Thanks,
-Sam  



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Out-Of-Memory-tp13829p13968.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Out-Of-Memory

2017-06-16 Thread javastuff....@gmail.com
I will try it out. One question -

Does the default of 1_500_000 mean the number of entries in the map, or bytes,
or KB, or MB? The documentation does not clarify this either.

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Out-Of-Memory-tp13829p13878.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Out-Of-Memory

2017-06-15 Thread javastuff....@gmail.com
Hi,

We are using Ignite 1.9, OFFHEAP with swap disabled. We are creating caches
programmatically and want to use SQL.
In one instance, creating 350 empty caches ran into an out-of-memory error. We
are already setting a low queue size for the delete history
(IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE=100).

The log metrics show 25% free heap memory and 87% free off-heap. Can you
please help me understand this issue and a possible resolution? Is the OOM in
the heap, the off-heap, or H2?

Below is Ignite log with stack trace as well as periodic metrics.

=
2017-06-15 11:41:41,404,[exchange-worker-#68%TESTNODE%],Skipping rebalancing
(nothing scheduled) [top=AffinityTopologyVersion [topVer=1,
minorTopVer=348], evt=DISCOVERY_CUSTOM_EVT,
node=0c23715a-8c78-4852-9b1a-a5f101d65b7b]
2017-06-15 11:41:43,345,[exchange-worker-#68%TESTNODE%],Failed to
reinitialize local partitions (preloading will be stopped):
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=1,
minorTopVer=349], nodeId=0c23715a, evt=DISCOVERY_CUSTOM_EVT]
class org.apache.ignite.internal.util.offheap.GridOffHeapOutOfMemoryException:
Failed to allocate memory [total=0, failed=16384]
at org.apache.ignite.internal.util.offheap.unsafe.GridUnsafeMemory.allocate0(GridUnsafeMemory.java:182)
at org.apache.ignite.internal.util.offheap.unsafe.GridUnsafeMemory.allocateSystem(GridUnsafeMemory.java:145)
at org.apache.ignite.internal.util.offheap.unsafe.GridUnsafeMap$Segment.<init>(GridUnsafeMap.java:624)
at org.apache.ignite.internal.util.offheap.unsafe.GridUnsafeMap$Segment.<init>(GridUnsafeMap.java:590)
at org.apache.ignite.internal.util.offheap.unsafe.GridUnsafeMap.init(GridUnsafeMap.java:273)
at org.apache.ignite.internal.util.offheap.unsafe.GridUnsafeMap.<init>(GridUnsafeMap.java:248)
at org.apache.ignite.internal.util.offheap.unsafe.GridUnsafePartitionedMap.<init>(GridUnsafePartitionedMap.java:111)
at org.apache.ignite.internal.util.offheap.GridOffHeapMapFactory.unsafePartitionedMap(GridOffHeapMapFactory.java:224)
at org.apache.ignite.internal.processors.offheap.GridOffHeapProcessor.create(GridOffHeapProcessor.java:72)
at org.apache.ignite.internal.processors.cache.GridCacheSwapManager.initOffHeap(GridCacheSwapManager.java:251)
at org.apache.ignite.internal.processors.cache.GridCacheSwapManager.start0(GridCacheSwapManager.java:138)
at org.apache.ignite.internal.processors.cache.GridCacheManagerAdapter.start(GridCacheManagerAdapter.java:50)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCache(GridCacheProcessor.java:1091)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1745)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1636)
at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:382)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:581)
at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:464)
at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1674)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
2017-06-15 11:41:48,111,[grid-timeout-worker-#54%TESTNODE%],
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=0c23715a, name=TESTNODE, uptime=00:10:06:422]
^-- H/N/C [hosts=1, nodes=1, CPUs=2]
^-- CPU [cur=0.27%, avg=20.87%, GC=0%]
^-- Heap [used=1461MB, free=26.63%, comm=1991MB]
^-- Non heap [used=195MB, free=87.16%, comm=208MB]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=6, qSize=0]
^-- Outbound messages queue [size=0]
2017-06-15 11:41:48,333,[exchange-worker-#68%TESTNODE%],Failed to wait for
completion of partition map exchange (preloading will not start):
GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false,
reassign=false, discoEvt=DiscoveryCustomEvent [customMsg=null,
affTopVer=AffinityTopologyVersion [topVer=1, minorTopVer=349],
super=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=0c23715a-8c78-4852-9b1a-a5f101d65b7b, addrs=[127.0.0.1],
sockAddrs=[/127.0.0.1:49500], discPort=49500, order=1, intOrder=1,
lastExchangeTime=1497506480276, loc=true, ver=1.9.0#20170302-sha1:a8169d0a,
isClient=false], topVer=1, nodeId8=0c23715a, msg=null,
type=DISCOVERY_CUSTOM_EVT, tstamp=1497507101444]], crd=TcpDiscoveryNode
[id=0c23715a-8c78-4852-9b1a-a5f101d65b7b, addrs=[127.0.0.1],
sockAddrs=[/127.0.0.1:49500], discPort=49500, order=1, intOrder=1,

novalue (NULL) vs default zero for writeInt/readInt

2017-06-01 Thread javastuff....@gmail.com
Hi,

We have a use case where we receive data in a map with all field values in
String format. However, the actual datatype differs across fields.

While pushing into the cache using writeBinary we convert the strings to
int/long/double. If the String is NULL, we skip writeInt/writeLong.
These fields are SQL-searchable, so we want "no value" to mean NULL. The H2
debug console shows NULL because we skipped the field in writeBinary.
However, when we read it back using readBinary we get the value as zero.

Question -
How do I differentiate "no value" (NULL) from the default zero in
writeBinary/readBinary, while at the same time having SQL treat it as NULL?
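
One possible approach (an assumption, not confirmed in this thread) is to write
such fields with the boxed wrapper type via writeObject(), so that "no value"
is stored as an explicit null instead of the field being skipped, and
readObject() hands back null rather than a primitive default of zero. A minimal
sketch:

import org.apache.ignite.binary.*;

class Row implements Binarylizable {
    private Integer amount;    // nullable wrapper instead of primitive int

    @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        writer.writeObject("amount", amount);    // null is written as an explicit null
    }

    @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
        amount = reader.readObject("amount");    // stays null instead of becoming 0
    }
}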

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/novalue-NULL-vs-default-zero-for-writeInt-readInt-tp13324.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Grid is in invalid state to perform this operation

2017-05-03 Thread javastuff....@gmail.com
I want to go with the cluster case as you suggested. As of now I am running
only a single node. After the machine wakes up, the application (JBoss)
revives, but the cache grid seems to be stopped.

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Grid-is-in-invalid-state-to-perform-this-operation-tp12360p12397.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Grid is in invalid state to perform this operation

2017-05-02 Thread javastuff....@gmail.com
Hi,

My application was working, and later the laptop went into sleep/standby mode.
After it woke up I got the exception below. It looks like the grid is stopped.
Is there a way to handle this?

java.lang.IllegalStateException: Grid is in invalid state to perform this
operation. It either not started yet or has already being or have stopped
[gridName=grid.cfg, state=STOPPED]
org.apache.ignite.internal.GridKernalGatewayImpl.illegalState(GridKernalGatewayImpl.java:190)
org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:90)
org.apache.ignite.internal.IgniteKernal.guard(IgniteKernal.java:3300)
org.apache.ignite.internal.IgniteKernal.cache(IgniteKernal.java:2548)

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Grid-is-in-invalid-state-to-perform-this-operation-tp12360.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Serializer

2017-05-02 Thread javastuff....@gmail.com
Hi

We have multiple off-heap caches defined. We have implemented Binarylizable
for each cache object. So far so good.

We now have a requirement to store a List or Map of Binarylizable objects in a
cache. How do I go about writing a serializer for the List and Map?

These List and Map classes are extended objects with some extra logic on top
of a basic ArrayList or LinkedHashMap.
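
One possible shape for this (an assumption, not an answer from this thread) is
to let the extended collection implement Binarylizable itself and delegate its
elements to writeCollection()/readCollection(), keeping any extra state in
separate named fields. A minimal sketch:

import java.util.ArrayList;
import java.util.Collection;
import org.apache.ignite.binary.*;

class MyList<T> extends ArrayList<T> implements Binarylizable {
    private String label;    // hypothetical "extra logic" state on top of ArrayList

    @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        writer.writeString("label", label);
        writer.writeCollection("items", new ArrayList<>(this));  // elements serialized individually
    }

    @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
        label = reader.readString("label");
        Collection<T> items = reader.readCollection("items");
        clear();
        if (items != null)
            addAll(items);
    }
}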

Thanks,
-Sam






--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Serializer-tp12359.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Offheap and max memory

2017-04-26 Thread javastuff....@gmail.com
It looks like the documentation is misleading.

I tried a test program -

- Without setting offHeapMaxMemory, getOffHeapMaxMemory() returns 0.
- Setting offHeapMaxMemory=-1, getOffHeapMaxMemory() returns 0.

Thanks,
-Sam




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Offheap-and-max-memory-tp12275p12277.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Offheap and max memory

2017-04-26 Thread javastuff....@gmail.com
Hi 

I am using a partitioned, atomic, off-heap cache, created programmatically.

I recently noticed that offHeapMaxMemory is not set because of a glitch in my
configuration and program. Based on the Javadoc, the default offHeapMaxMemory
value is -1, specified by the DFLT_OFFHEAP_MEMORY constant, which means that
off-heap storage is disabled by default.

I was not facing any issues with put, get, and other operations, so I am
wondering where the data gets cached - off-heap or on-heap?

    try (Ignite ignite = Ignition.start("config/example-ignite.xml")) {
        CacheConfiguration cacheCfg = new CacheConfiguration<>();
        cacheCfg.setName(CACHE_NAME);
        cacheCfg.setCacheMode(CacheMode.PARTITIONED);
        cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
        cacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);

        try (IgniteCache cache = ignite.getOrCreateCache(cacheCfg)) {
            putGet(cache);
            putAllGetAll(cache);
        } finally {
            ignite.destroyCache(CACHE_NAME);
        }
    }

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Offheap-and-max-memory-tp12275.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: SQLQuery and Datatypes

2017-04-24 Thread javastuff....@gmail.com
It would be helpful to understand how the date and timestamp are stored by
writeDate and writeTimestamp, specifically from the format perspective, so
that I can build queries along the same lines if I choose not to use setArgs.
Thanks,
-Sam

 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SQLQuery-and-Datatypes-tp12167p12210.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: stdout - Message queue limit is set to 0, potential OOMEs

2017-04-24 Thread javastuff....@gmail.com
In that case I will change my configuration to set a message queue limit, and
adjust it later if needed.

The out-of-the-box unlimited setting seems too aggressive, with a potential
OOME risk.

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/stdout-Message-queue-limit-is-set-to-0-potential-OOMEs-tp12048p12209.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


SQLQuery and Datatypes

2017-04-21 Thread javastuff....@gmail.com
Hi 

My cached object is Binarylizable. I am using SqlQuery and SqlFieldsQuery.

In some cases I need to query on a hardcoded/fixed filter, e.g. Name = 'xyz'.
The same use case applies to the Long, Date, and Timestamp datatypes as well.

Basically, I am not using setArgs().

I see it works fine for String, Int, and Long.
- Can I use a formatted string for the Date, Timestamp, and Boolean datatypes?
- If yes, what format for Date and Timestamp? Are there any disadvantages to
not using the proper datatype and setArgs()?
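
For comparison, a minimal sketch of the setArgs() route for temporal types (the
Person table and the created field are assumptions): the Timestamp is passed as
a parameter, so no literal format has to be chosen at all.

import java.sql.Timestamp;
import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

class TimestampQuery {
    static List<List<?>> createdInLastDay(IgniteCache<?, ?> cache) {
        Timestamp cutoff = new Timestamp(System.currentTimeMillis() - 86_400_000L);

        return cache.query(
            new SqlFieldsQuery("select name from Person where created > ?").setArgs(cutoff))
            .getAll();
    }
}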

Thanks,
-Sam   








--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SQLQuery-and-Datatypes-tp12167.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: stdout - Message queue limit is set to 0, potential OOMEs

2017-04-21 Thread javastuff....@gmail.com
I added the messageQueueLimit property to my TcpCommunicationSpi configuration
and the message went away; I am able to see the new value on JMX as well.
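
A sketch of the equivalent programmatic setting (the value 1024 is purely
illustrative, not a recommendation):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

class QueueLimitConfig {
    static IgniteConfiguration withQueueLimit() {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setMessageQueueLimit(1024);   // 0 means unlimited, which triggers the warning

        return new IgniteConfiguration().setCommunicationSpi(commSpi);
    }
}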

However, how do I know what a good or suitable value is? I have more than 50
different PARTITIONED OFF-HEAP caches defined, most likely running on 2 to 4
nodes with 20GB each or more.

In my opinion, a potential OOME is a big risk with the default configuration.
Everybody needs performance, but nobody really needs an OOME.

Thanks,
-Sam





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/stdout-Message-queue-limit-is-set-to-0-potential-OOMEs-tp12048p12165.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Index

2017-04-21 Thread javastuff....@gmail.com
Resolved it!

I used the default constructor for QueryIndex and missed setting the index
type, which resulted in the missing index.
After adding idx.setIndexType(QueryIndexType.SORTED), I am able to see the
index in the H2 debug console.

What is the default IndexType?
Is it expected that no error is thrown when the IndexType is not set and the
index creation is silently ignored? With the default constructor this is
likely to happen.

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Index-tp11969p12162.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Index

2017-04-19 Thread javastuff....@gmail.com
I tried the REST API and the H2 debug console. I do not see the index and am
not sure what is wrong.

I am creating the cache programmatically and cannot use annotations. Below is
a sample based on CacheQueryExample.java; I removed the annotations from
Person.java to create Person2.java.

 
    CacheConfiguration<Long, Person2> personCacheCfg = new CacheConfiguration<>(PERSON_CACHE);
    personCacheCfg.setCacheMode(CacheMode.PARTITIONED);

    Collection<String> indx_fields = new ArrayList<>();
    indx_fields.add("salary");

    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("id", Long.class.getName());
    fields.put("salary", Double.class.getName());

    QueryIndex idx = new QueryIndex();
    idx.setName("SALARY_IDX");
    idx.setFieldNames(indx_fields, true);

    Collection<QueryIndex> idxCollection = new ArrayList<>();
    idxCollection.add(idx);

    QueryEntity ent = new QueryEntity();
    ent.setKeyType(Long.class.getName());
    ent.setValueType(Person2.class.getName());

    Collection<QueryEntity> entCollection = new ArrayList<>();
    ent.setIndexes(idxCollection);
    ent.setFields(fields);
    entCollection.add(ent);

    personCacheCfg.setQueryEntities(entCollection);

    IgniteCache<Long, Person2> personCache = ignite.getOrCreateCache(personCacheCfg);

With this I was expecting to see an index named SALARY_IDX on the SALARY
column. Can you help identify the issue here?

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Index-tp11969p12095.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: stdout - Message queue limit is set to 0, potential OOMEs

2017-04-19 Thread javastuff....@gmail.com
Thank you. Last question -
Why is it showing up with 1.9 and not earlier?

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/stdout-Message-queue-limit-is-set-to-0-potential-OOMEs-tp12048p12091.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: stdout - Message queue limit is set to 0, potential OOMEs

2017-04-19 Thread javastuff....@gmail.com
What do you mean by messages? I am not using Ignite messaging. Are these the
messages used for rebalancing during a topology change?
How do I configure it to avoid a potential OOME?

Thanks,
-Sam




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/stdout-Message-queue-limit-is-set-to-0-potential-OOMEs-tp12048p12066.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


stdout - Message queue limit is set to 0, potential OOMEs

2017-04-18 Thread javastuff....@gmail.com
I recently moved to Ignite 1.9 and noticed the line below on stdout -

/Message queue limit is set to 0 which may lead to potential OOMEs when
running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message
queues growth on sender and receiver sides./

I am not sure if this existed in older versions.

Can anybody explain what it means and how to avoid the potential OOME?

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/stdout-Message-queue-limit-is-set-to-0-potential-OOMEs-tp12048.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ScanQuery log all entries

2017-04-18 Thread javastuff....@gmail.com
Thanks. One question though -

Will the loop below automatically go over all pages, or just one page with
1024 entries?

for (Cache.Entry<Long, Person> entry : cursor)
    System.out.println("Key: " + entry.getKey() + ", Value: " + entry.getValue());

If it is just one page, how do I know there are more results and how do I
iterate over those?

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ScanQuery-log-all-entries-tp11970p12047.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Index

2017-04-17 Thread javastuff....@gmail.com
Is the Web Console the only way? What about Visor, JMX, logs, or a sample Java API?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Index-tp11969p12004.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ScanQuery log all entries

2017-04-17 Thread javastuff....@gmail.com
Thanks Dmitry.

So that means if I use QueryCursor, it will not bring all entries into the
heap, only 1024 entries or whatever the page size setting is.
Is there any example for QueryCursor? Can anybody share a sample code snippet
of QueryCursor usage?
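
A minimal sketch of QueryCursor usage (the page size shown is just the value
discussed above): the cursor pulls results page by page as the loop advances,
so the full result set is never held on the querying node's heap at once.

import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

class CursorExample {
    static <K, V> void printAll(IgniteCache<K, V> cache) {
        ScanQuery<K, V> qry = new ScanQuery<>();
        qry.setPageSize(1024);   // number of entries fetched per round trip

        try (QueryCursor<Cache.Entry<K, V>> cursor = cache.query(qry)) {
            for (Cache.Entry<K, V> entry : cursor)
                System.out.println("Key: " + entry.getKey() + ", Value: " + entry.getValue());
        }   // closing the cursor releases any server-side resources
    }
}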

Thanks,
-Sam 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ScanQuery-log-all-entries-tp11970p12003.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Index

2017-04-13 Thread javastuff....@gmail.com
Hi,

How to check index is created? How many indexes are present? What is the
definition of index?

Thanks,
--Sambhav



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Index-tp11969.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Logging ignite-log4j.jar

2017-03-24 Thread javastuff....@gmail.com
Thanks Andrey.

I looked at the code, and it seems a check for the log configuration is made:

if (!isConfigured()) {
    this.impl.setLevel(Level.OFF);
}

but it is not completely correct. Below is the method from Log4JLogger:

public static boolean isConfigured() {
    return Logger.getRootLogger().getAllAppenders().hasMoreElements();
}

It looks at the appenders of the rootLogger only; it should also check the
other loggers. The code below should probably fix it:

public static boolean isConfigured() {
    Enumeration appenders = Logger.getRootLogger().getAllAppenders();
    if (appenders.hasMoreElements()) {
        return true;
    }
    else {
        Enumeration loggers = LogManager.getCurrentLoggers();
        while (loggers.hasMoreElements()) {
            Logger c = (Logger) loggers.nextElement();
            if (c.getAllAppenders().hasMoreElements())
                return true;
        }
    }
    return false;
}

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Logging-ignite-log4j-jar-tp11400p11439.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Logging ignite-log4j.jar

2017-03-23 Thread javastuff....@gmail.com
Hi 

My application already uses Log4j for logging; simply adding ignite-log4j.jar
enables Ignite logging. I also added a new logger configuration for the
package "org.apache.ignite" in my application's logging configuration, so that
Ignite can log to a separate rolling file.
Ignite logging works fine. However, the rootLogger log level is changed to
INFO by Ignite initialization, which is problematic for my application logging.

The Ignite configuration file does not have a configuration for the grid
logger, so at initialization it tries to add a new console appender at INFO
level by instantiating Log4JLogger(), and there it also updates the rootLogger
to INFO.

Question -
1. Why does Ignite need to update the rootLogger log level?
2. Is this not a good way to configure logging for Ignite using
ignite-log4j.jar?
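
As a possible workaround for the behavior described above (not confirmed in
this thread; the config path is an assumption), the grid logger can be set
explicitly so that initialization does not fall back to installing its own
console appender:

import org.apache.ignite.IgniteCheckedException;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.logger.log4j.Log4JLogger;

class ExplicitLoggerConfig {
    static IgniteConfiguration withExplicitLogger() throws IgniteCheckedException {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setGridLogger(new Log4JLogger("config/ignite-log4j.xml"));  // hypothetical path
        return cfg;
    }
}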

Thanks
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Logging-ignite-log4j-jar-tp11400.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: REST service command LOG

2017-03-20 Thread javastuff....@gmail.com
Created minor ticket IGNITE-4845




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/REST-service-command-LOG-tp10148p11325.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: REST service command LOG

2017-03-13 Thread javastuff....@gmail.com
Hi Val,

Apologies for coming back to this thread late.

I agree that for security reasons it should internally figure out which log
file to read. However, it does not work; here is the scenario -

The ignite-log4j module is enabled simply by including the jar file, and it
then uses my application's log4j configuration to log successfully. But
ignite.log() is never initialized (it is null), hence the REST API is not able
to figure out where the log file is.
I have to enforce one of these workarounds -
1. The log file location needs to start with IGNITE_HOME, and the path
parameter must be used in the LOG command.
2. Enforce logging to the IGNITE_HOME/work/log/ignite.log location.

I think that when the ignite-log4j module is enabled, it should make Ignite
aware of the logging configuration.

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/REST-service-command-LOG-tp10148p11158.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Fetch by contains on a field as Array or List

2017-01-30 Thread javastuff....@gmail.com
"Search will not be indexed" sounds costly, Is there any other way this can
be done example - LIKE operator in SQL?

Regarding custom function option - 
1. Need to use  "WHERE CONTAINS(ID_LIST) =12345" where CONTAINS is custom
function.
2. From above example where clause ID_LIST will be available as ARRAY/LIST
to perform java logic inside custom function implementation. 
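
For reference, a minimal sketch of the custom-function option (class name,
argument handling, and the two-argument form are assumptions rather than a
confirmed design): a static method annotated with @QuerySqlFunction becomes
callable from SQL once its class is registered on the cache configuration.

import org.apache.ignite.cache.query.annotations.QuerySqlFunction;

class SqlFunctions {
    @QuerySqlFunction
    public static boolean contains(Long[] idList, long id) {
        if (idList == null)
            return false;
        for (Long v : idList)
            if (v != null && v == id)
                return true;
        return false;
    }
}

// Registration on the cache configuration:
//     cacheCfg.setSqlFunctionClasses(SqlFunctions.class);
// SQL usage:
//     SELECT * FROM MyType WHERE contains(ID_LIST, 12345)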

-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Fetch-by-contains-on-a-field-as-Array-or-List-tp10276p10315.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: eagerTtl

2017-01-27 Thread javastuff....@gmail.com
Thanks. It looks like a false alarm; I see a single ttl-cleanup-worker thread
per node.

-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/eagerTtl-tp10249p10292.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Fetch by contains on a field as Array or List

2017-01-26 Thread javastuff....@gmail.com
Hi,

I have an object where one of the fields is an array or List. The array/List
field holds Long values. Its size could be tens, hundreds, or a few thousand;
I am not expecting it to grow beyond 5000. The object won't be updated
frequently; it is mostly read-only.

The object implements Binarylizable.

The requirement is to fetch all cached objects whose array/List field contains
a given value. Is there any type of SQL query I can use to achieve this use
case?

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Fetch-by-contains-on-a-field-as-Array-or-List-tp10276.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: eagerTtl

2017-01-26 Thread javastuff....@gmail.com
Hi Val,

The Javadoc does not talk about threads per cache or node. To get more info on
this property I came across the post below; I see a similar memory and thread
pattern in my application, but I have not yet investigated it specifically for
eagerTtl or its related threads.

http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cache-High-Memory-Overhead-caused-by-GridCircularBuffer-Item-instances-td3682.html#a3691

Can you please point me to the thread name related to eagerTtl, so I can take
a look at it in a thread dump?

Secondly, if none of the caches are configured for expiry, will this thread be
present and running? If it is a single thread per node, then I am not worried.

Thank,
-Sam




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/eagerTtl-tp10249p10272.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


eagerTtl

2017-01-25 Thread javastuff....@gmail.com
Hi

I was going through the Javadoc and came across the CacheConfiguration property
eagerTtl. I also came across a post which says eagerTtl=true spawns a thread
per cache.

We are not using an ExpiryPolicy on any of our caches, and we have almost 100
different caches. To keep the memory and thread footprint down, do we need to
explicitly set eagerTtl=false, or is it effectively false with the default
ExpiryPolicy?

-Sam 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/eagerTtl-tp10249.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


REST servive command LOG

2017-01-19 Thread javastuff....@gmail.com
Hi,

I find the LOG command very useful. The periodic metrics and topology change
information help with monitoring.

However, it does not work with my application; it fails with the message below -
"Request parameter 'path' must contain a path to valid log file."

My application already uses Log4j, so I am not using the Ignite configuration
XML to configure Ignite logging. Instead, the application's log4j configuration
pushes all Ignite INFO logs to a separate rolling file. Logging works fine. But

1. The Ignite configuration does not have the logging configuration details.
2. The log files can be anywhere, not necessarily under IGNITE_HOME.

Because of the above two points, the LOG command fails.

How can I make it work?

Thanks,
-Sam







 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/REST-servive-command-LOG-tp10148.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite REST with JBOSS

2017-01-19 Thread javastuff....@gmail.com
I have looked at this again; here is what I think is needed -
1. No embedded Jetty server.
2. A custom pluggable endpoint handler.

A pluggable GridRestProcessor would be overkill. However, a customizable
HTTP_PROTO_CLS should be good enough to plug in a custom REST handler. In 1.7
I see it is hard-coded to Jetty:

private static final String HTTP_PROTO_CLS =
    "org.apache.ignite.internal.processors.rest.protocols.http.jetty.GridJettyRestProtocol";

Maybe an enhancement for the future.

Thanks,
-Sam





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-REST-with-JBOSS-tp10108p10147.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Data Rebalancing & Transactions

2017-01-19 Thread javastuff....@gmail.com
Hi Val,

I agree ASYNC makes more sense, but it needs tuning and a lightning-fast
network to move GBs of data.

If it is just about backups, then using the NONE rebalancing mode can help in
certain use cases by avoiding network transfers of GBs of data.

Side cache - the persistent store is the DB; if the cache does not have the
required data, the app fetches it from the DB and supplies it to the cache for
further fetches. A cache partition backup is not needed because there is a DB
backup.

In this case, losing data from the cache does not hurt much during up/down
scaling of the cache cluster.

Question -
1. In NONE mode, will shutting down a node transfer its data to another node
or not?
2. Will a PUT rebalance partitions among multiple cache server nodes or not?

Thanks,
-Sam

 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-Rebalancing-Transactions-tp10092p10146.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite REST with JBOSS

2017-01-17 Thread javastuff....@gmail.com
Hi 

We are using Ignite in our application, which runs on top of JBoss. The
application starts the Ignite server programmatically.
Is there a way to use the Ignite REST-HTTP module?

-Sam 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-REST-with-JBOSS-tp10108.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


ignite update notifier

2017-01-17 Thread javastuff....@gmail.com
Hi

Is it possible to disable ignite-update-notifier-timer? 
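
For reference (not from a reply in this thread), the update notifier is
typically controlled with a system property that must be set before the node
starts; a minimal sketch:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

class StartWithoutUpdateNotifier {
    public static void main(String[] args) {
        // Equivalent to passing -DIGNITE_UPDATE_NOTIFIER=false on the JVM command line.
        System.setProperty("IGNITE_UPDATE_NOTIFIER", "false");

        try (Ignite ignite = Ignition.start()) {
            // ... application work ...
        }
    }
}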

-Sam 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-update-notifier-tp10106.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Performance of multiple ignite JVMs within single node

2017-01-05 Thread javastuff....@gmail.com
I have observed significant degradation for get and put as the number of
Ignite JVMs increases. I am still searching for how to tune this.

More details can be found at -

http://apache-ignite-users.70518.x6.nabble.com/Performance-with-increase-in-node-td9378.html

Going from 1 server to 4 servers, timings degraded roughly 7-fold for put and
22-fold for get. I see similar behavior with our application and with a simple
test program.

-Sam




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Performance-of-multiple-ignite-JVMs-within-single-node-tp9753p9910.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2017-01-04 Thread javastuff....@gmail.com
Thanks Val. The ticket has been fixed for 1.9; any idea when 1.9 will be
available?

-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9880.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Performance with increase in node

2016-12-12 Thread javastuff....@gmail.com
Thanks Denis. Are you saying that running the application in client mode is
the recommended architecture in terms of performance?

I had looked at the GridGain benchmarking results, but they do not provide
comparative results for understanding the impact of adding/removing a cache
node.

In my simple test, with the application running as a server node, adding or
removing a node produces a drastic change in the results.
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Performance-with-increase-in-node-tp9378p9482.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2016-12-09 Thread javastuff....@gmail.com
Hi Val,

Did you had chance to look at attached sample program? Did it help figuring
out what is going on?

-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9465.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Performance with increase in node

2016-12-09 Thread javastuff....@gmail.com
Thank you Denis. Are you saying the test application should not act as an
Ignite server node and should be a client node instead, or does it not matter?

Regarding Yardstick, I had difficulty setting up a passwordless user to launch
the remote commands, so I was not able to make it fully functional. On the
other hand, the published results do not provide comparative results for
cluster topology size, I mean the same test with a single node, 2 nodes,
4 nodes, and so on.

Can anybody post such results here if you have a working Yardstick setup?

-Sam 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Performance-with-increase-in-node-tp9378p9464.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Performance with increase in node

2016-12-05 Thread javastuff....@gmail.com
The actual application runs on multiple machines with various threads doing
get and put. As a simple test that can provide average get and put times, I
tried the attached test program.

I tried the same program on multiple physical machines (each with 48 CPUs,
40GB RAM) and see similar behavior.

To rule out network latency, I tried the same on a single physical machine
(48 CPUs, 40GB RAM) and see similar behavior.

Has anybody faced this?
How are others using it (a single cache server or multiple nodes)?
Are there any configurations I need to tweak?
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Performance-with-increase-in-node-tp9378p9404.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Performance with increase in node

2016-12-02 Thread javastuff....@gmail.com
Hi,

We are observing really good performance with a single node; however, with
multiple nodes in the cluster, performance degrades, or we can say application
throughput is not scaling.

I wrote a sample program to put 100K entries into a cache in a single thread
using put() and later read all of them using get(). (The application won't get
all data from the cache, but this gives a ballpark idea of how put and get
behave.)

Below is the time taken on an 8-CPU Windows laptop -
1 Server  - Put took 5136 ms - Get took 725 ms
2 Servers - Put took 19468 ms - Get took 10162 ms
3 Servers - Put took 28087 ms - Get took 14481 ms
4 Servers - Put took 34948 ms - Get took 16310 ms

As you can see, from 1 server to 4 servers the timing degraded roughly 7-fold
for put and 22-fold for get.

Is this because of the partitioned cache, where the partitions get rebalanced
automatically? How can I tune it to get optimal performance?

I am attaching the sample program. (To add nodes, use ExampleNodeStartup.java
from the Ignite examples.)
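
For readers without the attachment, a rough approximation of the timing loop
described (this is not the attached PutAndGetPerf.java; the cache name and
value contents are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class PutGetTiming {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("perfCache");

            long t0 = System.currentTimeMillis();
            for (int i = 0; i < 100_000; i++)
                cache.put(i, "value-" + i);
            System.out.println("Put took " + (System.currentTimeMillis() - t0) + " ms");

            long t1 = System.currentTimeMillis();
            for (int i = 0; i < 100_000; i++)
                cache.get(i);
            System.out.println("Get took " + (System.currentTimeMillis() - t1) + " ms");
        }
    }
}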

-Sam

PutAndGetPerf.java
  



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Performance-with-increase-in-node-tp9378.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2016-12-02 Thread javastuff....@gmail.com
Attaching 4 classes -
Node1.java - creates an empty cache node.
Node2.java - seeds the cache with 100K entries.
Node3.java - takes an explicit lock on a key and waits for 15 seconds before
unlocking.
Node4.java - fetches cached data.

Steps -
1. Run Node1 and wait for the empty node to boot up.
2. Run Node2 and wait for the completion message.
3. Run Node3 and kill it when it prompts "kill me...".
4. Run Node4. The topology snapshot on the other nodes will show the node
joined; however, Node4 will not be able to complete any fetch operation.
Fetches from the cache hang.
5. Run another instance of Node2; it will not be able to complete any put
operation. Puts to the cache hang.

Node1.java
  
Node2.java
  
Node3.java
  
Node4.java
  



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9375.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2016-12-01 Thread javastuff....@gmail.com
Val,

I have reproduced this with a simple program -
1. Node 1 - run the ExampleNodeStartup example.
2. Node 2 - run a program which creates a transactional cache and adds 100K
simple entries:
   cfg.setCacheMode(CacheMode.PARTITIONED);
   cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
   cfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
   cfg.setSwapEnabled(false);
   cfg.setBackups(0);
3. Node 3 - run a program which takes a lock (cache.lock(key)).
4. Kill Node 3 before it can unlock.
5. Node 4 - run a program which tries to get cached data.

Node 4 is not able to join the cluster; it hangs. In fact, the complete
cluster is hung, and any operation from Node 2 also hangs.
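
For reference, a rough sketch of the "Node 3" step (this is not the exact
program; the cache name and key are placeholders): take an explicit lock on a
key and hold it, so killing this JVM before unlock() leaves the lock unreleased.

import java.util.concurrent.locks.Lock;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class Node3 {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start("config/example-ignite.xml")) {
            // The TRANSACTIONAL cache created by the Node 2 program.
            IgniteCache<Integer, String> cache = ignite.cache("txCache");

            Lock lock = cache.lock(1);   // explicit lock on a single key
            lock.lock();

            System.out.println("kill me...");
            Thread.sleep(15_000);        // window in which the process gets killed

            lock.unlock();               // never reached if the JVM is killed
        }
    }
}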

-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9337.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2016-11-30 Thread javastuff....@gmail.com
Hi Val,

Killing the node that acquired the lock did not release it automatically and
leaves the whole cluster in a hung state; any operation on any cache (not
related to the lock) is left waiting. The cluster is not able to recover
seamlessly. It looks like a bug to me.

I understand a lock timeout can be error-prone, but if configured correctly it
can provide a second means of auto-recovery in such failover cases. Is there
any way to configure timeouts on a lock?

Explicit locking is one of the use cases we have, but cluster auto-recovery on
any change to the cluster is the most important; if these two do not go
together, then it is a show-stopper.

-Sam





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9309.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2016-11-28 Thread javastuff....@gmail.com
Ideally the cluster should recover seamlessly. Is there any lock timeout I can
configure, or any other configuration that will make sure the locks taken by
the crashing node get released and the cluster still serves all requests?

Is this a bug?

-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9243.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2016-11-16 Thread javastuff....@gmail.com
Could you please elaborate on your suspicion?

The addRoleDelegationToCache and addDocument calls were made after killing
node 3; these calls try to push data into the cache, and we are not using any
transaction API to explicitly start or commit a transaction on the cache while
pushing data. These calls are made by node 1 when accessing the regular
application. The application was not accessed immediately after killing
node 3; I tried to access it after about 3-5 minutes, and node 3 was killed
while the system was idle.

Unfortunately I cannot share the application. I can try to reproduce it again
and provide more logs if needed, or try to write a test program to simulate it.

Let me know if you need more logs, probably DEBUG-level logs.

Below are some pointers from the thread dump and logs -

1. The frame below from the thread dump made me assume the topology change did
not complete and any later cache operation is waiting on it:
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.awaitTopologyVersion(GridAffinityAssignmentCache.java:523)

2. Node 3's address is 10.107.186.137, and 17:03:48,223 is the time when the
Server 2 log first detected the failed node. Logs below -

/17:03:48,223 WARNING [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi]
(tcp-disco-msg-worker-#2%TESTNODE%) Local node has detected failed nodes and
started cluster-wide procedure. To speed up failure detection please see
'Failure Detection' section under javadoc for 'TcpDiscoverySpi'
17:03:48,237 WARNING
[org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
(disco-event-worker-#254%TESTNODE%) Node FAILED: TcpDiscoveryNode
[id=e840a775-36b9-48d3-993c-25dea95d59d0, addrs=[10.107.186.137, 10.245.1.1,
127.0.0.1, 192.168.122.1], sockAddrs=[/192.168.122.1:48500,
/10.107.186.137:48500, /10.245.1.1:48500, /127.0.0.1:48500], discPort=48500,
order=55, intOrder=29, lastExchangeTime=1478911399482, loc=false,
ver=1.7.0#20160801-sha1:383273e3, isClient=false]
17:03:48,240 INFO  [stdout] (disco-event-worker-#254%TESTNODE%) [17:03:48]
Topology snapshot [ver=56, servers=2, clients=0, CPUs=96, heap=4.0GB]
17:03:48,241 INFO 
[org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
(disco-event-worker-#254%TESTNODE%) Topology snapshot [ver=56, servers=2,
clients=0, CPUs=96, heap=4.0GB]
17:03:58,480 WARNING
[org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager]
(exchange-worker-#256%TESTNODE%) Failed to wait for partition map exchange
[topVer=AffinityTopologyVersion [topVer=56, minorTopVer=0],
node=8cc0ac24-24b9-4d69-8472-b6a567f4d907]. Dumping pending objects that
might be the cause: 
17:03:58,482 WARNING
[org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager]
(exchange-worker-#256%TESTNODE%) Ready affinity version:
AffinityTopologyVersion [topVer=55, minorTopVer=1]/






--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9025.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2016-11-15 Thread javastuff....@gmail.com
logs.zip   

Attaching a thread dump and logs from 2 nodes. I lost the logs from node 3,
which was killed using "kill -9".

Let me know if you need more logs.

In my understanding, after killing node 3 the topology version update got
messed up, and Node 2 keeps complaining about the failed Node 3. Node 1 tried
to access the application, which hung on an Ignite get or put call because of
a topology mismatch or a lock.

-Sam 




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9010.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2016-11-14 Thread javastuff....@gmail.com
Do you want Ignite to be running in DEBUG, or should the System.out output
from all 3 nodes be enough?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p8978.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Cluster hung after a node killed

2016-11-14 Thread javastuff....@gmail.com
Hi,

I have configured the cache as an off-heap partitioned cache, running 3 nodes
on separate machines. I loaded some data into the cache using my application's
normal operations.

I used "kill -9" to kill node 3.

Node 2 shows the warning below on the console every 10 seconds -

11:03:03,320 WARNING
[org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager]
(exchange-worker-#256%TESTNODE%) Failed to wait for partition map exchange
[topVer=AffinityTopologyVersion [topVer=3, minorTopVer=0],
node=8cc0ac24-24b9-4d69-8472-b6a567f4d907]. Dumping pending objects that
might be the cause:

Node 1 looks fine. However, the application does not work anymore, and a
thread dump shows it is waiting on a cache put -

/java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x0007ecbd4a38> (a
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache$AffinityReadyFuture)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:159)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:117)
at
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.awaitTopologyVersion(GridAffinityAssignmentCache.java:523)
at
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:434)
at
org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.nodes(GridAffinityAssignmentCache.java:387)
at
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.nodes(GridCacheAffinityManager.java:259)
at
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primary(GridCacheAffinityManager.java:295)
at
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primary(GridCacheAffinityManager.java:286)
at
org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primary(GridCacheAffinityManager.java:310)
at
org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.entryExx(GridDhtColocatedCache.java:176)
at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.entryEx(GridNearTxLocal.java:1251)
at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.enlistWriteEntry(IgniteTxLocalAdapter.java:2354)
at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.enlistWrite(IgniteTxLocalAdapter.java:1990)
at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.putAsync0(IgniteTxLocalAdapter.java:2902)
at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.putAsync(IgniteTxLocalAdapter.java:1859)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2240)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2238)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4351)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2238)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2215)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1214)


Is there any specific configuration I need to provide for self-recovery of
the cluster? Losing cache data is fine; the data is backed up in a persistent
store, e.g. a database.
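
In case it matters, here is a hedged sketch of the failure-detection tuning I
could try (the timeout value is illustrative only, and tuning this does not
by itself guarantee recovery):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class FastFailureDetection {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // An unresponsive node is considered failed and removed from the
        // topology after this timeout (the default is 10 seconds).
        cfg.setFailureDetectionTimeout(5_000);

        Ignition.start(cfg);
    }
}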

-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Event Listeners

2016-10-14 Thread javastuff....@gmail.com
Hi,

1. Is there a way to create a cache event listener for only a particular
cache, so that other caches don't generate events and the associated overhead?
Currently I can make it work by filtering on the cache name in the listener,
but events are still generated for all caches:

public boolean apply(CacheEvent evt) {
    if (evt.cacheName().equals("SAMPLE")) {
        // do something.
    } else {
        // ignore
    }
    return true;
}
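
For reference, a complete local registration of a filter like this might look
as follows. A hedged sketch: the cache name is the "SAMPLE" placeholder from
above, and cache events only fire if EVT_CACHE_OBJECT_REMOVED (or whichever
types are needed) is enabled via IgniteConfiguration.setIncludeEventTypes().

import org.apache.ignite.Ignite;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class SampleCacheListener {
    public static void register(Ignite ignite) {
        IgnitePredicate<CacheEvent> lsnr = new IgnitePredicate<CacheEvent>() {
            @Override
            public boolean apply(CacheEvent evt) {
                // Only react to events from the cache we care about.
                if ("SAMPLE".equals(evt.cacheName()))
                    System.out.println("Removed key: " + evt.key());

                return true; // Keep listening.
            }
        };

        // Local listener: fires on the node where the event occurred.
        ignite.events().localListen(lsnr, EventType.EVT_CACHE_OBJECT_REMOVED);
    }
}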

2. Is there a way to check whether an event listener has already been
registered? In other words, how do we ensure that only one listener per node
is created for a given event and topic? Currently this works with a local
listener, but for certain cases and topologies a remote listener is needed.

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Event-Listeners-tp8306.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Near cache

2016-10-11 Thread javastuff....@gmail.com
Thank you for your reply. I would like to add more details to the 3rd point,
since it was not clearly understood.

Let's assume 4 nodes are running. Node A puts data into the distributed cache
and, following the near-cache concept, also keeps it on its own heap in a Map
implementation. Later, as each node reads that data from the distributed
cache, it also brings the data into its own local heap-based map.
Now comes cache invalidation: one of the nodes initiates a REMOVE call, which
removes the local heap copy on that acting node and the entry in the
distributed cache. This fires an EVT_CACHE_OBJECT_REMOVED event. However, the
event is generated only on the node that holds the data in its partition
(this is what I have observed: a remote event on the owner node and a local
event on the acting node). The owner node therefore has the responsibility to
tell all other nodes to invalidate their local map-based copies.
So I am combining an EVENT with a TOPIC to implement this.

Is this the right approach, or is there a better one?

The cache remove event is generated only on the owner node (the node holding
the data in its partition) and on the node initiating the remove call. Is
this correct, or is it supposed to generate the event on all nodes?
Conceptually both behaviors have their own meaning and use, so I think either
could be correct.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Near-cache-tp8192p8223.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Near cache

2016-10-10 Thread javastuff....@gmail.com
Hi,

Because Ignite always serializes data, we cannot take advantage of the near
cache for cached objects that are heavy in terms of
serialization/deserialization and that have no query or eviction requirements.
We are talking about a 5 minute vs. 2 hour difference if we do not cache the
meta information of a small number of frequently accessed objects.

I am working on a standard Java map-based implementation to simulate a near
cache, where the local heap holds the non-serialized object backed by the
partitioned Ignite distributed cache. Cache invalidation is the challenge:
when one node invalidates an entry in the partitioned distributed cache, we
then have to invalidate the local copy of that entry on every node.

I need the experts' opinion on my POC (sketched in code below):

For a particular cache, add an EVT_CACHE_OBJECT_REMOVED cache event listener.
The remove event is then generated on the local node and on the remote node
that owns that cache key.
The remove event handler publishes a message to a topic.
On receiving the message on the topic, each node removes the object from its
local map-based copy.
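
A minimal sketch of that flow, assuming the local copies live in a
ConcurrentHashMap, that cache events are enabled via setIncludeEventTypes(),
and that the topic name is just a placeholder:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.ignite.Ignite;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgniteBiPredicate;
import org.apache.ignite.lang.IgnitePredicate;

public class NearMapInvalidation {
    private static final String TOPIC = "INVALIDATE";

    // Local, deserialized copies kept on each node's heap.
    private final Map<Object, Object> localCopy = new ConcurrentHashMap<>();

    public void wire(final Ignite ignite, final String cacheName) {
        // 1. Every node listens on the topic and drops its local copy of the key.
        ignite.message().localListen(TOPIC, new IgniteBiPredicate<UUID, Object>() {
            @Override
            public boolean apply(UUID senderId, Object key) {
                localCopy.remove(key);
                return true; // Keep listening.
            }
        });

        // 2. Whichever node sees the REMOVE event broadcasts the key to all nodes.
        ignite.events().localListen(new IgnitePredicate<CacheEvent>() {
            @Override
            public boolean apply(CacheEvent evt) {
                if (cacheName.equals(evt.cacheName()))
                    ignite.message().send(TOPIC, evt.key());

                return true;
            }
        }, EventType.EVT_CACHE_OBJECT_REMOVED);
    }
}

wire() would be called once on every node at startup; the same key may be
broadcast more than once (local plus remote event), which is harmless for
invalidation.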

Questions -
1. Do you see any issue with this kind of implementation when cache
invalidation is very rare but reads are frequent?
2. Approximately how much delay should one expect between event generation
and the message being delivered to all nodes?
3. I have observed that the remove event is generated only on the acting
local node or the owner node, so the event needs to be combined with a topic.
Is there any other way this can be achieved?
4. In terms of performance:
 - Should we use a local listener or a remote listener? In the POC I have
added a remote listener.
 - Should we use sendOrdered or send?
5. Does any specific sizing need to be done for the REMOVE event and the
topic? We have around 20 such caches and I am planning to use a single topic.

Regarding topology: at minimum 2 nodes with 20 GB off-heap.

Thanks,
-Sam  



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Near-cache-tp8192.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Out of memory

2016-09-30 Thread javastuff....@gmail.com
Good to know about the future change. I will try your suggestion regarding
the system property. I do not think I will have a scenario where insertion
and deletion of the same key happen at the same time.

BTW, all of my caches are marked TRANSACTIONAL, just for the sake of having
explicit distributed locks. Hopefully the suggested system property is not
limited to ATOMIC caches.

Thanks,
-Sam 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Out-of-memory-tp7995p8041.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Out of memory

2016-09-29 Thread javastuff....@gmail.com
Thank you for your reply, Den. Many of the LOCAL caches are less than 10MB;
would those also need 20-30MB of overhead?

When the system is idle it slowly consumes memory, so are any
metrics/statistics/JMX being captured regularly that need to be turned off?

I turned off task and cache events; still the same issue.

Any more hints or tweaks?
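
If metrics are the suspect, a hedged sketch of what I would try turning off
(whether these are actually the source of the growth is not established):

import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class LeanConfig {
    public static IgniteConfiguration build() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Leave all event types disabled so nothing is recorded.
        cfg.setIncludeEventTypes();

        // Stop the periodic metrics log output.
        cfg.setMetricsLogFrequency(0);

        // Per-cache statistics off (cache name is a placeholder).
        CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("SAMPLE");
        cacheCfg.setStatisticsEnabled(false);

        cfg.setCacheConfiguration(cacheCfg);
        return cfg;
    }
}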

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Out-of-memory-tp7995p8026.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Out of memory

2016-09-28 Thread javastuff....@gmail.com
In my application I have hundreds of different caches defined. Some are
LOCAL, NEAR and OFFHEAP; in total they hold about 10GB, of which the LOCAL
caches are about 1GB. The application runs with a 4GB heap on a machine with
48 CPUs and 40GB RAM, and I start Ignite from Java code. Under heavier load I
get an OOM. Attaching a screenshot from the heap dump.

Am I missing anything, or do I need to disable some kind of metrics? I had
enabled task and cache events, which I am planning to disable.

Any hints would be good.

Thanks,
-Sam
  
 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Out-of-memory-tp7995.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Local cache

2016-09-09 Thread javastuff....@gmail.com
Hello,

It appears that the entry is being serialized even though the cache is
defined as LOCAL. Below is the configuration I used -




 






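The XML did not come through the archive; purely as a hedged illustration,
with a placeholder cache name, a LOCAL cache definition in Java form looks
roughly like this:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class LocalCacheConfig {
    public static IgniteCache<String, Object> create(Ignite ignite) {
        CacheConfiguration<String, Object> cfg = new CacheConfiguration<>("LOCAL_POOL");

        // LOCAL mode: entries never leave this node.
        cfg.setCacheMode(CacheMode.LOCAL);

        return ignite.getOrCreateCache(cfg);
    }
}
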
1. It is a performance overhead when the object is not shared and is kept
only locally.
2. It loses state. To build an object we have to read from many slow
subsystems that provide read-only data, for example the filesystem, and that
information is marked transient for network transfer. However, building the
object is slow, hence the need to cache instances in a pool. This is a
third-party library engine that we are instantiating, so we cannot move away
from the transient declarations.

Is there a way to disable serialization for a local cache?

Similarly, a near cache is available locally after the first read, but if it
is again in serialized form then it is a performance overhead as well.

-Sam







--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Local-cache-tp7645.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: understanding Locks usage

2016-09-07 Thread javastuff....@gmail.com
With key vs. entry lock I am talking about lock granularity.
A key lock is like a synchronized block or method whose processing logic is
unrelated to the key or the cached data being processed, guarding some other
shared resource instead - a kind of distributed mutex. Note that it does not
need all the other features of distributed cache access semantics.
An entry lock is a distributed cache lock with full data access semantics; it
can be seen as a row-level lock in a database system.
It is good to have but not necessary, since an entry lock can achieve what a
key lock needs to do.

A few last questions on locks -
1. Is there a way to give a lock a time-to-live? What happens if the
thread/process is killed before unlocking?
2. Locks work only in TRANSACTIONAL mode; is there an approximate benchmark
that can be shared for FETCH/PUT/FETCHALL/PUTALL comparing TRANSACTIONAL vs
ATOMIC?

For the key-lock scenario above, I could have a separate cache just for locks
(transactional) and a separate cache for the data (atomic), but I want to
decide whether that is really needed.
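
As a hedged sketch of that split (cache names are placeholders), the lock
cache and the data cache would simply declare different atomicity modes:

import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class LockAndDataCaches {
    public static CacheConfiguration<?, ?>[] build() {
        // TRANSACTIONAL cache used only for explicit key locks.
        CacheConfiguration<String, Boolean> locks = new CacheConfiguration<>("LOCKS");
        locks.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

        // ATOMIC cache holding the actual data, avoiding transactional overhead.
        CacheConfiguration<String, Object> data = new CacheConfiguration<>("DATA");
        data.setAtomicityMode(CacheAtomicityMode.ATOMIC);

        return new CacheConfiguration<?, ?>[] { locks, data };
    }
}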
 
Thanks,
-Sam

 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/understanding-Locks-usage-tp7489p7596.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: understanding Locks usage

2016-09-07 Thread javastuff....@gmail.com
Thanks. Let me put it another way:

Is there another way to take a distributed lock that can be unlocked by the
owner thread but does not require the same Lock instance? I mean an unlock
API that finds the lock instance automatically. I am not sure whether
IgniteLock can do that; I have not tried it yet.

A couple more questions on the lock feature set (see the sketch below) -
1. Is there any atomic API that can be used as fetchAndLock and
putAndUnlock?
2. Is there a concept of locking a key vs. locking a cached entry? Here is my
understanding of the difference:
- Locking a key - distributed lock; does not require a cache entry with that
key to exist.
- Locking an entry - distributed lock; to lock an entry it needs to exist in
the cache, and once locked all other threads have read-only access.
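
Perhaps a pessimistic transaction gives much the same effect as
fetchAndLock/putAndUnlock, since the get() locks the key until commit. A
hedged sketch, assuming a TRANSACTIONAL String-to-Long cache:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class FetchLockPutUnlock {
    public static void increment(Ignite ignite, IgniteCache<String, Long> cache, String key) {
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
            // In a pessimistic REPEATABLE_READ transaction the get() acquires
            // a lock on the key, held until commit or rollback.
            Long value = cache.get(key);

            cache.put(key, value == null ? 1L : value + 1);

            tx.commit(); // Releases the lock.
        }
    }
}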

Thanks,
--Sam  



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/understanding-Locks-usage-tp7489p7591.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: understanding Locks usage

2016-09-03 Thread javastuff....@gmail.com
There are multiple common use cases I am looking at for a distributed lock; a
few are below -
1. In a distributed application running on more than one server node, I want
to synchronize certain tasks for serial execution whenever a common entity is
involved.
2. In a concurrent environment, I want to control updates to the cache. This
is like a single transaction doing a fetch and a put after processing the
cached value, or a kind of synchronized block execution.

There are many other common use cases.

Here, to work with a lock, the lock method has to return a Lock instance that
is then passed to the unlock call in the cache manager. I would have expected
to just pass the key, to make it less tightly coupled. It is simple to work
around, but then the application code contains cache code, which I don't
entirely agree with.

The exception I faced said "Failed to unlock keys (did current thread acquire
lock with this lock instance?)". It hints that the same thread needs to be
used for unlocking, but that seems misleading: it is not the same thread but
the same Lock object instance that must be used for unlocking. I will try
locking and unlocking in separate threads.

--Sam
 




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/understanding-Locks-usage-tp7489p7522.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: GET with LIKE operator for keys

2016-09-03 Thread javastuff....@gmail.com
I tried other ways to query, but I was not able to get it working.

For my requirement I want to reuse the same code for multiple caches in the
system, so I do not know the stored object type. Based on the documentation
and the examples I have seen so far, I think the query feature does not suit
my use case when the type of the queried object is unknown. I might be wrong
here, which is why I need help from the experts.

--Sam 




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/GET-with-LIKE-operator-for-keys-tp7381p7521.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: GET with LIKE operator for keys

2016-09-02 Thread javastuff....@gmail.com
I want to use the same method for multiple caches, so the stored object type
is unknown. The key type is known: it is String.

Below is how I got it working with a regular expression as matchCriteria
(.*January.*). Can you please let me know how to do it the "_key LIKE
'%January%'" way, and which approach is recommended for this use case?

private static void scanQueryStringMatch(String cacheName, final String matchCriteria) {
    IgniteCache<String, Object> cache = Ignition.ignite().cache(cacheName);

    ScanQuery<String, Object> scan = new ScanQuery<>(
        new IgniteBiPredicate<String, Object>() {
            @Override
            public boolean apply(String key, Object person) {
                return key.matches(matchCriteria);
            }
        }
    );

    print("query result : ", cache.query(scan).getAll());
}
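
For comparison, the SQL route seems to require the cache to be SQL-enabled
for a concrete value type, which conflicts with not knowing the stored type.
A hedged sketch for a cache whose value type happened to be known, say a
hypothetical Person registered via setIndexedTypes(String.class, Person.class):

import java.util.List;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class KeyLikeQuery {
    // The "Person" table and its predefined _key column exist only because
    // the cache was created with setIndexedTypes(String.class, Person.class).
    public static List<List<?>> keysLike(IgniteCache<String, ?> cache, String pattern) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "select _key from Person where _key like ?").setArgs(pattern);

        return cache.query(qry).getAll();
    }
}

Called as keysLike(cache, "%January%"), it would return the matching keys.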



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/GET-with-LIKE-operator-for-keys-tp7381p7491.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


understanding Locks usage

2016-09-02 Thread javastuff....@gmail.com
I am exploring locks and need some help understanding them.

1. To take a distributed lock on a key, does the key need to exist in the
cache?

2. Once a distributed lock is taken on an existing cache key, is any
operation on that cache entry restricted for other threads? The other thread
is not explicitly taking the lock. Example -

Thread 1 -
Lock lock = cache.lock("key1");
try {
    lock.lock();
    // do something
} finally {
    lock.unlock();
}

Thread 2 -
cache.get("key1");
or
cache.put("key1", object);
or
any other basic cache operation

3. What happens if the same thread tries to take the lock multiple times
before unlocking it?
Lock lock = cache.lock("keyLock");
Lock lock2 = cache.lock("keyLock");
Lock lock3 = cache.lock("keyLock");


4. I created a simple example and it failed with "Failed to unlock keys (did
current thread acquire lock with this lock instance?)", even though the same
thread tried to unlock it. I was able to resolve this by using the same Lock
object instance for the lock() and unlock() calls. So the question is: is the
same thread or the same Lock object instance enforced here?

5. Is there a way to achieve all 4 cases listed in the attached test program?
IgniteLock is what I am thinking of. CacheLockExample2.java
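
For reference, a hedged IgniteLock sketch (available in recent Ignite
versions; the lock name is a placeholder). The lock is looked up by name, so
no particular Lock instance or cache key is needed:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteLock;

public class NamedLockExample {
    public static void doWork(Ignite ignite) {
        // Created on first use; any node or thread can look it up again by name.
        IgniteLock lock = ignite.reentrantLock("myLock",
            true  /* failoverSafe */,
            false /* fair */,
            true  /* create */);

        lock.lock();
        try {
            // Critical section guarded cluster-wide.
        }
        finally {
            lock.unlock();
        }
    }
}

Like a ReentrantLock, it still has to be unlocked by the thread that locked it.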

  





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/understanding-Locks-usage-tp7489.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


LOCAL cache and EntryProcessor

2016-08-30 Thread javastuff....@gmail.com
1. We have separate logic for LOCAL caches in a specific scenario. How do I
find the cache mode for a given cache name?

Is the code below correct, or is there an alternative way?

private boolean isLocalCache(String pObjectType) {
    CacheConfiguration[] cfg = ignite.configuration().getCacheConfiguration();
    for (CacheConfiguration aCfg : cfg) {
        if (aCfg.getName().equalsIgnoreCase(pObjectType)) {
            return aCfg.getCacheMode().equals(CacheMode.LOCAL);
        }
    }
    return false;
}
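
The mode could presumably also be read from the cache's own configuration - a
hedged sketch, assuming the cache has already been started on this node:

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheModeCheck {
    public static boolean isLocalCache(Ignite ignite, String cacheName) {
        // Reads the configuration back from the running cache instance.
        CacheConfiguration cfg =
            ignite.cache(cacheName).getConfiguration(CacheConfiguration.class);

        return cfg.getCacheMode() == CacheMode.LOCAL;
    }
}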

2. What happens when an EntryProcessor is executed on a LOCAL cache? Is the
style below needed, or is it unnecessary because invoke() handles it?

if (isLocalCache()) {
    ...
    lock.lock();
    long count = cache.get(key);
    count = count + 1;
    cache.put(key, count);
    lock.unlock();
    ...
} else {
    cache.invoke(key, new EntryProcessor(..));
}
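
If invoke() runs the processor atomically against the entry for LOCAL caches
as well as distributed ones (which is my assumption), the branch above would
not be needed. A hedged sketch of the counter as a single code path, assuming
a String-to-Long cache:

import javax.cache.processor.MutableEntry;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

public class CounterUpdate {
    // One code path for LOCAL and distributed caches alike.
    public static void increment(IgniteCache<String, Long> cache, String key) {
        cache.invoke(key, new CacheEntryProcessor<String, Long, Void>() {
            @Override
            public Void process(MutableEntry<String, Long> entry, Object... args) {
                entry.setValue((entry.exists() ? entry.getValue() : 0L) + 1);
                return null;
            }
        });
    }
}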



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/LOCAL-cache-and-EntryProcessor-tp7419.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: GET with LIKE operator for keys

2016-08-30 Thread javastuff....@gmail.com
Thank you again. My bad, I missed the 'Predefined Fields' note in the
documentation.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/GET-with-LIKE-operator-for-keys-tp7381p7415.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


GET with LIKE operator for keys

2016-08-29 Thread javastuff....@gmail.com
Is there a way to query matching keys with wildcard characters? Here is the
use case -

We have a few caches defined as LOCAL and a few as off-heap distributed.
Some business logic needs to fetch all cache entries whose keys contain the
text "January" (key = '%January%'), i.e. something like:

cache.getLike("%January%")

--Sam





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/GET-with-LIKE-operator-for-keys-tp7381.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Near Cache

2016-08-29 Thread javastuff....@gmail.com
Thanks. This should help in analyzing OOMs caused by incorrect sizing
configuration.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Near-Cache-tp7033p7380.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Near Cache

2016-08-24 Thread javastuff....@gmail.com
Thanks. I know it is not a great way to approach it, but is there any class
whose instances I can search for in a heap dump to count the number of
entries or the heap space used by the near cache?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Near-Cache-tp7033p7287.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Near Cache

2016-08-16 Thread javastuff....@gmail.com
How can I monitor a NEAR cache? The MBeans and Visor are not helping; is
there another way?





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Near-Cache-tp7033p7116.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Near Cache

2016-08-12 Thread javastuff....@gmail.com
Thank you for the quick response. Here are more details and my view of the
usage -

From the Java application we start the Ignite instance using
"Ignition.start(configFilename)".
We keep "clientMode" at its default (the clientMode property is not defined
in the XML configuration and the Java code does not call
Ignition.setClientMode()).
Typically we run 2 to 4 remote instances of the application.
The total cache size (across multiple caches) can grow beyond tens of GB,
hence off-heap.
We have a few caches with read-only data, and for those we would like to have
a near cache.

With the near cache configuration we are looking for the benefits below -
1. Avoid serialization/deserialization for frequently used objects.
2. The owner node for a cached entry may be node 2 while node 1 (or all
nodes) uses it frequently, so avoid the network hop. (Node 2 holds the data
in the off-heap partitioned cache, while after the first access it is also
available locally in the near cache on node 1 or on all nodes.)
3. Keep a limited number of entries in the near cache to avoid a huge heap.

Thanks,
--Sambhav




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Near-Cache-tp7033p7035.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Near Cache

2016-08-12 Thread javastuff....@gmail.com
Hello,

I am new to Apache Ignite and am exploring many of its cool features. My
application requires Ignite to run embedded and to use off-heap memory to
cache data; that means each application node is a server with off-heap
storage capability.

Can you please post an example XML configuration for a near cache?
I used the configuration below and caching seems to work, but I am not sure
whether it is correctly configured as a near cache. I tried the MBeans and
the Visor console, but got no hints. Can you please comment on the
configuration below and on a way to ensure it is configured correctly?









 
 
 




How can I configure 5GB of off-heap and limit the near cache to a fixed
number of entries? I am unable to configure an eviction policy for the near
cache in XML.
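
Since the XML snippets get stripped by the archive, here is a hedged Java
sketch of the near-cache part only (sizes and names are placeholders, and the
eviction-policy setter shown is the pre-2.x API); the off-heap sizing is
configured separately on the underlying cache:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearCacheSetup {
    public static IgniteCache<String, Object> withNear(Ignite ignite, String cacheName) {
        NearCacheConfiguration<String, Object> nearCfg = new NearCacheConfiguration<>();

        // Keep at most 10,000 entries in the on-heap near cache.
        nearCfg.setNearEvictionPolicy(new LruEvictionPolicy<String, Object>(10_000));

        // Creates (or returns) the near cache for an existing cache on this node.
        return ignite.getOrCreateNearCache(cacheName, nearCfg);
    }
}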

Thanks a lot.
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Near-Cache-tp7033.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.