Re: Cannot query on a cache using Cassandra as a persistent store

2016-09-29 Thread zhaogxd
Hi Igor,

It is good to know that a cache can be created dynamically through code,
so I started to try it out using a simple approach:
1) Copied the file
org/apache/ignite/tests/persistence/pojo/persistence-settings-3.xml to
persistence-settings-4.xml
2) Assigned a new Cassandra table name in persistence-settings-4.xml
3) Started an Ignite server node from a Windows 10 cmd window
4) Ran the following code to create a new cache called 'cache5':

public static void main(String[] args) throws ParseException {
    // Start an Ignite client node
    Ignition.setClientMode(true);
    try (Ignite ignite = Ignition.start("examples/config/cassandra/example-cassandra.xml")) {

        // Create a new cache configuration in Java
        CacheConfiguration<PersonId, Person> pcfg = new CacheConfiguration<>("cache5");

        pcfg.setIndexedTypes(PersonId.class, Person.class);
        pcfg.setReadThrough(true);
        pcfg.setWriteThrough(true);

//      DataSource cAdminDataSource = new DataSource();
//      cAdminDataSource.setCredentials(new CassandraAdminCredentials());
//      cAdminDataSource.setContactPoints(CassandraHelper.getContactPointsArray());
//      cAdminDataSource.setReadConsistency("ONE");
//      cAdminDataSource.setWriteConsistency("ONE");

        File kvpsf = new File(
            "C:\\mySoft\\apache-ignite-fabric-1.7.0-bin\\examples\\config\\cassandra\\persistence\\pojo\\persistence-settings-4.xml");
        KeyValuePersistenceSettings kvps = new KeyValuePersistenceSettings(kvpsf);

        CassandraCacheStoreFactory<PersonId, Person> ccsf = new CassandraCacheStoreFactory<>();
//      ccsf.setDataSource(cAdminDataSource); // Doesn't work!?
        ccsf.setDataSourceBean("cassandraAdminDataSource");
        ccsf.setPersistenceSettings(kvps);
        pcfg.setCacheStoreFactory(ccsf);

        IgniteCache<PersonId, Person> cache5 = ignite.getOrCreateCache(pcfg);

        System.out.println("Cache5 size:" + cache5.size());

        SimpleDateFormat ft = new SimpleDateFormat("yyyy-MM-dd");
        Person p1 = new Person(1, "Tom", "Zhang", 30, false, 170, 150,
            ft.parse("1970-12-01"), Arrays.asList("123", "234", "456"));
        Person p2 = new Person(2, "Frank", "Lee", 35, false, 170, 150,
            ft.parse("1978-12-01"), Arrays.asList("123", "234", "456"));
        Person p3 = new Person(3, "Bob", "Liu", 40, false, 170, 150,
            ft.parse("1976-12-01"), Arrays.asList("123", "234", "456"));

        Person p4 = new Person(4, "Tom", "Lee", 45, false, 170, 150,
            ft.parse("1970-12-01"), Arrays.asList("123", "234", "456"));
        Person p5 = new Person(5, "Frank", "Wang", 40, false, 170, 150,
            ft.parse("1978-12-01"), Arrays.asList("123", "234", "456"));
        Person p6 = new Person(6, "Bob", "Lu", 42, false, 170, 150,
            ft.parse("1976-12-01"), Arrays.asList("123", "234", "456"));

        PersonId pid1 = new PersonId("Facebook", "Dev", 1);
        PersonId pid2 = new PersonId("Facebook", "Dev", 2);
        PersonId pid3 = new PersonId("Facebook", "AAG", 3);

        PersonId pid4 = new PersonId("Google", "Dev", 4);
        PersonId pid5 = new PersonId("Google", "Dev", 5);
        PersonId pid6 = new PersonId("Google", "AAG", 6);

        System.out.println("Populate some data into cache5...");

        cache5.put(pid1, p1);
        cache5.put(pid2, p2);
        cache5.put(pid3, p3);

        cache5.put(pid4, p4);
        cache5.put(pid5, p5);
        cache5.put(pid6, p6);

        System.out.println("Cache5 size:" + cache5.size());
    }
    System.out.println("Finished!");
}

It was nice that cache5 was created and populated successfully, as
expected.

Then I assumed that I needed to add more data to cache5, so I just ran the
same code again with more Person and PersonId objects added. This time, the
procedure got stuck with the following warning:

23:15:08,936  WARN [main] - Failed to wait for initial partition map
exchange. Possible reasons are: 
  ^-- Transactions in deadlock.
  ^-- Long running transactions (ignore if this is the case).
  ^-- Unreleased explicit locks.

I did some googling on this warning, and it seems there could be many reasons
causing this problem.

Re: Out of memory

2016-09-29 Thread javastuff....@gmail.com
Thank you for your reply, Den. Many LOCAL caches are less than 10 MB; would
each of those also need 20-30 MB of overhead?

When the system is idle, memory is slowly being consumed, so are there any
metrics/statistics/JMX being captured regularly that need to be turned off?

I turned off task and cache events, but still see the same issue.

Any more hints or tweaks?

Thanks,
-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Out-of-memory-tp7995p8026.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Getting Error [grid-timeout-worker] when running join query on Single Ignite Node

2016-09-29 Thread Alexey Kuznetsov
Alex,

it is better to give a link to the docs when you are suggesting EXPLAIN SELECT
:)

https://apacheignite.readme.io/docs/sql-queries#using-explain

Hope this helps :)

-- 
Alexey Kuznetsov


Re: One failing node stalling the whole cluster

2016-09-29 Thread Denis Magda
A correct link is the following
https://issues.apache.org/jira/browse/IGNITE-4003 


> On Sep 29, 2016, at 9:31 AM, Denis Magda  wrote:
> 
> Good news to everyone. Looks like we could get to the bottom of this issue
> https://ggsystems.atlassian.net/browse/IGN-5958 
> 
> 
> Hope it will be fixed soon.
> 
> —
> Denis
> 
>> On Sep 16, 2016, at 9:38 AM, yfernando > > wrote:
>> 
>> Unfortunately iam unable to send the full logs files but they contain the
>> following exceptions 
>> 
>> [14 Sep 2016 11:14:30.290 EDT] [pub-#16%DataGridServer-Development%] ERROR
>> 11223 (OrderHolderSaveRunnable.java:273) exception ocurred while generating
>> Trade Order for Order: OrderKey [traderId=5
>> 207, orderId=16084348]
>> javax.cache.CacheException: class
>> org.apache.ignite.transactions.TransactionTimeoutException: Failed to
>> acquire lock within provided timeout for transaction [timeout=5000,
>> tx=GridNearTxLocal [ma
>> ppings=IgniteTxMappingsImpl [], nearLocallyMapped=false,
>> colocatedLocallyMapped=false, needCheckBackup=null, hasRemoteLocks=false,
>> mappings=IgniteTxMappingsImpl [], super=GridDhtTxLocalAdapter [
>> nearOnOriginatingNode=false, nearNodes=[], dhtNodes=[], explicitLock=false,
>> super=IgniteTxLocalAdapter [completedBase=null, sndTransformedVals=false,
>> depEnabled=false, txState=IgniteTxStateImpl
>> [activeCacheIds=GridLongList [idx=1, arr=[1633849959]], txMap={IgniteTxKey
>> [key=KeyCacheObjectImpl [val=BatchIdKey [privDb=trim_sys],
>> hasValBytes=true], cacheId=1633849959]=IgniteTxEntry [key=Ke
>> yCacheObjectImpl [val=BatchIdKey [privDb=trim_sys], hasValBytes=true],
>> cacheId=1633849959, txKey=IgniteTxKey [key=KeyCacheObjectImpl
>> [val=BatchIdKey [privDb=trim_sys], hasValBytes=true], cacheId
>> =1633849959], val=[op=READ, val=null], prevVal=[op=NOOP, val=null],
>> entryProcessorsCol=null, ttl=-1, conflictExpireTime=-1, conflictVer=null,
>> explicitVer=null, dhtVer=null, filters=null, filters
>> Passed=false, filtersSet=true, entry=GridDhtDetachedCacheEntry
>> [super=GridDistributedCacheEntry [super=GridCacheMapEntry
>> [key=KeyCacheObjectImpl [val=BatchIdKey [privDb=trim_sys], hasValBytes=tr
>> ue], val=null, startVer=1473869129773, ver=GridCacheVersion
>> [topVer=85333522, nodeOrderDrId=10, globalTime=1473859812640,
>> order=1473869129773], hash=1508409679, extras=null, flags=0]]], prepared
>> =false, locked=false, nodeId=3cd37805-46a7-4287-875e-9cbd0cf03fad,
>> locMapped=false, expiryPlc=null, transferExpiryPlc=false, flags=0,
>> partUpdateCntr=0, serReadVer=null, xidVer=GridCacheVersion [
>> topVer=85333522, nodeOrderDrId=10, globalTime=1473859812640,
>> order=1473869129772]]}], super=IgniteTxAdapter [xidVer=GridCacheVersion
>> [topVer=85333522, nodeOrderDrId=10, globalTime=1473859812640,
>> order=1473869129772], writeVer=null, implicit=false, loc=true, threadId=50,
>> startTime=1473859812630, nodeId=6f7a39ba-c520-435e-9480-a42ecf0d9a58,
>> startVer=GridCacheVersion [topVer=85333522, nod
>> eOrderDrId=10, globalTime=1473859812640, order=1473869129772], endVer=null,
>> isolation=REPEATABLE_READ, concurrency=PESSIMISTIC, timeout=5000,
>> sysInvalidate=false, sys=false, plc=2, commitVer=nul
>> l, finalizing=NONE, preparing=false, invalidParts=null,
>> state=MARKED_ROLLBACK, timedOut=false, topVer=AffinityTopologyVersion
>> [topVer=101, minorTopVer=0], duration=5007ms, onePhaseCommit=false],
>> size=1
>>at
>> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1618)
>> ~[ignite-core-1.5.0.final.jar:1.5.0.final]
>>at
>> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.cacheException(IgniteCacheProxy.java:1841)
>> ~[ignite-core-1.5.0.final.jar:1.5.0.final]
>>at
>> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:871)
>> ~[ignite-core-1.5.0.final.jar:1.5.0.final]
>>at
>> com.somecompany.grid.server.tradegen.BatchIdHelper.getListOfIds(BatchIdHelper.java:69)
>> ~[data-grid-server-ignite-3.0-SNAPSHOT.jar:3.0-SNAPSHOT]
>>at
>> com.somecompany.grid.server.tradegen.TradeGenerator.generateUniqueTradeId64(TradeGenerator.java:47)
>> ~[data-grid-server-ignite-3.0-SNAPSHOT.jar:3.0-SNAPSHOT]
>>at
>> com.somecompany.grid.server.tradegen.TradeGenerator.allocateTradesFromFills(TradeGenerator.java:158)
>> ~[data-grid-server-ignite-3.0-SNAPSHOT.jar:3.0-SNAPSHOT]
>>at
>> com.somecompany.grid.server.tradegen.OrderHolderSaveRunnable.run(OrderHolderSaveRunnable.java:271)
>> ~[data-grid-server-ignite-3.0-SNAPSHOT.jar:3.0-SNAPSHOT]
>>at
>> org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4.execute(GridClosureProcessor.java:1879)
>> ~[ignite-core-1.5.0.final.jar:1.5.0.final]
>>at
>> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
>> ~[ignite-core-1.5.0.final

Re: Question about cron-based scheduler

2016-09-29 Thread Dmitriy Govorukhin
Hi,

Do you want to use a cron scheduler specifically?

Unfortunately, the minimum time unit cron supports is 1 minute.

 # ┌─ min (0 - 59)
 # │ ┌── hour (0 - 23)
 # │ │ ┌─── day of month (1 - 31)
 # │ │ │ ┌ month (1 - 12)
 # │ │ │ │ ┌─ day of week (0 - 6) (0 to 6 are Sunday to
 # │ │ │ │ │  Saturday, or use names; 7 is also Sunday)
 # │ │ │ │ │
 # │ │ │ │ │

# * * * * * command to execute

Ignite only adds a prefix specifying the number of iterations and the delay
between invoking the "scheduleLocal" method and the first execution: {n1, n2},
where n1 is the delay before the first execution and n2 is the number of
iterations.

If you don't need cron specifically, you can try any other scheduler that
supports delays of less than 1 minute, for example ScheduledExecutorService.
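
A minimal stdlib sketch of the ScheduledExecutorService alternative mentioned
above (the class and method names here are my own, not Ignite API):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SubMinuteScheduler {
    /** Runs {@code task} {@code n} times at a fixed sub-minute period, blocking until done. */
    static int runFixedRate(Runnable task, int n, long periodMs) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch done = new CountDownLatch(n);
        int[] executed = {0};

        scheduler.scheduleAtFixedRate(() -> {
            if (done.getCount() > 0) {   // guard against extra ticks after the n-th run
                task.run();
                executed[0]++;
            }
            done.countDown();
        }, 0, periodMs, TimeUnit.MILLISECONDS);

        done.await();                    // block until the task ran n times
        scheduler.shutdown();
        return executed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        // A 10-second period would be impossible with plain cron;
        // 100 ms here just keeps the demo short.
        int runs = runFixedRate(() -> System.out.println("tick"), 3, 100);
        System.out.println("executed " + runs + " times");
    }
}
```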


On Thu, Sep 29, 2016 at 4:42 AM, Level D <724172...@qq.com> wrote:

> Hi all,
>
> I find the minimal scheduling time unit this scheduler supported is 1
> minute.
>
> But I need a scheduling time unit less than 1 minute .
> Is there a way to make it happen?
>
> Regards,
>
> Zhou.
>
>>
>


Re: One failing node stalling the whole cluster

2016-09-29 Thread Denis Magda
Good news to everyone. Looks like we could get to the bottom of this issue
https://ggsystems.atlassian.net/browse/IGN-5958 


Hope it will be fixed soon.

—
Denis

> On Sep 16, 2016, at 9:38 AM, yfernando  wrote:
> 
> Unfortunately iam unable to send the full logs files but they contain the
> following exceptions 
> 
> [14 Sep 2016 11:14:30.290 EDT] [pub-#16%DataGridServer-Development%] ERROR
> 11223 (OrderHolderSaveRunnable.java:273) exception ocurred while generating
> Trade Order for Order: OrderKey [traderId=5
> 207, orderId=16084348]
> javax.cache.CacheException: class
> org.apache.ignite.transactions.TransactionTimeoutException: Failed to
> acquire lock within provided timeout for transaction [timeout=5000,
> tx=GridNearTxLocal [ma
> ppings=IgniteTxMappingsImpl [], nearLocallyMapped=false,
> colocatedLocallyMapped=false, needCheckBackup=null, hasRemoteLocks=false,
> mappings=IgniteTxMappingsImpl [], super=GridDhtTxLocalAdapter [
> nearOnOriginatingNode=false, nearNodes=[], dhtNodes=[], explicitLock=false,
> super=IgniteTxLocalAdapter [completedBase=null, sndTransformedVals=false,
> depEnabled=false, txState=IgniteTxStateImpl
> [activeCacheIds=GridLongList [idx=1, arr=[1633849959]], txMap={IgniteTxKey
> [key=KeyCacheObjectImpl [val=BatchIdKey [privDb=trim_sys],
> hasValBytes=true], cacheId=1633849959]=IgniteTxEntry [key=Ke
> yCacheObjectImpl [val=BatchIdKey [privDb=trim_sys], hasValBytes=true],
> cacheId=1633849959, txKey=IgniteTxKey [key=KeyCacheObjectImpl
> [val=BatchIdKey [privDb=trim_sys], hasValBytes=true], cacheId
> =1633849959], val=[op=READ, val=null], prevVal=[op=NOOP, val=null],
> entryProcessorsCol=null, ttl=-1, conflictExpireTime=-1, conflictVer=null,
> explicitVer=null, dhtVer=null, filters=null, filters
> Passed=false, filtersSet=true, entry=GridDhtDetachedCacheEntry
> [super=GridDistributedCacheEntry [super=GridCacheMapEntry
> [key=KeyCacheObjectImpl [val=BatchIdKey [privDb=trim_sys], hasValBytes=tr
> ue], val=null, startVer=1473869129773, ver=GridCacheVersion
> [topVer=85333522, nodeOrderDrId=10, globalTime=1473859812640,
> order=1473869129773], hash=1508409679, extras=null, flags=0]]], prepared
> =false, locked=false, nodeId=3cd37805-46a7-4287-875e-9cbd0cf03fad,
> locMapped=false, expiryPlc=null, transferExpiryPlc=false, flags=0,
> partUpdateCntr=0, serReadVer=null, xidVer=GridCacheVersion [
> topVer=85333522, nodeOrderDrId=10, globalTime=1473859812640,
> order=1473869129772]]}], super=IgniteTxAdapter [xidVer=GridCacheVersion
> [topVer=85333522, nodeOrderDrId=10, globalTime=1473859812640,
> order=1473869129772], writeVer=null, implicit=false, loc=true, threadId=50,
> startTime=1473859812630, nodeId=6f7a39ba-c520-435e-9480-a42ecf0d9a58,
> startVer=GridCacheVersion [topVer=85333522, nod
> eOrderDrId=10, globalTime=1473859812640, order=1473869129772], endVer=null,
> isolation=REPEATABLE_READ, concurrency=PESSIMISTIC, timeout=5000,
> sysInvalidate=false, sys=false, plc=2, commitVer=nul
> l, finalizing=NONE, preparing=false, invalidParts=null,
> state=MARKED_ROLLBACK, timedOut=false, topVer=AffinityTopologyVersion
> [topVer=101, minorTopVer=0], duration=5007ms, onePhaseCommit=false],
> size=1
>at
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1618)
> ~[ignite-core-1.5.0.final.jar:1.5.0.final]
>at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.cacheException(IgniteCacheProxy.java:1841)
> ~[ignite-core-1.5.0.final.jar:1.5.0.final]
>at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:871)
> ~[ignite-core-1.5.0.final.jar:1.5.0.final]
>at
> com.somecompany.grid.server.tradegen.BatchIdHelper.getListOfIds(BatchIdHelper.java:69)
> ~[data-grid-server-ignite-3.0-SNAPSHOT.jar:3.0-SNAPSHOT]
>at
> com.somecompany.grid.server.tradegen.TradeGenerator.generateUniqueTradeId64(TradeGenerator.java:47)
> ~[data-grid-server-ignite-3.0-SNAPSHOT.jar:3.0-SNAPSHOT]
>at
> com.somecompany.grid.server.tradegen.TradeGenerator.allocateTradesFromFills(TradeGenerator.java:158)
> ~[data-grid-server-ignite-3.0-SNAPSHOT.jar:3.0-SNAPSHOT]
>at
> com.somecompany.grid.server.tradegen.OrderHolderSaveRunnable.run(OrderHolderSaveRunnable.java:271)
> ~[data-grid-server-ignite-3.0-SNAPSHOT.jar:3.0-SNAPSHOT]
>at
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4.execute(GridClosureProcessor.java:1879)
> ~[ignite-core-1.5.0.final.jar:1.5.0.final]
>at
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
> ~[ignite-core-1.5.0.final.jar:1.5.0.final]
>at
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6397)
> ~[ignite-core-1.5.0.final.jar:1.5.0.final]
>at
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
> ~[ignite-core-1.5.0.final.jar:1.5.0.final]
>at

Re: Getting Error [grid-timeout-worker] when running join query on Single Ignite Node

2016-09-29 Thread Alexander Paschenko
Hello Manish,

Also, Ignite supports the SQL command EXPLAIN SELECT - please use it to
make sure indexes are used during your query execution, i.e.
there's no full scan.
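
A sketch of running EXPLAIN through the same query API (the cache, value type,
and fields below are illustrative, not taken from this thread; `cache` is an
existing IgniteCache):

```java
// Prefix the query with EXPLAIN to get the H2 execution plan instead of rows;
// in the plan output, look for your index name rather than a full table scan.
SqlFieldsQuery explain = new SqlFieldsQuery(
    "explain select p.firstName from Person p where p.age > ?").setArgs(30);

for (List<?> row : cache.query(explain).getAll())
    System.out.println(row.get(0));
```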

2016-09-29 17:04 GMT+03:00 Taras Ledkov :
> Hi, Manish
>
> - Do you use the 'index=true' parameter of the @QuerySqlField annotation
> (default false)?
>
> - Please use the
>
> SqlFieldsQuery.setLocal(true);
>
> if the query is executed on a single-node topology or all data are
> available locally. In this case the map/reduce step is skipped.
>
>
> On 29.09.2016 13:15, Manish Mishra wrote:
>
> Hi,
>
> I am populating three caches running on a single Ignite node (version 1.5.27)
> with a 4 GB heap (configured as JVM_OPTS="-server -Xms4g -Xmx4g") with fewer
> than 100k records in each. I'm performing a join query on them, but the query
> takes a very long time and I get the following output (I don't know whether
> it is an ERROR or just INFO):
>
> [09:47:18,720][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
> Added new node to topology: TcpDiscoveryNode
> [id=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d, addrs=[0:0:0:0:0:0:0:1%lo,
> 10.178.148.8, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
> elssie-gridgain2.internal/10.178.148.8:0], discPort=0, order=8, intOrder=5,
> lastExchangeTime=1475142438712, loc=false,
> ver=1.5.27#20160624-sha1:0fe713ae, isClient=true]
> [09:47:18,721][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
> Topology snapshot [ver=8, servers=1, clients=1, CPUs=16, heap=5.8GB]
> [09:47:18,729][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=8, minorTopVer=0], evt=NODE_JOINED,
> node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
> [09:47:19,279][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=8, minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT,
> node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
> [09:47:19,345][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=8, minorTopVer=2], evt=DISCOVERY_CUSTOM_EVT,
> node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
> [09:47:19,376][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=8, minorTopVer=3], evt=DISCOVERY_CUSTOM_EVT,
> node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
> [09:47:25,611][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
> Node left topology: TcpDiscoveryNode
> [id=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d, addrs=[0:0:0:0:0:0:0:1%lo,
> 10.178.148.8, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
> elssie-gridgain2.internal/10.178.148.8:0], discPort=0, order=8, intOrder=5,
> lastExchangeTime=1475142438712, loc=false,
> ver=1.5.27#20160624-sha1:0fe713ae, isClient=true]
> [09:47:25,612][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
> Topology snapshot [ver=9, servers=1, clients=0, CPUs=16, heap=4.0GB]
> [09:47:25,621][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=9, minorTopVer=0], evt=NODE_LEFT,
> node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
> [09:47:47,234][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
> Added new node to topology: TcpDiscoveryNode
> [id=448da668-5262-46bb-951a-c6122543882a, addrs=[0:0:0:0:0:0:0:1%lo,
> 10.178.148.8, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
> elssie-gridgain2.internal/10.178.148.8:0], discPort=0, order=10, intOrder=6,
> lastExchangeTime=1475142467220, loc=false,
> ver=1.5.27#20160624-sha1:0fe713ae, isClient=true]
> [09:47:47,235][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
> Topology snapshot [ver=10, servers=1, clients=1, CPUs=16, heap=5.8GB]
> [09:47:47,243][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=10, minorTopVer=0], evt=NODE_JOINED,
> node=448da668-5262-46bb-951a-c6122543882a]
> [09:47:47,861][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=10, minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT,
> node=448da668-5262-46bb-951a-c6122543882a]
> [09:47:47,922][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=10, minorTopVer=2], evt=DISCOVERY_CUSTOM_EVT,
> node=448da668-5262-46bb-951a-c6122543882a]
> [09:47:47,955][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=10, minorTopVer=3], evt=DISCOVERY_CUSTOM_EVT,
> node=448da668-5262-46bb-951a-c6122543882a]
> [09:48:04,885][INFO][grid-t

Re: Getting a cluster-configured JCache CacheManager instance

2016-09-29 Thread Josh Cummings
Here is what I did to satisfy our requirement:

System.setProperty("ignite.zookeeper.host",
resolvePropertyFromCentralizedRepo);
Caching.getCachingProvider(...).getCacheManager(springCfgUrl,
someClassLoader, someProps);

And then use the PropertyPlaceholderConfigurer and a placeholder in the xml
as you suggested.
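
In code the whole approach looks roughly like this (the property name must
match the XML placeholder; the config URI and the lookup helper are
placeholders of my own, not real API from this thread):

```java
// 1) Resolve the value from the centralized repository (hypothetical helper)
//    and publish it as a system property.
System.setProperty("ignite.zookeeper.host", resolvePropertyFromCentralizedRepo());

// 2) Spring's PropertyPlaceholderConfigurer, declared inside the Ignite Spring
//    XML, substitutes ${ignite.zookeeper.host} while the JCache provider
//    bootstraps the node from that XML.
CacheManager mgr = Caching.getCachingProvider().getCacheManager(
    URI.create("file:///opt/app/ignite-config.xml"),   // springCfgUrl (illustrative)
    Thread.currentThread().getContextClassLoader());
```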


On Wed, Sep 28, 2016 at 8:32 AM, Josh Cummings 
wrote:

> Okay, thanks, I'll try that. Along similar lines, is there a way for me to
> bring up an Ignite client, say programmatically, give it a name, and then
> send just the grid name through the JCache API?
>
> Not trying to make Ignite behave just like Hazelcast, but just to make my
> thoughts clear with an example, there I can do this:
>
> Properties properties = new Properties();
> properties.setProperty("hazelcast.instance.name", "my-instance-name");
> Caching.getCachingProvider().getCacheManager(someUri, someClassLoader,
> properties);
>
> And it will pick up a client I've already configured elsewhere in the
> runtime by the name of "my-instance-name".
>
> Is there a way to identify an already configured client through the JCache
> API?
>
> Thanks, again.
>
>
>
> On Tue, Sep 27, 2016 at 4:40 PM, vkulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>> Hi Josh,
>>
>> Ignite uses Spring for configuration, so you can utilize Spring property
>> placeholder.You can find a nice example here:
>> https://www.mkyong.com/spring/spring-propertyplaceholderconf
>> igurer-example/
>>
>> You can also use one of the shared IP finders provided by Ignite. Shared
>> here means that each node that joins the topology will leave its
>> coordinates
>> in some shared storage, so that other nodes always know where to connect.
>> With this approach you will not have to explicitly specify addresses at
>> all.
>> Here is the list of all available IP finders (only static IP based is not
>> shared): https://apacheignite.readme.io/docs/cluster-config
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Getting-a-cluster-configured-JCache-CacheMa
>> nager-instance-tp7970p7978.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
>
> *JOSH CUMMINGS*
>
> Principal Engineer
>
> [image: Workfront] 
>
> *O*  801.477.1234  |  *M*  8015562751
>
> joshcummi...@workfront.com | www.workfront.com
> Address   |  Twitter
>   |  LinkedIn
>   |  Facebook
> 
>
> [image: Workfront] 
>



-- 

*JOSH CUMMINGS*

Principal Engineer

*O*  801.477.1234  |  *M*  8015562751

joshcummi...@workfront.com | www.workfront.com


Re: MapReduce with Apache-Ignite

2016-09-29 Thread lalit
Regarding the first point about mapper output - I meant that the mapper
output will not be in memory; so does the I/O operation on the local
filesystem on the mapper side still exist?

Regarding the second point - I tried running the job using the Hadoop
accelerator; I want to locate the logs for the job ID and for the mapper
and reducer tasks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/MapReduce-with-Apache-Ignite-tp8007p8018.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: MapReduce with Apache-Ignite

2016-09-29 Thread Vladimir Ozerov
Hi,

1. Mapper output should be written to the same place as if the job was run
through the native Apache Hadoop engine. The Apache Ignite Hadoop Accelerator
can work without IGFS at all.
2. Currently you will not see jobs in the Resource Manager because they are
executed through a separate engine. We will improve this in the future. Please
clarify which logs you mean.

Vladimir.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/MapReduce-with-Apache-Ignite-tp8007p8016.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Getting Error [grid-timeout-worker] when running join query on Single Ignite Node

2016-09-29 Thread Taras Ledkov

Hi, Manish

- Do you use the 'index=true' parameter of the @QuerySqlField annotation 
(default false)?


- Please use the

SqlFieldsQuery.setLocal(true);

if the query is executed on a single-node topology or all data are 
available locally. In this case the map/reduce step is skipped.
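
For illustration, a local fields query might look like this (the cache and
SQL below are made up for the example; `cache` is an existing IgniteCache):

```java
// setLocal(true) restricts execution to data held on this node, so the
// two-step distributed (map/reduce) query machinery is skipped entirely.
SqlFieldsQuery qry = new SqlFieldsQuery(
    "select p.firstName, p.age from Person p where p.salary > ?")
    .setArgs(100_000)
    .setLocal(true);

List<List<?>> rows = cache.query(qry).getAll();
System.out.println("rows: " + rows.size());
```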



On 29.09.2016 13:15, Manish Mishra wrote:

Hi,

I am populating three caches running on a single Ignite node (version 
1.5.27) with a 4 GB heap (configured as JVM_OPTS="-server -Xms4g 
-Xmx4g") with fewer than 100k records in each. I'm performing a join 
query on them, but the query takes a very long time and I get the 
following output (I don't know whether it is an ERROR or just INFO):


[09:47:18,720][INFO][disco-event-worker-#96%null%][GridDiscoveryManager] 
Added new node to topology: TcpDiscoveryNode 
[id=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d, addrs=[0:0:0:0:0:0:0:1%lo, 
10.178.148.8, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, 
/127.0.0.1:0 , 
elssie-gridgain2.internal/10.178.148.8:0 ], 
discPort=0, order=8, intOrder=5, lastExchangeTime=1475142438712, 
loc=false, ver=1.5.27#20160624-sha1:0fe713ae, isClient=true]
[09:47:18,721][INFO][disco-event-worker-#96%null%][GridDiscoveryManager] 
Topology snapshot [ver=8, servers=1, clients=1, CPUs=16, heap=5.8GB]
[09:47:18,729][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager] 
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
[topVer=8, minorTopVer=0], evt=NODE_JOINED, 
node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
[09:47:19,279][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager] 
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
[topVer=8, minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT, 
node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
[09:47:19,345][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager] 
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
[topVer=8, minorTopVer=2], evt=DISCOVERY_CUSTOM_EVT, 
node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
[09:47:19,376][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager] 
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
[topVer=8, minorTopVer=3], evt=DISCOVERY_CUSTOM_EVT, 
node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
[09:47:25,611][INFO][disco-event-worker-#96%null%][GridDiscoveryManager] 
Node left topology: TcpDiscoveryNode 
[id=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d, addrs=[0:0:0:0:0:0:0:1%lo, 
10.178.148.8, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, 
/127.0.0.1:0 , 
elssie-gridgain2.internal/10.178.148.8:0 ], 
discPort=0, order=8, intOrder=5, lastExchangeTime=1475142438712, 
loc=false, ver=1.5.27#20160624-sha1:0fe713ae, isClient=true]
[09:47:25,612][INFO][disco-event-worker-#96%null%][GridDiscoveryManager] 
Topology snapshot [ver=9, servers=1, clients=0, CPUs=16, heap=4.0GB]
[09:47:25,621][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager] 
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
[topVer=9, minorTopVer=0], evt=NODE_LEFT, 
node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
[09:47:47,234][INFO][disco-event-worker-#96%null%][GridDiscoveryManager] 
Added new node to topology: TcpDiscoveryNode 
[id=448da668-5262-46bb-951a-c6122543882a, addrs=[0:0:0:0:0:0:0:1%lo, 
10.178.148.8, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, 
/127.0.0.1:0 , 
elssie-gridgain2.internal/10.178.148.8:0 ], 
discPort=0, order=10, intOrder=6, lastExchangeTime=1475142467220, 
loc=false, ver=1.5.27#20160624-sha1:0fe713ae, isClient=true]
[09:47:47,235][INFO][disco-event-worker-#96%null%][GridDiscoveryManager] 
Topology snapshot [ver=10, servers=1, clients=1, CPUs=16, heap=5.8GB]
[09:47:47,243][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager] 
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
[topVer=10, minorTopVer=0], evt=NODE_JOINED, 
node=448da668-5262-46bb-951a-c6122543882a]
[09:47:47,861][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager] 
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
[topVer=10, minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT, 
node=448da668-5262-46bb-951a-c6122543882a]
[09:47:47,922][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager] 
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
[topVer=10, minorTopVer=2], evt=DISCOVERY_CUSTOM_EVT, 
node=448da668-5262-46bb-951a-c6122543882a]
[09:47:47,955][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager] 
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion 
[topVer=10, minorTopVer=3], evt=DISCOVERY_CUSTOM_EVT, 
node=448da668-5262-46bb-951a-c6122543882a]

[09:48:04,885][INFO][grid-timeout-worker-#81%null%][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=6caab193, name=null]
^-- H/N/C [hosts=1, nodes=2, CPU

Re: Data Streamer error

2016-09-29 Thread matt
Still having similar problems. I've tried at least 3 different serialization
methods for the addData message (the latest is a POJO (Serializable) with 3
String fields). Here's the latest error message: http://pastebin.com/b2awykDy



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Data-Streamer-error-tp7725p8013.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite zeppelin jdbc connection

2016-09-29 Thread vdpyatkov
Hi,

This can happen if you change data in the database (for example, using a SQL
update) after it has been loaded into the Ignite cache.
Ignite does not load data that has already been loaded (identification is by
key).

If you use setReadThrough and setWriteThrough, you need to modify data
through the cache only.
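
In other words, once read-/write-through is enabled, all updates should go
through the cache API (a sketch; the cache and variables below are
illustrative):

```java
// Write through the cache: Ignite invokes CacheStore.write(), so the cache
// entry and the database row stay consistent.
customerCache.put(customerId, updatedCustomer);

// By contrast, an out-of-band SQL UPDATE on TB_CUSTOMER leaves an
// already-loaded cache entry stale, because Ignite will not re-load a key
// it already holds.
```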


junyoung.kang wrote
> My mistake, sorry.
> 
> my load() function is
> 
> @Override
> public TbCustomer load(Integer customerId) throws CacheLoaderException
> {
> 
> System.out.println(" load data key =" + customerId);
> 
> try {
> Connection connection = dataSource.getConnection();
> 
> PreparedStatement st = connection.prepareStatement("select *
> from TB_CUSTOMER where CUSTOMER_ID=?");
> st.setString(1, customerId.toString());
> 
> ResultSet rs = st.executeQuery();
> 
> return rs.next() ? new TbCustomer(rs.getInt(1),
> rs.getString("EXTRA_CUSTOMER_NAME"),
> rs.getString("MOBILE_PHONE"), rs.getString("EMAIL"),
> Integer.valueOf(rs.getString("DEL_FLAG"))) : null;
> 
> } catch (SQLException e) {
> e.printStackTrace();
> throw new CacheLoaderException("Failed to load customer id ="
> + customerId, e);
> }
> }
> 
> 
> I will retry the test; I want to get the value through the database.
> 
> In my case (when the value is not in the cache), I execute a SQL query on
> this cache, like this:
> 
> String query = "select * from TbCustomer where delFlag=?";
> System.out.println(query);
> QueryCursor<List<?>> results = customerCache.query(new
> SqlFieldsQuery(query).setArgs(0));
> Util.print("simple Query : select * from TbCustomer where
> delFlag=0 ", results.getAll());
> 
> In this case, I can't get all the values that exist in the database (the
> database query result != the SQL result above).
> 
> However, when I invoke the get() method, I do get the value (not present
> in the cache but present in the database).
> 
> Thank you for the reply.





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-zeppelin-jdbc-connection-tp7898p8012.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: AtomicSequence not working when shutting down one server node from a cluster

2016-09-29 Thread hitansu
Thanks, the issue got resolved. While starting the client I was not supplying
the configuration file; setting the configuration file on the client fixed it.

But I still have one doubt. If I run 2-3 clients at the same time, why do the
ids generated by each client differ by 1000? That is, the 1st client produces
ids from 1, the 2nd from 1001, and the 3rd from 2001. This way it is not a
sequential id generator. Is there any configuration for that, so that the ids
form a single sequence across the cluster?
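The gap of 1000 comes from sequence batch reservation: each node reserves a whole block of ids from the cluster-wide counter (1000 is the default reserve size) and then serves increments locally. A toy model of that behaviour in plain Java (not the Ignite API; all names here are illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of IgniteAtomicSequence batch reservation: each node reserves a
// block of ids from the shared counter and hands them out locally, so ids are
// unique cluster-wide but not contiguous across nodes.
public class SequenceReserveModel {
    private final AtomicLong global;   // stands in for the cluster-wide counter
    private final int reserveSize;     // Ignite's default reserve size is 1000
    private long next, upper;

    public SequenceReserveModel(AtomicLong global, int reserveSize) {
        this.global = global;
        this.reserveSize = reserveSize;
        reserve();
    }

    private void reserve() {
        // one coordinated step claims the whole local range [next, upper)
        next = global.getAndAdd(reserveSize);
        upper = next + reserveSize;
    }

    public long incrementAndGet() {
        if (next >= upper)
            reserve();    // local block exhausted: reserve the next one
        return ++next;    // every other call is served locally, no coordination
    }

    public static void main(String[] args) {
        AtomicLong cluster = new AtomicLong(0);
        SequenceReserveModel client1 = new SequenceReserveModel(cluster, 1000);
        SequenceReserveModel client2 = new SequenceReserveModel(cluster, 1000);
        System.out.println(client1.incrementAndGet()); // 1
        System.out.println(client2.incrementAndGet()); // 1001 -- the observed gap
    }
}
```

In Ignite the reserve size is configurable via AtomicConfiguration.setAtomicSequenceReserveSize(...); a smaller value shrinks the gaps at the cost of more network round-trips, and strictly sequential ids would require coordinating on every increment.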



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/AtomicSequence-not-working-when-shutting-down-one-server-node-from-a-cluster-tp7770p8008.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


MapReduce with Apache-Ignite

2016-09-29 Thread lalit
Hi Team,

Please help me with the following questions:
1. When we run MapReduce with Ignite, is the mapper output still written to
the local filesystem rather than to IGFS?
2. When I submit a MapReduce job following the instructions in the Apache
Ignite documentation available online, I cannot see the job listed in the
Resource Manager. Where can I see the logs of the job?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/MapReduce-with-Apache-Ignite-tp8007.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.




Getting Error [grid-timeout-worker] when running join query on Single Ignite Node

2016-09-29 Thread Manish Mishra
Hi,

I am populating three caches on a single Ignite node (version 1.5.27) with a
4 GB heap (configured as JVM_OPTS="-server -Xms4g -Xmx4g") and fewer than
100k records in each. I am performing a join query on them, but the query
takes an extremely long time, and I get the following output (I am not sure
whether it is an ERROR or just INFO):

[09:47:18,720][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
Added new node to topology: TcpDiscoveryNode
[id=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d, addrs=[0:0:0:0:0:0:0:1%lo,
10.178.148.8, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
elssie-gridgain2.internal/10.178.148.8:0], discPort=0, order=8, intOrder=5,
lastExchangeTime=1475142438712, loc=false,
ver=1.5.27#20160624-sha1:0fe713ae, isClient=true]
[09:47:18,721][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
Topology snapshot [ver=8, servers=1, clients=1, CPUs=16, heap=5.8GB]
[09:47:18,729][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=8, minorTopVer=0], evt=NODE_JOINED,
node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
[09:47:19,279][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=8, minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT,
node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
[09:47:19,345][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=8, minorTopVer=2], evt=DISCOVERY_CUSTOM_EVT,
node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
[09:47:19,376][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=8, minorTopVer=3], evt=DISCOVERY_CUSTOM_EVT,
node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
[09:47:25,611][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
Node left topology: TcpDiscoveryNode
[id=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d, addrs=[0:0:0:0:0:0:0:1%lo,
10.178.148.8, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
elssie-gridgain2.internal/10.178.148.8:0], discPort=0, order=8, intOrder=5,
lastExchangeTime=1475142438712, loc=false,
ver=1.5.27#20160624-sha1:0fe713ae, isClient=true]
[09:47:25,612][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
Topology snapshot [ver=9, servers=1, clients=0, CPUs=16, heap=4.0GB]
[09:47:25,621][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=9, minorTopVer=0], evt=NODE_LEFT,
node=ea2b3ca3-f5d0-45bd-adb3-58d46bc85b7d]
[09:47:47,234][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
Added new node to topology: TcpDiscoveryNode
[id=448da668-5262-46bb-951a-c6122543882a, addrs=[0:0:0:0:0:0:0:1%lo,
10.178.148.8, 127.0.0.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
elssie-gridgain2.internal/10.178.148.8:0], discPort=0, order=10,
intOrder=6, lastExchangeTime=1475142467220, loc=false,
ver=1.5.27#20160624-sha1:0fe713ae, isClient=true]
[09:47:47,235][INFO][disco-event-worker-#96%null%][GridDiscoveryManager]
Topology snapshot [ver=10, servers=1, clients=1, CPUs=16, heap=5.8GB]
[09:47:47,243][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=10, minorTopVer=0], evt=NODE_JOINED,
node=448da668-5262-46bb-951a-c6122543882a]
[09:47:47,861][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=10, minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT,
node=448da668-5262-46bb-951a-c6122543882a]
[09:47:47,922][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=10, minorTopVer=2], evt=DISCOVERY_CUSTOM_EVT,
node=448da668-5262-46bb-951a-c6122543882a]
[09:47:47,955][INFO][exchange-worker-#99%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=10, minorTopVer=3], evt=DISCOVERY_CUSTOM_EVT,
node=448da668-5262-46bb-951a-c6122543882a]
[09:48:04,885][INFO][grid-timeout-worker-#81%null%][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=6caab193, name=null]
^-- H/N/C [hosts=1, nodes=2, CPUs=16]
^-- CPU [cur=0.03%, avg=3.31%, GC=0%]
^-- Heap [used=1219MB, free=70.23%, comm=4095MB]
^-- Public thread pool [active=0, idle=32, qSize=0]
^-- System thread pool [active=0, idle=32, qSize=0]
^-- Outbound messages queue [size=0]
[09:49:04,879][INFO][grid-timeout-worker-#81%null%][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=6caab193, name=null]
^-- H/N/C [hosts=1, nodes=2, CPUs=16]
^-- CPU [cur=0%, avg=3.23%, GC=0%]
^-- Heap [used