Re: Fetched result set too large

2017-09-19 Thread Lucky
I see.
Thanks a lot!

On 2017-09-19 13:49, slava.koptilin wrote:
Hi Lucky,

You just need to start the grid with the -DIGNITE_SQL_MERGE_TABLE_MAX_SIZE=3
JVM flag.

Thanks,
Slava.
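
If editing the launch scripts is inconvenient, the same system property can
be set programmatically before the node starts. A minimal sketch (the config
path is hypothetical):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class StartWithMergeTableLimit {
    public static void main(String[] args) {
        // The property is read at startup, so set it before the node starts.
        System.setProperty("IGNITE_SQL_MERGE_TABLE_MAX_SIZE", "3");

        try (Ignite ignite = Ignition.start("default-config.xml")) {
            // ... run SQL queries as usual ...
        }
    }
}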





Re: work around for problem where ignite query does not include objects added into Cache from within a transaction

2017-09-19 Thread kotamrajuyashasvi
Hi,

Thanks for your responses. I just wanted a temporary workaround until the
actual feature is implemented, even at the cost of performance. I have
thought of another approach to this problem.

I plan to use my own transaction mechanism instead of Ignite transactions: I
can use explicit locks to lock the keys and implement my own commit and
rollback functionality. In the primary-key POJO class, along with the PK
fields, I can add a field 'transaction_id' that holds the id (some unique id)
of the transaction in which the row was inserted; by default it is null. I do
not perform update, delete, or insert directly through queries. For update
and delete, I first run a select query to get the keys that should be
deleted/updated. Then I lock the keys using explicit locks, and I also check
whether I had to wait to acquire each lock or got it immediately. If I had to
wait, I rerun the select query, since keys that were not yet locked might no
longer be eligible for the select query or might have changed their values.
Before locking the keys I make a few checks:
(a) If the key's transaction_id is not null and differs from the present
transaction id, I discard the key and continue with the remaining keys (it
belongs to some other transaction).
(b) If the key is already present within the transaction (i.e. the same key
fields except the transaction_id), I discard it, because that row has been
updated within the transaction and I need to use the updated value of the PK
if it is eligible for the query result.
(c) I maintain a HashMap of the old cache rows I have just locked, and a
HashMap of the new rows that were updated or inserted with transaction_id set
to the current transaction id. I also maintain a delete HashSet of keys to be
deleted within the transaction: whenever a delete request comes, I do not
delete directly; I first push the key into the delete HashSet, and during
commit I delete those keys. During select, if a key is present in the delete
HashSet, I discard it. If the key is present in the oldcacherows or
newcacherows HashMap, I do not acquire locks.

Most of these checks after the select query take constant time (O(1)), since
I use HashMaps and HashSets. Rerunning the query will degrade performance,
but it should not happen frequently.
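
A minimal Java sketch of the lock-acquisition step described above, assuming
a TRANSACTIONAL cache (all class and method names are hypothetical):

import java.util.List;
import java.util.concurrent.locks.Lock;

import org.apache.ignite.IgniteCache;

public class ExplicitLocking {
    // Acquires explicit per-key locks and reports whether any lock was
    // contended. If it was, the caller reruns the select query, because rows
    // may have changed while this thread was blocked.
    public static <K> boolean lockAll(IgniteCache<K, ?> cache,
        Iterable<K> keys, List<Lock> acquired) {
        boolean hadToWait = false;

        for (K key : keys) {
            Lock lock = cache.lock(key); // requires a TRANSACTIONAL cache

            if (!lock.tryLock()) { // could not get the lock immediately
                lock.lock();       // block until it is free
                hadToWait = true;
            }

            acquired.add(lock);    // released during commit/rollback
        }

        return hadToWait;          // true => rerun the select query
    }
}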

Once I obtain the locks: for a delete I just push the key into a HashSet. For
an update I update the value for the given key accordingly and put it in the
cache, but with the key's transaction_id field set to the current transaction
id; hence the old row still exists and is still visible to other
transactions. I also put this row into the newcacherows HashMap. During an
insert I likewise put the new row into the cache with the key's
transaction_id set to the present transaction id, so it will be ignored by
other transactions, and I put the inserted row into the newcacherows HashMap.
During commit I put all the rows in newcacherows into the cache with
transaction_id set to null, replacing the old rows and any new rows inserted
into the cache; I also delete any rows still carrying the current transaction
id, and delete the rows in the delete HashSet. During rollback I just remove
the temporary rows inserted during the transaction, since rows are not
actually inserted or deleted until their transaction_id is null. I also
maintain a list of the locks I acquired, and during commit/rollback I release
them all.

I also have a workaround for the problem where a select query might return
partial results of a commit. The solution is to maintain a commit_bit in the
row object, which is 0 by default. While inserting into the cache during
commit, set commit_bit to 1 and insert; after the commit, i.e. once all rows
are inserted into the cache, update the commit_bit back to 0 for all these
rows. During select, one additional check is then made:
(d) If any row has commit_bit set to 1, it indicates that the row was
inserted in the middle of a commit, so additional rows that change the query
result might still be inserted/updated; rerun the query until no row has
commit_bit set to 1.
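
A rough Java sketch of the commit step just described; Key and Row are
hypothetical stand-ins for the PK POJO and value classes named above:

import java.util.Map;

import org.apache.ignite.IgniteCache;

public class CommitSketch {
    // Hypothetical PK POJO: the real PK fields plus the transaction_id marker.
    static class Key {
        final String pk;
        final String transactionId; // null once the row is committed

        Key(String pk, String transactionId) {
            this.pk = pk;
            this.transactionId = transactionId;
        }

        Key withTransactionId(String txId) {
            return new Key(pk, txId);
        }
        // equals()/hashCode() over both fields omitted for brevity.
    }

    // Hypothetical value object carrying the commit_bit flag.
    static class Row {
        int commitBit;
    }

    // Publish pending rows under their final keys (transaction_id = null)
    // with commit_bit = 1, then clear the bit once every row is in place, so
    // concurrent selects can detect and retry around a half-finished commit.
    static void commit(IgniteCache<Key, Row> cache, Map<Key, Row> newCacheRows) {
        for (Map.Entry<Key, Row> e : newCacheRows.entrySet()) {
            e.getValue().commitBit = 1;
            cache.put(e.getKey().withTransactionId(null), e.getValue());
        }

        for (Map.Entry<Key, Row> e : newCacheRows.entrySet()) {
            e.getValue().commitBit = 0;
            cache.put(e.getKey().withTransactionId(null), e.getValue());
        }
    }
}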





Ignite YARN deployment mode issues

2017-09-19 Thread Ray
I'm using Ignite 2.1; my Ignite config XML is as follows.

[configuration XML not preserved by the mailing-list archive]
In standalone mode, Ignite can read the log4j2.xml in the local directory
and works OK.
When deployed in YARN, Ignite says it can't find log4j2.xml, with a bunch of
Spring exceptions.
My cluster.properties is as follows 

IGNITE_NODE_COUNT=6
IGNITE_RUN_CPU_PER_NODE=6
IGNITE_MEMORY_PER_NODE=3
IGNITE_VERSION=2.1.0
IGNITE_PATH=/***/apache-ignite-fabric-2.1.0.zip
IGNITE_XML_CONFIG=/***/ignite-config/default-config.xml
IGNITE_USERS_LIBS=/***/ignite-libs/

I tried putting the log4j2.xml file under IGNITE_USERS_LIBS; it's still not
working.

Another question is how to activate Ignite with the persistent store enabled
in YARN mode.
I tried running ./control.sh --host ignitenode --port 11211 --activate
and the log says:
Sep 19, 2017 6:25:27 AM
org.apache.ignite.internal.client.impl.GridClientImpl <init>
WARNING: Failed to initialize topology on client start. Will retry in
background.
Sep 19, 2017 6:25:27 AM
org.apache.ignite.internal.client.impl.GridClientImpl <init>
INFO: Client started [id=5f00be2b-7679-46e7-9f8e-3435f7f1d759, protocol=TCP]
Something fail during activation, exception message: Latest topology update
failed.
Sep 19, 2017 6:25:27 AM
org.apache.ignite.internal.client.impl.GridClientImpl stop
INFO: Client stopped [id=5f00be2b-7679-46e7-9f8e-3435f7f1d759,
waitCompletion=true]
And there's nothing in the log on the server node side.
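
For what it's worth, activation can also be triggered programmatically from a
client node once the servers are up; a minimal sketch against the 2.1-era
Ignite#active(boolean) API (the config path is hypothetical):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ActivateCluster {
    public static void main(String[] args) {
        Ignition.setClientMode(true); // join as a client, not a server

        try (Ignite ignite = Ignition.start("default-config.xml")) {
            // Activates the whole cluster; required once after a restart
            // when the persistent store is enabled.
            ignite.active(true);
        }
    }
}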





Re: Ignite YARN deployment mode issues

2017-09-19 Thread Ray
Figured the second question out myself.
The node I'm running ./control.sh on does not seem to have an Ignite grid
started. Here's the YARN log.



[08:09:30,866][SEVERE][main][IgniteKernal] Exception during start
processors, node will be stopped and close connections
class org.apache.ignite.IgniteException:
/yarn/nm/usercache/root/appcache/application_1505700561210_0016/container_1505700561210_0016_01_76/ignite/apache-ignite-fabric-2.1.0-bin/work/db/10_29_42_49_127_0_0_1_47500/lock
(Permission denied)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$FileLockHolder.<init>(GridCacheDatabaseSharedManager.java:2931)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$FileLockHolder.<init>(GridCacheDatabaseSharedManager.java:2899)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.start0(GridCacheDatabaseSharedManager.java:374)
at
org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter.start(GridCacheSharedManagerAdapter.java:61)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.start(GridCacheProcessor.java:696)
at
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1788)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:929)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1896)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1648)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1076)
at
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:994)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:880)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:779)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:649)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:618)
at org.apache.ignite.Ignition.start(Ignition.java:347)
at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
Caused by: java.io.FileNotFoundException:
/yarn/nm/usercache/root/appcache/application_1505700561210_0016/container_1505700561210_0016_01_76/ignite/apache-ignite-fabric-2.1.0-bin/work/db/10_29_42_49_127_0_0_1_47500/lock
(Permission denied)
at java.io.RandomAccessFile.open0(Native Method)
at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$FileLockHolder.<init>(GridCacheDatabaseSharedManager.java:2925)
... 16 more
[08:09:30,868][SEVERE][main][IgniteKernal] Got exception while starting
(will rollback startup routine).
class org.apache.ignite.IgniteException:
/yarn/nm/usercache/root/appcache/application_1505700561210_0016/container_1505700561210_0016_01_76/ignite/apache-ignite-fabric-2.1.0-bin/work/db/10_29_42_49_127_0_0_1_47500/lock
(Permission denied)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$FileLockHolder.<init>(GridCacheDatabaseSharedManager.java:2931)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$FileLockHolder.<init>(GridCacheDatabaseSharedManager.java:2899)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.start0(GridCacheDatabaseSharedManager.java:374)
at
org.apache.ignite.internal.processors.cache.GridCacheSharedManagerAdapter.start(GridCacheSharedManagerAdapter.java:61)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.start(GridCacheProcessor.java:696)
at
org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1788)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:929)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1896)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1648)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1076)
at
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:994)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:880)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:779)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:649)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:618)
at org.apache.ignite.Ignition.start(Ignition.java:347)
at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
Caused by: java.io.FileNotFoundException:
/yarn/nm/usercache/root/appcache/application_1505700561210_0016/container_1505700561210_0016_01_76

Exception use affinity key when put #invoke from Client.

2017-09-19 Thread aa...@tophold.com
hi all, 

We are using an affinity key to keep related cache entries collocated; the
key is a very simple class:

class AssetKey {
    @AffinityKeyMapped
    private String accountId;

    private String transId;
}


But every time we run this command from the client:

ignite.cache(AssetEntry.IG_CACHE_NAME).invoke(new
AssetKey(..), (entry, arguments) -> { .. });

An exception is always thrown:

Caused by: class org.apache.ignite.binary.BinaryObjectException: Binary type 
has different affinity key fields [typeName=AssetKey, affKeyFieldName1=null, 
affKeyFieldName2=accountId]
at 
org.apache.ignite.internal.binary.BinaryUtils.mergeMetadata(BinaryUtils.java:950)
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.addMeta(CacheObjectBinaryProcessorImpl.java:430)
at 
org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$2.addMeta(CacheObjectBinaryProcessorImpl.java:173)

For the Ignite configuration we put this:

[configuration XML not preserved by the mailing-list archive]
Is there any configuration we missed? Thanks for your time!



Regards
Aaron


aa...@tophold.com


Re: Existing queue can't be accessed on client node

2017-09-19 Thread Mikhail
Hi again,

I filed a bug about the issue you described:

https://issues.apache.org/jira/browse/IGNITE-6437

Thanks,
Mike.





Re: Exception use affinity key when put #invoke from Client.

2017-09-19 Thread Denis Mekhanikov
Hi Aaron!

Is it possible that you wrote a value to the cache before configuring the
affinity key?
This exception occurs when the affinity key configuration of a supplied key
doesn't match the configuration that is stored in the cache.
Maybe you have persistence enabled and you wrote the value before adding
@AffinityKeyMapped to the field of the AssetKey class?
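
One way to declare the affinity field explicitly, so that client and server
nodes register identical binary metadata for the key type, is
CacheKeyConfiguration. A minimal sketch (the cache name and type name are
hypothetical placeholders for the real ones):

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheKeyConfiguration;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class AffinityKeyConfig {
    public static void main(String[] args) {
        // The type name must match the binary type name of the key class
        // (typically its fully-qualified name).
        CacheKeyConfiguration keyCfg =
            new CacheKeyConfiguration("AssetKey", "accountId");

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setCacheKeyConfiguration(keyCfg)
            .setCacheConfiguration(new CacheConfiguration<>("assetCache"));

        Ignition.start(cfg);
    }
}

Use the same configuration on every node so the metadata can't diverge.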

Denis

Tue, Sep 19, 2017 at 13:10, aa...@tophold.com:

> hi all,
>
> [rest of the quoted message trimmed; see the original post above]


Conflicting cross-version suffixes in: org.scalatest:scalatest, com.twitter:chill, org.apache.spark:spark-unsafe, org.apache.spark:spark-tags

2017-09-19 Thread pradeepchanumolu
I am hitting this error when I add the ignite-spark 2.2.0 artifact to my
project.

[error] Modules were resolved with conflicting cross-version suffixes in
{file://ignite-poc/}ignite-poc:
[error]    org.scalatest:scalatest _2.11, _2.10
[error]    com.twitter:chill _2.11, _2.10
[error]    org.apache.spark:spark-unsafe _2.11, _2.10
[error]    org.apache.spark:spark-tags _2.11, _2.10
java.lang.RuntimeException: Conflicting cross-version suffixes in:
org.scalatest:scalatest, com.twitter:chill, org.apache.spark:spark-unsafe,
org.apache.spark:spark-tags
at scala.sys.package$.error(package.scala:27)
at
sbt.ConflictWarning$.processCrossVersioned(ConflictWarning.scala:46)
at sbt.ConflictWarning$.apply(ConflictWarning.scala:32)
at sbt.Classpaths$$anonfun$100.apply(Defaults.scala:1300)
at sbt.Classpaths$$anonfun$100.apply(Defaults.scala:1297)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at
sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at
sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at
sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:237)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
at
sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
[error] (*:update) Conflicting cross-version suffixes in:
org.scalatest:scalatest, com.twitter:chill, org.apache.spark:spark-unsafe,
org.apache.spark:spark-tags


After looking at the dependency graph, it looks like ignite-spark depends on
both spark-tags_2.10 and spark-tags_2.11.

[info] | +-org.apache.spark:spark-tags_2.11:2.1.0
[info] | | +-org.scalatest:scalatest_2.11:2.2.6 [S]
[info] | | | +-org.scala-lang.modules:scala-xml_2.11:1.0.2 [S] (evicted
by: 1.0.6)
[info] | | | +-org.scala-lang.modules:scala-xml_2.11:1.0.6 [S]
[info] | | | +-org.scala-lang:scala-reflect:2.11.8 [S]
[info] | | |
[info] | | +-org.spark-project.spark:unused:1.0.0
[info] | |
[info] | +-org.spark-project.spark:unused:1.0.0
[info] |
[info] +-org.apache.spark:spark-tags_2.11:2.1.0
[info] | +-org.scalatest:scalatest_2.11:2.2.6 [S]
[info] | | +-org.scala-lang.modules:scala-xml_2.11:1.0.2 [S] (evicted
by: 1.0.6)
[info] | | +-org.scala-lang.modules:scala-xml_2.11:1.0.6 [S]
[info] | | +-org.scala-lang:scala-reflect:2.11.8 [S]
[info] | |
[info] | +-org.spark-project.spark:unused:1.0.0
[info] |
[info] +-org.apache.spark:spark-unsafe_2.10:2.1.0
[info] | +-com.google.code.findbugs:jsr305:1.3.9
[info] | +-com.twitter:chill_2.10:0.8.0 [S]
[info] | | +-com.esotericsoftware:kryo-shaded:3.0.3
[info] | | | +-com.esotericsoftware:minlog:1.3.0
[info] | | | +-org.objenesis:objenesis:2.1
[info] | | |
[info] | | +-com.twitter:chill-java:0.8.0
[info] | |   +-com.esotericsoftware:kryo-shaded:3.0.3
[info] | | +-com.esotericsoftware:minlog:1.3.0
[info] | | +-org.objenesis:objenesis:2.1
[info] | |
[info] | +-org.apache.spark:spark-tags_2.10:2.1.0
[info] | | +-org.scalatest:scalatest_2.10:2.2.6 [S]
[info] | | | +-org.scala-lang:scala-reflect:2.11.8 [S]
[info] | | |
[info] | | +-org.spark-project.spark:unused:1.0.0
[info] | |
[info] | +-org.spark-project.spark:unused:1.0.0

Can someone look into this problem?



Does Ignite write READ operations of Txs to WAL?

2017-09-19 Thread John Wilson
Hi,

Does Ignite write the READ operations of transactions (e.g. for future
auditing purposes) to the WAL?

Thanks,


Re: Does Ignite write READ operations of Txs to WAL?

2017-09-19 Thread vkulichenko
No, reads are not appended to the WAL. It is designed for recovery, not for
auditing purposes.

-Val





Re: Conflicting cross-version suffixes in: org.scalatest:scalatest, com.twitter:chill, org.apache.spark:spark-unsafe, org.apache.spark:spark-tags

2017-09-19 Thread vkulichenko
Looks like you are using Scala 2.10 and the corresponding Spark libraries. In
that case you should use the 'ignite-spark_2.10' Ignite module instead of
just 'ignite-spark'.

-Val





CacheStoreAdapter#loadAll and ContinuousQuery

2017-09-19 Thread matt
I've got an implementation of CacheStoreAdapter that appears to be working
(it's persisting items, etc.). I also have a ContinuousQuery set up, with an
initial query that runs after the impl's loadAll(). Before I started using my
own impl of CacheStoreAdapter, the ContinuousQuery worked as expected, but
now with my impl it doesn't: the initial query cursor never yields anything.
Is there something I'm missing to make these two components work together
properly? Anything special I need to do with the impl or config?

Thanks,
- Matt
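
For reference, a minimal sketch of the setup being described (cache name and
types are hypothetical). Note that the initial query only returns entries
that are already in the cache; rows that live only in the underlying store
won't show up unless they were loaded first, e.g. via loadCache() or
read-through:

import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ContinuousQuerySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache("items");

            ContinuousQuery<Long, String> qry = new ContinuousQuery<>();

            // Initial query: returns what is in the cache right now.
            qry.setInitialQuery(new ScanQuery<>());

            // Local listener for subsequent updates.
            qry.setLocalListener(events -> events.forEach(e ->
                System.out.println("updated: " + e.getKey() + " -> " + e.getValue())));

            // Note: closing the cursor cancels the continuous query; real
            // code keeps it open for as long as updates are needed.
            try (QueryCursor<Cache.Entry<Long, String>> cur = cache.query(qry)) {
                for (Cache.Entry<Long, String> e : cur)
                    System.out.println("initial: " + e.getKey() + " -> " + e.getValue());
            }
        }
    }
}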





Re: REST API response json is empty.

2017-09-19 Thread ANJANEYA PRASAD NIDUBROLU
Any luck with my query? What am I missing? Why is the REST response blank
even though the cache has data? I tried a scan in Visor and I can see the
data there.

Thanks,
Anji.
On 19 Sep 2017 00:26, "ANJANEYA PRASAD NIDUBROLU" 
wrote:

> Hello All,
>
> Hope you are doing great!
>
> I have tried Ignite's REST API via Postman. It is not throwing any errors,
> but the response json's value part has nothing in it.
>
> Here I am pasting cache config (piece of xml file), bean class and main
> class where I am saving the sparkRDD to cache. Also, the attached document
> has REST requests and responses along with respective logs.
>
> As the Spark RDD/DF I am using has many columns, I have created a Scala
> bean class so that I can save it to the IgniteCache as key-value pairs.
>
> Ignite server and clients are able to talk to each other. Cache is created
> and loaded successfully.
> So far so good; the trouble started when I tried to query from the REST API
> (the attached notepad has the REST APIs I tried to test and their responses).
>
> 1) Though the bean class I created has 8 columns, the cache created has
> only 7 columns; what happened to the final one? [Even the "*cache -c=<> -scan*"
> command from "*visor*" shows 7 columns.]
> 2) The REST API response says success, but the response JSON's value
> part is empty.
>
> Not sure what went wrong. Happy to provide more details if required.
> Many Thanks,
> Anji.
>
> *ignite-config.xml*
>
> [configuration XML not preserved by the mailing-list archive]
>
> =
> *College.scala*
>
> package org.anjaneya.prasad.loadbean
> import scala.beans.BeanProperty
>
> class College(@BeanProperty register_number :String,
>   @BeanProperty current_city: String,
>   @BeanProperty date2: String,
>   @BeanProperty date_of_birth: String,
>   @BeanProperty student_code: String,
>   @BeanProperty native_city: String,
>   @BeanProperty college_end_date_1: String,
>   @BeanProperty college_start_date_1: String
>  ) extends Serializable{
>   override def toString: String = s"College: $register_number,
> $current_city, $date2, $date_of_birth, $student_code, $native_city,
>  $college_end_date_1, $college_start_date_1"
>
> //return format("%s, %s, %s, %s, %s, %s, %s, %s", register_number ,
> native_city , current_city , student_code,college_end_date_1,
>  date_of_birth,  date2)
> }
>
>
> 
> *MainProcess.scala*
>
> val ic = new IgniteContext(sc, 
> "/home/ops/College/src/main/resources/ignite-config.xml",
> true)
>
> var sharedRDDCollege: IgniteRDD[String, College] =
> ic.fromCache("CollegeCache")
> //sharedRDDCollege.collect().foreach(print)
>
> var CollegeCache = test2.rdd.map(x => (x.getString(0),
>new College(x.getString(0) , x.getString(1) , x.getString(2) ,
> x.getString(3) , x.getString(4) , x.getString(5) , x.getString(6) ,
> x.getString(7))))
>
> //CollegeCache.collect.foreach(print)
> sharedRDDCollege.savePairs(CollegeCache)
>
>


Re: Re: Exception use affinity key when put #invoke from Client.

2017-09-19 Thread aa...@tophold.com
Thanks Denis.

We customized our own BinaryConfiguration, which included this AssetKey; when
I removed it, everything seems to work now.


Regards
Aaron


aa...@tophold.com
 
From: Denis Mekhanikov
Date: 2017-09-20 00:36
To: user
Subject: Re: Exception use affinity key when put #invoke from Client.
Hi Aaron!

[rest of the quoted message trimmed; see Denis's reply above]


Re: Job Listeners

2017-09-19 Thread chandrika
Hello Alexey,

Thanks a lot for the input, it was very useful. There are two more things I'm
stuck at:

1. When I run the above in a cluster environment (with more than one node),
with the value in session.setAttribute(key, value) being an object, I am
unable to proceed further, because one of the jobs gets rejected on another
slave node. The object in the value is serializable. Could you please guide
me on how to track down the reason for the job being rejected?

2. Also, on another note: if I wanted to get detailed information about the
job, as given below,
| JVM start time   | 03/14/16, 10:53:49   | 
| Node start time  | 03/14/16, 10:53:58   | 
| Up time  | 00:21:31:692 | 
| Last metric update   | 03/14/16, 11:15:20   | 
| CPUs | 4| 
| Thread count | 91   | 
| Cur/avg active jobs  | 0/0.01   | 
| Cur/avg waiting jobs | 0/0.00   | 
| Cur/avg rejected jobs| 0/0.00   | 
| Cur/avg cancelled jobs   | 0/0.00   | 
| Cur/avg job wait time| 0/0.00ms | 
| Cur/avg job execute time | 0/11486.00ms | 
| Cur/avg CPU load %   | 0.13/0.39%

is there a mechanism other than IgnitePredicate?

thanks and regards,
chandrika
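
A minimal sketch, assuming job events are enabled in the node configuration,
of tracking rejected jobs through Ignite's event API (still predicate based,
but it surfaces the rejection on the node where it happens):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.EventType;
import org.apache.ignite.events.JobEvent;

public class RejectedJobTracker {
    public static void main(String[] args) {
        // Job events are disabled by default; enable the ones we care about.
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setIncludeEventTypes(EventType.EVT_JOB_REJECTED, EventType.EVT_JOB_FAILED);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Listen locally on this node; register the same listener on
            // every node (or use remoteListen) to catch rejections
            // cluster-wide.
            ignite.events().localListen(evt -> {
                JobEvent jobEvt = (JobEvent) evt;

                System.out.println("Job rejected/failed: " + jobEvt.taskName()
                    + " on node " + jobEvt.node().id());

                return true; // keep listening
            }, EventType.EVT_JOB_REJECTED, EventType.EVT_JOB_FAILED);

            // ... run compute tasks ...
        }
    }
}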





WAL log folder issue in YARN mode

2017-09-19 Thread Ray
When I deploy Ignite as a YARN application with persistent store enabled, the
WAL logs are under
/yarn/nm/usercache/username/appcache/application_appid/container_containerID/ignite/apache-ignite-fabric-2.1.0-bin/work/db/wal/.
But when Ignite is restarted via YARN, a new appid is created, so the WAL
from the old app will not be copied to the new app.
I tried setting IGNITE_WORK_DIR to a local directory and IGNITE_RELEASES_DIR
to an HDFS directory, hoping Ignite would save the WAL logs to those folders,
but it does not.
Please advise me how to solve this issue.
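
One option, assuming a stable directory exists on every host outside the YARN
application cache, is to pin the work and persistence paths explicitly in the
node configuration; a sketch against the 2.1-era PersistentStoreConfiguration
(all paths are hypothetical):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class StableWalPaths {
    public static void main(String[] args) {
        PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration()
            .setPersistentStorePath("/data/ignite/db")      // page store
            .setWalStorePath("/data/ignite/wal")            // WAL segments
            .setWalArchivePath("/data/ignite/wal/archive"); // archived WAL

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setWorkDirectory("/data/ignite/work") // keep off the YARN appcache
            .setPersistentStoreConfiguration(psCfg);

        Ignition.start(cfg);
    }
}

With the paths outside the container directory, a restarted YARN application
can reuse the same WAL, provided the node lands on the same host.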

Thanks


