Connecting to ZooKeeper cluster using an SSL/TLS connection

2018-09-19 Thread Raghavan, Aravind

Hi All,

I am trying to use ZooKeeper for node discovery with Apache Ignite. I have 
configured ZooKeeper to only accept SSL/TLS connections. How do I provide the 
ZooKeeper keystore details to the Apache Ignite ZookeeperDiscoverySpi? I have 
checked the documentation and the source code of ignite-zookeeper.jar, and I do 
not see any option to supply these details. Should I be providing them 
elsewhere in the Ignite config?
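[Editorial note: ZooKeeper's 3.5+ client reads its SSL settings from JVM system properties rather than from the connect string, so one hedged possibility (untested with ZookeeperDiscoverySpi, and the paths below are placeholders, not values from this thread) is to set those properties on each Ignite JVM before the node starts:

```java
// Hedged sketch: ZooKeeper 3.5+ client-side SSL is driven by system properties.
// The keystore/truststore paths and passwords are placeholders.
public class ZkSslProps {
    public static void configure() {
        // SSL requires the Netty client socket implementation
        System.setProperty("zookeeper.clientCnxnSocket",
                "org.apache.zookeeper.ClientCnxnSocketNetty");
        System.setProperty("zookeeper.client.secure", "true");
        System.setProperty("zookeeper.ssl.keyStore.location", "/path/to/client-keystore.jks");
        System.setProperty("zookeeper.ssl.keyStore.password", "changeit");
        System.setProperty("zookeeper.ssl.trustStore.location", "/path/to/truststore.jks");
        System.setProperty("zookeeper.ssl.trustStore.password", "changeit");
    }
}
```

Whether the ZooKeeper client embedded by ignite-zookeeper honors these in a given Ignite version would need to be verified.]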

Thanks,
Aravind



IgniteSparkSession exception: Ignite instance with provided name doesn't exist. Did you call Ignition.start(..) to start an Ignite instance? [name=null]

2018-09-19 Thread yangjiajun
I use IgniteSparkSession to execute Spark SQL but get an
exception: org.apache.ignite.IgniteIllegalStateException: Ignite instance
with provided name doesn't exist. Did you call Ignition.start(..) to start
an Ignite instance? [name=null]

My test case runs well when I run Spark in local mode, but it throws the
exception when I run it with my local Spark cluster. I tried to find out why
in the mailing list but did not get a clear reason.

My test environment:
My app uses the default settings from the examples.
OS: Windows 10
JDK: 1.8.0_112
Ignite version is 2.6.0; I start a node with default settings.
Spark version is 2.3.1; I start a standalone cluster with a master and a
worker. I have copied the required jars from Ignite to Spark.

The full exception stack trace is: 

Exception in thread "main" org.apache.spark.SparkException: Exception thrown in awaitResult:
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
    at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:136)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:144)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:140)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:140)
    at org.apache.spark.sql.execution.joins.BroadcastNestedLoopJoinExec.doExecute(BroadcastNestedLoopJoinExec.scala:343)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.prepareShuffleDependency(ShuffleExchangeExec.scala:92)
    at org.apache.spark.sql.execution.exchange.ExchangeCoordinator.doEstimationIfNecessary(ExchangeCoordinator.scala:211)
    at org.apache.spark.sql.execution.exchange.ExchangeCoordinator.postShuffleRDD(ExchangeCoordinator.scala:259)
    at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:124)
    at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec$$anonfun$doExecute$1.apply(ShuffleExchangeExec.scala:119)
    at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
    at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:119)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:371)
    at org.apache.spark.sql.execution.SortExec.inputRDDs(SortExec.scala:121)
    at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:605)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
    at org.apache.spark.sql.execution.InputAdapter.doExecute(WholeStageCodegenExec.scala:363)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
    (remainder of stack trace truncated in the original message)

Setting performance expectations

2018-09-19 Thread Daryl Stultz
Hello,


I am trying out Ignite for the first time and have some unexpected performance 
metrics. I'm looking to see whether my understanding of what Ignite is meant to 
do is correct.


I have 400K records from a database that I am loading into a cache, with the 
primary key mapped to a DTO of a few properties from the record. For my 
real-world scenario I need to perform a lot of in-memory calculations on the 
data. The bottleneck by far is the time it takes to load data from the 
database. (So I want to do repeated calculations with cached data.)


If I read all 400K rows from the database and simply discard the DTO it takes 
about 700 ms.

Reading all rows into a HashMap is about 1100 ms.

If I then putAll the map to the cache it's 3100 ms.

If I then getAll keys from the cache it's 550 ms.


The cache in this case is "embedded", i.e., just the one JVM I'm running the 
test on (clientMode false). I have a lot of other numbers but these are best 
case. Going to a server node on the same machine takes an order of magnitude 
longer if getting and putting in a loop rather than batch. I didn't test remote 
caches.


I'm not looking for the best way to warm/pre-load a cache, I'm just looking to 
see how the basics work. Reading a row at a time from the database is 30 times 
faster than reading a key at a time from the cache.


As such Ignite will not help me with my problem. Do my numbers seem normal? Is 
this a use case you would expect Ignite to help with?


I ran IgnitePutBenchmark and the report says 124K ops per second average. I'm 
certainly not seeing that in my test case.


Thanks.


--

Daryl Stultz
Principal Software Developer
_
OpenTempo, Inc
http://www.opentempo.com
mailto:daryl.stu...@opentempo.com



Re: Is ID generator split brain compliant?

2018-09-19 Thread Jörn Franke
I think you also need to look at the processes that are using the ID in case of 
a split-brain scenario.
A unique identifier always implies some centralized approach: either it is 
generated by one central service, or a central rule is enforced in a 
distributed fashion.

For instance, in your case you can have x nodes where each of them generates a 
unique ID of the form nodename_localuniqueid. In this case you can also be 
sure, in a split-brain scenario, that the IDs are unique if and only if the 
node names are unique.
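[Editorial note: the node-name prefix scheme described above can be sketched without any Ignite APIs; the class and method names here are illustrative, not from the thread:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hedged sketch of the scheme above: each node prefixes a local counter with
// its (assumed unique) node name, so IDs stay unique across a split brain
// provided the node names themselves stay unique.
public class NodePrefixedIdGenerator {
    private final String nodeName;
    private final AtomicLong localId = new AtomicLong();

    public NodePrefixedIdGenerator(String nodeName) {
        this.nodeName = nodeName;
    }

    public String nextId() {
        return nodeName + "_" + localId.incrementAndGet();
    }
}
```

The trade-off is that IDs are strings carrying a node prefix rather than a single dense numeric sequence.]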

> On 19. Sep 2018, at 21:36, abatra  wrote:
> 
> Hi, 
> 
> I have a requirement to create a distributed cluster-unique ID generator
> microservice. I have done a PoC on it using Apache Ignite ID Generator. 
> 
> I created a 2 node cluster with two instances of microservices running on
> each node. Nodes are in the same datacenter (in fact in the same network and
> will always be deployed in the same network) and I use TCP/IP discovery to
> discover cluster nodes. 
> 
> So far, it looks pretty good, except that it does not provide persistence out
> of the box. But I can work around it by backing up the latest generated ID in
> a persistent cache and initializing the ID generator with the latest value on
> a cluster restart. 
> 
> However, one thing I could not find an answer for is if the out of the box
> ID generator is split brain compliant. I cannot afford to have a duplicate
> ID and want to understand if duplicate ID(s) could occur in a split-brain
> scenario. If yes, what is the recommended approach to handling that
> scenario?
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Is ID generator split brain compliant?

2018-09-19 Thread abatra
Hi, 

I have a requirement to create a distributed cluster-unique ID generator
microservice. I have done a PoC on it using Apache Ignite ID Generator. 

I created a 2 node cluster with two instances of microservices running on
each node. Nodes are in the same datacenter (in fact in the same network and
will always be deployed in the same network) and I use TCP/IP discovery to
discover cluster nodes. 

So far, it looks pretty good, except that it does not provide persistence out
of the box. But I can work around it by backing up the latest generated ID in
a persistent cache and initializing the ID generator with the latest value on
a cluster restart. 

However, one thing I could not find an answer for is whether the out-of-the-box
ID generator is split-brain compliant. I cannot afford to have a duplicate
ID and want to understand if duplicate IDs could occur in a split-brain
scenario. If yes, what is the recommended approach to handling that
scenario?
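[Editorial note: Ignite's IgniteAtomicSequence is configured cluster-wide through AtomicConfiguration; a hedged XML sketch (the values are illustrative) that adds a backup and controls how many IDs each node reserves per network round trip:

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="atomicConfiguration">
            <bean class="org.apache.ignite.configuration.AtomicConfiguration">
                <!-- keep a backup copy of the sequence state -->
                <property name="backups" value="1"/>
                <!-- each node reserves this many sequence values at a time -->
                <property name="atomicSequenceReserveSize" value="1000"/>
            </bean>
        </property>
    </bean>

Reserved ranges mean a restart may skip IDs but should not duplicate them within one cluster; a split brain still yields two independent clusters, which this configuration alone does not address.]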



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Query 3x slower with index

2018-09-19 Thread eugene miretsky
Hi Ilya,

I created 4 indexes on the table:
1) ga_pKey: on customer_id, dt, category_id (these are our primary key columns)
2) ga_customer_and_category_id: on customer_id and category_id
3) ga_customer_id: on customer_id
4) ga_category_id: on category_id


For the first query (category in ()), the execution plan when using the
first 3 indexes is exactly the same - using /* PUBLIC.AFFINITY_KEY */.
When using #4 (alone or in combination with any of the other 3):

   1. /* PUBLIC.AFFINITY_KEY */ is replaced with  /* PUBLIC.GA_CATEGORY_ID:
   CATEGORY_ID IN(117930, 175930, 175940, 175945, 101450) */
   2. The query runs slower.

For the second query (join on an inlined table) the behaviour is very
similar. Using the first 3 indexes results in the same plan - using  /*
PUBLIC.AFFINITY_KEY */ and  /* function: CATEGORY_ID = GA__Z0.CATEGORY_ID
*/.
When using #4 (alone or in combination with any of the other 3)

   1. /* function */ and /* PUBLIC.GA_CATEGORY_ID: CATEGORY_ID =
   CATS__Z1.CATEGORY_ID */ are used
   2. The query is much slower.


Theoretically the query seems pretty simple:

   1. Use the affinity key to make sure the query runs in parallel and there
   are no shuffles
   2. Filter rows that match category_id using the category_id index
   3. Use the customer_id index for the GROUP BY (not sure if this step makes
   sense)

But I cannot get it to work.

Cheers,
Eugene




On Tue, Sep 18, 2018 at 10:56 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> I can see you try to use _key_PK as index. If your primary key is
> composite, it won't work properly for you. I recommend creating an explicit
> (category_id, customer_id) index.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> вт, 18 сент. 2018 г. в 17:47, eugene miretsky :
>
>> Hi Ilya,
>>
>> The different query result was my mistake - one of the category_ids was
>> duplicated, so the query that used a join counted rows for that
>> category twice. My apologies.
>>
>> However, we are still having an issue with query time, and the index not
>> being applied to category_id. Would appreciate if you could take a look.
>>
>> Cheers,
>> Eugene
>>
>> On Mon, Sep 17, 2018 at 9:15 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> Why don't you diff the results of those two queries, tell us what the
>>> difference is?
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> пн, 17 сент. 2018 г. в 16:08, eugene miretsky >> >:
>>>
 Hello,

 Just wanted to see if anybody had time to look into this.

 Cheers,
 Eugene

 On Wed, Sep 12, 2018 at 6:29 PM eugene miretsky <
 eugene.miret...@gmail.com> wrote:

> Thanks!
>
> Tried joining with an inlined table instead of IN as per the second
> suggestion, and it didn't quite work.
>
> Query1:
>
>- Select COUNT(*) FROM( Select customer_id from GATABLE3  use
>Index( ) where category_id in (9005, 175930, 175930, 
> 175940,175945,101450,
>6453) group by customer_id having SUM(product_views_app) > 2 OR
>SUM(product_clicks_app) > 1 )
>- exec time = 17s
>- *Result: 3105868*
> - Same exec time when using the AFFINITY_KEY index, _key_PK_hash, or the
> customer_id index
> - Using an index on category_id increases the query time to 33s
>
> Query2:
>
>- Select COUNT(*) FROM( Select customer_id from GATABLE3 ga  use
>index (PUBLIC."_key_PK") inner join table(category_id int = (9005, 
> 175930,
>175930, 175940,175945,101450, 6453)) cats on cats.category_id =
>ga.category_id   group by customer_id having SUM(product_views_app) > 
> 2 OR
>SUM(product_clicks_app) > 1 )
>- exec time = 38s
>- *Result: 3113921*
> - Same exec time when using the AFFINITY_KEY index, _key_PK_hash, the
> customer_id index, or the category_id index
> - Using an index on category_id doesn't change the run time
>
> Query plans are attached.
>
> 3 questions:
>
> 1. Why is the result different for the 2 queries - this is quite
> concerning.
> 2. Why is the 2nd query taking longer?
> 3. Why doesn't the category_id index work in the case of query 2?
>
>
> On Wed, Sep 5, 2018 at 8:31 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> I don't think we're able to use an index with IN () clauses. Please
>> convert it into OR clauses.
>>
>> Please see
>> https://apacheignite-sql.readme.io/docs/performance-and-debugging#section-sql-performance-and-usability-considerations
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> пн, 3 сент. 2018 г. в 12:46, Andrey Mashenkov <
>> andrey.mashen...@gmail.com>:
>>
>>> Hi
>>>
>>> Actually, the first query uses an index on the affinity key, which looks
>>> more efficient than the index on the category_id column.
>>> The first query can process groups one by one and stream partial

Re: Number of threads in computations

2018-09-19 Thread F.D.
Perfect.

Thanks,
   F.D.

On Wed, Sep 19, 2018 at 11:23 AM Evgenii Zhuravlev 
wrote:

> Hi,
>
> You can set FifoQueueCollisionSpi.parallelJobsNumber:
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/collision/fifoqueue/FifoQueueCollisionSpi.html
>
> Evgenii
>
> ср, 19 сент. 2018 г. в 11:22, F.D. :
>
>> Hi,
>>
>> I'd like to know whether it is possible to limit the number of threads
>> launched when a distributed closure arrives at the server?
>>
>> thanks,
>>   F.D.
>>
>


Re: .net decimal being stored as Other in ignite.

2018-09-19 Thread Ilya Kasnacheev
Hello!

I have tried that, and indeed: DBeaver shows the column type as OTHER, even
though the type of the column is correctly mapped to java.math.BigDecimal.

I have filed a ticket: https://issues.apache.org/jira/browse/IGNITE-9650

Note that SQL on that column seems to work correctly; it's only the metadata
that is affected.

Regards,
-- 
Ilya Kasnacheev


ср, 19 сент. 2018 г. в 11:25, wt :

> Does anybody have any clue as to why Ignite is not mapping the .NET decimal
> to java.math.BigDecimal?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is there a way to use Ignite optimization and Spark optimization together when using Spark Dataframe API?

2018-09-19 Thread vkulichenko
Ray,

Per my understanding, pushdown filters are propagated to Ignite either way;
it's not related to the "optimization". Optimization affects joins,
groupings, aggregations, etc. So, unless I'm missing something, the behavior
you're looking for is achieved by setting
OPTION_DISABLE_SPARK_SQL_OPTIMIZATION to true.

However, can you please clarify what you mean by "Ignite is not optimized for
join"? 

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Problems are enabling Ignite Persistence

2018-09-19 Thread akurbanov
Hello,

Could you also set a correct path for IGNITE_HOME, or set the work directory in
the Ignite configuration, to avoid the chance of your /tmp directory being
wiped?

Regards,




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Problems are enabling Ignite Persistence

2018-09-19 Thread akurbanov
Hello,

There are no issues with starting nodes with persistence enabled; your
DataStorageConfiguration is fine. Are there any other preconditions to be
met to reproduce the issue? Could you send a minimal reproducer that
shows the issue on a clean setup?

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is there a way to use Ignite optimization and Spark optimization together when using Spark Dataframe API?

2018-09-19 Thread aealexsandrov
Hi,

I am not sure that it will work, but you can try the following:

SparkSession spark = SparkSession
        .builder()
        .appName("SomeAppName")
        .master("spark://10.0.75.1:7077")
        .config(OPTION_DISABLE_SPARK_SQL_OPTIMIZATION, "false") // or true
        .getOrCreate();

JavaSparkContext sparkContext = new JavaSparkContext(spark.sparkContext());

// igniteContext is a JavaIgniteContext created from sparkContext
JavaIgniteRDD igniteRdd1 = igniteContext.fromCache("CACHE1");

// here the Ignite SQL engine will be used, because it goes through SqlFieldsQuery
Dataset ds1 = igniteRdd1.sql("select * from CACHE1");
Dataset ds2 = igniteRdd1.sql("select * from CACHE2");

// here the Spark SQL engine will be used
ds1.join(ds2).where();

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Recommended HW on AWS EC2 - vertical vs horizontal scaling

2018-09-19 Thread aealexsandrov
Hi,

I left a list with some useful article links here:
http://apache-ignite-users.70518.x6.nabble.com/Slow-SQL-query-uses-only-a-single-CPU-td23553.html

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ttl-cleanup-worker got "Critical system error detected"

2018-09-19 Thread akurbanov
Hello,

Would you mind sharing the complete logs for your nodes? It looks like the SQL
indexes got corrupted or an operation was interrupted. Could you also point out
the exact version of Ignite that you are using?
Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Slow SQL query uses only a single CPU

2018-09-19 Thread aealexsandrov
Hi,

I think you can try to investigate the articles from the following wiki:

https://cwiki.apache.org/confluence/display/IGNITE/Design+Documents

The following blog contains interesting information (possibly some of it is
out of date):

http://gridgain.blogspot.com

It contains a lot of information about how Ignite works under the hood. 

Regarding indexes and H2: Ignite uses H2 for query parsing, execution
planning, and indexing, but it looks like there is no detailed documentation
about it. Only the official information:

https://apacheignite-sql.readme.io/docs/how-ignite-sql-works

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


***UNCHECKED*** Re: new CacheConfiguration always returns same instance

2018-09-19 Thread akurbanov
Hello,

Why would you expect them to be different in your test? This is not a
violation of the hashCode general contract. They are calculated using
MutableConfiguration.hashCode() from JSR 107; you may check the source code of
MutableConfiguration.java for what is executed at this point.

Regards,
Anton
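[Editorial note: the contract point can be demonstrated without Ignite: equal hash codes do not imply the same instance. A minimal illustration (the class name is mine) with a hashCode computed from a subset of fields, analogous to how the JSR 107 hashCode presumably ignores Ignite-only fields such as the cache name:

```java
import java.util.Objects;

// Two distinct objects whose hashCode depends only on selected field values:
// equal values => equal hash codes, yet they are different instances.
public class ValueConfig {
    private final String name; // deliberately excluded from hashCode
    private final int backups;

    public ValueConfig(String name, int backups) {
        this.name = name;
        this.backups = backups;
    }

    public String name() {
        return name;
    }

    @Override
    public int hashCode() {
        return Objects.hash(backups); // name intentionally not included
    }
}
```

Two configs differing only in name therefore report the same hash code while still being separate objects.]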



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How does lazy load work internally in Ignite?

2018-09-19 Thread Ilya Kasnacheev
Hello!

The default batch size seems to be 1000 rows
(org.apache.ignite.internal.processors.cache.query.GridCacheTwoStepQuery#DFLT_PAGE_SIZE).
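[Editorial note: the cursor-style paging this implies can be pictured with a plain-Java sketch (no Ignite APIs; PagedCursor is a made-up name) in which the client pulls one page of rows at a time from a result set:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged illustration of page-at-a-time fetching: the "source" list stands in
// for a server-side result set; each fetch() returns at most pageSize rows.
public class PagedCursor<T> {
    private final List<T> source;
    private final int pageSize;
    private int position;
    private int fetches; // number of non-empty "round trips" performed

    public PagedCursor(List<T> source, int pageSize) {
        this.source = source;
        this.pageSize = pageSize;
    }

    public List<T> fetch() {
        int end = Math.min(position + pageSize, source.size());
        List<T> page = new ArrayList<>(source.subList(position, end));
        position = end;
        if (!page.isEmpty())
            fetches++;
        return page;
    }

    public int fetchCount() {
        return fetches;
    }
}
```

With a million-row table and a page size of 1000, the client would perform on the order of 1000 such fetches rather than materializing the whole result at once.]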

Regards,
-- 
Ilya Kasnacheev


ср, 19 сент. 2018 г. в 15:09, Ray :

> Hi Ilya, thanks for the reply.
>
> So is it like a cursor on the server side?
>
> Let's say a user runs a query "select * from tableA", where tableA has a
> million records.
> When the lazy loading flag is on, the Ignite server will send the first
> batch of records to the user.
> When the user's client asks for the second batch, Ignite sends the second
> batch of records.
>
> Is my understanding correct?
>
> What's the default batch size sent to user if my understanding is correct?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to write Trigger Script

2018-09-19 Thread Ilya Kasnacheev
I have found this instruction on SO:

https://stackoverflow.com/a/45235510/36498

Please try it and see if there are any Ignite specifics.

Regards,
-- 
Ilya Kasnacheev


ср, 19 сент. 2018 г. в 15:01, Malashree :

> How to write Trigger Script in a DBeaver Tool Using Apache Ignite Database.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How does lazy load work internally in Ignite?

2018-09-19 Thread Ray
Hi Ilya, thanks for the reply.

So is it like a cursor on the server side?

Let's say a user runs a query "select * from tableA", where tableA has a
million records.
When the lazy loading flag is on, the Ignite server will send the first batch
of records to the user.
When the user's client asks for the second batch, Ignite sends the second
batch of records.

Is my understanding correct?

What's the default batch size sent to user if my understanding is correct?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite Cache metrics on K8s

2018-09-19 Thread Premachandran, Mahesh (Nokia - IN/Bangalore)
Hi,

I am trying to fetch some cache metrics from Ignite 2.5 running on k8s. I 
enabled the cache-level metrics by setting the below property in the Ignite 
config XML for the specific cache.


But both 
"ignite_org_apache_ignite_internal_processors_cache_cacheclustermetricsmxbeanimpl_cacheputs"
and 
"ignite_org_apache_ignite_internal_processors_cache_cachelocalmetricsmxbeanimpl_averageputtime"
are coming back as 0, while I am able to get values for 
"ignite_org_apache_ignite_internal_processors_cache_cacheclustermetricsmxbeanimpl_keysize".

Are there any other properties that I need to set to get these metrics? I am 
injecting data using IgniteDataStreamer.
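[Editorial note: the XML property did not survive the archive; the cache-level metrics switch being referenced is presumably CacheConfiguration.statisticsEnabled, a hedged sketch of which (the cache name is a placeholder) is:

    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <property name="name" value="myCache"/>
        <!-- enables cache metrics collection for this cache -->
        <property name="statisticsEnabled" value="true"/>
    </bean>
]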

Regards,
Mahesh




Unable to get the ignite cache metrics

2018-09-19 Thread kripa
Hi 
I brought up an Ignite server on a k8s cluster.
I set the below property for a cache whose metrics I wanted to check.


Then I started the client and tried to push data into the Ignite cache.
I am able to see the data in the cache.
But the values for the following metrics are coming back as 0. Can someone let
me know why this is?
ignite_org_apache_ignite_internal_processors_cache_cachelocalmetricsmxbeanimpl_cacheputs
= 0
ignite_org_apache_ignite_internal_processors_cache_cachelocalmetricsmxbeanimpl_averageputtime
= 0




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Inconsistent data.

2018-09-19 Thread Shrikant Haridas Sonone
Hi,
I am using Ignite 2.6.0 on top of Google Kubernetes Engine v1.10.6-gke.2.
I am using native persistence, and my example configuration is as follows.












We have around 2GB of data.
We have recently migrated our data from 2.4.0 to 2.6.0

Issue
1. When running 3 Ignite nodes (each in a pod), with all 3 as baseline nodes: if 
any of the nodes gets restarted, it takes approximately 5-6 seconds to 
rebalance the data. If there are any requests to be served, some of the 
requests are sent to the newly created node, which is yet to complete the 
rebalance operation. Hence the response received is inconsistent (sometimes 
empty) with the previous state before the node was restarted.

2. The same outcome as in point 1 occurs even if a new/fresh node is added to 
the baseline topology.

We tried to use the cache rebalance mode SYNC (as per the doc 
https://apacheignite.readme.io/docs/rebalancing). The issue still persists.
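[Editorial note: for reference, the SYNC rebalance mode mentioned above is set per cache; a hedged XML sketch (the cache name is a placeholder):

    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <property name="name" value="myCache"/>
        <!-- cache API calls on a joining node wait until rebalancing completes -->
        <property name="rebalanceMode" value="SYNC"/>
    </bean>
]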

Is this the expected behavior?

Is there any way/setting which will restrict the requests directed to the 
newly added/restarted node until the rebalance operation has completed 
successfully?

Is it a side effect of the migration activity from Apache Ignite 2.4.0 to 2.6.0?

Thanks and Regards,
Shrikant Sonone
Schlumberger.


Re: Number of threads in computations

2018-09-19 Thread Evgenii Zhuravlev
Hi,

You can set FifoQueueCollisionSpi.parallelJobsNumber:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/collision/fifoqueue/FifoQueueCollisionSpi.html

Evgenii

ср, 19 сент. 2018 г. в 11:22, F.D. :

> Hi,
>
> I'd like to know whether it is possible to limit the number of threads
> launched when a distributed closure arrives at the server?
>
> thanks,
>   F.D.
>
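[Editorial note: the SPI from the javadoc link above is plugged into the node configuration; a hedged XML sketch (the value 8 is illustrative):

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="collisionSpi">
            <bean class="org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi">
                <!-- at most this many compute jobs execute concurrently per node -->
                <property name="parallelJobsNumber" value="8"/>
            </bean>
        </property>
    </bean>
]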


Re: .net decimal being stored as Other in ignite.

2018-09-19 Thread wt
Does anybody have any clue as to why Ignite is not mapping the .NET decimal to
java.math.BigDecimal?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Number of threads in computations

2018-09-19 Thread F.D.
Hi,

I'd like to know whether it is possible to limit the number of threads launched
when a distributed closure arrives at the server?

thanks,
  F.D.


Re: ScanQuery throwing Exception for java Thin client while peerclassloading is enabled

2018-09-19 Thread Saby
Hi Ilya,
Thanks for your response. Yes, I had tried the withKeepBinary() option. If I
call with the withKeepBinary() option then no deserialization-related
exception occurs, but the returned entries are wrapped as
'BinaryObjectImpl', and hence I get a ClassCastException. Is there any option
to get the entries as the raw type?

I'm using the following code snippet to fetch the data:


public class Predicate<K, V> implements IgniteBiPredicate<K, V> {

    private static final long serialVersionUID = 1L;

    @Override
    public boolean apply(K e1, V e2) {
        return true;
    }
}

public class IgniteValueClass implements Binarylizable, Serializable {
private static final long serialVersionUID = 50283244369719L;
@QuerySqlField(index = true)
String id;
@QuerySqlField(index = true)
String val1;

@QuerySqlField
String val2;

public String getId() {
return id;
}

public void setId(String id) {
this.id = id;
}

public String getVal1() {
return val1;
}

public void setVal1(String val1) {
this.val1 = val1;
}

public String getVal2() {
return val2;
}

public void setVal2(String val2) {
this.val2 = val2;
}

@Override
public void writeBinary(BinaryWriter writer) throws 
BinaryObjectException {
writer.writeString("id", id);
writer.writeString("val1", val1);
writer.writeString("val2", val2);

}

@Override
public void readBinary(BinaryReader reader) throws 
BinaryObjectException {
id = reader.readString("id");
val1 = reader.readString("val1");
val2 = reader.readString("val2");

}

}

public Iterator<Cache.Entry<K, V>> getAllEntries(String areaName)
        throws NotClientException {
    checkConnectedAndExists(areaName);
    Query<Cache.Entry<K, V>> sql = new ScanQuery<>(new Predicate<>());
    try (QueryCursor<Cache.Entry<K, V>> cursor =
            ignite.cache(areaName).withKeepBinary().query(sql)) {
        return returnEntry(cursor);
    }
} 

Thanks
Saby




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


***UNCHECKED*** new CacheConfiguration always returns same instance

2018-09-19 Thread kcheng.mvp
here is a simple test code

CacheConfiguration cacheCfg = new CacheConfiguration<>(IG_P);
cacheCfg.setBackups(1);
cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
logger.info("instance {}", cacheCfg.hashCode());

cacheCfg = new CacheConfiguration<>(IG_T);
cacheCfg.setBackups(1);
cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
logger.info("instance {}", cacheCfg.hashCode());


The log shows that even though I create the CacheConfiguration two times, they
have the same hashcode. Is this correct?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/