Re: POJO get from any ignite console? visor or rest?

2017-09-21 Thread Alexey Kuznetsov
Binti,

You can try the Web Console [1]: open the "Queries" screen and execute a SCAN
on your cache.

[1] https://ignite.apache.org/features/datavisualization.html
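
If you prefer a programmatic route, below is a minimal sketch of a client that
scans the cache and prints every entry. The cache name "myCache" is a
placeholder, and withKeepBinary() avoids needing the POJO classes on the
client's classpath:

import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class CacheScan {
    public static void main(String[] args) {
        // Join the cluster as a client node.
        Ignition.setClientMode(true);

        try (Ignite ignite = Ignition.start()) {
            // Binary view: BinaryObject.toString() prints all fields.
            IgniteCache<Object, Object> cache =
                ignite.cache("myCache").withKeepBinary();

            // Scan all entries and print them.
            try (QueryCursor<Cache.Entry<Object, Object>> cur =
                     cache.query(new ScanQuery<>())) {
                for (Cache.Entry<Object, Object> e : cur)
                    System.out.println(e.getKey() + " -> " + e.getValue());
            }
        }
    }
}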

On Fri, Sep 22, 2017 at 2:32 AM, bintisepaha  wrote:

> Hi, I see that the Ignite REST API does not yet support JSON/custom
> object lookup by key.
> But is this something I can do via JMX or visor or a management console?
>
> I only would like to see how the object looks in the cache easily.
> The key is usually a composite key of 2 integers.
>
> Let me know if there is a quick web based way to access this.
>
> Thanks,
> Binti
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Alexey Kuznetsov


Re: Using ignite with spark

2017-09-21 Thread Valentin Kulichenko
Hello Patrick,

See my comments below.

Most of your questions don't have a generic answer and would heavily depend
on your use case. Would you mind giving some more details about it so that
I can give more specific suggestions?

-Val

On Thu, Sep 21, 2017 at 8:24 AM, Patrick Brunmayr <
patrick.brunm...@kpibench.com> wrote:

> Hello
>
>
>- What is currently the best practice of deploying Ignite with Spark ?
>
>
>- Should the Ignite node sit on the same machine as the Spark executor ?
>
>
Ignite can run either on the same boxes where Spark runs or as a separate
cluster; both approaches have their pros and cons.


> According to this documentation, Spark
> should be given 75% of machine memory, but what is left for Ignite then ?
>
> In general, Spark can run well with anywhere from *8 GB to hundreds of
>> gigabytes* of memory per machine. In all cases, we recommend allocating
>> only at most 75% of the memory for Spark; leave the rest for the operating
>> system and buffer cache.
>
>
The documentation states that you should give Spark *at most* 75% to make sure
the OS has a safe cushion for its own purposes. If Ignite runs alongside
Spark, the amount of memory allocated to Spark should of course be less than
that maximum.


>
>- Don't they battle for memory ?
>
>
You should configure both Spark and Ignite so that they never try to consume
more memory than is physically available, also leaving some for the OS. This
way there will be no conflict; a minimal sizing sketch follows below.
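
A sketch of one way to cap Ignite's footprint, using the MemoryConfiguration
API from the Ignite 2.1/2.2 line; all sizes are illustrative assumptions, not
recommendations:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;

public class IgniteMemoryCap {
    public static void main(String[] args) {
        MemoryConfiguration memCfg = new MemoryConfiguration();

        // Example split on a hypothetical 64 GB box: ~40 GB for Spark
        // executors, 16 GB for Ignite off-heap, and the remainder for the
        // OS and its buffer cache.
        memCfg.setDefaultMemoryPolicySize(16L * 1024 * 1024 * 1024);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setMemoryConfiguration(memCfg);

        Ignition.start(cfg);
    }
}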

>
>- Should I give the memory to Ignite or Spark ?
>
>
Again, this depends heavily on your use case and on how intensively you use
both Spark and Ignite.


>- Would Spark even benefit from Ignite if the Ignite nodes were
>hosted on other machines ?
>
>
There are definitely use cases where this can be useful, although in others
it is better to run Ignite separately.


>
>
> We currently have hundreds of GB for analytics, and we want to use
> Ignite to speed things up.
>
> Thank you
>


Re: How to configure a QueryEntity for a BinaryObject

2017-09-21 Thread Savagearts
Thanks Evgenii, it does work!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


[ANNOUNCE] Apache Ignite 2.2.0 Released

2017-09-21 Thread Denis Magda
The Apache Ignite Community is pleased to announce the release of Apache Ignite 
2.2.0.

Apache Ignite [1] is the in-memory computing platform that is
durable, strongly consistent and highly available 
with powerful SQL, key-value and processing APIs.

You can view Apache Ignite as a collection of independent, well-integrated
components geared toward improving the performance and scalability of your
application, such as:
- Data Grid
- SQL Database
- Compute Grid
- Service Grid
- Machine Learning Grid

This release includes a couple of fixes that resolve excessive memory usage
caused by the default memory allocation parameters. The issue was more likely
to occur if several nodes were started on a single box. It's recommended to
update to 2.2 if your cluster nodes freeze and become unresponsive because of
contention for available RAM on a local host.

The full list of the changes can be found here [2].

Please visit this page if you’re ready to try the release out:
https://ignite.apache.org/download.cgi 

Please let us know [3] if you encounter any problems.

Regards,

The Apache Ignite Community

[1] https://ignite.apache.org 
[2] https://ignite.apache.org/releases/2.2.0/release_notes.html 

[3] https://ignite.apache.org/community/resources.html#ask 


Ignite Context failing with java.lang.NullPointerException: Ouch! Argument cannot be null: cfg

2017-09-21 Thread pradeepchanumolu
I am hitting the following exception when running Ignite with Spark on Yarn.
Here is the snippet of the code.
The same job runs fine in Spark local mode (spark-master: local); it fails
only when running on YARN.

val config = new IgniteConfiguration()
val tcpDiscoverySpi = new TcpDiscoverySpi()
val ipFinder = new TcpDiscoveryVmIpFinder()

ipFinder.setAddresses(
  util.Arrays.asList(
    "server1-ip",
    "server2-ip",
    "server3-ip",
    "server4-ip",
    "server5-ip:47500"
  ))

tcpDiscoverySpi.setIpFinder(ipFinder)
config.setDiscoverySpi(tcpDiscoverySpi)

val igniteContext = new IgniteContext(spark.sparkContext, () ⇒ config,
  standalone = false)

Exception: 


Driver stacktrace:
at
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
at scala.Option.foreach(Option.scala:257)
at
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
at
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at 
org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
at
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:925)
at
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:923)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:923)
at org.apache.ignite.spark.IgniteContext.(IgniteContext.scala:54)
at
BulkLoadFeatures$.delayedEndpoint$BulkLoadFeatures$1(BulkLoadFeatures.scala:37)
at BulkLoadFeatures$delayedInit$body.apply(BulkLoadFeatures.scala:18)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at 
scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at
scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at BulkLoadFeatures$.main(BulkLoadFeatures.scala:18)
at BulkLoadFeatures.main(BulkLoadFeatures.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
Caused by: java.lang.NullPointerException: Ouch! Argument cannot be null:
cfg
at
org.apache.ignite.internal.util.GridArgumentCheck.notNull(GridArgumentCheck.java:48)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:594)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:536)
at org.apache.ignite.Ignition.getOrStart(Ignition.java:414)
at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:143)
at
org.apache.ignite.spark.IgniteContext$$anonfun$1.apply(IgniteContext.scala:54)
at
org.apache.ignite.spark.IgniteContext$$anonfun$1.apply(IgniteContext.scala:54)
at
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:925)
at
org.apache.spark.rdd.RDD$$anonfun$foreac

POJO get from any ignite console? visor or rest?

2017-09-21 Thread bintisepaha
Hi, I see that the Ignite REST API does not yet support JSON/custom
object lookup by key.
But is this something I can do via JMX or visor or a management console?

I only would like to see how the object looks in the cache easily.
The key is usually a composite key of 2 integers.

Let me know if there is a quick web based way to access this.

Thanks,
Binti



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Context : java.lang.NoSuchMethodError: org.apache.curator.framework.api.CreateBuilder.creatingParentContainersIfNeeded

2017-09-21 Thread pradeepchanumolu
Thanks Denis for looking into the problem. 

Here are the dependencies from my build.sbt file

lazy val Versions = new {
  val igniteVersion = "2.2.0"
}

libraryDependencies ++= Seq(
  "org.apache.ignite" % "ignite-core" % Versions.igniteVersion,
  "org.apache.ignite" % "ignite-spring" % Versions.igniteVersion,
  "org.apache.ignite" % "ignite-zookeeper" % Versions.igniteVersion,
  "org.apache.ignite" % "ignite-spark" % Versions.igniteVersion
exclude ("org.scalatest", "scalatest_2.10")
exclude ("com.twitter", "chill_2.10")
exclude ("org.apache.spark", "spark-unsafe_2.10")
exclude ("org.apache.spark", "spark-tags_2.10"),
  "org.apache.ignite" % "ignite-clients" % Versions.igniteVersion
)

I am using spark-submit command to launch the job. 

command: 




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: TcpDiscoveryS3IpFinder AmazonS3Exception: Slow Down

2017-09-21 Thread Dave Harvey
The only possibly different thing we are doing is using a VPC endpoint to
allow the nodes to access S3 directly, without having to supply credentials.

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


TcpDiscoveryS3IpFinder AmazonS3Exception: Slow Down

2017-09-21 Thread Dave Harvey
Is TcpDiscoveryS3IpFinder expected to work? I randomly get exceptions that
seem to be considered part of normal S3 operation but are not
handled/retried.

com.amazonaws.services.s3.model.AmazonS3Exception: Slow Down (Service:
Amazon S3; Status Code: 503; Error Code: 503 Slow Down; Request ID:
823A32B2F20B2E3B), S3 Extended Request ID:
/qAAKF/LgRP8Y+7sGXSslaxRy6nBYYsJD17aSrFmCTsTXv+vEaeYxBFYT63km+T7PkRkHw1UdAQ=
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1586)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1254)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1035)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:747)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:721)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:704)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:672)
at
com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:654)
at 
com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:518)
at
com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4137)
at
com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1275)
at
com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1232)
at
org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.initClient(TcpDiscoveryS3IpFinder.java:256)
at
org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.registerAddresses(TcpDiscoveryS3IpFinder.java:184)
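
A sketch of one mitigation we are considering, assuming the ipFinder still
accepts an AWS ClientConfiguration as it did in the 2.x sources (the bucket
name is a placeholder): raise the SDK's retry count so transient 503 "Slow
Down" responses get retried inside the client.

import com.amazonaws.ClientConfiguration;

import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder;

public class S3FinderWithRetries {
    public static TcpDiscoverySpi discoverySpi() {
        // Allow more automatic retries than the SDK default (3) on 5xx errors.
        ClientConfiguration clientCfg = new ClientConfiguration();
        clientCfg.setMaxErrorRetry(10);

        TcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();
        ipFinder.setBucketName("my-discovery-bucket"); // placeholder
        ipFinder.setClientConfiguration(clientCfg);

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setIpFinder(ipFinder);

        return spi;
    }
}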




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Using ignite with spark

2017-09-21 Thread Patrick Brunmayr
Hello


   - What is currently the best practice of deploying Ignite with Spark ?

   - Should the Ignite node sit on the same machine as the Spark executor ?


According to this documentation, Spark
should be given 75% of machine memory, but what is left for Ignite then ?

In general, Spark can run well with anywhere from *8 GB to hundreds of
> gigabytes* of memory per machine. In all cases, we recommend allocating
> only at most 75% of the memory for Spark; leave the rest for the operating
> system and buffer cache.



   - Don't they battle for memory ?

   - Should I give the memory to Ignite or Spark ?

   - Would Spark even benefit from Ignite if the Ignite nodes were
   hosted on other machines ?


We currently have hundreds of GB for analytics, and we want to use
Ignite to speed things up.

Thank you


Re: Fetched result use too much time

2017-09-21 Thread Lucky
Andrey Mashenkov,
Thank you very much!

1. Query parallelism: this causes a problem, namely wrong results.
   I set it to 10, and have table a with 150,000 records and table b with
12,000,000 records.
   When I query a single table, the result is correct.
   But when the SQL is like this:
   select a.id from a inner join b on a.id = b.tid
   it gets the wrong result. The result should be 11,000,000 records, but it
returns just 380,000.
   When I remove the query parallelism setting, the result is correct again.

2. I have modified this property and restarted the server. Because the data
set is so large, it needs 4 hours to load into Ignite, so I have to wait.
3. Actually, if I remove the GROUP BY clause and the HAVING condition, it
takes more time!
4 and 5: I have tried them before, but they did not work.
Thanks again.
Lucky


On 2017-09-21 21:28, Andrey Mashenkov wrote:

Lucky,

1. It makes no sense to set the query parallelism level higher than the number
of available CPUs on a node.

2. The map query uses an index on the field FASSIGCUID, of type String, and
the values appear to be 16-character strings (32 bytes).
By default, values shorter than 10 bytes can be inlined in the index, so
Ignite doesn't need to look up a data page to read the value.
You can increase the inline size to 32 via
cacheConfiguration.setSqlIndexMaxInlineSize(32) or the JVM property
-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=32.

3. Ignite doesn't know whether your data is collocated by FDATABASEDID (the
GROUP BY column) or not.
So Ignite can't apply the HAVING condition on the map phase and has to load
and merge all groups from all nodes before checking HAVING.
If it is possible to collocate data on the GROUP BY column, you can hint
Ignite by setting the query flag sqlFieldsQuery.setCollocated(true).
However, I'm not sure it will help much or that H2 will be able to make any
optimization here.

4. Also, you can force Ignite to use a different index, e.g. a group index on
FDATABASEDID and FASSIGCUID, or the same fields in a different order.

5. Sometimes Ignite changes the join order, which can cause an unexpected
slowdown. You can change the join order by changing the positions of the
tables in the query string.
To keep Ignite from reordering the joins, set the flag
sqlFieldsQuery.setEnforceJoinOrder(true).

Hope this will help you.



Re: Ignite Context : java.lang.NoSuchMethodError: org.apache.curator.framework.api.CreateBuilder.creatingParentContainersIfNeeded

2017-09-21 Thread Denis Mekhanikov
Hi!

Apparently you have a problem with the classpath. It looks like the version of
Apache Curator used at runtime differs from the one used during compilation.
It will be easier to say what is wrong if you provide your pom.xml.

Denis

On Thu, Sep 21, 2017 at 8:57, pradeepchanumolu wrote:

> Hitting the following exception when starting the Ignite context.
>
> Here is the snippet of the code. Also double checked that no other curator
> jars are present in the class path except from the one from ignite.
>
> Here are the versions I am using.
> Apache Ignite : 2.2.0
> Apache Spark: 2.1.0
>
> Can someone help with this error? Thanks !!
>
> Also I am hitting this error only when I try to launch IgniteContext. When
> I
> try to launch Ignite in client mode (Ignite.start()) I don't hit this
> error.
> In both cases I use TcpDiscoveryZookeeperIpFinder as ipFinder.
>
> 17/09/20 22:47:08 ERROR internal.IgniteKernal: Got exception while starting
> (will rollback startup routine).
> java.lang.NoSuchMethodError:
>
> org.apache.curator.framework.api.CreateBuilder.creatingParentContainersIfNeeded()Lorg/apache/curator/framework/api/ProtectACLCreateModePathAndBytesable;
> at
>
> org.apache.curator.x.discovery.details.ServiceDiscoveryImpl.internalRegisterService(ServiceDiscoveryImpl.java:222)
> at
>
> org.apache.curator.x.discovery.details.ServiceDiscoveryImpl.registerService(ServiceDiscoveryImpl.java:188)
> at
>
> org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder.registerAddresses(TcpDiscoveryZookeeperIpFinder.java:237)
> at
>
> org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinderAdapter.initializeLocalAddresses(TcpDiscoveryIpFinderAdapter.java:61)
> at
>
> org.apache.ignite.spi.discovery.tcp.TcpDiscoveryImpl.registerLocalNodeAddress(TcpDiscoveryImpl.java:317)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:341)
> at
>
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1834)
> at
>
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
> at
>
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:842)
> at
>
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1786)
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:978)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1896)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1648)
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1076)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:596)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:536)
> at org.apache.ignite.Ignition.getOrStart(Ignition.java:414)
> at
> org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:143)
> at
>
> org.apache.ignite.spark.IgniteContext$$anonfun$1.apply(IgniteContext.scala:54)
> at
>
> org.apache.ignite.spark.IgniteContext$$anonfun$1.apply(IgniteContext.scala:54)
> at
>
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:925)
> at
>
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:925)
> at
>
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
> at
>
> org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1944)
> at
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
> at org.apache.spark.scheduler.Task.run(Task.scala:99)
> at
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: INSERT into SELECT from Ignite 1.9 or 2.0

2017-09-21 Thread Dmitriy Setrakyan
To add to Andrey's example, here is how you would use IgniteAtomicSequence
to make IDs unique across the whole distributed cluster:

public static class CustomSQLFunctions {
    @QuerySqlFunction
    public static long nextId(String seqName, long initVal) {
        // Use the passed-in sequence name and initial value (the original
        // snippet hard-coded "idGen" and 0, ignoring both parameters).
        return Ignition.ignite().atomicSequence(seqName, initVal, true).incrementAndGet();
    }
}


On Thu, Sep 21, 2017 at 5:37 AM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Hi,
>
> As a workaround you can implement custom function [1] for unique number
> generation.
>
> 1.You need to create a class with static functions annotated with
> @QuerySqlFunction.
>
> E.g. for single node grid you can use some AtomicLong static field.
>
>
> public class MyFunctions {
>
> static AtomicLong seq = new AtomicLong();
>
>
> @QuerySqlFunction
> public static long nextID() {
> return seq.getAndIncrement();
> }
> }
>
>
> This class should be added to classpath on all nodes.
>
> 2.Register class with functions.
>
> cacheConfiguration.setSqlFunctionClasses(MyFunctions.class);
>
>
> 3. For multi-node grid you use IgniteAtomicSequence instead and
> initialize static variable on grid start, e.g. manually or via
> LifecycleBean [2].
>
> 4. Now you can run query like "INSERT ... (ID, ...) SELECT nextID(), ..."
>
> [1] https://apacheignite.readme.io/docs/miscellaneous-
> features#custom-sql-functions
> [2] https://apacheignite.readme.io/docs/ignite-life-
> cycle#section-lifecyclebean
>
> On Mon, Sep 18, 2017 at 4:17 PM, Alexander Paschenko <
> alexander.a.pasche...@gmail.com> wrote:
>
>> Hello,
>>
>> Andrey, I believe you're wrong. INSERT from SELECT should work. AUTO
>> INCREMENT columns indeed are not supported for now though, it's true.
>>
>> - Alex
>>
>> 2017-09-18 16:09 GMT+03:00 Andrey Mashenkov :
>> > Hi,
>> >
>> > Auto-increment fields are not supported yet. Here is a ticket for this
>> [1]
>> > and you can track it's state.
>> > Moreover, underlying H2 doesn't support SELECT with JOINs nested into
>> > INSERT\UPDATE query.
>> >
>> > [1] https://issues.apache.org/jira/browse/IGNITE-5625
>> >
>> > On Mon, Sep 18, 2017 at 12:31 PM, acet 
>> wrote:
>> >>
>> >> Hello,
>> >> I would like to insert the result of a select query into a cache in
>> >> ignite.
>> >> Something like:
>> >>
>> >> INSERT INTO "new_cache_name".NewCacheDataType(ID, CUSTOMERID,
>> PRODUCTNAME)
>> >> (SELECT {?}, c.id, p.product_name
>> >> FROM "customers".CUSTOMER as c
>> >> JOIN "products".PRODUCT as p
>> >> ON c.id = p.customer_id)
>> >>
>> >> in the place of the {?} i would like to put in something similar to
>> >> AtomicSequence, however seeing as this will be work done without using
>> the
>> >> client I cannot tell how this is possible.
>> >> Can someone advise if this can be done, and if so, how?
>> >>
>> >> Thanks.
>> >>
>> >>
>> >>
>> >> --
>> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>> >
>> >
>> >
>> >
>> > --
>> > Best regards,
>> > Andrey V. Mashenkov
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: Compute Grid Request Tracing

2017-09-21 Thread Denis Mekhanikov
Hi Chris!

Take a look at the documentation of the LoggerResource annotation:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/resources/LoggerResource.html
You can annotate a field of a ComputeTask, ComputeJob or some other classes
and use the injected logger during execution, as in the sketch below.
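
A minimal sketch of the injection (the job class and message are illustrative):

import org.apache.ignite.IgniteLogger;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.LoggerResource;

public class TracedJob implements IgniteRunnable {
    /** Ignite injects the grid logger before the job runs on the remote node. */
    @LoggerResource
    private IgniteLogger log;

    @Override public void run() {
        // There is no built-in MDC propagation, so any per-request trace
        // marker would have to travel as job state, e.g. a constructor field.
        log.info("Job started on remote node.");
    }
}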

Does it fit your requirements?

Denis

On Wed, Sep 20, 2017 at 21:11, Chris Berry wrote:

> Hi,
>
> We wish to implement request tracing in our Ignite Compute Grid.
>
> Typically we handle all of this at the Jersey/Servlet level.
> Where we pass a Header to the webapp, causing a Filter to then write a flag
> into the Logger's MDC,
> which, in turn, we use to tell Logback to enable TRACE logging on that
> particular Thread.
>
> Done using Jersey's ContainerRequestFilter and ContainerResponseFilter.
> And Logback's  MDCFilter.
>
> So before I start building something on top of our ComputeTask and
> ComputeJob,
> is there any built-in mechanism for request tracing in Ignite??
> (there is nothing I could find)
>
> And is there any Filter mechanism that fits this sort of concept?? So that
> I
> can do this out-of-band.
> (what I've found is not really a fit)
>
> Thanks,
> -- Chris
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Fetched result use too much time

2017-09-21 Thread Andrey Mashenkov
Lucky,


1. It makes no sense to set the query parallelism level higher than the number
of available CPUs on a node.

2. The map query uses an index on the field FASSIGCUID, of type String, and
the values appear to be 16-character strings (32 bytes).
By default, values shorter than 10 bytes can be inlined in the index, so
Ignite doesn't need to look up a data page to read the value.
You can increase the inline size to 32 via
cacheConfiguration.setSqlIndexMaxInlineSize(32) or the JVM property
-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=32.

3. Ignite doesn't know whether your data is collocated by FDATABASEDID (the
GROUP BY column) or not.
So Ignite can't apply the HAVING condition on the map phase and has to load
and merge all groups from all nodes before checking HAVING.
If it is possible to collocate data on the GROUP BY column, you can hint
Ignite by setting the query flag sqlFieldsQuery.setCollocated(true).
However, I'm not sure it will help much or that H2 will be able to make any
optimization here.

4. Also, you can force Ignite to use a different index, e.g. a group index on
FDATABASEDID and FASSIGCUID, or the same fields in a different order.

5. Sometimes Ignite changes the join order, which can cause an unexpected
slowdown. You can change the join order by changing the positions of the
tables in the query string.
To keep Ignite from reordering the joins, set the flag
sqlFieldsQuery.setEnforceJoinOrder(true).
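
Pulling these knobs together, a compact sketch (API names as in Ignite 2.x;
the cache name is a placeholder):

import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class QueryTuning {
    public static CacheConfiguration<Object, Object> cacheConfig() {
        CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("myCache");

        // (1) Keep parallelism within the number of CPU cores per node.
        ccfg.setQueryParallelism(Runtime.getRuntime().availableProcessors());

        // (2) Inline up to 32 bytes of indexed values, enough for 16-char strings.
        ccfg.setSqlIndexMaxInlineSize(32);

        return ccfg;
    }

    public static SqlFieldsQuery query(String sql) {
        return new SqlFieldsQuery(sql)
            .setCollocated(true)        // (3) only if data is collocated on the GROUP BY key
            .setEnforceJoinOrder(true); // (5) keep the join order as written
    }
}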


Hope this will help you.

On Tue, Sep 19, 2017 at 5:07 AM, Lucky  wrote:

>
> Please see the attachment.
> I have set query parallelism to 30; it took 42 seconds.
> But that is not enough.
> I expected it to take less than 3 seconds.
>
> Then, I have 3 nodes.
>
> As for the number 3589: we need to check how many times each ID is used in
> the conditions. Only records whose usage count equals the count in the IN
> condition are the records we need. This is what the business scenario
> requires; I can't change it.
>
> Thanks for your suggestion.
> Lucky
>
>
> On 2017-09-18 21:55, Vladimir Ozerov wrote:
>
> Hi Lucky,
>
> Could you please share your data model and node/cache configuration? I want
> to make sure that proper indexes are set. I will be able to advise
> something then. As a quick suggestion, you may try to increase query
> parallelism on your "databaseDAssignCache". Please try setting it to the
> number of cores on your server nodes. The relevant property is
> CacheConfiguration.queryParallelism. Btw, how many nodes do you have?
>
> Also, I am struggling to understand the number "3589". Why does this number
> appear both in the ">= 3589" condition and as the number of parameters
> inside the "IN" clause?
>
> Vladimir.
>
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite/yardstick benchmarking from within Docker Image

2017-09-21 Thread Denis Mekhanikov
You can change the Docker entry point so that the Ignite instance won't start
by default.
You can do something like this:
  docker run --entrypoint=/bin/bash -it apacheignite/ignite

Denis



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Client Mode and client to client communication

2017-09-21 Thread ilya.kasnacheev
Hello John!

Why don't you promote your clients to servers?

In Ignite, it is possible for only your dedicated servers to contain cache
data, while the other servers participate in the cluster without storing
data. You can set up custom cluster groups / server roles for that. For every
cache you can specify the nodes that the cache will be started on by setting
the nodeFilter property on the cache configuration (see the sketch below).

Please refer to https://apacheignite.readme.io/docs/cluster-groups and
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/CacheConfiguration.html#getNodeFilter()
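
A minimal sketch of such a filter; the "ROLE" attribute name and its "data"
value are illustrative and would be set on the dedicated servers via
IgniteConfiguration.setUserAttributes(...):

import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgnitePredicate;

public class DataNodeFilter implements IgnitePredicate<ClusterNode> {
    /** Keep cache data only on nodes that declare the "data" role. */
    @Override public boolean apply(ClusterNode node) {
        return "data".equals(node.attribute("ROLE"));
    }

    /** Example cache configuration using the filter. */
    public static CacheConfiguration<Integer, String> cacheConfig() {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

        ccfg.setNodeFilter(new DataNodeFilter());

        return ccfg;
    }
}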

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: INSERT into SELECT from Ignite 1.9 or 2.0

2017-09-21 Thread Andrey Mashenkov
Hi,

As a workaround you can implement custom function [1] for unique number
generation.

1. You need to create a class with static functions annotated with
@QuerySqlFunction.

E.g. for single node grid you can use some AtomicLong static field.


public class MyFunctions {

static AtomicLong seq = new AtomicLong();


@QuerySqlFunction
public static long nextID() {
return seq.getAndIncrement();
}
}


This class should be added to classpath on all nodes.

2. Register the class with the functions:

cacheConfiguration.setSqlFunctionClasses(MyFunctions.class);


3. For a multi-node grid, use IgniteAtomicSequence instead and initialize the
static variable on grid start, e.g. manually or via a LifecycleBean [2]; see
the sketch below.
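
A sketch of the LifecycleBean approach, assuming the SQL function class reads
the sequence from a shared static field (all names are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicSequence;
import org.apache.ignite.lifecycle.LifecycleBean;
import org.apache.ignite.lifecycle.LifecycleEventType;
import org.apache.ignite.resources.IgniteInstanceResource;

public class SeqInitBean implements LifecycleBean {
    /** Injected by Ignite before lifecycle callbacks are invoked. */
    @IgniteInstanceResource
    private Ignite ignite;

    /** Cluster-wide sequence shared with the SQL function class. */
    public static volatile IgniteAtomicSequence seq;

    @Override public void onLifecycleEvent(LifecycleEventType evt) {
        if (evt == LifecycleEventType.AFTER_NODE_START)
            // Creates the sequence on the first node, gets it on the others.
            seq = ignite.atomicSequence("idGen", 0, true);
    }
}

The bean would be registered via igniteConfiguration.setLifecycleBeans(new
SeqInitBean()), and nextID() would then return SeqInitBean.seq.incrementAndGet().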

4. Now you can run query like "INSERT ... (ID, ...) SELECT nextID(), ..."

[1]
https://apacheignite.readme.io/docs/miscellaneous-features#custom-sql-functions
[2]
https://apacheignite.readme.io/docs/ignite-life-cycle#section-lifecyclebean

On Mon, Sep 18, 2017 at 4:17 PM, Alexander Paschenko <
alexander.a.pasche...@gmail.com> wrote:

> Hello,
>
> Andrey, I believe you're wrong. INSERT from SELECT should work. AUTO
> INCREMENT columns indeed are not supported for now though, it's true.
>
> - Alex
>
> 2017-09-18 16:09 GMT+03:00 Andrey Mashenkov :
> > Hi,
> >
> > Auto-increment fields are not supported yet. Here is a ticket for this
> [1]
> > and you can track it's state.
> > Moreover, underlying H2 doesn't support SELECT with JOINs nested into
> > INSERT\UPDATE query.
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-5625
> >
> > On Mon, Sep 18, 2017 at 12:31 PM, acet 
> wrote:
> >>
> >> Hello,
> >> I would like to insert the result of a select query into a cache in
> >> ignite.
> >> Something like:
> >>
> >> INSERT INTO "new_cache_name".NewCacheDataType(ID, CUSTOMERID,
> PRODUCTNAME)
> >> (SELECT {?}, c.id, p.product_name
> >> FROM "customers".CUSTOMER as c
> >> JOIN "products".PRODUCT as p
> >> ON c.id = p.customer_id)
> >>
> >> in the place of the {?} i would like to put in something similar to
> >> AtomicSequence, however seeing as this will be work done without using
> the
> >> client I cannot tell how this is possible.
> >> Can someone advise if this can be done, and if so, how?
> >>
> >> Thanks.
> >>
> >>
> >>
> >> --
> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >
> >
> >
> >
> > --
> > Best regards,
> > Andrey V. Mashenkov
>



-- 
Best regards,
Andrey V. Mashenkov


Re: INSERT into SELECT from Ignite 1.9 or 2.0

2017-09-21 Thread Alexander Paschenko
Hi,

Here's an example of how to implement this using sequences and SQL functions.

Please note how I refer to the node name here - most likely you'll have to
tweak this.

- Alex

2017-09-18 16:17 GMT+03:00 Alexander Paschenko
:
> Hello,
>
> Andrey, I believe you're wrong. INSERT from SELECT should work. AUTO
> INCREMENT columns indeed are not supported for now though, it's true.
>
> - Alex
>
> 2017-09-18 16:09 GMT+03:00 Andrey Mashenkov :
>> Hi,
>>
>> Auto-increment fields are not supported yet. Here is a ticket for this [1]
>> and you can track it's state.
>> Moreover, underlying H2 doesn't support SELECT with JOINs nested into
>> INSERT\UPDATE query.
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-5625
>>
>> On Mon, Sep 18, 2017 at 12:31 PM, acet  wrote:
>>>
>>> Hello,
>>> I would like to insert the result of a select query into a cache in
>>> ignite.
>>> Something like:
>>>
>>> INSERT INTO "new_cache_name".NewCacheDataType(ID, CUSTOMERID, PRODUCTNAME)
>>> (SELECT {?}, c.id, p.product_name
>>> FROM "customers".CUSTOMER as c
>>> JOIN "products".PRODUCT as p
>>> ON c.id = p.customer_id)
>>>
>>> in the place of the {?} i would like to put in something similar to
>>> AtomicSequence, however seeing as this will be work done without using the
>>> client I cannot tell how this is possible.
>>> Can someone advise if this can be done, and if so, how?
>>>
>>> Thanks.
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *  http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.ignite.internal.processors.cache;

import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.Query;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.SqlQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.cache.query.annotations.QuerySqlFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.internal.IgniteInternalFuture;
import org.apache.ignite.internal.processors.query.GridQueryProcessor;
import org.apache.ignite.internal.processors.query.GridRunningQueryInfo;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
import org.apache.ignite.testframework.junits.common.GridCommonAbstractTest;

/**
 *
 */
public class SequenceTest extends GridCommonAbstractTest {
/** */
private static final TcpDiscoveryIpFinder ipFinder = new TcpDiscoveryVmIpFinder(true);

/**
 * Grid name to refer in funct
 */
private static String gridName = null;

/** {@inheritDoc} */
@Override protected IgniteConfiguration getConfiguration(String igniteInstanceName) throws Exception {
IgniteConfiguration cfg = super.getConfiguration(igniteInstanceName);

if ("client".equals(cfg.getIgniteInstanceName()))
cfg.setClientMode(true);

((TcpDiscoverySpi)cfg.getDiscoverySpi()).setIpFinder(ipFinder);

CacheConfiguration cc = new CacheConfiguration<>(DEFAULT_CACHE_NAME);

cc.setCopyOnRead(true);
cc.setIndexedTypes(Integer.class, Integer.class);
cc.setSqlFunctionClasses(TestSQLFunctions.class);

cfg.setCacheConfiguration(cc);

return cfg;
}

/** {@inheritDoc} */
@Override protected void beforeTestsStarted() throws Exception {
super.beforeTestsStarted();

startGridsMultiThreaded(1);

gridName = getTestIgniteInstanceName(0);
}

/** {@inheritDoc} */
@Override protected void afterTestsStopped() throws Exception {
super.afterTestsStopped();

stopAllGrids();
}

/**
 * Test collecting info about running.
 *
 * @throws Exception If failed.
 */
publi

Ignite Server start failing with the exception with Ignite 2.1.0

2017-09-21 Thread KR Kumar
I have a five-node cluster, and when I start the server, I get the following
error randomly on some servers:


[2017-09-21
07:46:18,635][ERROR][exchange-worker-#34%null%][GridDhtPartitionsExchangeFuture]
Failed to reinitialize local partitions (preloading will be stopped):
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=1,
minorTopVer=1], nodeId=13721454, evt=DISCOVERY_CUSTOM_EVT]
java.lang.ArrayIndexOutOfBoundsException: -1
at
org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forVersion(IOVersions.java:82)
at
org.apache.ignite.internal.processors.cache.persistence.tree.io.IOVersions.forPage(IOVersions.java:92)
at
org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.init(PagesList.java:174)
at
org.apache.ignite.internal.processors.cache.persistence.freelist.FreeListImpl.(FreeListImpl.java:357)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore$1.(GridCacheOffheapManager.java:893)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.init0(GridCacheOffheapManager.java:885)
at
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.updateCounter(GridCacheOffheapManager.java:1130)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.updateCounter(GridDhtLocalPartition.java:882)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.casState(GridDhtLocalPartition.java:564)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.own(GridDhtLocalPartition.java:594)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.initPartitions0(GridDhtPartitionTopologyImpl.java:337)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.beforeExchange(GridDhtPartitionTopologyImpl.java:507)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:991)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:632)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1901)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
[2017-09-21 07:46:18,637][INFO
][exchange-worker-#34%null%][GridDhtPartitionsExchangeFuture] Snapshot
initialization completed [topVer=AffinityTopologyVersion [topVer=1,
minorTopVer=1], time=0ms]
[2017-09-21
07:46:18,650][ERROR][exchange-worker-#34%null%][GridCachePartitionExchangeManager]
Failed to wait for completion of partition map exchange (preloading will not
start): GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false,
reassign=false, discoEvt=DiscoveryCustomEvent [customMsg=null,
affTopVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
super=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=13721454-7d18-4c48-ba93-36417dbba34b, addrs=[0:0:0:0:0:0:0:1%lo,
127.0.0.1, 172.16.9.173],
sockAddrs=[ri-stress-grid-manager.altidev.net/172.16.9.173:47500,
/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1505994376713, loc=true,
ver=2.2.0#20170915-sha1:5747ce6b, isClient=false], topVer=1,
nodeId8=13721454, msg=null, type=DISCOVERY_CUSTOM_EVT,
tstamp=1505994240293]], crd=TcpDiscoveryNode
[id=13721454-7d18-4c48-ba93-36417dbba34b, addrs=[0:0:0:0:0:0:0:1%lo,
127.0.0.1, 172.16.9.173],
sockAddrs=[ri-stress-grid-manager.altidev.net/172.16.9.173:47500,
/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1505994376713, loc=true,
ver=2.2.0#20170915-sha1:5747ce6b, isClient=false],
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=1,
minorTopVer=1], nodeId=13721454, evt=DISCOVERY_CUSTOM_EVT], added=true,
initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE, res=false,
hash=1691446648], init=false, lastVer=null,
partReleaseFut=GridCompoundFuture [rdc=null, initFlag=1, lsnrCalls=4,
done=true, cancelled=false, err=null, futs=[true, true, true, true]],
exchActions=null, affChangeMsg=null, skipPreload=false,
clientOnlyExchange=false, initTs=1505994240303, centralizedAff=false,
changeGlobalStateE=null, forcedRebFut=null, done=true, evtLatch=0,
remaining=[], super=GridFutureAdapter [ignoreInterrupts=false, state=DONE,
res=java.lang.ArrayIndexOutOfBoundsException: -1, hash=1144748699]]
class org.apache.ignite.IgniteCheckedException: -1
at 
org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7229)
at
org.apache.ignite.int

Re: exception in transaction, how to rollback all put?

2017-09-21 Thread Alexandr Kuramshin
Hi,

you are doing it wrong.

cache = ignite.getOrCreateCache("txn test cache");

creates a new atomic cache, because you configured a different cache as
transactional:

cacheCfg.setName("cacheName");

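A corrected sketch of what was probably intended: configure and fetch the
cache under the same name so the transactional settings actually apply. The
cache name comes from the snippet above; everything else is illustrative.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class TxRollbackExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Same name is used for configuration and lookup.
            CacheConfiguration<String, String> cacheCfg =
                new CacheConfiguration<>("txn test cache");

            cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

            IgniteCache<String, String> cache = ignite.getOrCreateCache(cacheCfg);

            try (Transaction tx = ignite.transactions().txStart()) {
                cache.put("test1", "1");
                cache.put("test2", "2");

                // If an exception is thrown before commit(), closing the
                // transaction without committing rolls both puts back.
                tx.commit();
            }
        }
    }
}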

2017-09-21 8:44 GMT+07:00 bingili :

> The test.print() before test.run() will print out three nulls, which is
> correct.
> However, after test.run(), the print in the finally block prints out: 1, 2,
> null, which means cache.put("test1", "1") and cache.put("test2", "2") were
> successfully written into the cache, even though the exception happened. I
> thought tx.rollback() should revert the values of "test1" and
> "test2". I would expect it to print out 3 nulls after tx.rollback().
>
>
> null
> null
> null
> 
> java.lang.NullPointerException
> at test.IgniteTest.run(IgniteTest.java:48)
> at test.IgniteTest.main(IgniteTest.java:71)
> tx.rollback
> 1
> 2
> null
>
> public void print() {
> System.out.println(cache.get("test1"));
> System.out.println(cache.get("test2"));
> System.out.println(cache.get("test3"));
> }
>
> public static void main(String[] args) {
> IgniteTest test = new IgniteTest();
> test.print();
> System.out.println("");
> try {
> test.run();
> } catch (Exception ex) {
>
> } finally {
> test.print();
> }
> }
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Thanks,
Alexandr Kuramshin