IgniteUtils NoClassDefFoundError

2018-09-10 Thread Jack Lever
Hi All,

I'm getting an error on application startup which has me stumped. I've
imported ignite-core, indexing, slf4j and spring-data via maven, version
2.6.0. I'm using Ignite to do some basic cache operations across nodes. However,
when I start it, it runs until the static IP discovery configuration or the
Ignition.start(config) call (depending on what I have in the setup) and then
stops with:

Failed to instantiate [i.o.c.IgniteManager]: Constructor threw exception;
nested exception is java.lang.NoClassDefFoundError: Could not initialize
class org.apache.ignite.internal.util.IgniteUtils

I can see the class inside IntelliJ, in the jar file under external libraries.
I can use the class in the code, but when I run it, the class appears to be missing...

How do I go about fixing this or diagnosing it further?
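For reference, the startup path is essentially the following (a minimal sketch;
the discovery addresses are illustrative, not my actual config):

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47509"));

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setIpFinder(ipFinder);

IgniteConfiguration config = new IgniteConfiguration();
config.setDiscoverySpi(discoSpi);

// The NoClassDefFoundError for IgniteUtils surfaces around this call.
Ignite ignite = Ignition.start(config);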

Thanks,
Jack.


RE: Ignite Thin Client Continuous Query

2018-09-10 Thread vkulichenko
Gordon,

Yes, generally we do recommend using the thin client for such applications.
However, that doesn't mean you can't use a client node in case it's really
needed, although it might require some additional tuning.

Would you mind telling me whether you have any other technology in mind? I highly
doubt that you will find anything that provides functionality similar to
CQ in Ignite, especially with the same delivery guarantees, while being
based on a lightweight client. I believe you will either not succeed in
finding such an alternative, or your use case does not require continuous
queries in the first place. Can you give some more details on what you're
trying to achieve? I might be able to suggest another approach then.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Ignite Thin Client Continuous Query

2018-09-10 Thread Gordon Reid (Nine Mile)
Thanks Val. We are currently using a client node in our desktop gui, but it 
performs very poorly when the latency to our server nodes is high. We also have 
other problems, such as when new client nodes join, the whole cluster will 
pause, which is unacceptable for an end user application. I raised a question 
about this a few weeks ago, and we were advised that client node is not 
intended for use in end-user applications. Any sort of financial desktop 
application needs streaming data and event-based updates. So it seems like 
Ignite is of no use for this type of application. And now I wonder whether there is 
much value for us in keeping Ignite on the server side of our application, 
since we need a totally different technology to enable streaming data and 
event-based notifications on our client.

-Original Message-
From: vkulichenko 
Sent: Tuesday, September 11, 2018 7:43 AM
To: user@ignite.apache.org
Subject: Re: Ignite Thin Client Continuous Query

Gordon,

Ignite thin client uses a request-response model, which is not really suitable 
for functionality like this. I would never say never, but I think it's very 
unlikely that the thin client will get any feature that implies pushing updates from 
server to client (this includes near caches, any type of listeners including 
continuous queries, etc.). If you have such a requirement, you should use a 
client node instead.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


This email and any attachments are proprietary & confidential and are intended 
solely for the use of the individuals to whom it is addressed. Any views or 
opinions expressed are solely for those of the author and do not necessarily 
reflect those of Nine Mile Financial Pty. Limited. If you have received this 
email in error, please let us know immediately by reply email and delete from 
your system. Nine Mile Financial Pty. Limited. ABN: 346 1349 0252


Re: Error installing Ignite on K8s

2018-09-10 Thread vkulichenko
Does it work without specifying sessionAffinity?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Thin Client Continuous Query

2018-09-10 Thread vkulichenko
Gordon,

Ignite thin client uses a request-response model, which is not really suitable
for functionality like this. I would never say never, but I think it's very
unlikely that the thin client will get any feature that implies pushing updates
from server to client (this includes near caches, any type of listeners
including continuous queries, etc.). If you have such a requirement, you
should use a client node instead.
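For reference, a minimal sketch of a continuous query registered from a (thick)
client node, which is the approach suggested above (cache name and types are
illustrative):

Ignition.setClientMode(true);

try (Ignite client = Ignition.start()) {
    IgniteCache<Integer, String> cache = client.getOrCreateCache("trades");

    ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

    // Called on the client whenever matching entries are updated on the servers.
    qry.setLocalListener(evts -> {
        for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
            System.out.println("Update: " + e.getKey() + " -> " + e.getValue());
    });

    try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
        // Updates keep flowing to the listener for as long as the cursor stays open.
    }
}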

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Fulltext matching

2018-09-10 Thread Courtney Robinson
Hi,
Thanks for the response.
I went ahead and implemented a custom indexing SPI. Works like a charm. As
long as Ignite doesn't drop support for the indexing SPI interface this is
exactly what we need.
I'm happy to create Jira issues and extract this into something more
generic for upstream if it'll be accepted.
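In case it's useful to others, the rough shape of what we plugged in (a skeleton
sketch, not our actual implementation; the method signatures are those of the
IndexingSpi extension point):

import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import javax.cache.Cache;
import org.apache.ignite.spi.IgniteSpiAdapter;
import org.apache.ignite.spi.IgniteSpiException;
import org.apache.ignite.spi.indexing.IndexingQueryFilter;
import org.apache.ignite.spi.indexing.IndexingSpi;

public class CustomLuceneIndexingSpi extends IgniteSpiAdapter implements IndexingSpi {
    @Override public void spiStart(String igniteInstanceName) throws IgniteSpiException {
        // Open the Lucene directory, writer and analyzer here.
    }

    @Override public void spiStop() throws IgniteSpiException {
        // Close the Lucene resources here.
    }

    @Override public void store(String cacheName, Object key, Object val, long expirationTime) {
        // Convert the cache value into a Lucene document and index it.
    }

    @Override public void remove(String cacheName, Object key) {
        // Delete the Lucene document that corresponds to the key.
    }

    @Override public Iterator<Cache.Entry<?, ?>> query(String cacheName, Collection<Object> params,
        IndexingQueryFilter filters) {
        // Run the Lucene query described by 'params' and adapt the hits to cache entries.
        return Collections.emptyIterator();
    }
}

The SPI is registered with IgniteConfiguration.setIndexingSpi(...), and as far as
I can tell it is queried through SpiQuery.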

Regards,
Courtney Robinson
CTO, Hypi
Tel: +4402032870961 (GMT+0) 


https://hypi.io


On Thu, Sep 6, 2018 at 4:09 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Unfortunately, fulltext doesn't seem to have much traction, so I recommend
> doing investigations on your side, possibly creating JIRA issues in the
> process.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, 3 Sep 2018 at 22:34, Courtney Robinson  >:
>
>> Hi,
>>
>> We've got Ignite in production and decided to start using some fulltext
>> matching as well.
>> I've investigated and can't figure out why my queries are not matching.
>>
>> I construct a query entity e.g new QueryEntity(keyClass, valueClass) and
>> in debug I can see it generates a list of fields
>> e.g. a, b, c.a, c.b
>> I then expected to be able to match on those fields that are marked as
>> indexed. Everything is annotation driven. The appropriate fields have been
>> annotated and appear to be detected as such
>> when I inspect what gets put into the QueryEntityDescriptor. i.e. all
>> expected indices and indexed fields are present.
>>
>> In GridLuceneIndex I see that the generated Lucene document has fields a, b
>> (c.a and c.b are not included). Now a couple of questions arise:
>>
>> 1. Is there a way to get Ignite to index the nested fields as well so
>> that c.a and c.b end up in the doc?
>>
>> 2. If you use a composite object as a key, its fields are extracted into
>> the top level so if you have Key.a and Value.a you cannot index both since
>> Key.a becomes a which collides with Value.a - can this be changed, are
>> there any known reasons why it couldn't be (i.e. I'm happy to send a PR
>> doing so - but I suspect the answer to this is linked to the answer to the
>> first question)
>>
>> 3. The docs simply say you can use Lucene syntax. I presume this means the
>> syntax that appears in
>> https://lucene.apache.org/core/2_9_4/queryparsersyntax.html is all valid
>> - checking the code, that appears to be the case, as it uses
>> a MultiFieldQueryParser in GridLuceneIndex. However, when I try to run a
>> query such as a: - none of the indexed documents match. In debug
>> mode I've enabled parser.setAllowLeadingWildcard(true); and if I do a
>> simple searcher.search * I get back the list of expected documents.
>>
>> What's even more odd is that I tried querying each of the 6 indexed fields as
>> found in idxdFields in GridLuceneIndex, and only 1 of them matches. For the
>> other fields, the values are typed exactly, yet neither exact values nor
>> wildcards or other free-text forms match.
>>
>> 4. I couldn't see a way to provide a custom GridLuceneIndex, I found the
>> two cases where it's constructed in the code base and doesn't look like I
>> can inject instances. Is it ok to construct and use a custom
>> GridLuceneDirectory/IndexWriter/Searcher and so on in the same way
>> GridLuceneIndex does it so I can do a custom IndexingSpi to change how
>> indexing happens?
>> There are a number of things I'd like to customise, and from looking at
>> the current impl. these things aren't injectable; I guess it's not
>> considered a prime use case.
>>
>> Yeah, the analyzer and a number of things would be handy to change.
>> Ideally also want to customise how a field is indexed e.g. to be able to do
>> term matches with lucene queries
>>
>> Looking at this impl as well it passes Integer.MAX_VALUE and pulls back
>> all matches. That'll surely kill our nodes for some of the use cases we're
>> considering.
>> I'd also like to implement paging, the searcher API has a nice option to
>> pass through a last doc it can continue from to potentially implement
>> something like deep-paging.
>>
>> 5. If I were to do a custom IndexingSpi to make all of this happen, how
>> do I get additional parameters through so that I could have paging params
>> passed?
>>
>> Ideally I could customise the indexing, searching and paging through
>> standard Ignite means but I can't find any means of doing that in the
>> current code and short of doing a custom IndexingSpi I think I've gone as
>> far as I can debugging and could do with a few pointers of how to go about
>> this.
>>
>> FYI, SQL isn't a great option for this part of the product. We're
>> generating and compiling Java classes at runtime, and generating SQL to do
>> the queries is an order of magnitude more work than indexing the relatively
>> few fields we need and then searching. Off the bat, the paging would also be
>> an issue, as there can be several million matches to a query; we can't have
>> Ignite pulling all of those into memory.
>>
>> Thanks in advance
>>
>> Courtney
>>
>


Error with Spark + IGFS (HDFS cache) through Hive

2018-09-10 Thread Maximiliano Patricio Méndez
Hi,

I'm getting a LinkageError in Spark when trying to read a Hive table that has
its external location in IGFS:
java.lang.LinkageError: loader constraint violation: when resolving field
"LOG" the class loader (instance of
org/apache/spark/sql/hive/client/IsolatedClientLoader$$anon$1) of the
referring class, org/apache/hadoop/fs/FileSystem, and the class loader
(instance of sun/misc/Launcher$AppClassLoader) for the field's resolved
type, org/apache/commons/logging/Log, have different Class objects for that
type
  at
org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem.initialize(IgniteHadoopFileSystem.java:255)

From what I can see, the exception comes when Spark tries to read a table
from Hive through IGFS, passing the "LOG" variable of the
FileSystem around to the HadoopIgfsWrapper (and beyond...).

The steps I followed to reach this error were:

   - Create a file /tmp/test.parquet in HDFS
   - Create an external table test.test in Hive with location = igfs://igfs@/tmp/test.parquet
   - Start spark-shell with the command:
     ./bin/spark-shell --jars $IGNITE_HOME/ignite-core-2.6.0.jar,$IGNITE_HOME/ignite-hadoop/ignite-hadoop-2.6.0.jar,$IGNITE_HOME/ignite-shmem-1.0.0.jar,$IGNITE_HOME/ignite-spark-2.6.0.jar
   - Read the table through spark.sql:
     spark.sql("SELECT * FROM test.test")

Is there a way to avoid this issue? Has anyone used Ignite
through Hive as an HDFS cache in a similar way?


Re: How to create tables with JDBC, read with ODBC?

2018-09-10 Thread limabean
Thank you very much for the thorough discussion/explanation and the pending fix
for public schemas. Much appreciated!
 
As an aside, I also contacted QLIK to see if they will fix their product
behavior, which does not seem correct to me either.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Thin Client Continuous Query

2018-09-10 Thread akurbanov
Hello,

This feature has not been planned yet as far as I know. I didn't manage to
find any JIRA tickets for it, but I think a discussion on this feature could be
started on the dev list.

Regards





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Error installing Ignite on K8s

2018-09-10 Thread Jeff Simon
Hi, I'm trying to install Ignite on K8s. The K8s cluster is running on 
EC2 in AWS (not using EKS).

Following the guide at https://apacheignite.readme.io/docs/stateless-deployment

Getting the following error when trying to create the k8s service:  
"unsupported load balancer affinity: ClientIP"

What is a valid value for sessionAffinity that works with Ignite?  Is 
sessionAffinity required?

Thanks.



// Service spec (from https://apacheignite.readme.io/docs/ignite-service)
// This is supposed to be a guide for k8s, but it doesn't work.
// Could be that there's an issue with creating service on AWS?

apiVersion: v1
kind: Service
metadata:
  # The name must be equal to TcpDiscoveryKubernetesIpFinder.serviceName
  name: ignite
  # The name must be equal to TcpDiscoveryKubernetesIpFinder.namespaceName
  namespace: ignite
spec:
  type: LoadBalancer
  ports:
    - name: rest
      port: 8080
      targetPort: 8080
    - name: sql
      port: 10800
      targetPort: 10800
    - name: thinclients
      port: 10900
      targetPort: 10900
  sessionAffinity: ClientIP
  selector:
    # Must be equal to the label set for Ignite pods.
    app: ignite

This email and any files transmitted with it are confidential, proprietary and 
intended solely for the individual or entity to whom they are addressed. If you 
have received this email in error please delete it immediately.


Re: Apache Ignite Hackathon: Open Source contribution is simple

2018-09-10 Thread Dmitriy Pavlov
Denis, Konstantin, Thank you for your feedback.

After a private discussion with Ksenia, we've created a new idea of how to
name the event: 'Apache Ignite Day, Workshop: Open Source contribution is
simple'.
It can be slightly better because we don't want competition between
members, just fun and a few practical steps in the world of open source and
Apache.

I will start some additional communication in Russian.

I would strongly appreciate any additional ideas and replies from
committers who could potentially participate locally or remotely.

With best regards,
Dmitriy Pavlov

Sat, 25 Aug 2018 at 11:59, Denis Magda :

> Some of us have been doing meetups and workshops in San Francisco Bay Area
> for a while but have never tried the hackathon format. Worth trying, thanks
> for putting out the idea, Dmitriy!
>
> Let's start with Russia first and then I can assist with the hackathon
> arrangement here in the Silicon Valley.
>
> --
> Denis
>
> On Fri, Aug 24, 2018 at 3:22 PM Dmitriy Pavlov 
> wrote:
>
>> Hi Ignite Users, Developers, and Enthusiasts,
>>
>> It's natural to assume that a newcomer has little to offer the Apache
>> Ignite. However, you can be surprised at how much each newcomer can help,
>> even now.
>>
>> I would propose to run a hackathon on the next Apache Ignite meetup. In
>> parallel with talks, an attendee can pick up a simple ticket and run full
>> patch submission process during meetup and make open source contribution.
>> Ignite experts will be there and will be able to help everyone interested.
>>
>> The first place to run - Ignite meetup in Saint Petersburg, Russia,
>> because I know that several committers live there.
>>
>> - If you're a user or a contributor, would you like to join such
>> activity?
>> - If you're a committer, will you be able to come and help with review
>> and merge?
>> - Would you propose a simple ticket(s) which can be done in one hour or
>> even several minutes?
>>
>> Any feedback is very welcome.
>>
>> Sincerely,
>> Dmitriy Pavlov
>>
>


Re: Query on nested objects

2018-09-10 Thread max904
Thank you for the answer! It helped.

I discovered two functions that can help in controlling the way Ignite maps
the attributes onto table aliases.

QueryEntity.addQueryField("homeAddress.zip", "java.lang.String", "ha_zip")
would allow programmatically define the "homeAddress.zip" field in the table
and map it to "ha_zip" alias.

QueryEntity.setAliases(new HashMap(){{put("homeAddress.zip",
"ha_zip");}}) would allow creating "ha_zip" alias for the field
"homeAddress.zip" that has already been defined.

CacheConfiguration.setIndexedTypes() is what I was trying to use before, with
@QuerySqlField annotations on the object classes, and that is what did not work
because of the field clashes.
It seems like a defect in the code that the field name clashes are not
resolved or even reported.

The goal of all this is to be able to run a query on a simple model with
multiple nested objects (which could possibly clash on some attributes).
Yes, RDBMS tables are flat, but Ignite is not an RDBMS; it is a K-V store
with SQL query capabilities. Ignite permits using complex object structures
as cache values and advertises the capability to query them.

Again, thanks a lot for your help!




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-10 Thread Ilya Kasnacheev
Hello!

There's a WAL reader somewhere in the code; it could help if you have
persistence. Note that both the invocation and the output of this tool are confusing.

It would be nice if you had a reproducer which would show this behavior.
The snippet that you have posted previously isn't very clear on expected
behavior. Can you please try to devise something stand-alone on Github?

Regards,
-- 
Ilya Kasnacheev


Mon, 10 Sep 2018 at 12:04, Serg :

> Hi Ilya,
>
> Yes growing not so quick but in production we lost near 1GB every day with
> 15GB of data on each node.
> I had simplify data classes by remove annotations and this does not help.
>
> Is it possible debug off-heap memory? How I can  understand where memory
> goes?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Node keeps crashing under load

2018-09-10 Thread Ilya Kasnacheev
Hello!

I can see a lot of errors like this one:

[04:05:29,268][INFO][tcp-comm-worker-#1%Server%][ZookeeperDiscoveryImpl]
Created new communication error process future
[errNode=598e3ead-99b8-4c49-b7df-04d578dcbf5f, err=class
org.apache.ignite.IgniteCheckedException: Failed to connect to node (is
node still alive?). Make sure that each ComputeTask and cache Transaction
has a timeout set in order to prevent parties from waiting forever in case
of network issues [nodeId=598e3ead-99b8-4c49-b7df-04d578dcbf5f,
addrs=[ip-172-17-0-1.ap-south-1.compute.internal/172.17.0.1:47100,
ip-172-21-85-213.ap-south-1.compute.internal/172.21.85.213:47100,
/0:0:0:0:0:0:0:1%lo:47100, /127.0.0.1:47100]]]

I think the problem is that you have two nodes which both have the 172.17.0.1
address, but it is actually a different address on each node (totally unrelated
private networks).

Try to specify your external address (such as 172.21.85.213) with
TcpCommunicationSpi.setLocalAddress() on each node.
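A minimal sketch of what I mean (the address must be the node's externally
reachable one, so it will differ per node):

TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setLocalAddress("172.21.85.213"); // this node's external address

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setCommunicationSpi(commSpi);

Ignite ignite = Ignition.start(cfg);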

Regards,
-- 
Ilya Kasnacheev


Fri, 7 Sep 2018 at 20:01, eugene miretsky :

> Hi all,
>
> Can somebody please provide some pointers on what could be the issue or
> how to debug it? We have a fairly large Ignite use case, but cannot go
> ahead with a POC because of these crashes.
>
> Cheers,
> Eugene
>
>
>
> On Fri, Aug 31, 2018 at 11:52 AM eugene miretsky <
> eugene.miret...@gmail.com> wrote:
>
>> Also, don't want to spam the mailing list with more threads, but I get
>> the same stability issue when writing to Ignite from Spark. Logfile from
>> the crashed node (not same node as before, probably random) is attached.
>>
>>  I have also attached a gc log from another node (I have gc logging
>> enabled only on one node)
>>
>>
>> On Fri, Aug 31, 2018 at 11:23 AM eugene miretsky <
>> eugene.miret...@gmail.com> wrote:
>>
>>> Thanks Denis,
>>>
>>> Execution plan + all logs right after the carsh are attached.
>>>
>>> Cheers,
>>> Eugene
>>>  nohup.out
>>> 
>>>
>>>
>>>
>>> On Fri, Aug 31, 2018 at 1:53 AM Denis Magda  wrote:
>>>
 Eugene,

 Please share full logs from all the nodes and execution plan for the
 query. That's what the community usually needs to help with
 troubleshooting. Also, attach GC logs. Use these settings to gather them:
 https://apacheignite.readme.io/docs/jvm-and-system-tuning#section-detailed-garbage-collection-stats

 --
 Denis

 On Thu, Aug 30, 2018 at 3:19 PM eugene miretsky <
 eugene.miret...@gmail.com> wrote:

> Hello,
>
> I have a medium cluster set up for testing - 3 x r4.8xlarge EC2
> nodes. It has persistence enabled, and zero backups.
> - Full configs are attached.
> - JVM settings are: JVM_OPTS="-Xms16g -Xmx64g -server
> -XX:+AggressiveOpts -XX:MaxMetaspaceSize=256m  -XX:+AlwaysPreTouch
> -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC"
>
> The table has 145M rows and takes up about 180G of memory.
> I am testing 2 things:
> 1) Writing SQL tables from Spark
> 2) Performing large SQL queries (from the web console): for example Select
> COUNT (*) FROM (SELECT customer_id FROM MyTable where dt > '2018-05-12'
> GROUP BY customer_id having SUM(column1) > 2 AND MAX(column2) < 1)
>
> Most of the times I run the query, it fails after one of the nodes
> crashes (it has finished a few times, and then crashed the next time). I
> also have similar stability issues when writing from Spark - at some point,
> one of the nodes crashes. All I can see in the logs is
>
> [21:51:58,548][SEVERE][disco-event-worker-#101%Server%][] Critical
> system error detected. Will be handled accordingly to configured handler
> [hnd=class o.a.i.failure.StopNodeFailureHandler, failureCtx=FailureContext
> [type=SEGMENTATION, err=null]]
>
> [21:51:58,549][SEVERE][disco-event-worker-#101%Server%][FailureProcessor]
> Ignite node is in invalid state due to a critical failure.
>
> [21:51:58,549][SEVERE][node-stopper][] Stopping local node on Ignite
> failure: [failureCtx=FailureContext [type=SEGMENTATION, err=null]]
>
> [21:52:03] Ignite node stopped OK [name=Server, uptime=00:07:06.780]
>
> My questions are:
> 1) What is causing the issue?
> 2) How can I debug it better?
>
> The rate of crashes and our lack of ability to debug them is becoming
> quite a concern.
>
> Cheers,
> Eugene
>
>
>
>


Re: ClassNotFoundException when remotely calling cache.withKeepBinary().get(key)

2018-09-10 Thread smurphy
Ilya,

Apologies for the slow response..
You are right - this fixed my issue.
Thanks,

Steve



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: AFTER_NODE_START event

2018-09-10 Thread Lokesh Sharma
Great. Thank you for looking into it.

On Mon, Sep 10, 2018, 9:11 PM Maxim.Pudov  wrote:

> Hello!
> The answer is yes. It is safe to subscribe to AFTER_NODE_START.
> I've just checked the sequence of events happening during the node startup
> and org.apache.ignite.events.EventType#EVT_NODE_JOINED happens *before
> *org.apache.ignite.lifecycle.LifecycleEventType#AFTER_NODE_START.
>
> With Regards,
> Maxim Pudov.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: AFTER_NODE_START event

2018-09-10 Thread Maxim.Pudov
Hello!
The answer is yes. It is safe to subscribe to AFTER_NODE_START. 
I've just checked the sequence of events happening during the node startup
and org.apache.ignite.events.EventType#EVT_NODE_JOINED happens *before
*org.apache.ignite.lifecycle.LifecycleEventType#AFTER_NODE_START.
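If it helps, a minimal sketch of hooking into AFTER_NODE_START via a lifecycle
bean (the record-creation logic and cache name are just placeholders for your own
code):

public class EnsureRecordBean implements LifecycleBean {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override public void onLifecycleEvent(LifecycleEventType evt) {
        if (evt == LifecycleEventType.AFTER_NODE_START) {
            // At this point the node has started and joined the cluster.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("records");
            cache.putIfAbsent(1, "initial record");
        }
    }
}

// Registered on node startup:
// cfg.setLifecycleBeans(new EnsureRecordBean());
// Ignition.start(cfg);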

With Regards,
Maxim Pudov.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


AFTER_NODE_START event

2018-09-10 Thread Lokesh Sharma
I require a record to be present in the Ignite database at all times. For
this I have written a function like the following:

>
> @PostConstruct
> public void createRecordIfNotPresent() {
>     if (entityRepository.findById(1) == null) {
>         createRecord();
>     }
> }


I have added the @PostConstruct annotation so that this code is run when a node
boots up. This way I make sure that the record is present at all times.

But the problem I'm facing with the above code is that the function runs even
before the booting node has joined the Ignite cluster. So
"entityRepository.findById(1)" returns "null", and this results in a split-brain
problem.

How can I fix this problem? I'm thinking about using AFTER_NODE_START. My
question is whether this event is fired after the node has completed
joining the cluster (if one is present) or not.

Thank You


Re: AWS Cluster

2018-09-10 Thread aealexsandrov
Hello,

In case your nodes don't see each other, check the following:

1) That your IP finder configuration on every node contains the IP addresses of
every AWS node in the cluster, like this:

<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <value>178.0.0.1:47500..47501</value>
            <value>178.0.0.2:47500..47501</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>

where 178.0.0.1 and 178.0.0.2 are the addresses of the AWS machines.

2) Check that you configured the security group. This is required because you
need to open the TCP ports used for Ignite communication.

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

Example:

Inbound:

Custom TCP Rule TCP 10800 - 10900 0.0.0.0/0 client
Custom TCP Rule TCP 47500 - 47600 0.0.0.0/0 discovery
Custom TCP Rule TCP 47100 - 47200 0.0.0.0/0 communication

Outbound:

All traffic All All 0.0.0.0/0
All traffic All All ::/0

3) Check that the ports are open in your operating system. For example, on
Windows, you should add new firewall rules to open the ports (or disable
the firewall).

Once all the steps above are done and all IP addresses and ports are
correct, your C# code should work as expected.

BR,
Andrei
 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: JDK 11 support

2018-09-10 Thread ipsxd
Ok. Thx for quick answer. January 2019 sounds good.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: JDK 11 support

2018-09-10 Thread Petr Ivanov
For now it seems that JDK 11 is not available yet (EAP is not an option).
I think the best hope for full JDK 11 support is the JDK 8 maintenance drop date 
(around Jan 2019).


> On 10 Sep 2018, at 16:17, ipsxd  wrote:
> 
> For now it seems that Ignite cannot start with JDK 11. I assume there is a plan
> to migrate to 11; if so, can you specify the timeline?
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



JDK 11 support

2018-09-10 Thread ipsxd
For now it seems that Ignite cannot start with JDK 11. I assume there is a plan
to migrate to 11; if so, can you specify the timeline?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to create tables with JDBC, read with ODBC?

2018-09-10 Thread Igor Sapego
I've filed a ticket: [1]

[1] - https://issues.apache.org/jira/browse/IGNITE-9515

Best Regards,
Igor


On Mon, Sep 10, 2018 at 2:56 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Yes, I'm pretty confident that PUBLIC should work without quotes. I'm even
> not sure that it would work even with ordinary double quotes set.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, 10 Sep 2018 at 14:28, Igor Sapego :
>
>> Ilya,
>>
>> If we won't bother with quotes, then many other tools will stop working,
>> as cache-names-schemas MUST be quoted, but they won't be. By the way,
>> even QLIK will not work with any other schema, except for PUBLIC.
>>
>> So for now, what I propose is not apply quotes to PUBLIC schema. This is
>> the only fix I can see here now.
>>
>> Best Regards,
>> Igor
>>
>>
>> On Fri, Sep 7, 2018 at 4:45 PM Ilya Kasnacheev 
>> wrote:
>>
>>> Maybe we shouldn't bother to quote schemas, assuming that it's the duty
>>> of client?
>>>
>>> Unfortunately after reading ODBC docs I have no idea, but there's no
>>> hints that the result will be quoted.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Fri, 7 Sep 2018 at 15:38, Igor Sapego :
>>>
 Well, ODBC applies quotes to all schemas. It makes sense to
 check and not apply quotes to PUBLIC, but this won't help in
 all other cases, when cache-name-schema is used.

 Best Regards,
 Igor


 On Fri, Sep 7, 2018 at 2:13 PM Ilya Kasnacheev <
 ilya.kasnach...@gmail.com> wrote:

> Hello!
>
> It's actually very strange that we have quotes around PUBLIC since
> it's supposed to be used quote-free. I will take a look.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
>
> Fri, 7 Sep 2018 at 14:07, Igor Sapego :
>
>> It happens, because ODBC returns schema name in quotes,
>> so seems like QLIK adds its own quotes around it, as it encounters
>> non standard characters (quotes).
>>
>> I think, it is a QLIK's error, as our ODBC driver explicitly states,
>> that no
>> additional quotes should be used around identifiers. And even if it
>> choose
>> to apply quotes to "PUBLIC" result obviously should not be a
>> ""PUBLIC"".
>>
>>
>> Best Regards,
>> Igor
>>
>>
>> On Thu, Sep 6, 2018 at 6:43 PM limabean  wrote:
>>
>>> Although I specify lower case public in the odbc definition in
>>> Windows 10,
>>> the QLIK BI application, on its ODBC connection page, forces an
>>> upper case
>>> "PUBLIC" as you can see in the screen shot, and as far as I can tell
>>> there
>>> are no options to change that.
>>>
>>> QlikOdbcPanel.png
>>> <
>>> http://apache-ignite-users.70518.x6.nabble.com/file/t361/QlikOdbcPanel.png>
>>>
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>


Re: How to create tables with JDBC, read with ODBC?

2018-09-10 Thread Ilya Kasnacheev
Hello!

Yes, I'm pretty confident that PUBLIC should work without quotes. I'm not even
sure that it would work with ordinary double quotes.

Regards,
-- 
Ilya Kasnacheev


Mon, 10 Sep 2018 at 14:28, Igor Sapego :

> Ilya,
>
> If we won't bother with quotes, then many other tools will stop working,
> as cache-names-schemas MUST be quoted, but they won't be. By the way,
> even QLIK will not work with any other schema, except for PUBLIC.
>
> So for now, what I propose is not apply quotes to PUBLIC schema. This is
> the only fix I can see here now.
>
> Best Regards,
> Igor
>
>
> On Fri, Sep 7, 2018 at 4:45 PM Ilya Kasnacheev 
> wrote:
>
>> Maybe we shouldn't bother to quote schemas, assuming that it's the duty
>> of client?
>>
>> Unfortunately after reading ODBC docs I have no idea, but there's no
>> hints that the result will be quoted.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Fri, 7 Sep 2018 at 15:38, Igor Sapego :
>>
>>> Well, ODBC applies quotes to all schemas. It makes sense to
>>> check and not apply quotes to PUBLIC, but this won't help in
>>> all other cases, when cache-name-schema is used.
>>>
>>> Best Regards,
>>> Igor
>>>
>>>
>>> On Fri, Sep 7, 2018 at 2:13 PM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
 Hello!

 It's actually very strange that we have quotes around PUBLIC since it's
 supposed to be used quote-free. I will take a look.

 Regards,

 --
 Ilya Kasnacheev


 Fri, 7 Sep 2018 at 14:07, Igor Sapego :

> It happens, because ODBC returns schema name in quotes,
> so seems like QLIK adds its own quotes around it, as it encounters
> non standard characters (quotes).
>
> I think, it is a QLIK's error, as our ODBC driver explicitly states,
> that no
> additional quotes should be used around identifiers. And even if it
> choose
> to apply quotes to "PUBLIC" result obviously should not be a
> ""PUBLIC"".
>
>
> Best Regards,
> Igor
>
>
> On Thu, Sep 6, 2018 at 6:43 PM limabean  wrote:
>
>> Although I specify lower case public in the odbc definition in
>> Windows 10,
>> the QLIK BI application, on its ODBC connection page, forces an upper
>> case
>> "PUBLIC" as you can see in the screen shot, and as far as I can tell
>> there
>> are no options to change that.
>>
>> QlikOdbcPanel.png
>> <
>> http://apache-ignite-users.70518.x6.nabble.com/file/t361/QlikOdbcPanel.png>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: How to create tables with JDBC, read with ODBC?

2018-09-10 Thread Igor Sapego
Ilya,

If we don't bother with quotes, then many other tools will stop working,
as cache-name schemas MUST be quoted, but they won't be. By the way,
even QLIK will not work with any schema other than PUBLIC.

So for now, what I propose is to not apply quotes to the PUBLIC schema. This is
the only fix I can see here for now.
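To illustrate the difference (table and cache names here are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaQuotingExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement()) {
            // The PUBLIC schema is expected to work without any quotes.
            stmt.executeQuery("SELECT * FROM PUBLIC.City");

            // A schema that comes from a cache name must be quoted exactly.
            stmt.executeQuery("SELECT * FROM \"PersonCache\".Person");
        }
    }
}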

Best Regards,
Igor


On Fri, Sep 7, 2018 at 4:45 PM Ilya Kasnacheev 
wrote:

> Maybe we shouldn't bother to quote schemas, assuming that it's the duty of
> client?
>
> Unfortunately after reading ODBC docs I have no idea, but there's no hints
> that the result will be quoted.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
>> Fri, 7 Sep 2018 at 15:38, Igor Sapego :
>
>> Well, ODBC applies quotes to all schemas. It makes sense to
>> check and not apply quotes to PUBLIC, but this won't help in
>> all other cases, when cache-name-schema is used.
>>
>> Best Regards,
>> Igor
>>
>>
>> On Fri, Sep 7, 2018 at 2:13 PM Ilya Kasnacheev 
>> wrote:
>>
>>> Hello!
>>>
>>> It's actually very strange that we have quotes around PUBLIC since it's
>>> supposed to be used quote-free. I will take a look.
>>>
>>> Regards,
>>>
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Fri, 7 Sep 2018 at 14:07, Igor Sapego :
>>>
 It happens, because ODBC returns schema name in quotes,
 so seems like QLIK adds its own quotes around it, as it encounters
 non standard characters (quotes).

 I think, it is a QLIK's error, as our ODBC driver explicitly states,
 that no
 additional quotes should be used around identifiers. And even if it
 choose
 to apply quotes to "PUBLIC" result obviously should not be a ""PUBLIC"".


 Best Regards,
 Igor


 On Thu, Sep 6, 2018 at 6:43 PM limabean  wrote:

> Although I specify lower case public in the odbc definition in Windows
> 10,
> the QLIK BI application, on its ODBC connection page, forces an upper
> case
> "PUBLIC" as you can see in the screen shot, and as far as I can tell
> there
> are no options to change that.
>
> QlikOdbcPanel.png
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t361/QlikOdbcPanel.png>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



Re: Query execution too long even after providing index

2018-09-10 Thread Akash Shinde
Hello guys,

I am also facing a similar problem. Do any community users have a
solution for this?

This has become a blocking issue for me. Can someone please help?

Thanks,
Akash
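In case it helps the discussion, here is a minimal sketch of the inline-size
tuning suggested further down in this thread (cache and class names come from the
quoted code; the Long key type is assumed for illustration):

CacheConfiguration<Long, IpContainerIpV4Data> cacheCfg =
    new CacheConfiguration<>("IP_CONTAINER_IPV4_CACHE");

cacheCfg.setIndexedTypes(Long.class, IpContainerIpV4Data.class);

// Default index inline size is 10 bytes; 32 bytes is enough to inline
// 3 longs + 1 int (3 * (8 + 1 type-code byte) + (4 + 1 type-code byte) = 32).
cacheCfg.setSqlIndexMaxInlineSize(32);

Ignite ignite = Ignition.start(new IgniteConfiguration().setCacheConfiguration(cacheCfg));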


On Mon, Sep 10, 2018 at 8:33 AM Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Guys, is there any solution for this?
> Can someone please respond?
>
> Thanks,
> Prasad
>
>
> -- Forwarded message -
> From: Prasad Bhalerao 
> Date: Fri, Sep 7, 2018, 8:04 AM
> Subject: Fwd: Query execution too long even after providing index
> To: 
>
>
> Can we have update on this?
>
> -- Forwarded message -
> From: Prasad Bhalerao 
> Date: Wed, Sep 5, 2018, 11:34 AM
> Subject: Re: Query execution too long even after providing index
> To: 
>
>
> Hi Andrey,
>
> Can you please help me with this? I
>
> Thanks,
> Prasad
>
> On Tue, Sep 4, 2018 at 2:08 PM Prasad Bhalerao <
> prasadbhalerao1...@gmail.com> wrote:
>
>>
>> I tried changing SqlIndexMaxInlineSize to 32 byte and 100 byte using
>> cache config.
>>
>> ipContainerIpV4CacheCfg.setSqlIndexMaxInlineSize(32/100);
>>
>> But it did not improve the sql execution time. Sql execution time
>> increases with increase in cache size.
>>
>> It is a simple range scan query. Which part of the execution process
>> might take time in this case?
>>
>> Can you please advise?
>>
>> Thanks,
>> PRasad
>>
>> On Mon, Sep 3, 2018 at 8:06 PM Andrey Mashenkov <
>> andrey.mashen...@gmail.com> wrote:
>>
>>> HI,
>>>
>>> Have you tried to increase index inlineSize? It is 10 bytes by default.
>>>
>>> Your indices uses simple value types (Java primitives) and all columns
>>> can be easily inlined.
>>> It should be enough to increase inlineSize up to 32 bytes (3 longs + 1
>>> int = 3*(8 /*long*/ + 1/*type code*/) + (4/*int*/ + 1/*type code*/)) to
>>> inline all columns for the idx1, and up to 27 (3 longs) for idx2.
>>>
>>> You can try to benchmark queries with different inline sizes to find
>>> optimal ratio between speedup and index size.
>>>
>>>
>>>
>>> On Mon, Sep 3, 2018 at 5:12 PM Prasad Bhalerao <
>>> prasadbhalerao1...@gmail.com> wrote:
>>>
 Hi,
 My cache has 1 million rows and the sql is as follows.
 This sql is taking around 1.836 seconds to execute and this time
 increases as I go on adding the data to this cache. Some time it takes more
 than 4 seconds.

 Is there any way to improve the execution time?

 *SQL:*
 SELECT id, moduleId,ipEnd, ipStart
 FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
 WHERE subscriptionId = ?  AND moduleId = ? AND (ipStart
 <= ? AND ipEnd   >= ?)
 UNION ALL
 SELECT id, moduleId,ipEnd, ipStart
 FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
 WHERE subscriptionId = ? AND moduleId = ? AND (ipStart
 <= ? AND ipEnd   >= ?)
 UNION ALL
 SELECT id, moduleId,ipEnd, ipStart
 FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
 WHERE subscriptionId = ? AND moduleId = ? AND (ipStart
 >= ? AND ipEnd   <= ?)

 *Indexes are as follows:*

 public class IpContainerIpV4Data implements Data, 
 UpdatableData {

   @QuerySqlField
   private long id;

   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
 "ip_container_ipv4_idx1", order = 1)})
   private int moduleId;

   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
 "ip_container_ipv4_idx1", order = 0),
   @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 0)})
   private long subscriptionId;

   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
 "ip_container_ipv4_idx1", order = 3, descending = true),
   @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 2, 
 descending = true)})
   private long ipEnd;

   @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = 
 "ip_container_ipv4_idx1", order = 2),
   @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 1)})
   private long ipStart;

 }


 *Execution Plan:*

 2018-09-03 19:32:03,098 232176 [pub-#78%springDataNode%] INFO
 c.q.a.g.d.IpContainerIpV4DataGridDaoImpl - SELECT
 __Z0.ID AS __C0_0,
 __Z0.MODULEID AS __C0_1,
 __Z0.IPEND AS __C0_2,
 __Z0.IPSTART AS __C0_3
 FROM IP_CONTAINER_IPV4_CACHE.IPCONTAINERIPV4DATA __Z0 USE INDEX
 (IP_CONTAINER_IPV4_IDX1)
 /* IP_CONTAINER_IPV4_CACHE.IP_CONTAINER_IPV4_IDX1: SUBSCRIPTIONID =
 ?1
 AND MODULEID = ?2
 AND IPSTART <= ?3
 AND IPEND >= ?4
  */
 WHERE ((__Z0.SUBSCRIPTIONID = ?1)
 AND (__Z0.MODULEID = ?2))
 AND ((__Z0.IPSTART <= ?3)
 AND (__Z0.IPEND >= ?4))
 2018-09-03 19:32:03,098 232176 [pub-#78%springDataNode%] INFO
 c.q.a.g.d.IpContainerIpV4DataGridDaoImpl - SELECT
 __Z1.ID AS __C1_0,
 

Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-10 Thread Serg
Hi Ilya,

Yes, it's not growing that quickly, but in production we lose nearly 1GB every day with
15GB of data on each node.
I have simplified the data classes by removing annotations, and this does not help.

Is it possible to debug off-heap memory? How can I understand where the memory
goes?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite Thin Client Continuous Query

2018-09-10 Thread Gordon Reid (Nine Mile)
Hi,

Is there any plan to support continuous queries in the Ignite thin client?

This would be very useful for us. Without it, we probably have to use some 
other technology for serialization and streaming updates to the desktop.

Thanks,
Gordon



This email and any attachments are proprietary & confidential and are intended 
solely for the use of the individuals to whom it is addressed. Any views or 
opinions expressed are solely for those of the author and do not necessarily 
reflect those of Nine Mile Financial Pty. Limited. If you have received this 
email in error, please let us know immediately by reply email and delete from 
your system. Nine Mile Financial Pty. Limited. ABN: 346 1349 0252