Re: Ignite cache with custom key : key not found

2024-07-22 Thread Nikolay Izhikov
Hello.

It's a common issue with the thin client.
Please set BinaryConfiguration#compactFooter explicitly to false 
on both the server side and the client side.
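
For example, a minimal sketch of the server-side part (a programmatic node
start-up is assumed; the C++ thin client must mirror the same setting in its
own configuration):

```
// A minimal sketch, assuming a plain programmatic node start-up:
// compact footers must be disabled identically on every node and client,
// otherwise the serialized form of the same key can differ.
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CompactFooterConfig {
    public static void main(String[] args) {
        BinaryConfiguration binCfg = new BinaryConfiguration();

        binCfg.setCompactFooter(false);

        IgniteConfiguration cfg = new IgniteConfiguration();

        cfg.setBinaryConfiguration(binCfg);

        Ignition.start(cfg);
    }
}
```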

> On 22 Jul 2024, at 10:32, Pavel Tupitsyn wrote:
> 
> Hello, could you please attach a reproducer?
> 
> This might have to do with a type name / type id mismatch, but it's hard to tell 
> without the code.
> 
> On Fri, Jul 19, 2024 at 7:39 PM Louis C wrote:
>> Hello,
>> 
>> I have a strange problem for which I can't find the reason.
>> 
>> I made a cache (key/value cache) with a custom key type that is called 
>> "IgniteBinaryData".
>> 
>> I have a C++ thin client that calls the server and executes a Java 
>> ComputeTaskAdapter that I made (let's call it 
>> "Task1").
>> This Task1 writes data in the cache with the custom key type 
>> "IgniteBinaryData".
>> 
>> But the issue is that when I request the same cache from the C++ thin 
>> client, the key is not found.
>> 
>> What is strange is that I can then add the key with a "Put" from the C++, 
>> and when I look at the deserialized keys in the Java code, there does not 
>> seem to be any difference between the 2 "different" keys, which are both 
>> present in the cache.
>> 
>> What I saw is that when I do a "Get" from the C++, the key is not 
>> deserialized (Ignite looks only at the serialized data of the keys).
>> 
>> So I think there might be a difference in the serialization of the key 
>> between the Java code and the C++, one that is not visible after deserialization.
>> 
>> But looking at all the entries in the cache with an iterator, I found no 
>> differences. I tried using the ".withKeepBinary()" method to access the keys 
>> without deserialization, but I can't find a way to get the "byte[]" 
>> corresponding to the key from the BinaryObject.
>> 
>> So, my question would be: how do I get the "byte[]" corresponding to a 
>> custom key?
>> And also, is there a known issue that could arise when doing this? I 
>> carefully followed 
>> https://ignite.apache.org/docs/latest/cpp-specific/cpp-platform-interoperability
>> and I have no problems with deserialization...
>> 
>> Best regards,
>> 
>> Louis C.



Re: Ignite .NET + CDC replication using Kafka

2024-06-17 Thread Nikolay Izhikov
CDC is not related to Ignite.NET.

It captures changes that happen at the storage (WAL) layer of Ignite.
So all you need to do is follow the examples from the documentation and configure CDC 
replication.
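
For context, a minimal sketch of the server-side part (assuming Ignite 2.12+,
where CDC is available; the region layout is illustrative):

```
// A minimal sketch, assuming Ignite 2.12+: CDC is enabled per data region on
// the server node. The WAL then records the changes that the separate
// ignite-cdc process (with the Kafka streamer from ignite-extensions)
// consumes, regardless of whether the data was written from .NET, Java or C++.
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();

DataRegionConfiguration regionCfg = new DataRegionConfiguration();

regionCfg.setPersistenceEnabled(true); // CDC works on the WAL, so persistence is required
regionCfg.setCdcEnabled(true);

DataStorageConfiguration storageCfg = new DataStorageConfiguration();

storageCfg.setDefaultDataRegionConfiguration(regionCfg);
cfg.setDataStorageConfiguration(storageCfg);
```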

> On 12 Jun 2024, at 14:02, Pavel Tupitsyn wrote:
> 
> Can you share some working examples to capture the changes happening 
> to a cache in real time using .NET?
> 



Re: RE: Failed to find security context for subject with given ID

2021-12-06 Thread Nikolay Izhikov
Hello, y.

We are searching for the root cause of your issue.
Investigation shows that we are using a String#getBytes() call, which is
locale-dependent, and using those bytes for the login hash.
Therefore the hash can differ between nodes started with different default
locales.

Can you please confirm: are you using a login with Chinese characters?
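
For illustration, a minimal sketch of the pitfall (the login string is an
assumption):

```
// A minimal sketch: the no-arg String#getBytes() uses the platform default
// charset, so the same login can yield different bytes (and thus a different
// hash) on nodes whose default locales/charsets differ.
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class LocaleDependentBytes {
    public static void main(String[] args) {
        String login = "用户"; // a login with Chinese characters

        byte[] platformBytes = login.getBytes();                   // depends on the default charset
        byte[] utf8Bytes = login.getBytes(StandardCharsets.UTF_8); // deterministic

        // Prints "false" on a node whose default charset is not UTF-8 (e.g. GBK),
        // which is exactly how the login hash can diverge between nodes.
        System.out.println(Arrays.equals(platformBytes, utf8Bytes));
    }
}
```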


On Fri, 19 Nov 2021 at 12:13, y wrote:

>
> *Ignite Version:2.11.0.  Here are some important configuration.*
>
> [Ignite XML configuration stripped by the mailing-list archive]
>
> *Using binary mode to insert data.*
>
> At 2021-11-19 15:07:01, "Mikhail Petrov" wrote:
>
> Pavel, at first glance these are not related issues.
>
> Tianyue Hu, could you please specify the version of Ignite you are using,
> the server nodes configuration, and which Ignite mechanism you are using to
> insert data?
> --
> Mikhail
>
> On 2021/11/19 06:37:47 y wrote:
> > Hello Igniters:
> >
> >
> > I start multiple nodes on one server. When I did data insertion, I got
> the following error: Failed to find security context for subject with given
> ID. And then the node stopped. Would you please help?
> >
> > Thanks!
> > Tianyue Hu
> >
> >
>
>
>
>
>


Re: Create two table with the same VALUE_TYPE

2020-07-07 Thread Nikolay Izhikov
Hello, Alexander.

> Can I somehow solve this problem?

You can't use the same VALUE_TYPE for two tables with inconsistent field types.
This happens because the VALUE_TYPE name is actually used to specify the
`BinaryObjectType` name.

After that, all rows with the same VALUE_TYPE name are checked against the
first created row.
Please note that `VALUE_TYPE` can be a random string, not a Java class
name:

"The name should correspond to a Java, .NET or C++ class, or it can be a
random one if BinaryObjects is used instead of a custom class"

https://apacheignite-sql.readme.io/docs/create-table
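
For illustration, a minimal sketch reproducing the conflict (table and column
names are assumptions, not taken from this thread):

```
// A minimal sketch: both tables share VALUE_TYPE 'MyObj' but declare the
// column 'val' with different SQL types, so the second INSERT fails the
// binary metadata check with "Wrong value has been set".
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class SameValueTypeConflict {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("dummy").query(new SqlFieldsQuery(
                "CREATE TABLE T1 (id INT PRIMARY KEY, val INT) WITH \"VALUE_TYPE=MyObj\"")).getAll();

            ignite.cache("dummy").query(new SqlFieldsQuery(
                "CREATE TABLE T2 (id INT PRIMARY KEY, val VARCHAR) WITH \"VALUE_TYPE=MyObj\"")).getAll();

            ignite.cache("dummy").query(new SqlFieldsQuery(
                "INSERT INTO T1 (id, val) VALUES (1, 42)")).getAll();

            // 'val' was first registered as int for type MyObj; assigning a String
            // here throws BinaryObjectException: Wrong value has been set.
            ignite.cache("dummy").query(new SqlFieldsQuery(
                "INSERT INTO T2 (id, val) VALUES (1, 'forty-two')")).getAll();
        }
    }
}
```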




On Tue, 7 Jul 2020 at 15:31, Surkov.Aleksandr wrote:

> Exception:
>
> [14:34:21,079][SEVERE][client-connector-#81%7e375abc-4354-4c8d-a9e1-d193171826c0%][ClientListenerNioListener]
> Failed to process client request
> [req=o.a.i.i.processors.platform.client.cache.ClientCacheSqlFieldsQueryRequest@78f07b83]
> class org.apache.ignite.binary.BinaryObjectException: Wrong value has been set
> [typeName=org.apache.ignite.internal.processors.cache.CreateTwoTablesWithDifferentSchemaTest$MyObj,
> fieldName=VALUE, fieldType=int, assignedValueType=String]
> at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.checkMetadata(BinaryObjectBuilderImpl.java:433)
> at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:321)
> at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.build(BinaryObjectBuilderImpl.java:188)
> at org.apache.ignite.internal.processors.query.h2.dml.UpdatePlan.processRow(UpdatePlan.java:279)
> at org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.dmlDoInsert(DmlUtils.java:195)
> at org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.processSelectResult(DmlUtils.java:168)
> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateNonTransactional(IgniteH2Indexing.java:2899)
> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdate(IgniteH2Indexing.java:2753)
> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateDistributed(IgniteH2Indexing.java:2683)
> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeDml(IgniteH2Indexing.java:1186)
> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1112)
> at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2574)
> at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2570)
> at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
> at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:3097)
> at org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$1(GridQueryProcessor.java:2590)
> at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2628)
> at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2564)
> at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2491)
> at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2447)
> at org.apache.ignite.internal.processors.platform.client.cache.ClientCacheSqlFieldsQueryRequest.process(ClientCacheSqlFieldsQueryRequest.java:110)
> at org.apache.ignite.internal.processors.platform.client.ClientRequestHandler.handle(ClientRequestHandler.java:99)
> at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:200)
> at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:54)
> at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
> at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> 

[ANNOUNCE] Apache Ignite 2.8.1 Released

2020-05-27 Thread Nikolay Izhikov
The Apache Ignite Community is pleased to announce the release of
Apache Ignite 2.8.1.

Apache Ignite® is an in-memory computing platform for transactional,
analytical, and streaming workloads delivering in-memory speeds at
petabyte scale.
https://ignite.apache.org

For the full list of changes, you can refer to the RELEASE_NOTES list
which is trying to catalogue the most significant improvements for
this version of the platform.
https://ignite.apache.org/releases/2.8.1/release_notes.html

Download the latest Ignite version from here:
https://ignite.apache.org/download.cgi

Please let us know if you encounter any problems:
https://ignite.apache.org/community/resources.html#ask


Regards,
Nikolay Izhikov on behalf of Apache Ignite community.

[ANNOUNCE] Apache Ignite Spring Boot extensions 1.0.0 Released

2020-05-07 Thread Nikolay Izhikov
The Apache Ignite Community is pleased to announce the release of
Apache Ignite Spring Boot extensions 1.0.0

Apache Ignite [1] is a memory-centric distributed database, caching,
and processing platform for transactional, analytical, and streaming
workloads delivering in-memory speeds at petabyte scale.

This is the very first release of spring-boot extensions:
https://ignite.apache.org/releases/ext/spring-boot-1.0.0/release_notes.html

Download the latest Apache Ignite Spring Boot extensions version from here:
  * 
https://repo.maven.apache.org/maven2/org/apache/ignite/ignite-spring-boot-autoconfigure-ext/1.0.0/
  * 
https://repo.maven.apache.org/maven2/org/apache/ignite/ignite-spring-boot-thin-client-autoconfigure-ext/1.0.0/

Please let us know [2] if you encounter any problems.

Regards,
Nikolay Izhikov on behalf of Apache Ignite community

[1] https://ignite.apache.org
[2] https://ignite.apache.org/community/resources.html#ask

[VOTE][EXTENSION] Release Apache Ignite Spring Boot extensions 1.0.0 RC1

2020-04-27 Thread Nikolay Izhikov
Dear Community,

I have uploaded a release candidate of the two extension modules 
`ignite-spring-boot-autoconfigure` and 
`ignite-spring-boot-client-autoconfigure`.

The following staging repository can be used for testing:
https://repository.apache.org/content/repositories/orgapacheignite-1478/

Tag with name `ignite-spring-boot-1.0.0-rc1` created:
https://gitbox.apache.org/repos/asf?p=ignite-extensions.git;a=commit;h=8b5f2d9143d281b90aceb6f11dabde50db48f6b3

Release 1.0 contains an initial implementation of the modules, please refer to 
the RELEASE_NOTES:
https://gitbox.apache.org/repos/asf?p=ignite-extensions.git;a=blob;f=RELEASE_NOTES.txt;h=e69173583068467b44f5e148151654ade3fe2452;hb=HEAD

Complete list of resolved issues:

https://issues.apache.org/jira/issues/?jql=labels%20%3D%20spring-boot-autoconfigure
https://issues.apache.org/jira/issues/?jql=labels%20%3D%20spring-boot-client-autoconfigure

DEVNOTES
https://gitbox.apache.org/repos/asf?p=ignite-extensions.git;a=blob;f=DEVNOTES.txt;h=7f25887adef738db956ae0f70c59839f73965d5f;hb=HEAD

The vote is formal, see voting guidelines
https://www.apache.org/foundation/voting.html

+1 - to accept Apache Ignite Spring Boot extensions 1.0.0-rc1
0 - don't care either way
-1 - DO NOT accept Apache Ignite Spring Boot extensions 1.0.0-rc1 (explain why)

See notes on how to verify release here
https://www.apache.org/info/verification.html
and 
https://cwiki.apache.org/confluence/display/IGNITE/Release+Process#ReleaseProcess-P5.VotingonReleaseandReleaseVerification

The vote will be held for 72 hours and will end on April 30 2020 20:20 UTC
https://www.timeanddate.com/countdown/generic?iso=20200430T2320&p0=166&font=cursive

Re: prometheus jmx scrape failed

2020-04-07 Thread Nikolay Izhikov
Hello, Zipporah.

AFAIK Zabbix doesn't support scraping of tabular-data JMX beans.
We added this kind of bean in 2.8.

These are the system view beans [1].

By default, system views are exported in the form of JMX beans and SQL views.

You can:

a. Disable the system view JMX exporter via the configuration property 
IgniteConfiguration#setSystemViewExporterSpi.
You may want to keep the SQL exporter to be able to observe system views via an SQL 
interface.

This can be done like this:

IgniteConfiguration cfg = new IgniteConfiguration();

cfg.setSystemViewExporterSpi(new SqlViewExporterSpi());


b. You can filter the system view beans in the Zabbix exporter [2].

[1] https://apacheignite.readme.io/docs/system-views
[2] 
https://www.zabbix.com/documentation/current/manual/discovery/low_level_discovery/jmx

> On 8 Apr 2020, at 00:52, zipporah wrote:
> 
> Hi,
> 
> I am new to Ignite and have a question regarding prometheus jmx exporter.
> 
> So I'm integrating Apache ignite with Apache Spark to accelerate the
> performance of Spark applications. As part of monitoring, I'm using
> prometheus jmx exporter to export metrics to prometheus.
> 
> My prometheus and metrics configurations are like this:
> https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/spark-docker/conf/prometheus.yaml
> 
> This works fine with Ignite 2.7.x. Starting from Ignite 2.8, in the Spark
> application log I'm seeing the following errors:
> 
> 2020-04-06T16:26:01.595 [pool-1-thread-2hread] ERROR
> prometheus.jmx.shaded.io.prometheus.jmx.JmxCollector - JMX scrape failed:
> java.lang.IllegalArgumentException: Not an Attribute:
> javax.management.openmbean.TabularDataSupport(tabularType=javax.management.openmbean.TabularType(name=*org.apache.ignite.spi.systemview.view.ClientConnectionView*,rowType=javax.management.openmbean.CompositeType(name=org.apache.ignite.spi.systemview.view.ClientConnectionView,items=((itemName=connectionId,itemType=javax.management.openmbean.SimpleType(name=java.lang.Long)),(itemName=localAddress,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=remoteAddress,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=systemViewRowId,itemType=javax.management.openmbean.SimpleType(name=java.lang.Integer)),(itemName=type,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=user,itemType=javax.management.openmbean.SimpleType(name=java.lang.String)),(itemName=version,itemType=javax.management.openmbean.SimpleType(name=java.lang.String,indexNames=(systemViewRowId)),contents={})
> at javax.management.AttributeList.adding(AttributeList.java:328)
> at javax.management.AttributeList.adding(AttributeList.java:335)
> at javax.management.AttributeList.asList(AttributeList.java:165)
> at
> io.prometheus.jmx.shaded.io.prometheus.jmx.JmxScraper.scrapeBean(JmxScraper.java:156)
> at
> io.prometheus.jmx.shaded.io.prometheus.jmx.JmxScraper.doScrape(JmxScraper.java:117)
> at
> io.prometheus.jmx.shaded.io.prometheus.jmx.JmxCollector.collect(JmxCollector.java:468)
> at
> io.prometheus.jmx.shaded.io.prometheus.client.CollectorRegistry$MetricFamilySamplesEnumeration.findNextElement(CollectorRegistry.java:183)
> at
> io.prometheus.jmx.shaded.io.prometheus.client.CollectorRegistry$MetricFamilySamplesEnumeration.nextElement(CollectorRegistry.java:216)
> at
> io.prometheus.jmx.shaded.io.prometheus.client.CollectorRegistry$MetricFamilySamplesEnumeration.nextElement(CollectorRegistry.java:137)
> at
> io.prometheus.jmx.shaded.io.prometheus.client.exporter.common.TextFormat.write004(TextFormat.java:22)
> at
> io.prometheus.jmx.shaded.io.prometheus.client.exporter.HTTPServer$HTTPMetricHandler.handle(HTTPServer.java:59)
> at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
> at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:83)
> at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:82)
> at
> sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:675)
> at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:79)
> at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:647)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> 
> 
> I have no clue why this is happening. Is there anything wrong with the mbean
> exposed by ignite?
> 
> Thanks,
> Zippo
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: [External]Re: Exporter usage of Ignite 2.8.0

2020-03-26 Thread Nikolay Izhikov
Hello, Kamlesh!

Thanks for trying out the OpenCensus integration.
You can find a self-explanatory example in the Ignite sources [1].

To integrate with Prometheus you have to:

1. Enable `ignite-opencensus`.
2. Configure the OpenCensus exporter in IgniteConfiguration:

```
OpenCensusMetricExporterSpi openCensusMetricExporterSpi = new 
OpenCensusMetricExporterSpi();

// Metrics are written to the collector every PERIOD milliseconds (e.g. 1000).
openCensusMetricExporterSpi.setPeriod(PERIOD);

cfg.setMetricExporterSpi(openCensusMetricExporterSpi);
```

3. Enable the OpenCensus HTTP server (after that you can view metric values at the 
http://HOST:PORT/ URL):

```
// Setting up prometheus stats collector.
PrometheusStatsCollector.createAndRegister();

// Setting up HTTP server that would serve http://localhost:8080 
requests.
HTTPServer srv = new HTTPServer(HOST, PORT, true);
```

4. Scrape the metric values with Prometheus:

prometheus.yml

```
scrape_configs:
  - job_name: 'ignite'
    static_configs:
      - targets: ['localhost:8080'] # same host and port as at step 3.
```

[1] 
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/opencensus/OpenCensusMetricsExporterExample.java

> On 25 Mar 2020, at 12:01, Kamlesh Joshi wrote:
> 
> Thanks for the update Anton. 
>  
> Have some queries as below:
> 1. How do we feed the Ignite cluster data which is exposed on the JMX port 
> to the custom exporter given at 
> (https://opencensus.io/exporters/supported-exporters/java/prometheus/)?
> 2. If we move the opencensus lib to $IGNITE_HOME/libs/, will it be exposed 
> on some default port (like ignite-rest)? How exactly will opencensus 
> affect the cluster?
>  
> Thanks and Regards,
> Kamlesh Joshi
>  
> -Original Message-
> From: akurbanov  
> Sent: 24 March 2020 19:28
> To: user@ignite.apache.org
> Subject: [External]Re: Exporter usage of Ignite 2.8.0
>  
> The e-mail below is from an external source. Please do not open attachments 
> or click links from an unknown or suspicious origin.
>  
> Hello,
>  
> Unfortunately, the documentation is not available yet on the website, but you 
> can use org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi 
> that comes with ignite-opencensus in distribution:
> $IGNITE_HOME/libs/optional/ignite-opencensus. 
>  
> The metric exporter should be registered in IgniteConfiguration, please see 
> the Java example:
> https://github.com/nizhikov/ignite/blob/b362cfad309ec8f31c6cba172391c74589c9191f/modules/opencensus/src/test/java/org/apache/ignite/internal/processors/monitoring/opencensus/OpenCensusMetricExporterSpiTest.java
>  
> Prometeus:
> https://opencensus.io/exporters/supported-exporters/java/prometheus/
> Documentation waiting list:
> http://apache-ignite-developers.2346864.n4.nabble.com/Ignite-2-8-documentation-td46008.html
> IEP 35:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=112820392&src=sidebar
>  
> Best regards,
> Anton
>  
>  
>  
>  
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>  



Re: Node not joined to the cluster - joining node doesn't have encryption data

2019-07-04 Thread Nikolay Izhikov
> What it means?

It means "your TDE feature is disabled".
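
For completeness, a minimal sketch of enabling TDE consistently (keystore path
and password are assumptions): every node must be configured with the same
EncryptionSpi, otherwise a joining node carries no encryption data.

```
// A minimal sketch: the same keystore must be available on every host, and
// caches to be encrypted additionally need CacheConfiguration#setEncryptionEnabled(true).
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.encryption.keystore.KeystoreEncryptionSpi;

public class TdeConfig {
    public static void main(String[] args) {
        KeystoreEncryptionSpi encSpi = new KeystoreEncryptionSpi();

        encSpi.setKeyStorePath("/opt/ignite/tde.jks");        // assumed path
        encSpi.setKeyStorePassword("changeit".toCharArray()); // assumed password

        IgniteConfiguration cfg = new IgniteConfiguration();

        cfg.setEncryptionSpi(encSpi);

        Ignition.start(cfg);
    }
}
```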


On Thu, 04/07/2019 at 13:04 +0300, Nikolay Izhikov wrote:
> Hello, shahidv.
> 
> > What it means?
> 
> It's not a casue of an error.
> It just info message.
> 
> Can you please, send your Ignite configuration and log with error.
> 
> On Thu, 04/07/2019 at 00:46 -0700, shahidv wrote:
> > I am trying to add nodes to an Ignite cluster. When they are on the same host
> > it works, but when we add a node from a separate host, the log shows the
> > message *joining node doesn't have encryption data*.
> > 
> > What does it mean? I am using Ignite 2.7.5 and Java 11.
> > 
> > 
> > 
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/


signature.asc
Description: This is a digitally signed message part


Re: Node not joined to the cluster - joining node doesn't have encryption data

2019-07-04 Thread Nikolay Izhikov
Hello, shahidv.

> What it means?

It's not the cause of an error.
It's just an info message.

Can you please send your Ignite configuration and the log with the error.

On Thu, 04/07/2019 at 00:46 -0700, shahidv wrote:
> I am trying to add nodes to an Ignite cluster. When they are on the same host
> it works, but when we add a node from a separate host, the log shows the
> message *joining node doesn't have encryption data*.
> 
> What does it mean? I am using Ignite 2.7.5 and Java 11.
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


signature.asc
Description: This is a digitally signed message part


Re: Ignite 2.7.5

2019-05-09 Thread Nikolay Izhikov
Hello, Dmitriy.

Can you please help us with the dates?

On Thu, 02/05/2019 at 08:21 -0700, Loredana Radulescu Ivanoff wrote:
> Hello,
> 
> Would you happen to have any news about the 2.7.5 release date? 
> 
> Thank you,
> Loredana


signature.asc
Description: This is a digitally signed message part


Re: Certificate upgrade in Ignite Cluster

2019-05-09 Thread Nikolay Izhikov
Hello, Ankit.

Please clarify: what do you mean by "secure mode"?

On Thu, 09/05/2019 at 05:33 -0700, Ankit Singhai wrote:
> Hello,
> We are running Ignite Cluster with 3 servers and 10 client nodes in secure
> mode. Now as the certificate is going to expire, how can we configure the
> new certificate without taking any down time?
> 
> Thanks,
> Ankit Singhai
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


signature.asc
Description: This is a digitally signed message part


Re: Apache Ignite support for continuous queries over sliding windows

2019-04-19 Thread Nikolay Izhikov
Hello, Stefano.

> Does Apache Ignite support continuous queries over sliding windows?

It seems you can easily implement this on top of a regular ContinuousQuery.
You can combine several event batches based on your logic and apply them to
your internal consumer.
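
For example, a minimal sketch of that approach (cache name, types and window
size are assumptions):

```
// A minimal sketch: the local listener of a regular ContinuousQuery keeps the
// last N observed values and recomputes the mean on every cache update.
import java.util.ArrayDeque;
import java.util.Deque;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;

public class SlidingWindowAvg {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        IgniteCache<Integer, Double> cache = ignite.getOrCreateCache("values");

        int windowSize = 100;
        Deque<Double> window = new ArrayDeque<>();

        ContinuousQuery<Integer, Double> qry = new ContinuousQuery<>();

        qry.setLocalListener(events -> {
            synchronized (window) {
                events.forEach(e -> {
                    window.addLast(e.getValue());

                    if (window.size() > windowSize)
                        window.removeFirst();
                });

                double avg = window.stream().mapToDouble(Double::doubleValue).average().orElse(0);

                System.out.println("Sliding average: " + avg);
            }
        });

        cache.query(qry); // keep the returned cursor open for as long as you listen
    }
}
```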

On Fri, 19 Apr 2019 at 11:08, stefano.rebora wrote:

> Hi,
>
> I'm investigating the use of Apache Ignite for real-time stream processing.
> In particular, I'm interested in applying continuous queries for computing
> the
> average of some values over a sliding window.
>
> I've some questions:
>
> 1) Is Apache Ignite a suitable technology for this kind of scenario?
>
> 2) Does Apache Ignite support continuous queries over sliding windows?
> In the Ignite docs for v2.7 sliding windows are not mentioned, but in some
> previous versions sliding windows were configured as Ignite cache eviction
> policies.
>
> 3) If the answer to 2) is yes, how do I implement the continuous query for
> computing the mean value?
> Is there a suggested pattern to follow? It seems to me that continuous
> queries support only remote filters.
>
> Thanks in advance.
>
> Stefano
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Spark dataframe to Ignite write issue .

2019-03-26 Thread Nikolay Izhikov
Hello, Harshal.

Can you please share your Ignite config?
Especially the "*ENTITY_PLAYABLE*" cache definition.
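
For reference, a hedged sketch of the schema detail Denis mentions below (cache
and table names are assumptions): when SQL metadata is configured via XML or
annotations, the table lives in a schema named after the cache unless sqlSchema
is set explicitly.

```
// A minimal sketch, assuming the cache is called "EntityPlayableCache":
// without this, the table must be addressed as "EntityPlayableCache".ENTITY_PLAYABLE.
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("EntityPlayableCache");

ccfg.setSqlSchema("PUBLIC"); // exposes ENTITY_PLAYABLE as PUBLIC.ENTITY_PLAYABLE
```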

On Tue, 26 Mar 2019 at 05:35, Denis Magda wrote:

> Hi, as far as I can guess from the shared details, you should pass the
> IgniteCache name as a SQL schema if SQL metadata was configured via XML or
> annotations. Try this "INSERT INTO cacheName.ENTITY_PLAYABLE".
>
> -
> Denis
>
>
> On Mon, Mar 25, 2019 at 7:18 AM Harshal Patil <
> harshal.pa...@mindtickle.com> wrote:
>
>> Hi ,
>> I am running spark 2.3.1 with Ignite 2.7.0 . I have configured Postgres
>> as cachePersistance store . After loading of cache , i can read and convert
>> data from ignite cache to Spark Dataframe . But while writing back to
>> ignite , I get below error
>>
>> class org.apache.ignite.internal.processors.query.IgniteSQLException: *Table "ENTITY_PLAYABLE" not found*; SQL statement:
>>
>> INSERT INTO ENTITY_PLAYABLE(GAMEID,PLAYABLEID,COMPANYID,VERSION,EVENTTIMESTAMP,EVENTTIMESTAMPSYS,COMPANYIDPARTITION,partitionkey) VALUES(?,?,?,?,?,?,?,?) [42102-197]
>>
>> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.streamUpdateQuery(IgniteH2Indexing.java:1302)
>> at org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:2206)
>> at org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:2204)
>> at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
>>
>>
>> *Read from Ignite*:
>>
>>
>> *Read from Ignite* :
>>
>>
>> loading cache
>>
>>
>> val conf = new SparkConf()
>> conf.setMaster("spark://harshal-patil.local:7077")
>> //conf.setMaster("local[*]")
>> conf.setAppName("IGniteTest")
>> conf.set("spark.executor.heartbeatInterval", "900s")
>> conf.set("spark.network.timeout", "950s")
>> conf.set("spark.default.parallelism", "4")
>> conf.set("spark.cores.max", "4")
>> 
>> conf.set("spark.jars","target/pack/lib/spark_ignite_cache_test_2.11-0.1.jar")
>>
>> val cfg = () => ServerConfigurationFactory.createConfiguration()
>>
>> Ignition.start(ServerConfigurationFactory.createConfiguration())
>>
>> val ic : IgniteContext = new IgniteContext(sc,  cfg)
>>
>> ic.ignite().cache("EntityPlayableCache").loadCache(null.asInstanceOf[IgniteBiPredicate[_, _]])
>>
>>
>>
>>
>> *spark.read*
>>
>>   .format(IgniteDataFrameSettings.*FORMAT_IGNITE*)
>>
>>   .option(IgniteDataFrameSettings.*OPTION_CONFIG_FILE*, configPath)
>>
>>   .option(IgniteDataFrameSettings.*OPTION_TABLE*,
>> "ENTITY_PLAYABLE").load().select(*sum*("partitionkey").alias("sum"),
>> *count*("gameId").as("total")).collect()(0)
>>
>>
>> *Write To Ignite* :
>>
>>
>> *df.write*
>>
>>   .format(IgniteDataFrameSettings.*FORMAT_IGNITE*)
>>
>>   .option(IgniteDataFrameSettings.*OPTION_CONFIG_FILE*, configPath)
>>
>>
>>   .option(IgniteDataFrameSettings.*OPTION_TABLE*, "ENTITY_PLAYABLE")
>>
>> .option(IgniteDataFrameSettings.
>> *OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS*,
>> "gameId,playableId,companyId,version")
>>
>> .option(IgniteDataFrameSettings.*OPTION_STREAMER_ALLOW_OVERWRITE*,
>> "true")
>>
>>   .mode(SaveMode.*Append*)
>>
>>   .save()
>>
>>
>>
>> I think the problem is with *Spring bean injection on the executor node*;
>> please help, what am I doing wrong?
>>
>>
>>
>>


[ANNOUNCE] Apache Ignite 2.7.0 Released

2018-12-05 Thread Nikolay Izhikov
The Apache Ignite Community is pleased to announce the release of 
Apache Ignite 2.7.0. 

Apache Ignite [1] is a memory-centric distributed database, caching, 
and processing platform for transactional, analytical, and streaming 
workloads delivering in-memory speeds at petabyte scale. 

This release introduces several major features and fixes some critical issues:
https://ignite.apache.org/releases/2.7.0/release_notes.html

Download the latest Ignite version from here: 
https://ignite.apache.org/download.cgi

Please let us know [2] if you encounter any problems. 

Regards, 
Nikolay Izhikov on behalf of Apache Ignite community 

[1] https://ignite.apache.org
[2] https://ignite.apache.org/community/resources.html#ask


signature.asc
Description: This is a digitally signed message part


Re: Continuous query - Exactly once based event across multiple nodes..

2018-05-07 Thread Nikolay Izhikov
Hello, JP.

So what?
What is the issue with multiple filter instances?

On Mon, 7 May 2018 at 12:47, JP wrote:

> Thanks... This solution worked, but the problem is that it creates multiple
> remote filter instances.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous query - Exactly once based event across multiple nodes..

2018-05-06 Thread Nikolay Izhikov
Hello, JP.

You should use the target node in the remote filter.

Your filter should check whether the primary node for a given record equals the
target node.
Please see the code below.
You can find the related discussion and a full example here [1].

@IgniteAsyncCallback
public static class RemoteFactory implements
    Factory<CacheEntryEventFilter<Integer, Integer>> {
    private final ClusterNode node;

    public RemoteFactory(ClusterNode node) {
        this.node = node;
    }

    @Override
    public CacheEntryEventFilter<Integer, Integer> create() {
        return new CacheEntryEventFilter<Integer, Integer>() {
            @IgniteInstanceResource
            private Ignite ignite;

            @Override
            public boolean evaluate(
                CacheEntryEvent<? extends Integer, ? extends Integer> cacheEntryEvent) {
                Affinity<Integer> aff = ignite.affinity("myCache");

                ClusterNode primary =
                    aff.mapKeyToNode(cacheEntryEvent.getKey());

                return primary.id().equals(node.id());
            }
        };
    }
}
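
A hedged usage sketch (cache name and key/value types are assumptions) showing
how the factory above is registered, so that each node's query fires only for
the keys it is primary for:

```
// Each node starts its own continuous query, passing itself as the target node.
ContinuousQuery<Integer, Integer> qry = new ContinuousQuery<>();

qry.setRemoteFilterFactory(new RemoteFactory(ignite.cluster().localNode()));
qry.setLocalListener(events ->
    events.forEach(e -> System.out.println("Fired exactly once: " + e.getKey())));

ignite.<Integer, Integer>cache("myCache").query(qry); // keep the cursor open while listening
```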



[1] https://issues.apache.org/jira/browse/IGNITE-8035


On Sun, 06/05/2018 at 23:33 -0700, JP wrote:
> Using continuous query,
> 
> How to achieve event trigger for cache exactly only once per key even if
> continuous query is listening in multiple nodes or multiple listener.
> example:
> 1. Scenario 1:
>  Node A: Start Continuous query 
>  Node B: Start Continuous query 
>  Node C: Insert or Update or Delete record ex: number from 1 to 100
> 
> Expected output should be as below:
>  Node A - 1, 2, 3, 4, 5 ... 50
>  Node B - 51, 52, 53, 54 ... 100
>  The above output is the expected output. Here, the event per key should be
> triggered exactly once across nodes.
> 
> Actual output is as below:
>  Node A - 1, 2, 3, 4, 5 ... 100
>  Node B - 1, 2, 3, 4, 5 ... 100
>  
> If this is not possible in Continuous query, then is there any way to
> achieve this.
> 
> 2. Scenario 2:
> To achieve expected output,
>  I am using singleton service per Cluster.
> Ex: Cluster A
>   - Singleton service with Continuous query for cache
> Here problem is, service is running in only one instance.
> How to achieve above output with multiple instance of service?
> 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/

signature.asc
Description: This is a digitally signed message part


Re: Saving a DataFrame from Spark and accessing data as key/value externally

2018-03-26 Thread Nikolay Izhikov
> at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetResponse(GridDhtCacheAdapter.java:349)
> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$1400(GridDhtAtomicCache.java:130)
> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$15.apply(GridDhtAtomicCache.java:422)
> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$15.apply(GridDhtAtomicCache.java:417)
> at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060)
> at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579)
> at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378)
> at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304)
> at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99)
> at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293)
> at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
> at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183)
> at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126)
> at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090)
> at org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:505)
> at java.lang.Thread.run(Thread.java:748)
> 
> Hope you can help me shed some light on this.
> 
> Thanks,
> Luca
> 
> 
> 2018-03-23 14:33 GMT+01:00 Nikolay Izhikov :
> > Hello, Luca.
> > 
> > Can you attach some simple reproducer or code piece that cause exception?
> > 
> > On Fri, 23/03/2018 at 14:31 +0100, Rosellini, Luca wrote:
> > > Hi all,
> > > I am using Apache Ignite 2.4 and I've successfully saved a Spark 
> > > Dataframe as a SQL table in the Ignite caching layer.
> > >
> > > I am trying to access the data from an external Java program (completely 
> > > unrelated to the Spark Job that produced and saved the table) using the 
> > > Cache API, as if it were a key/value store.
> > >
> > > The table, called 'PERSON', has a primary key field called UUID and maps 
> > > to an Ignite cache called SQL_PUBLIC_PERSON.
> > >
> > > Using the Ignite Cache API I am able to check that a specific entry 
> > > exists in the cache by calling:
> > > cache.containsKey(...)
> > >
> > > By the way, If I try to get the value calling cache.get(...) for a 
> > > specific key I get a ClassNotFoundException (full stacktrace is attached).
> > >
> > > Now, I guess Ignite dynamically generated a schema bean for my DataFrame 
> > > when saving the DataFrame itself in Spark.
> > > Since the generated bean class name also seems to be generated with some 
> > > internal rule (in this example it's 
> > > 'SQL_PUBLIC_PERSON_da18b6a2_8b41_4c34_9451_6fd9ace8e73d') I am not sure 
> > > if this usage pattern makes sense at all.
> > >
> > > I am very new to Apache Ignite so I'd like to apologize if this is a 
> > > silly question, but I am not able to find any clue in the official 
> > > documentation.
> > >
> > > Thanks,
> > > Luca
> 
> 

signature.asc
Description: This is a digitally signed message part


Re: Different behavior when saving date from Dataframe API and RDD API

2018-03-26 Thread Nikolay Izhikov
Hello, Ray.

> Please advise: is this behavior expected?

I think this behavior is expected,
because it's more efficient to query a specific affinity key value.
Anyway, I'm not an expert in the SQL engine, so I'm sending your question to the 
dev list.

Igniters,

I think this user question is related to the SQL engine, not the Data Frame 
integration, so can one of the SQL engine experts take a look?

In the first case the SQL table will be created as follows: `CREATE TABLE 
table_name (..., PRIMARY KEY (a, b, c, d)) WITH "template=partitioned,affinitykey=a"`

On Fri, 23/03/2018 at 00:48 -0700, Ray wrote:
> I was trying out one of Ignite 2.4's new features - saving data from
> dataframe.
> But I found some inconsistency between the Dataframe API and RDD API.
> 
> This is the code for saving a dataframe to Ignite.
> DF.write
> .format(FORMAT_IGNITE)
> .mode(SaveMode.Append)
> .option(OPTION_CONFIG_FILE, CONFIG)
> .option(OPTION_TABLE, "table_name")
> .option(OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS, "a,b,c,d")
> .option(OPTION_CREATE_TABLE_PARAMETERS,
> "template=partitioned,affinitykey=a")
> .option(OPTION_STREAMER_ALLOW_OVERWRITE, "true")
> .save()
> After data finished saving, I ran this command to create an index on field
> a.
> CREATE INDEX IF NOT EXISTS idx ON table_name (a);
> Then I run this query to see if the index is working.
> 
> explain select a from table_name where a = '303';
> PLAN  SELECT
> __Z0.a AS __C0_0
> FROM PUBLIC.table_name __Z0
> /* PUBLIC.AFFINITY_KEY: a = '303' */
> WHERE __Z0.a = '303'
> 
> But when I try query the data I insert in the old RDD way, the result is
> explain select a from table_name where a = '303';
> PLAN  SELECT
> __Z0.a AS __C0_0
> FROM PUBLIC.table_name __Z0
> /* PUBLIC.table_name_IDX: a = '303' */WHERE __Z0.a = '303'
> 
> The result shows that, with an affinity key, the created index is not used.
> I tried creating an index on another, non-affinity-key field, and that index
> works.
> Please advise: is this behavior expected?
> 
> Thanks
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/

signature.asc
Description: This is a digitally signed message part


Re: Saving a DataFrame from Spark and accessing data as key/value externally

2018-03-23 Thread Nikolay Izhikov
Hello, Luca.

Can you attach some simple reproducer or code piece that cause exception?

On Fri, 23/03/2018 at 14:31 +0100, Rosellini, Luca wrote:
> Hi all,
> I am using Apache Ignite 2.4 and I've successfully saved a Spark Dataframe as 
> a SQL table in the Ignite caching layer.
> 
> I am trying to access the data from an external Java program (completely 
> unrelated to the Spark Job that produced and saved the table) using the Cache 
> API, as if it were a key/value store.
> 
> The table, called 'PERSON', has a primary key field called UUID and maps to 
> an Ignite cache called SQL_PUBLIC_PERSON.
> 
> Using the Ignite Cache API I am able to check that a specific entry 
> exists in the cache by calling:
> cache.containsKey(...)
> 
> By the way, If I try to get the value calling cache.get(...) for a specific 
> key I get a ClassNotFoundException (full stacktrace is attached).
> 
> Now, I guess Ignite dynamically generated a schema bean for my DataFrame when 
> saving the DataFrame itself in Spark. 
> Since the generated bean class name also seems to be generated with some 
> internal rule (in this example it's 
> 'SQL_PUBLIC_PERSON_da18b6a2_8b41_4c34_9451_6fd9ace8e73d') I am not sure if 
> this usage pattern makes sense at all.
> 
> I am very new to Apache Ignite so I'd like to apologize if this is a silly 
> question, but I am not able to find any clue in the official documentation.
> 
> Thanks,
> Luca

signature.asc
Description: This is a digitally signed message part


Re: DataFrame support for Apache Spark 1.6

2018-03-19 Thread Nikolay Izhikov
Hello, Ray.

Currently, there are no plans to support Spark 1.6 in Ignite.
I doubt it can be done without significant changes to the existing code base.

Anyway, you can create a ticket [1],
and I will try to look at what can be done.

[1] https://issues.apache.org/jira/browse/IGNITE

On Mon, 19/03/2018 at 01:27 -0700, Ray wrote:
> I'm trying to save a Spark dataframe to Ignite 2.4 using Apache Spark 1.6,
> but it failed with the following error:
> Exception in thread "main" java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.ignite.spark.impl.IgniteRelationProvider could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:42)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.filter(TraversableLike.scala:263)
> at scala.collection.AbstractTraversable.filter(Traversable.scala:105)
> at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:59)
> at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:102)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
> at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:244)
> at IgniteDataFrameWriteExample$.main(IgniteDataFrameWriteExample.scala:40)
> at IgniteDataFrameWriteExample.main(IgniteDataFrameWriteExample.scala)
> Caused by: java.lang.NoClassDefFoundError: org/apache/spark/internal/Logging
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.getDeclaredConstructors0(Native Method)
> at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
> at java.lang.Class.getConstructor0(Class.java:3075)
> at java.lang.Class.newInstance(Class.java:412)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
> ... 16 more
> Caused by: java.lang.ClassNotFoundException: org.apache.spark.internal.Logging
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 33 more
> 
> But it works fine under Spark 2.2.
> So I'm wondering: will the Spark dataframe feature support Spark 1.6 in the
> future?
> I can't upgrade to Spark 2.2 because Cloudera won't upgrade.
> 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/

signature.asc
Description: This is a digitally signed message part


Re: About Apache Ignite 2.4 with jdk

2018-03-19 Thread Nikolay Izhikov
Hello, Lucky.

You can download source of Ignite from official repository:

https://ignite.apache.org/download.cgi#sources



On Mon, 19/03/2018 at 16:39 +0800, Lucky wrote:
> Hi,
>    Apache Ignite 2.4 is built with JDK 1.8.
>    Can you provide a JDK 1.7 version?
>    Or give the dependency jar list, then I can build it myself.
>    Thanks very much.
> 
> 
>  

signature.asc
Description: This is a digitally signed message part


Re: [Ignite 2.0.0] Stopping the node in order to prevent cluster wide instability.

2018-02-01 Thread Nikolay Izhikov
Hello, Valentin.

I'll try to take a look at this bug.


On Thu, 01/02/2018 at 12:35 -0700, vkulichenko wrote:
> Well, then you need IGNITE-3653 to be fixed I believe. Unfortunately, it's
> not assigned to anyone currently, so apparently no one is working on it. Are
> you willing to pick it up and contribute?
> 
> -Val
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/

signature.asc
Description: This is a digitally signed message part


Re: xml and java configuration

2018-01-09 Thread Nikolay Izhikov
Hello, Mikael.

Yes, it's possible.

You can load an IgniteConfiguration through the Ignition.loadSpringBean [1]
method.
After that you can modify the result if you want.

Please, look at example:

```
final String cfg = "modules/yardstick/config/ignite-localhost-config.xml";

IgniteConfiguration nodeCfg = Ignition.loadSpringBean(cfg, "grid.cfg");

nodeCfg.setIgniteInstanceName("MyNewName");

Ignition.start(nodeCfg);
```

[1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/Ignition.html#loadSpringBean(java.lang.String,%20java.lang.String)

On Tue, 09/01/2018 at 15:19 +0100, Mikael wrote:
> Hi!
> 
> Is it possible to mix Ignite configuration so I can have the basic 
> configuration in my xml files and add some extra configuration from the 
> Java application, or do I have to use one or the other?
> 
> Mikael
> 
> 


Re: Spark data frames integration merged

2018-01-05 Thread Nikolay Izhikov
Hello, guys.

Currently `getPreferredLocations` is implemented in 
`IgniteRDD -> IgniteAbstractRDD`.

But the DataFrame implementation uses 
`IgniteSQLDataFrameRDD -> IgniteSqlRDD -> IgniteAbstractRDD`,

where `->` denotes extension.

So, for now, getPreferredLocations isn't implemented for an
Ignite DataFrame.

Please take a look at [1], [2].

I think it's a very good idea to implement `getPreferredLocations` inside
`IgniteSQLDataFrameRDD` or even inside `IgniteAbstractRDD`.

Can someone file a ticket? Or I can do it myself.


[1] - https://github.com/apache/ignite/blob/master/modules/spark/src/main/scala/org/apache/ignite/spark/IgniteRDD.scala#L50

[2] - https://github.com/apache/ignite/blob/master/modules/spark/src/main/scala/org/apache/ignite/spark/impl/IgniteSQLDataFrameRDD.scala#L40


On Wed, 03/01/2018 at 15:35 -0800, Valentin Kulichenko wrote:
> Revin,
> 
> I doubt IgniteRDD#getPreferredLocations has any effect on data
> frames, but this is an interesting point. Nikolay, as the developer of
> this functionality, can you please comment on this?
> 
> -Val
> 
> On Wed, Jan 3, 2018 at 1:22 PM, Revin Chalil 
> wrote:
> > Thanks Val for the info on indexes with DF. Do you know if adding
> > indexes / affinity keys on the cache helps with the join, when the
> > IgniteRDD is joined with a Spark DF? The below from the docs says that
> > 
> > “IgniteRDD also provides affinity information to Spark via the
> > getPreferredLocations method so that RDD computations use data
> > locality.”
> > 
> > I was wondering if the affinity key on the cache can be utilized in
> > the Spark join?
> > 
> > 
> > On 1/3/18, 12:27 PM, "vkulichenko" 
> > wrote:
> > 
> > Indexes would not be used during joins, at least in current
> > implementation.
> > Current integration is implemented as a regular Spark data
> > source which
> > provides each relation separately. Spark then performs join by
> > itself, so
> > Ignite indexes do not help.
> > 
> > The easiest way to get binaries would be to use a nightly build
> > [1] , but it
> > seems to be broken for some reason (latest is from May 31). I
> > guess the only
> > option at the moment is to build from source.
> > 
> > [1]
> > https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/
> > 
> > -Val
> > 
> > 
> > 
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> > 
> > 
> 
> 


Re: Spark data frames integration merged

2017-12-29 Thread Nikolay Izhikov
Thank you, guys.

Val, thanks for all reviews, advices and patience.

Anton, thanks for ignite wisdom you share with me.

Looking forward to the next issues :)

P.S. Happy New Year to the whole Ignite community!

On Fri, 29/12/2017 at 13:22 -0800, Valentin Kulichenko wrote:
> Igniters,
> 
> Great news! We completed and merged first part of integration with
> Spark data frames [1]. It contains implementation of Spark data
> source which allows to use DataFrame API to query Ignite data, as
> well as join it with other data frames originated from different
> sources.
> 
> Next planned steps are the following:
> - Implement custom execution strategy to avoid transferring data from
> Ignite to Spark when possible [2]. This should give serious
> performance improvement in cases when only Ignite tables participate
> in a query.
> - Implement ability to save a data frame into Ignite via
> DataFrameWrite API [3].
> 
> [1] https://issues.apache.org/jira/browse/IGNITE-3084
> [2] https://issues.apache.org/jira/browse/IGNITE-7077
> [3] https://issues.apache.org/jira/browse/IGNITE-7337
> 
> Nikolay Izhikov, thanks for the contribution and for all the hard
> work!
> 
> -Val


Re: List of running Continuous queries or CacheEntryListener per cache or node

2017-12-21 Thread Nikolay Izhikov
Hello, Dmitry.

I think it's a great idea.

Do we have a feature to list all running ComputeTasks?

I personally think we have to implement the possibility to track all
user-provided tasks - CacheListener, ContinuousQuery, ComputeTasks,
etc.

On Thu, 21/12/2017 at 10:13 +0300, Dmitry Karachentsev wrote:
> Crossposting to devlist.
> 
> Hi Igniters!
> 
> It might be a nice feature to have - get a list of registered continuous 
> queries with the ability to deregister them.
> 
> What do you think?
> 
> Thanks!
> -Dmitry
> 
> 20.12.2017 16:59, fefe пишет:
> > For sanity checks or tests. I want to be sure that I haven't forgotten to
> > deregister any listener.
> > 
> > It's also a very important metric to see how many continuous
> > queries/listeners are currently running.
> > 
> > 
> > 
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> 
> 


Re: Is there any way to listener to a specific remote cache's update events ?

2017-09-05 Thread Nikolay Izhikov

Hello, Aaron.

I think a continuous query is what you need:

https://apacheignite.readme.io/docs/continuous-queries#section-local-listener

You can also use Ignite as a JCache implementation and register a JCache 
listener on IgniteCache:


https://static.javadoc.io/javax.cache/cache-api/1.0.0/javax/cache/Cache.html#registerCacheEntryListener(javax.cache.configuration.CacheEntryListenerConfiguration)

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html
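
For example, a minimal sketch of the continuous-query approach run from a
standalone machine (cache name and types are assumptions):

```
// A minimal sketch: a client node subscribes to updates of one specific cache;
// the local listener runs on this standalone machine and is the place to
// persist the events to the historical database.
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class MarketDataListener {
    public static void main(String[] args) {
        Ignition.setClientMode(true); // standalone listener machine, holds no data

        Ignite client = Ignition.start();

        IgniteCache<String, Double> cache = client.cache("marketData");

        ContinuousQuery<String, Double> qry = new ContinuousQuery<>();

        // Called on this client for every update of the "marketData" cache only.
        qry.setLocalListener(events ->
            events.forEach(e -> System.out.println(e.getKey() + " -> " + e.getValue())));

        QueryCursor<Cache.Entry<String, Double>> cur = cache.query(qry);
        // Keep 'cur' open for as long as you want to receive updates; cur.close() stops it.
    }
}
```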

On 05.09.2017 at 14:26, aa...@tophold.com wrote:

hi All,

I looked around the events section trying to find a way to catch a specific 
cache's update events and persist those events to a historical database.

We have an instance updating a market-data Ignite cache; on another side we 
need to persist all those historical events.

The listener side is hosted on a standalone machine; it does not care about the 
data in the cache, only the updates.

We tried several ways, local and remote, but it seems only the cache data node 
got the events, while the remote node did not.

Also, this cache data node may have multiple caches, while we only want to 
monitor one of them.

Another way we tried was to define a specific topic and manually trigger a 
publish event; that seems to work.

Thanks for your time!

Regards
Aaron

aa...@tophold.com