Re: does peerClassLoadingEnabled in client mode make a difference

2019-01-23 Thread Denis Magda
Shiva,

Just to clarify, peer class loading (p2p) is a global setting and cannot be
set on/off for particular nodes.

As for your case, the client has to send a compute task or invoke a
p2p-enabled API first; after that, the server that processes the
task/request will preload the missing classes. Do you use p2p this way?

-
Denis
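
For reference, the flag itself lives on IgniteConfiguration; a minimal Spring XML sketch (the property name is Ignite's standard one, and it should be set consistently across server and client nodes):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Peer class loading must have the same value on every node. -->
    <property name="peerClassLoadingEnabled" value="true"/>
</bean>
```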


On Wed, Jan 23, 2019 at 10:15 AM shivakumar 
wrote:

> Hi Stan,
> thanks for your reply!
>
> But in my case, I deployed Ignite in a Kubernetes environment and am
> starting a client node on a VM, which connects to the Ignite server (the
> client is configured to use
> org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder
> as the discovery mechanism and is given the IP addresses of the Ignite
> server pods/containers),
> and I am placing the "keytype" and "valuetype" (custom key and value) class
> files on the client node's classpath (under the libs folder of the Apache
> Ignite home path where I start the client process), but the servers are not
> loading these classes and throw ClassNotFoundException for them; if I place
> those classes inside the pods, the classes are loaded.
> Why are the class files not loaded when I place them on the client side,
> even though peerClassLoading is enabled on the client side?
> Am I missing something?
>
> with thanks,
> shiva
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


RE: does peerClassLoadingEnabled in client mode make a difference

2019-01-23 Thread shivakumar
Hi Stan,
thanks for your reply!

But in my case, I deployed Ignite in a Kubernetes environment and am
starting a client node on a VM, which connects to the Ignite server (the
client is configured to use
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder
as the discovery mechanism and is given the IP addresses of the Ignite
server pods/containers),
and I am placing the "keytype" and "valuetype" (custom key and value) class
files on the client node's classpath (under the libs folder of the Apache
Ignite home path where I start the client process), but the servers are not
loading these classes and throw ClassNotFoundException for them; if I place
those classes inside the pods, the classes are loaded.
Why are the class files not loaded when I place them on the client side,
even though peerClassLoading is enabled on the client side?
Am I missing something?

with thanks,
shiva





RE: does peerClassLoadingEnabled in client mode make a difference

2019-01-23 Thread Stanislav Lukyanov
Yes, that’s actually the intended usage.

Stan

From: shivakumar
Sent: 23 January 2019 20:47
To: user@ignite.apache.org
Subject: does peerClassLoadingEnabled in client mode make a difference

When peerClassLoadingEnabled is enabled on a client node that joins a
cluster of servers, and a class/jar is placed on the client's classpath, is
it possible for the servers to use those classes/jars?







does peerClassLoadingEnabled in client mode make a difference

2019-01-23 Thread shivakumar
When peerClassLoadingEnabled is enabled on a client node that joins a
cluster of servers, and a class/jar is placed on the client's classpath, is
it possible for the servers to use those classes/jars?






RE: Baselined node rejoining crashes other baseline nodes - DuplicateKeyError

2019-01-23 Thread Stanislav Lukyanov
Hi,

I’ve reproduced this and have a fix – I guess it’ll be available with 2.8.
Meanwhile I can only suggest not to create indexes without an explicit name.
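
As a sketch of the suggested workaround, name indexes explicitly when creating them (the table, column, and index names below are illustrative, not from the thread):

```sql
-- Explicit index name instead of relying on an auto-generated one.
CREATE INDEX person_city_idx ON Person (city);
```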

Stan

From: mahesh76private
Sent: 16 January 2019 12:39
To: user@ignite.apache.org
Subject: RE: Baselined node rejoining crashes other baseline nodes - 
DuplicateKeyError

Stan, thanks for the visibility. 

-1-
Over the last year, we moved through various versions of Ignite: 2.4, 2.5,
to 2.7. I always keep the work folder intact.
-2-
Over the course of development, we might have tried to create an index a
second time (or more) on the same column on which an index already existed.
Could that cause confusion at the Ignite level, especially in a multi-node
scenario? Was something out of sync? Was a check missing?
-3-
Over a period of time, we dropped and recreated the table and its indexes
several times. Was something stale left in the work folder? We always used
2 or more nodes.
-4-
Over a period of time, we saw issues with index creation as well. My
colleague posted another strange behaviour with index creation; see the
issue here:
http://apache-ignite-users.70518.x6.nabble.com/Failing-to-create-index-on-Ignite-table-column-td26252.html#a26258
In summary, if we don't give index names, Ignite throws exceptions.
 

Something seems to be wrong with Ignite's index handling in a multi-node
environment.

Regarding your point 2 (the JIRA): absolutely, it makes sense not to crash
the node on this exception. We have about 100 GB of data (tables) in Ignite,
and the only workaround right now seems to be:

Boot node 1 and keep its work folder.
Boot node 2 after removing its work folder.

Although this scenario works, it gives the cluster a downtime of about 1-2
hours, which is not acceptable for our customers.






Re: Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour

2019-01-23 Thread Humphrey
Thanks Ilya,

It works with runAsync.
 
(Question) Can you clarify why it works well on a single node but, when
going to two nodes, it doesn't and we get the exceptions? I was expecting it
to also happen on one server node.

Humphrey





Re: QueryEntities - inserts with the Enum fields - failing with cachestore turned on

2019-01-23 Thread michal23849
Hi,

When I specify it as:


Then I got the error when converting the inputs to Enum:

2019-01-23T17:38:48,340 ERROR o.a.i.i.p.o.j.JdbcRequestHandler
[client-connector-#79] Failed to execute SQL query [reqId=0,
req=JdbcQueryExecuteRequest [schemaName=PUBLIC, pageSize=1024, maxRows=0,
sqlQry=INSERT INTO GEN.Entitlements  (NAME, TYPE) VALUES
('ent1','FUNCTION'), args=[], stmtType=ANY_STATEMENT_TYPE]]
org.apache.ignite.internal.processors.query.IgniteSQLException: Value
conversion failed [from=java.lang.String,
to=com.myproject.model.EntitlementType]
Caused by: org.h2.message.DbException: Hexadecimal string contains non-hex
character: "FUNCTION" [90004-195]

Or is there another, more direct way to enforce the conversion?

Regards
Michal






Unsubscribe

2019-01-23 Thread Viraj Rathod
-- 
Regards,
Viraj Rathod


RE: Error while persisting from Ignite to Hive for a BinaryObject

2019-01-23 Thread Premachandran, Mahesh (Nokia - IN/Bangalore)
Hi Ilya,

The field apn_id is of type Long. I have been using the CacheJdbcPojoStore;
does that map BinaryObjects to the database schema, or is it only for Java
POJOs? I have attached the XML I am using with the client.

Mahesh

From: Ilya Kasnacheev 
Sent: Wednesday, January 23, 2019 6:43 PM
To: user@ignite.apache.org
Subject: Re: Error while persisting from Ignite to Hive for a BinaryObject

Hello!

I think that your CacheStore implementation is confused by nested fields or
binary object values (what is the type of apn_id?). Consider using
CacheJdbcBlobStoreFactory instead, which will serialize the value to one big
field in BinaryObject format.

Regards,
--
Ilya Kasnacheev


Wed, 23 Jan 2019 at 15:47, Premachandran, Mahesh (Nokia - IN/Bangalore)
<mahesh.premachand...@nokia.com>:
Hi all,

I am trying to stream some data from Kafka to Ignite using IgniteDataStreamer
and use 3rd-party persistence to move it to Hive. The data on Kafka is in Avro
format, which I am deserialising, populating an Ignite BinaryObject using the
binary builder, and pushing to Ignite. It works well when I do not enable
3rd-party persistence, but once that is enabled, it throws the following
exception.

[12:32:07] (err) Failed to execute compound future reducer: GridCompoundFuture 
[rdc=null, initFlag=1, lsnrCalls=2, done=true, cancelled=false, err=class 
o.a.i.IgniteCheckedException: DataStreamer request failed 
[node=292ab229-61fb-4d61-8f08-33c8abd310a2], futs=[true, true, true]]class 
org.apache.ignite.IgniteCheckedException: DataStreamer request failed 
[node=292ab229-61fb-4d61-8f08-33c8abd310a2]
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1912)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:346)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.cache.integration.CacheWriterException: class 
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: 
Failed to update keys (retry update if possible).: [2]
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1280)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1734)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1087)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:788)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Individual.receive(DataStreamerCacheUpdaters.java:121)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:400)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)
   at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:60)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:90)
... 6 more
Caused by: class 
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: 
Failed to update keys (retry update if possible).: [2]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:253)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:303)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:300)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.map(GridDhtAtomicAbstractUpdateFuture.java:390)
at 

Re: Ignite with Spring cache

2019-01-23 Thread AndrewV
I found the mistake. I was testing this inside the real application, and the
caches had been evicted before I tried to iterate over them. Thanks a lot
for your time.





Unsubscribe

2019-01-23 Thread Dharrao,Nitin






Re: Eviction policy is not working for default and new data region

2019-01-23 Thread vyhignite1
Thanks for the reply  and here is the link for the code in GitHub:

https://github.com/ghvchen/vfs4j-ignite


Thanks,





Re: nodes getting disconnected from cluster

2019-01-23 Thread Ilya Kasnacheev
Hello!

I don't see any lengthy GC pauses, yet one node was segmented. It is
unclear what exactly would cause this.

Can you try increasing failureDetectionTimeout to 2 minutes (120000 ms) and
retrying? Please attach logs if there is a failure again.
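
The timeout can be set on the node configuration; a minimal Spring XML sketch (the property name is Ignite's standard one, value in milliseconds):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- 2 minutes, in milliseconds. -->
    <property name="failureDetectionTimeout" value="120000"/>
</bean>
```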

Regards,
-- 
Ilya Kasnacheev


Tue, 8 Jan 2019 at 17:33, Akash Shinde :

> Hi Evgenii ,
>
> I am starting 7 Ignite nodes on 7 VMs. But to narrow down the problem, I
> started only two server nodes on two VMs, core03 and core04. Initially
> these VMs were on different VHSs, so we moved the two VMs onto the same VHS
> (to avoid network issues) and checked the network bandwidth using iperf.
> The network bandwidth is now 6.7 Gbps. Then I started one client node from
> a laptop just to check the cluster status.
>
> But even after doing this I am facing the same problem. The nodes are
> segmenting during the data loading.
>
> I have attached the logs for two server nodes. It also contains gc logs.
>
>
> Thanks,
> Akash
>
> On Tue, Jan 8, 2019 at 6:00 AM Evgenii Zhuravlev 
> wrote:
>
>> Hi,
>>
>> Can you share logs from all nodes, especially from node qagmscore02/
>> 10.114.113.53:47500 ?
>>
>> Evgenii
>>
Mon, 7 Jan 2019 at 08:14, Akash Shinde :
>>
>>> Hi,
>>> Someone could please help me on this issue.
>>>
>>> Thanks,
>>> Akash
>>>
>>> On Thu, Jan 3, 2019 at 5:46 PM Akash Shinde 
>>> wrote:
>>>
 Hi,

 I am getting " Timed out waiting for message delivery receipt" WARN
 message in my logs.
 But I am sure that it is not happening because of a long GC pause. I have
 checked the memory utilization, and it is very low.

 I also tried to check the connectivity between two nodes between which
 the timeout is happening.
 bandwidth is as shown below.

 [ ID] Interval   Transfer Bandwidth
 [  4]  0.0-10.1 sec   855 MBytes   708 Mbits/sec

 Many times I get the following message in my logs. Is it because the two
 nodes are not able to communicate within the given time limit?

 *ERROR:*
  Blocked system-critical thread has been detected. This can lead to
 cluster-wide undefined behaviour [threadName=tcp-disco-msg-worker,
 blockedFor=14s]

 I have also attached a log snippet. Can someone please help to narrow
 down the issue?

 Thanks,
 Akash

>>>


Re: Eviction policy is not working for default and new data region

2019-01-23 Thread vyhignite1
Here is the code link in GitHub:


https://github.com/ghvchen/vfs4j-ignite


Thanks,





Re: Ignite with Spring cache

2019-01-23 Thread Ilya Kasnacheev
Hello!

You can do ignite().cache(cacheName).withKeepBinary().iterator() to capture
any entries if they are there. Can you try it and dump the contents
verbatim?

Regards,
-- 
Ilya Kasnacheev


Wed, 23 Jan 2019 at 16:49, AndrewV :

> Yes, I've tried ScanQuery. I understand that TextQuery is not good and I
> use
> it just for testing.
> But in both cases, I don't have any results.
>
> Here i get cache object:
> IgniteCache cache = Ignition.ignite().cache(cacheName);
>
> but I don't have any results when I try to find something inside when data
> definitely in the cache.
> Probably I made a mistake, but I have no idea how to debug it.
>
>
>
>


Re: Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour

2019-01-23 Thread Ilya Kasnacheev
Hello!

As a general principle, you should avoid doing any blocking operations from
event handlers, which is precisely what you are doing here.

If you replace run() with runAsync() in your service impl, it will finish
all right with two ServerNodes.
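
The principle behind this advice — return from the event callback immediately and hand the blocking work to another thread — can be sketched with plain java.util.concurrent (the class and method names below are illustrative stand-ins, not the Ignite API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrates replacing a blocking run() with an asynchronous hand-off:
// the "event handler" submits the slow work to a worker executor and
// returns right away instead of blocking its own thread.
public class AsyncHandoff {
    private static final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Called from the event-handler thread; returns immediately.
    static Future<String> onEvent(String payload) {
        return worker.submit(() -> {
            Thread.sleep(50); // stand-in for a blocking compute task
            return "processed:" + payload;
        });
    }

    public static void main(String[] args) throws Exception {
        Future<String> result = onEvent("ALPHA");
        // The callback thread is free here; we only block in main for the demo.
        System.out.println(result.get());
        worker.shutdown();
    }
}
```

In Ignite terms, the continuous-query callback would submit the BRAVO-generating compute task asynchronously rather than waiting for it inside the listener.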

Regards,
-- 
Ilya Kasnacheev


Wed, 23 Jan 2019 at 16:34, Humphrey :

> Hello everyone,
>
> I'm getting the error below when running more than 1 ServerNode.
>
> The idea of what we want to achieve is the following:
>
> 1) A client node will be adding data (ALPHA) to a partitioned cache
> (CACHE_ALPHA).
> 2) In the cluster (server nodes) we have a Node-Singleton Service deployed,
> which has a continuous query to handle the CREATED events of the data added
> from the client on local cache (cache.setLocal(true)).
> 3) For each ALPHA event we should generate one (or more) BRAVO data and add
> them to the cache(CACHE_BRAVO), this is done by a compute task in the event
> handler.
>
> This seems to work fine until we start a second ServerNode. What are we
> doing wrong here? We would like to process the events generated by the
> CACHE_ALPHA with compute tasks. Eventually we would like to have another
> Service as well for handling events of CACHE_BRAVO, but we are already facing
> problems handling events of the continuous query of one cache on multiple
> server nodes.
>
> I have a reproducer attached here.
>
> striped-pool-starvation.zip
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1004/striped-pool-starvation.zip>
>
>
> Humphrey
>
>
>
>
>
>
>


Re: LoadCaches example fails

2019-01-23 Thread Ilya Kasnacheev
Hello!

So you are saying it works for you if you restart the IDE? Have you tried
doing a clean build instead, or running the Maven "clean install" target?
Where is that properties file located?

Regards,
-- 
Ilya Kasnacheev


Wed, 23 Jan 2019 at 16:31, arun jayapal :

> Yes. I figured that. But even if I update the file (secret.properties),
> IntelliJ (the IDE I use) doesn't accept the updated values.
>
> This is not an Ignite problem per se, but it would be nice if someone gave
> me a workaround for it.
>
> Currently I restart the IDE. I have no idea how that helps...
>
> On Wed 23 Jan, 2019, 12:03 deostroll wrote:
>> I created a cluster and imported a postgres database to the cluster. I
>> created two ignite instances using that cluster config (from the maven
>> project made available via download though the web-console). I even
>> started
>> a client instance (by running the ClientNodeCodeStartup file).
>>
>> The server as well as client reported their counts correctly.
>>
>> Next, I ran the LoadCaches file. This threw an error. My assumption was
>> this
>> was supposed to load the caches with data. I want to understand why this
>> file failed. Pasting the console output below between begin and end
>> pastes.
>>
>> ---BEGIN_PASTE---
>> "C:\Program Files\Java\jdk1.8.0_181\bin\java.exe" "-javaagent:C:\Program
>> Files\JetBrains\IntelliJ IDEA Community Edition
>> 2018.2.6\lib\idea_rt.jar=62690:C:\Program Files\JetBrains\IntelliJ IDEA
>> Community Edition 2018.2.6\bin" -Dfile.encoding=UTF-8 -classpath
>> "C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\charsets.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\deploy.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\access-bridge-64.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\cldrdata.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\dnsns.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\jaccess.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\jfxrt.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\localedata.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\nashorn.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\sunec.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\sunjce_provider.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\sunmscapi.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\sunpkcs11.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\ext\zipfs.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\javaws.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\jce.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\jfr.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\jfxswt.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\jsse.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\management-agent.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\plugin.jar;C:\Program
>> Files\Java\jdk1.8.0_181\jre\lib\resources.jar;C:\Program
>>
>> 

Re: Ignite with Spring cache

2019-01-23 Thread AndrewV
Yes, I've tried ScanQuery. I understand that TextQuery is not good and I use
it just for testing.
But in both cases, I don't have any results.

Here i get cache object:
IgniteCache cache = Ignition.ignite().cache(cacheName);

but I don't have any results when I try to find something inside when data
definitely in the cache.
Probably I made a mistake, but I have no idea how to debug it.





Blocked system-critical thread has been detected. This can lead to cluster-wide undefined behaviour

2019-01-23 Thread Humphrey
Hello everyone,

I'm getting the error below when running more than 1 ServerNode.

The idea of what we want to achieve is the following:

1) A client node will be adding data (ALPHA) to a partitioned cache
(CACHE_ALPHA).
2) In the cluster (server nodes) we have a Node-Singleton Service deployed,
which has a continuous query to handle the CREATED events of the data added
from the client on local cache (cache.setLocal(true)).
3) For each ALPHA event we should generate one (or more) BRAVO data and add
them to the cache(CACHE_BRAVO), this is done by a compute task in the event
handler.

This seems to work fine until we start a second ServerNode. What are we
doing wrong here? We would like to process the events generated by the
CACHE_ALPHA with compute tasks. Eventually we would like to have another
Service as well for handling events of CACHE_BRAVO, but we are already facing
problems handling events of the continuous query of one cache on multiple
server nodes.

I have a reproducer attached here.

striped-pool-starvation.zip

  

Humphrey








Re: Ignite with Spring cache

2019-01-23 Thread Ilya Kasnacheev
Hello!

new TextQuery(Person.class, "PERSON_ID")

It looks like you are searching for the text "PERSON_ID" in all text fields
of Person.class. Are you sure this is what you want here? Perhaps you wanted
ScanQuery instead?

Regards,
-- 
Ilya Kasnacheev


Tue, 22 Jan 2019 at 21:12, AndrewV :

> I have a Spring application with Ignite cache configured according to
> samples
> - https://apacheignite-mix.readme.io/docs/spring-caching
>
> *My configured Spring bean:*
> --
> @Bean
> public SpringCacheManager springCacheManager() {
> SpringCacheManager cm = new SpringCacheManager();
> IgniteConfiguration igniteConf = new IgniteConfiguration();
>
> igniteConf.setClientMode(true);
>
> /* Dynamic cache configuration */
> CacheConfiguration dynamicCacheConfig = new CacheConfiguration();
>
>
> dynamicCacheConfig.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(Duration.TEN_MINUTES));
> dynamicCacheConfig.setCacheMode(CacheMode.PARTITIONED);
> dynamicCacheConfig.setBackups(0);
> dynamicCacheConfig.setIndexedTypes(Integer.class, Object.class);
> cm.setDynamicCacheConfiguration(dynamicCacheConfig);
> return cm;
> }
> --
>
>
> For caching I use Spring Cache annotation in service layer:
> *Service example*:
>
> @Cacheable("getPersonById_cache")
> Person getPersonById(int id) {...}
>
>
> @Cacheable("getPersonsByParams_cache")
> List getPersonsByParams(...) {...}
>
>
> *What I want to achieve:*
> When I'm changing Person entity I want to revalidate
> getPersonsByParams_cache, but just records (List) which contain this
> Person.
>
> So I decided to use Cache Queries (scan or text) to find matching records
> in
> the cache, something like that:
> IgniteCache cache = Ignition.ignite().cache(cacheName);
>
> QueryCursor> result =
> cache.query(new TextQuery(Person.class, "PERSON_ID"));
>
> System.out.println(result.getAll());
>
> but my result is always empty...
> Maybe we have another way to achieve that?
> Thanks.
>
>
>
>
>
>
>


Re: Prevent automatic Ignite server node start for Spring Boot

2019-01-23 Thread Ilya Kasnacheev
Hello!

It seems that somebody (like Spring Boot) has started

org.apache.ignite.cache.CacheManager

How to prevent it, I am not sure; you should probably consult the Spring
Boot docs.

Regards,
-- 
Ilya Kasnacheev


Wed, 23 Jan 2019 at 09:13, Humphrey :

> Did you rebuild your project, or update your Maven dependencies? This
> should not happen if you removed the Ignite dependency.
>
> Also, it should only start a node if you have something (a bean) of Ignite
> in your Spring configuration.
>
>
>
>


Re: LoadCaches example fails

2019-01-23 Thread arun jayapal
Yes. I figured that. But even if I update the file (secret.properties),
IntelliJ (the IDE I use) doesn't accept the updated values.

This is not an Ignite problem per se, but it would be nice if someone gave
me a workaround for it.

Currently I restart the IDE. I have no idea how that helps...

On Wed 23 Jan, 2019, 12:03 deostroll wrote:
> I created a cluster and imported a postgres database to the cluster. I
> created two ignite instances using that cluster config (from the maven
> project made available via download though the web-console). I even started
> a client instance (by running the ClientNodeCodeStartup file).
>
> The server as well as client reported their counts correctly.
>
> Next, I ran the LoadCaches file. This threw an error. My assumption was
> this
> was supposed to load the caches with data. I want to understand why this
> file failed. Pasting the console output below between begin and end pastes.
>
> ---BEGIN_PASTE---
> "C:\Program Files\Java\jdk1.8.0_181\bin\java.exe" "-javaagent:C:\Program
> Files\JetBrains\IntelliJ IDEA Community Edition
> 2018.2.6\lib\idea_rt.jar=62690:C:\Program Files\JetBrains\IntelliJ IDEA
> Community Edition 2018.2.6\bin" -Dfile.encoding=UTF-8 -classpath
> "C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\charsets.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\deploy.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\access-bridge-64.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\cldrdata.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\dnsns.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\jaccess.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\jfxrt.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\localedata.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\nashorn.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\sunec.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\sunjce_provider.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\sunmscapi.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\sunpkcs11.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\zipfs.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\javaws.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\jce.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\jfr.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\jfxswt.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\jsse.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\management-agent.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\plugin.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\resources.jar;C:\Program
>
> 

Re: LoadCaches example fails

2019-01-23 Thread Ilya Kasnacheev
Hello!

> Caused by: java.net.UnknownHostException: [host]:[port]

I think you are supposed to put the Postgres host and port somewhere. Note
that this is not directly related to Apache Ignite.

Regards,
-- 
Ilya Kasnacheev


Wed, 23 Jan 2019 at 09:33, deostroll :

> I created a cluster and imported a postgres database to the cluster. I
> created two ignite instances using that cluster config (from the maven
> project made available via download though the web-console). I even started
> a client instance (by running the ClientNodeCodeStartup file).
>
> The server as well as client reported their counts correctly.
>
> Next, I ran the LoadCaches file. This threw an error. My assumption was
> this
> was supposed to load the caches with data. I want to understand why this
> file failed. Pasting the console output below between begin and end pastes.
>
> ---BEGIN_PASTE---
> "C:\Program Files\Java\jdk1.8.0_181\bin\java.exe" "-javaagent:C:\Program
> Files\JetBrains\IntelliJ IDEA Community Edition
> 2018.2.6\lib\idea_rt.jar=62690:C:\Program Files\JetBrains\IntelliJ IDEA
> Community Edition 2018.2.6\bin" -Dfile.encoding=UTF-8 -classpath
> "C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\charsets.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\deploy.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\access-bridge-64.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\cldrdata.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\dnsns.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\jaccess.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\jfxrt.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\localedata.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\nashorn.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\sunec.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\sunjce_provider.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\sunmscapi.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\sunpkcs11.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\ext\zipfs.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\javaws.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\jce.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\jfr.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\jfxswt.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\jsse.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\management-agent.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\plugin.jar;C:\Program
> Files\Java\jdk1.8.0_181\jre\lib\resources.jar;C:\Program
>
> 

Re: QueryEntities - inserts with the Enum fields - failing with cachestore turned on

2019-01-23 Thread Ilya Kasnacheev
Hello!

> 

Why do you specify the type of 'type' as String? Have you tried specifying
the actual enum type here? Do you have a reproducer project for this
behavior, with e.g. H2 used as the underlying database?
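
A sketch of what "specifying the actual enum type" could look like in the QueryEntity fields configuration (the enum class name is taken from the error message in the thread; whether this resolves the conversion is exactly what the reproducer would confirm):

```xml
<property name="fields">
    <map>
        <entry key="name" value="java.lang.String"/>
        <!-- Declare the field with the enum class instead of String. -->
        <entry key="type" value="com.myproject.model.EntitlementType"/>
    </map>
</property>
```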

Regards,
-- 
Ilya Kasnacheev


Wed, 23 Jan 2019 at 11:40, michal23849 :

> Hi,
>
> I have a problem with a very simple class being exposed to SQL with an
> Enum field, and I want to use it together with a custom CacheStore class
> managing the writeBehind to a SQLServer database.
>
> My class is:
>
> public class Entitlement  {
> Type type;
> String name;
> (...)
> }
>
> Where type is an Enum:
>
> public enum Type {
> FUNCTION,
> INDEX
> }
>
> And I am exposing this through the QueryEntities configuration, where the
> enum is exposed as a String:
> <property name="queryEntities">
>     <list>
>         <bean class="org.apache.ignite.cache.QueryEntity" lazy-init="true">
>             <property name="tableName" value="Entitlements"/>
>             <property name="keyType" value="java.lang.String"/>
>             <property name="valueType"
>                       value="com.csg.ps.d1idxrds.entitlements.model.Entitlement"/>
>             <property name="keyFieldName" value="name"/>
>             <property name="fields">
>                 <map>
>                     <entry key="name" value="java.lang.String"/>
>                     <entry key="type" value="java.lang.String"/>
>                 </map>
>             </property>
>             <property name="indexes">
>                 <list>
>                     <bean class="org.apache.ignite.cache.QueryIndex">
>                         <property name="fieldNames">
>                             <list>
>                                 <value>name</value>
>                             </list>
>                         </property>
>                         <property name="indexType" value="SORTED"/>
>                     </bean>
>                 </list>
>             </property>
>         </bean>
>     </list>
> </property>
>
> The selects work. Then I run the following insert from a SQL client
> (DBeaver):
> INSERT  INTO gen.ENTITLEMENTS(NAME, TYPE) VALUES ('PROF12', 'FUNCTION1');
>
> The cache has native persistence turned off. The insert succeeds when
> there is no 3rd-party persistence turned on.
>
> Even the following query succeeds with no CacheStore (where FUNCTION1 is
> not among the enum values):
> INSERT  INTO gen.ENTITLEMENTS(NAME, TYPE) VALUES ('PROF12', 'FUNCTION1');
>
> However, when I switch on the third-party persistence, the insert fails,
> complaining that the field is an Enum, not a String.
>
> 2019-01-23T09:24:48,588 ERROR o.a.i.i.p.o.j.JdbcRequestHandler
> [client-connector-#43] Failed to execute SQL query [reqId=0,
> req=JdbcQueryExecuteRequest [schemaName=PUBLIC, pageSize=1024, maxRows=0,
> sqlQry=INSERT INTO GEN.Entitlements  (NAME, TYPE) VALUES
> ('ent1','FUNCTION'), args=[], stmtType=ANY_STATEMENT_TYPE]]
> org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to
> execute DML statement [stmt=INSERT INTO GEN.Entitlements  (NAME, TYPE)
> VALUES ('ent1','FUNCTION'), params=null]
>
> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to
> deserialize object [typeName=com.myproject.model.Entitlement]
>
> Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to read
> field [name=type]
>
> Caused by: org.apache.ignite.binary.BinaryObjectException: Unexpected field
> type [pos=24, expected=Enum, actual=String]
>
> How can this be solved so that inserts using Enums are accepted by the
> CacheStore class and written to the SQL DB?
>
> If I have the class with no Enums, the inserts work just fine and the data
> is persisted in the 3rd-party SQLServer DB by my custom cachestore class'
> write methods.
>
> Thanks
> Michal
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Error while persisting from Ignite to Hive for a BinaryObject

2019-01-23 Thread Ilya Kasnacheev
Hello!

I think that your CacheStore implementation is confused by nested fields or
binary object values (what is the type of apn_id?). Consider using
CacheJdbcBlobStoreFactory instead, which will serialize the value to one big
field in BinaryObject format.
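As a rough sketch of what that wiring could look like — the connection URL and credentials below are placeholders, not a tested Hive setup:

```java
import org.apache.ignite.cache.store.jdbc.CacheJdbcBlobStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;

public class BlobStoreConfig {
    public static CacheConfiguration<Long, Object> cacheConfig() {
        // The blob store serializes the whole value into one BLOB column,
        // sidestepping any per-field mapping of nested/binary fields.
        CacheJdbcBlobStoreFactory<Long, Object> storeFactory = new CacheJdbcBlobStoreFactory<>();
        storeFactory.setConnectionUrl("jdbc:hive2://hive-host:10000/default"); // placeholder URL
        storeFactory.setUser("user");       // placeholder
        storeFactory.setPassword("secret"); // placeholder

        return new CacheConfiguration<Long, Object>("binaryCache")
            .setCacheStoreFactory(storeFactory)
            .setWriteThrough(true)
            .setReadThrough(true);
    }
}
```

The trade-off is that the persisted data is an opaque blob, so it won't be queryable as columns on the Hive side.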

Regards,
-- 
Ilya Kasnacheev


ср, 23 янв. 2019 г. в 15:47, Premachandran, Mahesh (Nokia - IN/Bangalore) <
mahesh.premachand...@nokia.com>:

> Hi all,
>
>
>
> I am trying to stream some data from Kafka to Ignite using
> IgniteDataStreamer and use 3rd party persistence to move it to HIVE. The
> data on Kafka is in Avro format, which I am deserializing, populating an
> Ignite BinaryObject using the binary builder and pushing it to Ignite. It
> works well when I do not enable 3rd party persistence, but once that is
> enabled, it throws the following exception.
>
>
>
> [12:32:07] (err) Failed to execute compound future reducer:
> GridCompoundFuture [rdc=null, initFlag=1, lsnrCalls=2, done=true,
> cancelled=false, err=class o.a.i.IgniteCheckedException: DataStreamer
> request failed [node=292ab229-61fb-4d61-8f08-33c8abd310a2], futs=[true,
> true, true]]class org.apache.ignite.IgniteCheckedException: DataStreamer
> request failed [node=292ab229-61fb-4d61-8f08-33c8abd310a2]
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1912)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:346)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
>
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
>
> at
> org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
>
> at java.lang.Thread.run(Thread.java:748)
>
> Caused by: javax.cache.integration.CacheWriterException: class
> org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException:
> Failed to update keys (retry update if possible).: [2]
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1280)
>
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1734)
>
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1087)
>
> at
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:788)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Individual.receive(DataStreamerCacheUpdaters.java:121)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:400)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)
>
>at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:60)
>
> at
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:90)
>
> ... 6 more
>
> Caused by: class
> org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException:
> Failed to update keys (retry update if possible).: [2]
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:253)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:303)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:300)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.map(GridDhtAtomicAbstractUpdateFuture.java:390)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1805)
>
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
>
> 

Re: Eviction policy is not working for default and new data region

2019-01-23 Thread aealexsandrov
Hi,

Could you please attach the full XML configuration and Java code that can
reproduce this issue? You can upload your reproducer to GitHub, for example.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Error while persisting from Ignite to Hive for a BinaryObject

2019-01-23 Thread Premachandran, Mahesh (Nokia - IN/Bangalore)
Hi all,

I am trying to stream some data from Kafka to Ignite using IgniteDataStreamer 
and use 3rd party persistence to move it to HIVE. The data on Kafka is in Avro 
format, which I am deserializing, populating an Ignite BinaryObject using the 
binary builder and pushing it to Ignite. It works well when I do not enable 3rd 
party persistence, but once that is enabled, it throws the following exception.

[12:32:07] (err) Failed to execute compound future reducer: GridCompoundFuture 
[rdc=null, initFlag=1, lsnrCalls=2, done=true, cancelled=false, err=class 
o.a.i.IgniteCheckedException: DataStreamer request failed 
[node=292ab229-61fb-4d61-8f08-33c8abd310a2], futs=[true, true, true]]class 
org.apache.ignite.IgniteCheckedException: DataStreamer request failed 
[node=292ab229-61fb-4d61-8f08-33c8abd310a2]
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1912)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:346)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.cache.integration.CacheWriterException: class 
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: 
Failed to update keys (retry update if possible).: [2]
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1280)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1734)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1087)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:788)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Individual.receive(DataStreamerCacheUpdaters.java:121)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:400)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)
   at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:60)
at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:90)
... 6 more
Caused by: class 
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: 
Failed to update keys (retry update if possible).: [2]
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.onPrimaryResponse(GridNearAtomicSingleUpdateFuture.java:253)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:303)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture$1.apply(GridNearAtomicAbstractUpdateFuture.java:300)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.map(GridDhtAtomicAbstractUpdateFuture.java:390)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1805)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1628)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:299)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:483)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:443)
at 

Re: Ignite Inter node Communication security

2019-01-23 Thread Ilya Kasnacheev
Hello!

If you use SSL with the Trust Manager enabled (i.e., not disabled), then only
nodes possessing the correct certificates will be admitted to the cluster.
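For example, a node configuration along these lines (key store paths and passwords below are placeholders) makes discovery and communication use SSL with certificate validation:

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.ssl.SslContextFactory;

public class SslNodeConfig {
    public static IgniteConfiguration igniteConfig() {
        SslContextFactory sslFactory = new SslContextFactory();
        sslFactory.setKeyStoreFilePath("keystore/node.jks");           // placeholder path
        sslFactory.setKeyStorePassword("keyStorePass".toCharArray());  // placeholder
        // Do NOT disable the trust manager; point it at a trust store
        // holding the CA that signed the node certificates:
        sslFactory.setTrustStoreFilePath("keystore/trust.jks");        // placeholder path
        sslFactory.setTrustStorePassword("trustStorePass".toCharArray());

        return new IgniteConfiguration().setSslContextFactory(sslFactory);
    }
}
```

A node presenting a certificate that the trust store does not recognize will fail the SSL handshake and never join the topology.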

Regards,
-- 
Ilya Kasnacheev


ср, 23 янв. 2019 г. в 09:03, garima.j :

> Hello,
>
> Does Ignite version 2.7 support inter-node communication security?
> I have a 3 node cluster and want to secure the cluster, so that no new node
> can be added to the cluster without proper credentials.
>
> If yes, please let me know where to find the relevant documentation.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Inter node Communication security

2019-01-23 Thread aealexsandrov
Hi,

Ignite supports only simple password authentication out of the box; it doesn't
provide any fine-grained access control.

You can read more about it here:

https://apacheignite.readme.io/docs/advanced-security
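As a rough sketch, password authentication is switched on in the node configuration; note that (per the docs above) it requires native persistence to be enabled, since user accounts are stored there:

```java
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class AuthNodeConfig {
    public static IgniteConfiguration igniteConfig() {
        // Authentication requires native persistence for storing user accounts.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

        return new IgniteConfiguration()
            .setAuthenticationEnabled(true)
            .setDataStorageConfiguration(storageCfg);
    }
}
```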

If you require authorization options, you can implement the
GridSecurityProcessor interface as part of a custom plugin, or use one of the
existing third-party plugins.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: DatasetFactory.createSimpleDataset with a cache

2019-01-23 Thread Mehdi Seydali
I think classification and clustering are the most important.

On Wed, Jan 23, 2019 at 2:09 PM zaleslaw  wrote:

> Hi, what kind of ML algorithms are you going to use?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: DatasetFactory.createSimpleDataset with a cache

2019-01-23 Thread zaleslaw
Hi, what kind of ML algorithms are you going to use?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to use c++ client get cache data from java server

2019-01-23 Thread Igor Sapego
Hi,

BinaryObject in C++ is not the same thing as in Java right now.
Currently the C++ client does not support "BinaryObject" construction
and access. For now you may only operate on actual serializable
objects, for which the BinaryObject class template has been specialized.

You may find details and examples on readme.io ([1] and [2]).

[1] - https://apacheignite-cpp.readme.io/docs/serialization
[2] -
https://apacheignite-cpp.readme.io/docs/cross-platform-interoperability

Best Regards,
Igor


On Wed, Jan 23, 2019 at 12:22 PM luodandan  wrote:

> I am new to Ignite.
>
> Now I want to load a CSV file into a Java server and access the cache using
> the C++ client. The example code is below.
>
> Java Code
> CacheConfiguration<Long, BinaryObject> cfg = new
> CacheConfiguration<>("binaryCache");
>
> // Sliding window of 1800 seconds.
> cfg.setExpiryPolicyFactory(FactoryBuilder.factoryOf(
> new CreatedExpiryPolicy(new Duration(SECONDS, 1800))));
>
> IgniteCache<Long, BinaryObject> binaryCache =
> ignite.getOrCreateCache(cfg);
> IgniteDataStreamer<Long, BinaryObject> stmr =
> ignite.dataStreamer(binaryCache.getName());
>
> BinaryObjectBuilder builder = ignite.binary().builder("test");
> builder.setField("col1","luodan");
> builder.setField("col2",29);
> BinaryObject bObj = builder.build();
> stmr.addData(1L,bObj);
>
> C++ code
> Cache<int64_t, BinaryObject> binaryCache =
> grid.GetCache<int64_t, BinaryObject>("binaryCache");
> BinaryObject obj = binaryCache.Get(1);
>
> However, there is a compile error
>
> error C2512: “ignite::binary::BinaryObject” do not has default-constructor
>
> How can I get the cache data from the Java server using the C++ client?
> Can you give me an example?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: PySpark: Failed to find data source: ignite

2019-01-23 Thread Balakumar
Hi Stephan,

Thanks for the reply.

I have actually added following lib settings in spark-defaults.conf

spark.driver.extraClassPath /opt/ignite/libs/*.jar
spark.driver.extraClassPath /opt/ignite/libs/ignite-indexing/*.jar
spark.driver.extraClassPath /opt/ignite/libs/ignite-spark/*.jar
spark.driver.extraClassPath /opt/ignite/libs/ignite-spring/*.jar

spark.executor.extraClassPath /opt/ignite/libs/*.jar
spark.executor.extraClassPath /opt/ignite/libs/ignite-indexing/*.jar
spark.executor.extraClassPath /opt/ignite/libs/ignite-spark/*.jar
spark.executor.extraClassPath /opt/ignite/libs/ignite-spring/*.jar

spark.driver.extraLibraryPath /opt/ignite/libs
spark.driver.extraLibraryPath /opt/ignite/libs/ignite-indexing
spark.driver.extraLibraryPath /opt/ignite/libs/ignite-spark
spark.driver.extraLibraryPath /opt/ignite/libs/ignite-spring

spark.executor.extraLibraryPath /opt/ignite/libs
spark.executor.extraLibraryPath /opt/ignite/libs/ignite-indexing
spark.executor.extraLibraryPath /opt/ignite/libs/ignite-spark
spark.executor.extraLibraryPath /opt/ignite/libs/ignite-spring

I see how you initialized it on the command line; I actually initialize the
context/session through an application (a Python script).

I have tried the same thing while creating the Spark session as well (passing
them as configs).

Is there any other way to get the ignite format working with Spark?

Thanks,
Bala



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Introducing Typescript for the Node.js Client

2019-01-23 Thread Igor Sapego
Great job!

Forwarding to devlist.

Are there Node.js guys that can take a look?

Best Regards,
Igor


On Tue, Jan 22, 2019 at 10:47 PM thavlik 
wrote:

> https://github.com/thavlik/ignite/tree/master/modules/platforms/nodejs
>
> https://issues.apache.org/jira/browse/IGNITE-11032
>
> Please let me know if you have any issues with it. I've only put a couple
> hours of work into this, but it already functions as a drop-in
> replacement.
>
> Tom
>
> Mid Continent Controls, Inc.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: PySpark: Failed to find data source: ignite

2019-01-23 Thread Stephen Darlington
You don’t say what your full CLASSPATH is but you’re clearly missing something. 
Here’s how I did it:

https://medium.com/@sdarlington/the-trick-to-successfully-integrating-apache-ignite-and-pyspark-890e436d09ba

Regards,
Stephen

> On 23 Jan 2019, at 05:49, Balakumar  
> wrote:
> 
> Hi,
> 
> I'm trying to put a parquet file into an Ignite table, but am getting the below error.
> 
> java.lang.ClassNotFoundException: Failed to find data source: ignite. Please
> find packages at http://spark.apache.org/third-party-projects.html
> 
> Spark: 2.3.1
> Trying with PySpark
> I have copied the ignite-spark module from 'optional' to the 'libs' folder, and
> followed the classpath inclusion instructions for Ignite in spark-env.sh.
> 
> Here is the code
> 
> 
> 
> Thanks,
> Bala
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/




How to use c++ client get cache data from java server

2019-01-23 Thread luodandan
I am new to Ignite.

Now I want to load a CSV file into a Java server and access the cache using
the C++ client. The example code is below.

Java Code
CacheConfiguration<Long, BinaryObject> cfg = new
CacheConfiguration<>("binaryCache");

// Sliding window of 1800 seconds.
cfg.setExpiryPolicyFactory(FactoryBuilder.factoryOf(
new CreatedExpiryPolicy(new Duration(SECONDS, 1800))));

IgniteCache<Long, BinaryObject> binaryCache =
ignite.getOrCreateCache(cfg);
IgniteDataStreamer<Long, BinaryObject> stmr =
ignite.dataStreamer(binaryCache.getName());

BinaryObjectBuilder builder = ignite.binary().builder("test");
builder.setField("col1","luodan");
builder.setField("col2",29);
BinaryObject bObj = builder.build();
stmr.addData(1L,bObj);

C++ code
Cache<int64_t, BinaryObject> binaryCache =
    grid.GetCache<int64_t, BinaryObject>("binaryCache");
BinaryObject obj = binaryCache.Get(1);

However, there is a compile error

error C2512: “ignite::binary::BinaryObject” do not has default-constructor

How can I get the cache data from the Java server using the C++ client?
Can you give me an example?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Queryentities - inserts with the Enum fields - failing with cachestore turned on

2019-01-23 Thread michal23849
Hi,

I have a problem with a very simple class with an Enum field being exposed
to SQL, which I want to use together with a custom CacheStore class managing
the writeBehind to a SQLServer database.

My class is:

public class Entitlement  {
Type type;
String name;
(...)
}

Where type is an Enum:

public enum Type {
FUNCTION,
INDEX
}

And I am exposing this through the Queryentities configuration where the
enum is exposed as String:

<property name="queryEntities">
<list>
<bean class="org.apache.ignite.cache.QueryEntity" lazy-init="true">
<property name="tableName" value="Entitlements"/>
<property name="keyType" value="java.lang.String"/>
<property name="valueType" value="com.csg.ps.d1idxrds.entitlements.model.Entitlement"/>
<property name="keyFieldName" value="name"/>
<property name="fields">
<map>
<entry key="name" value="java.lang.String"/>
<entry key="type" value="java.lang.String"/>
</map>
</property>
<property name="indexes">
<list>
<bean class="org.apache.ignite.cache.QueryIndex">
<constructor-arg>
<list>
<value>name</value>
</list>
</constructor-arg>
<constructor-arg value="SORTED"/>
</bean>
</list>
</property>
</bean>
</list>
</property>

The selects work, then I am running the following insert from SQL client
(DBeaver):
INSERT  INTO gen.ENTITLEMENTS(NAME, TYPE) VALUES ('PROF12', 'FUNCTION1');

The cache has the native persistence turned off. The insert succeeds when
there is no 3rd party persistence turned on.

Even the following query succeeds with no cachestore (where FUNCTION1 is not
in enum values): 
INSERT  INTO gen.ENTITLEMENTS(NAME, TYPE) VALUES ('PROF12', 'FUNCTION1');

However, when I switch on the third-party persistence, the insert fails,
complaining that the field is an Enum, not a String.

2019-01-23T09:24:48,588 ERROR o.a.i.i.p.o.j.JdbcRequestHandler
[client-connector-#43] Failed to execute SQL query [reqId=0,
req=JdbcQueryExecuteRequest [schemaName=PUBLIC, pageSize=1024, maxRows=0,
sqlQry=INSERT INTO GEN.Entitlements  (NAME, TYPE) VALUES
('ent1','FUNCTION'), args=[], stmtType=ANY_STATEMENT_TYPE]]
org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to
execute DML statement [stmt=INSERT INTO GEN.Entitlements  (NAME, TYPE)
VALUES ('ent1','FUNCTION'), params=null]

Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to
deserialize object [typeName=com.myproject.model.Entitlement]

Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to read
field [name=type]

Caused by: org.apache.ignite.binary.BinaryObjectException: Unexpected field
type [pos=24, expected=Enum, actual=String]

How can this be solved so that inserts using Enums are accepted by the
CacheStore class and written to the SQL DB?

If I have the class with no Enums, the inserts work just fine and the data
is persisted in the 3rd-party SQLServer DB by my custom cachestore class'
write methods.

Thanks 
Michal




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: Native persistence and 3rd party persistence on one cluster?

2019-01-23 Thread michal23849
Hi,

Thanks for the advice. I have configured two regions, and each has its own
independent persistence settings.
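A minimal sketch of two regions with different persistence settings, in case it helps later readers of this thread (region and cache names below are illustrative, not from the original configuration):

```java
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TwoRegionConfig {
    public static IgniteConfiguration igniteConfig() {
        DataRegionConfiguration persisted = new DataRegionConfiguration()
            .setName("persistedRegion")
            .setPersistenceEnabled(true);   // native persistence on

        DataRegionConfiguration inMemory = new DataRegionConfiguration()
            .setName("inMemoryRegion")
            .setPersistenceEnabled(false);  // pure in-memory; pair with a CacheStore
                                            // for 3rd-party persistence

        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setDataRegionConfigurations(persisted, inMemory);

        // Each cache picks its region by name:
        CacheConfiguration<?, ?> nativeCache = new CacheConfiguration<>("nativeCache")
            .setDataRegionName("persistedRegion");

        return new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg)
            .setCacheConfiguration(nativeCache);
    }
}
```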

Thanks
Michal



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/