Re: How can I obtain a list of executing jobs on an ignite node

2016-11-11 Thread Vinay B,
Hi,
I tried that and it "sort of works" in that I can get a map of task
futures. However, what we really want is the ability to drill down and
cancel a particular job associated with the task. This post says that it
is not possible:
http://apache-ignite-users.70518.x6.nabble.com/Cancel-tasks-on-Ignite-compute-grid-worker-nodes-td5027.html

Our use case is that a task can have many jobs, and sometimes a job might
be in a bad state (hung, running for too long, etc.). We would like the
ability to cancel these jobs.

Thanks


 ClusterGroup clusterGroup = grid.cluster();
final Map<IgniteUuid, ComputeTaskFuture<Object>> computeTaskFutures =
    grid.compute(clusterGroup).activeTaskFutures();

for (Map.Entry<IgniteUuid, ComputeTaskFuture<Object>> e :
        computeTaskFutures.entrySet()) {
    System.out.println("!!! " + e.getKey() + " : " +
        e.getValue().getTaskSession().getAttributes());
}
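
Since the API only exposes cancellation at the task level, one possible
workaround is cooperative cancellation through the shared task session: the
caller sets a per-job flag with ses.setAttribute(...) and each job polls it.
A minimal sketch (the attribute naming convention, the job ID, and the
work/result stubs are illustrative, not Ignite API; note the task class must
be annotated with @ComputeTaskSessionFullSupport for session attributes to
propagate):

import org.apache.ignite.compute.ComputeJobAdapter;
import org.apache.ignite.compute.ComputeTaskSession;
import org.apache.ignite.resources.TaskSessionResource;

public class CancellableJob extends ComputeJobAdapter {
    /** Shared task session, injected by Ignite. */
    @TaskSessionResource
    private ComputeTaskSession ses;

    /** Job ID assigned by the task when it maps jobs to nodes. */
    private final String jobId;

    public CancellableJob(String jobId) {
        this.jobId = jobId;
    }

    @Override public Object execute() {
        while (hasMoreWork()) {
            // Cooperatively stop if the caller set a cancel flag for this
            // job via ses.setAttribute("cancel-" + jobId, true).
            if (Boolean.TRUE.equals(ses.getAttribute("cancel-" + jobId)))
                return null;

            doUnitOfWork();
        }
        return collectResult();
    }

    private boolean hasMoreWork() { return false; }  // illustrative stub
    private void doUnitOfWork() { }                  // illustrative stub
    private Object collectResult() { return null; }  // illustrative stub
}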

On Wed, Nov 9, 2016 at 10:13 AM, Alexey Kuznetsov 
wrote:

> Hi Vinay,
>
> I think IgniteCompute.activeTaskFutures() will give you a map of active
> tasks.
> And you may iterate over that map and cancel those tasks.
> But you should do it on all nodes with tasks, because in the javadoc I see
> "Gets tasks future for active tasks started on local node."
>
> /**
>  * Gets tasks future for active tasks started on local node.
>  *
>  * @return Map of active tasks keyed by their task task session ID.
>  */
> public <R> Map<IgniteUuid, ComputeTaskFuture<R>> activeTaskFutures();
>
>
> Hope this helps.
>
> On Wed, Nov 9, 2016 at 11:02 PM, Vinay B,  wrote:
>
>> Could someone point me to the applicable API that can return  a list of
>> executing jobs on an ignite node?
>>
>> Additionally, given a list of executing jobs, I would like to be able to
>> cancel selected jobs. What is the API I should be looking at?
>>
>>
>> Thanks in advance
>>
>>
>>
>
>
> --
> Alexey Kuznetsov
>


Re: rest-http can't get data if key is Integer or others

2016-11-11 Thread victor.x.qu
Sorry, I wasn't clear about this.
The REST handler doesn't handle the key when it is not a string; I showed the
code just to point out where the problem is.
My code doesn't handle zero deployment: if the key class comes through the
distributed class loader, it may not work.
So I think my code can't be used as-is.
Can this be made to work for non-string keys with both zero deployment and a
deployed jar?



Sent from my Xiaomi phone
On 2016-11-10 at 10:36 PM, "ptupitsyn [via Apache Ignite Users]"
wrote:

Hi Victor,

Please have a look at our contribution guidelines:
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute

* You are writing to user list, but this discussion should be on the dev list 
(d...@ignite.apache.org)
* Pull request name should include JIRA ticket (IGNITE-4195)
* JIRA ticket should be moved to Patch Available status
* Make sure you follow the coding guidelines

Thank you for your interest in Ignite,

Pavel







Connection Info

2016-11-11 Thread Anil
Hi,

I am seeing a few log entries like the one below in the Ignite cluster.
Should we really worry about these?

2016-11-11 11:48:14 WARN  grid-nio-worker-2-#42%my-grid%
TcpCommunicationSpi:480 -
>> Selector info [idx=2, keysCnt=1]
Connection info [rmtAddr=/X.X.X.X:47100, locAddr=/X.X.X.X:54216,
msgsSent=6519, msgsAckedByRmt=6519, msgsRcvd=6537, descIdHash=1793687276,
bytesRcvd=2017, bytesSent=1744, opQueueSize=0,
msgWriter=DirectMessageWriter [state=DirectMessageState [pos=0,
stack=[StateItem [stream=DirectByteBufferStreamImplV2
[buf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
baseOff=140290207782352, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1, keyDone=false,
readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0, uuidMost=0,
uuidLeast=0, uuidLocId=0, lastFinished=true], state=0, hdrWritten=false],
StateItem [stream=DirectByteBufferStreamImplV2
[buf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
baseOff=140290207782352, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1, keyDone=false,
readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0, uuidMost=0,
uuidLeast=0, uuidLocId=0, lastFinished=true], state=0, hdrWritten=false],
StateItem [stream=DirectByteBufferStreamImplV2
[buf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
baseOff=140290207782352, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1, keyDone=false,
readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0, uuidMost=0,
uuidLeast=0, uuidLocId=0, lastFinished=true], state=0, hdrWritten=false],
StateItem [stream=DirectByteBufferStreamImplV2
[buf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
baseOff=140290207782352, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1, keyDone=false,
readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0, uuidMost=0,
uuidLeast=0, uuidLocId=0, lastFinished=true], state=0, hdrWritten=false],
null, null, null, null, null, null]]], msgReader=DirectMessageReader
[state=DirectMessageState [pos=0, stack=[StateItem
[stream=DirectByteBufferStreamImplV2 [buf=java.nio.DirectByteBuffer[pos=0
lim=32768 cap=32768], baseOff=140290207815136, arrOff=-1, tmpArrOff=0,
tmpArrBytes=0, msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1,
keyDone=false, readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0,
uuidMost=0, uuidLeast=0, uuidLocId=0, lastFinished=true], state=0],
StateItem [stream=DirectByteBufferStreamImplV2
[buf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
baseOff=140290207815136, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1, keyDone=false,
readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0, uuidMost=0,
uuidLeast=0, uuidLocId=0, lastFinished=true], state=0], StateItem
[stream=DirectByteBufferStreamImplV2 [buf=java.nio.DirectByteBuffer[pos=0
lim=32768 cap=32768], baseOff=140290207815136, arrOff=-1, tmpArrOff=0,
tmpArrBytes=0, msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1,
keyDone=false, readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0,
uuidMost=0, uuidLeast=0, uuidLocId=0, lastFinished=true], state=0],
StateItem [stream=DirectByteBufferStreamImplV2
[buf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
baseOff=140290207815136, arrOff=-1, tmpArrOff=0, tmpArrBytes=0,
msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1, keyDone=false,
readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0, uuidMost=0,
uuidLeast=0, uuidLocId=0, lastFinished=true], state=0], null, null, null,
null, null, null]], lastRead=true]]


Thanks,


Re: Creating cache on client node in xml not working

2016-11-11 Thread Evans, Charlie
Hi,


No, the server does not. I was hoping the bean would be added to the server
automatically somehow. I guess not?


If I had multiple server nodes, would they all need access to the bean XML,
i.e. the same XML file on each node?


Thanks


From: Andrey Gura 
Sent: 11 November 2016 18:05:30
To: user@ignite.apache.org
Subject: Re: Creating cache on client node in xml not working

Hi,

Does your Ignite server have the cassandraAdminDataSource and other
Cassandra-related beans on its classpath?

On Fri, Nov 11, 2016 at 7:53 PM, Evans, Charlie 
> wrote:

Hi all,


I've been trying to create a cache in my application with Cassandra as the 
persistent storage.


My current setup is:

- starting Ignite on the server with default configs.

- application connects to the ignite server as a client and attempts to load 
the cache configuration and create the cache

- the cache configuration uses CassandraCacheStoreFactory.


I'm aware I cannot do this programmatically because DataSource is not
serializable (until 1.8), so I have been trying to use XML files.


When my application starts it seems to create the cache (

[17:30:44,732][INFO][main][GridCacheProcessor] Started cache 
[name=ctntimestamp, mode=PARTITIONED]) but later just gets stuck with 
"[WARNING][main][GridCachePartitionExchangeManager] Still waiting for initial 
partition map exchange" every 40 seconds. In the logs for the server I see the 
error message

"class org.apache.ignite.IgniteCheckedException: Spring bean with provided name 
doesn't exist , beanName=cassandraAdminDataSource]".

My xml file is below and is in the src/main/resources folder. It is loaded with 
Ignition.start(getClass.getResource("/ignite-cass.xml"))

Any ideas what the problem could be?

P.S. When will 1.8 be released? I tried doing it all programmatically with 1.8 
SNAPSHOT and it works fine.




<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util.xsd">

    <bean id="loadBalancingPolicy" class="com.datastax.driver.core.policies.TokenAwarePolicy">
        <constructor-arg type="com.datastax.driver.core.policies.LoadBalancingPolicy">
            <!-- ... -->
        </constructor-arg>
    </bean>

    <util:list id="contactPoints" value-type="java.lang.String">
        <value>127.0.0.1</value>
    </util:list>

    <bean id="cassandraAdminDataSource"
          class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
        <!-- ... -->
    </bean>

    <bean id="cache1_persistence_settings"
          class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
        <!-- ... -->
    </bean>

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="cacheConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.CacheConfiguration">
                    <!-- ... -->
                    <property name="cacheStoreFactory">
                        <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
                            <property name="dataSourceBean" value="cassandraAdminDataSource"/>
                            <property name="persistenceSettingsBean" value="cache1_persistence_settings"/>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>


Re: Creating cache on client node in xml not working

2016-11-11 Thread Andrey Gura
Hi,

Does your Ignite server have the cassandraAdminDataSource and other
Cassandra-related beans on its classpath?
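
If not, that matches the error: CassandraCacheStoreFactory resolves the named
beans in the Spring context of the node that creates the cache store, so each
server node needs them defined in its own configuration file. A minimal
sketch, assuming the bean names from the client XML (fill in the same
settings used on the client):

<bean id="cassandraAdminDataSource"
      class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
    <!-- same data source settings as on the client -->
</bean>

<bean id="cache1_persistence_settings"
      class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
    <!-- same persistence settings as on the client -->
</bean>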

On Fri, Nov 11, 2016 at 7:53 PM, Evans, Charlie  wrote:

> Hi all,
>
>
> I've been trying to create a cache in my application with Cassandra as the
> persistent storage.
>
>
> My current setup is:
>
> - starting Ignite on the server with default configs.
>
> - application connects to the ignite server as a client and attempts to
> load the cache configuration and create the cache
>
> - the cache configuration uses CassandraCacheStoreFactory.
>
>
> I'm aware I cannot do this programmatically because DataSource is not
> serializable (until 1.8) so have been trying to use xml files.
>
>
> When my application starts it seems to create the cache (
> [17:30:44,732][INFO][main][GridCacheProcessor] Started cache
> [name=ctntimestamp, mode=PARTITIONED]) but later just gets stuck with "
> [WARNING][main][GridCachePartitionExchangeManager] Still waiting for
> initial partition map exchange" every 40 seconds. In the logs for the
> server I see the error message
>
> "class org.apache.ignite.IgniteCheckedException: Spring bean with
> provided name doesn't exist , beanName=cassandraAdminDataSource]".
>
> My xml file is below and is in the src/main/resources folder. It is loaded
> with Ignition.start(getClass.getResource("/ignite-cass.xml"))
>
> Any ideas what the problem could be?
>
> P.S. When will 1.8 be released? I tried doing it all programmatically with
> 1.8 SNAPSHOT and it works fine.
>
> 
>
> <?xml version="1.0" encoding="UTF-8"?>
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xmlns:util="http://www.springframework.org/schema/util"
>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>            http://www.springframework.org/schema/beans/spring-beans.xsd
>            http://www.springframework.org/schema/util
>            http://www.springframework.org/schema/util/spring-util.xsd">
>
>     <bean id="loadBalancingPolicy" class="com.datastax.driver.core.policies.TokenAwarePolicy">
>         <constructor-arg type="com.datastax.driver.core.policies.LoadBalancingPolicy">
>             <!-- ... -->
>         </constructor-arg>
>     </bean>
>
>     <util:list id="contactPoints" value-type="java.lang.String">
>         <value>127.0.0.1</value>
>     </util:list>
>
>     <bean id="cassandraAdminDataSource"
>           class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
>         <!-- ... -->
>     </bean>
>
>     <bean id="cache1_persistence_settings"
>           class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
>         <!-- ... -->
>     </bean>
>
>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>         <property name="cacheConfiguration">
>             <list>
>                 <bean class="org.apache.ignite.configuration.CacheConfiguration">
>                     <!-- ... -->
>                     <property name="cacheStoreFactory">
>                         <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
>                             <property name="dataSourceBean" value="cassandraAdminDataSource"/>
>                             <property name="persistenceSettingsBean" value="cache1_persistence_settings"/>
>                         </bean>
>                     </property>
>                 </bean>
>             </list>
>         </property>
>         <property name="discoverySpi">
>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>                 <property name="ipFinder">
>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>                         <property name="addresses">
>                             <list>
>                                 <value>127.0.0.1</value>
>                             </list>
>                         </property>
>                     </bean>
>                 </property>
>             </bean>
>         </property>
>     </bean>
> </beans>
>
>
>
>


Creating cache on client node in xml not working

2016-11-11 Thread Evans, Charlie
Hi all,


I've been trying to create a cache in my application with Cassandra as the 
persistent storage.


My current setup is:

- starting Ignite on the server with default configs.

- application connects to the ignite server as a client and attempts to load 
the cache configuration and create the cache

- the cache configuration uses CassandraCacheStoreFactory.


I'm aware I cannot do this programmatically because DataSource is not
serializable (until 1.8), so I have been trying to use XML files.


When my application starts it seems to create the cache (

[17:30:44,732][INFO][main][GridCacheProcessor] Started cache 
[name=ctntimestamp, mode=PARTITIONED]) but later just gets stuck with 
"[WARNING][main][GridCachePartitionExchangeManager] Still waiting for initial 
partition map exchange" every 40 seconds. In the logs for the server I see the 
error message

"class org.apache.ignite.IgniteCheckedException: Spring bean with provided name 
doesn't exist , beanName=cassandraAdminDataSource]".

My xml file is below and is in the src/main/resources folder. It is loaded with 
Ignition.start(getClass.getResource("/ignite-cass.xml"))

Any ideas what the problem could be?

P.S. When will 1.8 be released? I tried doing it all programmatically with 1.8 
SNAPSHOT and it works fine.




<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/util
           http://www.springframework.org/schema/util/spring-util.xsd">

    <bean id="loadBalancingPolicy" class="com.datastax.driver.core.policies.TokenAwarePolicy">
        <constructor-arg type="com.datastax.driver.core.policies.LoadBalancingPolicy">
            <!-- ... -->
        </constructor-arg>
    </bean>

    <util:list id="contactPoints" value-type="java.lang.String">
        <value>127.0.0.1</value>
    </util:list>

    <bean id="cassandraAdminDataSource"
          class="org.apache.ignite.cache.store.cassandra.datasource.DataSource">
        <!-- ... -->
    </bean>

    <bean id="cache1_persistence_settings"
          class="org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings">
        <!-- ... -->
    </bean>

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="cacheConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.CacheConfiguration">
                    <!-- ... -->
                    <property name="cacheStoreFactory">
                        <bean class="org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory">
                            <property name="dataSourceBean" value="cassandraAdminDataSource"/>
                            <property name="persistenceSettingsBean" value="cache1_persistence_settings"/>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>


Re: How is maxMemorySize calculated in the presence of backups?

2016-11-11 Thread Andrey Gura
Josh,

Eviction policies track entries on the local node only, regardless of whether
the node is primary or backup for a particular entry. So the policy will
evict entries on a given node when that node reaches 1G of heap.
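
In configuration terms, a minimal sketch (Ignite 1.x API; the cache name and
types are illustrative):

import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, byte[]> ccfg = new CacheConfiguration<>("myCache");
ccfg.setBackups(1);

LruEvictionPolicy<Integer, byte[]> plc = new LruEvictionPolicy<>();
// Per-node cap: counts whatever entries (primary and backup) this node holds.
plc.setMaxMemorySize(1024L * 1024 * 1024); // 1 GB

ccfg.setEvictionPolicy(plc);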

On Fri, Nov 11, 2016 at 6:49 PM, Josh Cummings 
wrote:

> For example, if I have an LruEvictionPolicy on a cache with one backup,
> and I say the maxMemorySize is 1G, will it evict when the primary has 1G or
> when the primary and the backup together have 1G?
>
> --
>
> *JOSH CUMMINGS*
>
> Principal Engineer
>
> *O*  801.477.1234  |  *M*  8015562751
>
> joshcummi...@workfront.com | www.workfront.com
>


How is maxMemorySize calculated in the presence of backups?

2016-11-11 Thread Josh Cummings
For example, if I have an LruEvictionPolicy on a cache with one backup, and
I say the maxMemorySize is 1G, will it evict when the primary has 1G or
when the primary and the backup together have 1G?

-- 

*JOSH CUMMINGS*

Principal Engineer

*O*  801.477.1234  |  *M*  8015562751

joshcummi...@workfront.com | www.workfront.com


Re: Connection Cleanup when IgniteCallable Killed

2016-11-11 Thread Dmitriy Karachentsev
Hi Jaime.

Actually, tasks are interrupted; you can see how it works in the following
code (for Ignite 1.7): GridJobProcessor.onEvent() calls
GridJobProcessor.cancelJob(GridJobWorker, boolean) for each task, which leads
to interruption.

If, somewhere down in your code, InterruptedException was caught but
Thread.currentThread().interrupt() was not called, the interrupted flag will
be false. Please check whether you can catch exceptions that have
InterruptedException in their cause chain and close your resources properly.

If that's not possible, you may use a workaround: for example, pass the
client's ClusterNode to the callable and check periodically whether it's alive:

// Call on client side before broadcasting
final ClusterNode locNode = ignite.cluster().localNode();

// Check in callable if it's not null
if (Ignition.localIgnite().cluster().forNode(locNode).node() == null)
  // Close resources

But it's better to handle interruption correctly, because a task may be
cancelled for various reasons.
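
For the Curator case, a minimal interrupt-aware sketch (the connection string
and doWork() are illustrative; the point is the finally block):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.RetryOneTime;
import org.apache.ignite.lang.IgniteCallable;

public class InterruptAwareJob implements IgniteCallable<Void> {
    @Override public Void call() {
        CuratorFramework curator = CuratorFrameworkFactory.newClient(
            "zk-host:2181", new RetryOneTime(1000)); // illustrative ensemble
        curator.start();

        try {
            // Exit as soon as Ignite cancels the job and interrupts the thread.
            while (!Thread.currentThread().isInterrupted())
                doWork(curator);
        }
        finally {
            // Runs even on cancellation, so the ZooKeeper session (and its
            // ephemeral nodes) is released.
            curator.close();
        }

        return null;
    }

    private void doWork(CuratorFramework curator) { } // illustrative stub
}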

Hope that helps.

On Fri, Nov 11, 2016 at 6:59 AM, jaime spicciati 
wrote:

> All,
> I currently broadcast an IgniteCallable in my cluster which opens
> connections to various resources, specifically Zookeeper via Apache
> Curator.
>
> If the originating node (the client that launched the IgniteCallable) is
> stopped prematurely I see that Ignite will rightfully cancel the broadcast
> call within the cluster. This is all great but Apache Curator has a thread
> in the background watching Zookeeper. So when Ignite stops the
> IgniteCallable in the cluster the connection to Zookeeper is still open
> which is keeping ephemeral nodes from being deleted.
>
> I tried implementing logic to handle thread interrupts to close the
> zookeeper connection but it doesn't look like IgniteCallable is cancelled
> through interrupts. I looked through the Ignite code base and can't quite
> figure out how it is cancelling my IgniteCallable so that I can hook into
> the IgniteCallable life cycle.
>
> Long story short, how do I do resource/connection cleanup in an
> IgniteCallable when the client disconnects ungracefully, and the connection
> is held by a thread launched from within the IgniteCallable?
>
> Thanks
>


Re: Ignite Jdbc connection

2016-11-11 Thread Anil
Hi Andrey,

Thanks for your response. #2 was answered by the other answers.

You are right: I created only one connection and it looks good. Thanks.

On 11 November 2016 at 16:59, Andrey Gura  wrote:

> Hi,
>
>
> 1. The Ignite client node is thread-safe and you can create multiple
> statements for query execution. So, from my point of view, you
> should close the connection when you have finished all your queries.
> 2. Could you please clarify your question?
> 3. I don't think that pooling is required.
> 4. The Ignite client will try to reconnect to the Ignite cluster in case a
> server node fails. All you need is a proper IP finder configuration.
>
>
> On Thu, Nov 10, 2016 at 5:01 PM, Anil  wrote:
>
>> Any help in understanding below ?
>>
>> On 10 November 2016 at 16:31, Anil  wrote:
>>
>>> I have couple of questions on ignite jdbc connection. Could you please
>>> clarify ?
>>>
>>> 1. Should connection be closed like other jdbc db connection ? - I see
>>> connection close is shutdown of ignite client node.
>>> 2. Connection objects are not getting released and all connections are
>>> busy ?
>>> 3. Connection pool is really required for ignite client ? i hope one
>>> ignite connection can handle number of queries in parallel.
>>> 4. What is the recommended configuration for ignite client to support
>>> failover ?
>>>
>>> Thanks.
>>>
>>
>>
>


Re: errors when building from source?

2016-11-11 Thread Alexey Kuznetsov
I updated http://apacheignite.gridgain.org/v1.7/docs/getting-started#section-building-from-source.

Also I downloaded
http://apache-mirror.rbc.ru/pub/apache//ignite/1.7.0/apache-ignite-1.7.0-src.zip
unpacked it, and executed "mvn clean package -DskipClientDocs
-Dmaven.javadoc.skip=true -DskipTests".
And indeed I see the mentioned exception in the maven logs: "[INFO] An Ant
BuildException has occured: exec returned: 128 ..."
But that was [INFO] and the build finished successfully.
So for now you may ignore that error.

On Fri, Nov 11, 2016 at 8:37 PM, Alexey Kuznetsov 
wrote:

> Hi,
>
> As far as I see you are following instruction from site.
> >> I followed this tutorial: http://apacheignite.gridgain.
> org/v1.7/docs/getting-started#section-building-from-source  to
>
> But I think they are outdated. Version ignite-1.3.0 is very old.
> Please try with latest ignite-1.7.0 (http://apache-mirror.rbc.ru/
> pub/apache//ignite/1.7.0/apache-ignite-1.7.0-src.zip)
>
> On Fri, Nov 11, 2016 at 8:33 PM, dkarachentsev  > wrote:
>
>> Hi!
>>
>> It seems that dependency is broken. Try remove it from maven repository
>> and
>> rebuild. For more information please refer
>> http://stackoverflow.com/questions/13846357/can-maven-3-
>> redownload-broken-files-instead-of-failing-the-build
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/errors-when-building-from-source-tp8890p8910.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Alexey Kuznetsov
>



-- 
Alexey Kuznetsov


Re: errors when building from source?

2016-11-11 Thread Alexey Kuznetsov
Hi,

As far as I can see, you are following the instructions from the site:
>> I followed this tutorial:
http://apacheignite.gridgain.org/v1.7/docs/getting-started#section-building-from-source
 to

But I think they are outdated. Version ignite-1.3.0 is very old.
Please try with the latest, ignite-1.7.0 (
http://apache-mirror.rbc.ru/pub/apache//ignite/1.7.0/apache-ignite-1.7.0-src.zip
)

On Fri, Nov 11, 2016 at 8:33 PM, dkarachentsev 
wrote:

> Hi!
>
> It seems that dependency is broken. Try remove it from maven repository and
> rebuild. For more information please refer
> http://stackoverflow.com/questions/13846357/can-maven-
> 3-redownload-broken-files-instead-of-failing-the-build
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/errors-when-building-from-source-tp8890p8910.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Alexey Kuznetsov


Re: Hive job submission failed with exception "java.io.UTFDataFormatException"

2016-11-11 Thread Andrey Mashenkov
Hi lapalette,

Could you explain the way you have changed the marshaller? It seems
OptimizedMarshaller does not have these limitations, and it's possible it was
not configured correctly.
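
For reference, a minimal sketch of setting it explicitly (the same marshaller
must be configured on every node):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.marshaller.optimized.OptimizedMarshaller;

IgniteConfiguration cfg = new IgniteConfiguration();

// Replace the default marshaller; all cluster nodes must agree on this.
cfg.setMarshaller(new OptimizedMarshaller());

Ignite ignite = Ignition.start(cfg);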

On Fri, Nov 11, 2016 at 4:30 PM, Andrey Mashenkov 
wrote:

> Hi lapalette,
>
> 1. Does this error appear in Ignite 1.7, and is it absent in version
> 1.6?
> 2. Did you get the same error with other marshallers? Would you please
> provide stack traces for the other marshallers?
> 3. What version of the JVM do you use? Have you tried to upgrade the JVM?
> This JDK-internal limitation can differ from version to version.
> 4. As I understand it, you are trying to run some performance test, aren't
> you? Anyway, would you please provide the code so I can reproduce this error.
>
>
> On Fri, Nov 11, 2016 at 5:21 AM, lapalette  wrote:
>
>> Hi, Andrey:
>>  Thanks for your attention, but I tried using "OptimizedMarshaller"
>> and
>> "JdkMarshaller", and neither worked. I use Ignite version 1.6, and
>> do I
>> need to upgrade to 1.7? Or how can I work around the limitation of
>> ObjectOutputStream?
>> Thanks.
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Hive-job-submsiion-failed-with-exception-
>> java-io-UTFDataFormatException-tp8863p8893.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: errors when building from source?

2016-11-11 Thread dkarachentsev
Hi!

It seems that the dependency is broken. Try removing it from the Maven
repository and rebuilding. For more information please refer to
http://stackoverflow.com/questions/13846357/can-maven-3-redownload-broken-files-instead-of-failing-the-build





Re: Hive job submission failed with exception "java.io.UTFDataFormatException"

2016-11-11 Thread Andrey Mashenkov
Hi lapalette,

1. Does this error appear in Ignite 1.7, and is it absent in version
1.6?
2. Did you get the same error with other marshallers? Would you please
provide stack traces for the other marshallers?
3. What version of the JVM do you use? Have you tried to upgrade the JVM?
This JDK-internal limitation can differ from version to version.
4. As I understand it, you are trying to run some performance test, aren't you?
Anyway, would you please provide the code so I can reproduce this error.


On Fri, Nov 11, 2016 at 5:21 AM, lapalette  wrote:

> Hi, Andrey:
>  Thanks for your attention, but I tried using "OptimizedMarshaller" and
> "JdkMarshaller", and neither worked. I use Ignite version 1.6, and do
> I
> need to upgrade to 1.7? Or how can I work around the limitation of
> ObjectOutputStream?
> Thanks.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Hive-job-submsiion-failed-with-exception-java-io-
> UTFDataFormatException-tp8863p8893.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Very high memory consumption in apache ignite

2016-11-11 Thread Andrey Mashenkov
Hi rishi007bansod.

Are you sure all this memory is consumed by the Java process?
You can try to analyze a pmap or vmmap tool report:
https://plumbr.eu/blog/memory-leaks/why-does-my-java-process-consume-more-memory-than-xmx

Please let me know if you find anything suspicious.

On Thu, Nov 10, 2016 at 5:19 PM, rishi007bansod 
wrote:

> Cache configuration I have used is,
>
> CacheConfiguration<order_lineKey, order_line> ccfg_order_line = new
> CacheConfiguration<>();
> ccfg_order_line.setIndexedTypes(order_lineKey.class,
> order_line.class);
> ccfg_order_line.setName("order_line_cache");
> ccfg_order_line.setCopyOnRead(false);
> ccfg_order_line.setMemoryMode(CacheMemoryMode.ONHEAP_TIERED);
> ccfg_order_line.setSwapEnabled(false);
> ccfg_order_line.setBackups(0);
> IgniteCache<order_lineKey, order_line> cache_order_line =
> ignite.createCache(ccfg_order_line);
>
> JVM configuration I have used is,
>
> -server
> -Xms10g
> -Xmx10g
> -XX:+UseParNewGC
> -XX:+UseConcMarkSweepGC
> -XX:+UseTLAB
> -XX:NewSize=128m
> -XX:MaxNewSize=128m
> -XX:MaxTenuringThreshold=0
> -XX:SurvivorRatio=1024
> -XX:+UseCMSInitiatingOccupancyOnly
> -XX:CMSInitiatingOccupancyFraction=40
> -XX:MaxGCPauseMillis=1000
> -XX:InitiatingHeapOccupancyPercent=50
> -XX:+UseCompressedOops
> -XX:ParallelGCThreads=8
> -XX:ConcGCThreads=8
> -XX:+DisableExplicitGC
>
> same as provided at link
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Very-high-memory-consumption-in-apache-
> ignite-tp8822p8880.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


start C++ server in linux, it reports "Failed to initialize JVM" error

2016-11-11 Thread smile
Hi, all
When I used JDK 1.7.0 to build the Ignite core and then built the C++ server on
Linux (CentOS), starting the C++ server reported the following error:


~/apache-ignite-1.7.0-src/modules/platforms/cpp/ignite]$ ./ignite 
   ERROR: Failed to initialize JVM 
[errCls=java.lang.UnsupportedClassVersionError, 
errMsg=org/apache/ignite/internal/processors/platform/utils/PlatformUtils : 
Unsupported major.minor version 51.0]


I have searched for this problem on Google; the answers say it is because the
JVM version is lower than the one the jar was built with. However, building
Ignite requires JDK 1.7.0 or greater.


How can I solve it?


Thank you very much!

Re: Visor console

2016-11-11 Thread Paolo Di Tommaso
Wow, easy! I will give it a try soon.


Thanks,
Paolo


On Fri, Nov 11, 2016 at 10:19 AM, Alexey Kuznetsov 
wrote:

> Hi Paolo!
>
> Yes, it is possible.
>
> See attached pom.xml and VisorStartup.java example.
>
>
> On Fri, Nov 11, 2016 at 4:59 AM, Paolo Di Tommaso <
> paolo.ditomm...@gmail.com> wrote:
>
>> Hi,
>>
>> Is it possible to deploy the visor console in an embedded manner? I mean
>> just including the visor dependencies in an application classpath and
>> launching it?
>>
>>
>> Is there any example of that?
>>
>>
>> Cheers,
>> Paolo
>>
>>
>
>
>
> --
> Alexey Kuznetsov
>


Re: Ignite Jdbc connection

2016-11-11 Thread Andrey Gura
Hi,


1. The Ignite client node is thread-safe and you can create multiple statements
for query execution. So, from my point of view, you should close the
connection when you have finished all your queries.
2. Could you please clarify your question?
3. I don't think that pooling is required.
4. The Ignite client will try to reconnect to the Ignite cluster in case a
server node fails. All you need is a proper IP finder configuration.
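
For reference, a minimal sketch of the configuration-based JDBC driver in
Ignite 1.x (the cache name and config path are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.ignite.IgniteJdbcDriver");

        // The connection starts an Ignite client node under the hood; one
        // connection can serve many statements.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:cfg://cache=Person_Cache@file:///path/to/ignite-client.xml");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select name from Person")) {
            while (rs.next())
                System.out.println(rs.getString(1));
        }
    }
}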


On Thu, Nov 10, 2016 at 5:01 PM, Anil  wrote:

> Any help in understanding below ?
>
> On 10 November 2016 at 16:31, Anil  wrote:
>
>> I have couple of questions on ignite jdbc connection. Could you please
>> clarify ?
>>
>> 1. Should connection be closed like other jdbc db connection ? - I see
>> connection close is shutdown of ignite client node.
>> 2. Connection objects are not getting released and all connections are
>> busy ?
>> 3. Connection pool is really required for ignite client ? i hope one
>> ignite connection can handle number of queries in parallel.
>> 4. What is the recommended configuration for ignite client to support
>> failover ?
>>
>> Thanks.
>>
>
>


Re: Cache Memory Behavior \ GridDhtLocalPartition

2016-11-11 Thread Andrey Mashenkov
Hi Isaeed Mohanna,

I don't see any eviction or expiry policy configured. Is entry deletion
performed by your application?
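
If the entries are meant to be short-lived, a minimal expiry-policy sketch
(the 5-minute lifetime and cache name are illustrative):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("EventsCache");

// Entries expire 5 minutes after creation instead of relying solely on
// explicit removes.
ccfg.setExpiryPolicyFactory(
    CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 5)));

// A background thread proactively evicts expired entries.
ccfg.setEagerTtl(true);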

Have you tried to detect which of the caches grows unexpectedly?
Have you analysed GC logs or tried to tune GC? You may actually be putting
data in faster than garbage is collected. This page may be helpful:
http://apacheignite.gridgain.org/v1.7/docs/performance-tips#tune-garbage-collection

Also you can take a profile (with e.g. Java Flight Recorder) of the grid under
load to understand what is really going on.

Please let me know if there are any issues.



On Thu, Nov 10, 2016 at 10:10 AM, Isaeed Mohanna  wrote:

> Hi
> My cache configurations appear below.
>
> // Cache 1 - a cache of ~15 entities with a date stamp that is updated
> // every 30-120 seconds
> CacheConfiguration Cache1Cfg = new CacheConfiguration<>();
> Cache1Cfg.setName("Cache1Name");
> Cache1Cfg.setCacheMode(CacheMode.REPLICATED);
> Cache1Cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> Cache1Cfg.setStartSize(50);
>
> // Cache 2 - a cache used as an ignite queue with frequent inserts and
> // removals from the queue
> CacheConfiguration Cache2Cfg = new CacheConfiguration<>();
> Cache2Cfg.setName("Cache2Name");
> Cache2Cfg.setCacheMode(CacheMode.REPLICATED);
> Cache2Cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>
> // Cache 3 - hundreds of entities updated daily
> CacheConfiguration Cache3Cfg = new CacheConfiguration<>();
> Cache3Cfg.setName("Cache3Name");
> Cache3Cfg.setCacheMode(CacheMode.REPLICATED);
> Cache3Cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> Cache3Cfg.setIndexedTypes(UUID.class, SomeClass.class);
>
> // Cache 4 - cache with very few writes and reads
> CacheConfiguration Cache4Cfg = new CacheConfiguration<>();
> Cache4Cfg.setName("Cache4Name");
> Cache4Cfg.setCacheMode(CacheMode.REPLICATED);
> Cache4Cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>
> // Events Cache - cache with very frequent writes and deletes; acts as an
> // events queue
> CacheConfiguration eventsCacheConfig = new CacheConfiguration<>();
> eventsCacheConfig.setName("EventsCache");
> eventsCacheConfig.setCacheMode(CacheMode.PARTITIONED);
> eventsCacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> eventsCacheConfig.setIndexedTypes(UUID.class, SomeClass.class);
> eventsCacheConfig.setBackups(1);
> eventsCacheConfig.setOffHeapMaxMemory(0);
>
> // Failed Events Cache - cache with fewer writes and reads; stores failed
> // events
> CacheConfiguration failedEventsCacheConfig = new
> CacheConfiguration<>();
> failedEventsCacheConfig.setName("FailedEventsCache");
> failedEventsCacheConfig.setCacheMode(CacheMode.PARTITIONED);
> failedEventsCacheConfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> failedEventsCacheConfig.setIndexedTypes(UUID.class, EventEntity.class);
> failedEventsCacheConfig.setBackups(1);
> failedEventsCacheConfig.setOffHeapMaxMemory(0);
>
> // In addition I have one atomic reference
> AtomicConfiguration atomicCfg = new AtomicConfiguration();
> atomicCfg.setCacheMode(CacheMode.REPLICATED);
>
> Thanks again
>
> On Wed, Nov 9, 2016 at 5:26 PM, Andrey Mashenkov 
> wrote:
>
>> Hi Isaeed Mohanna,
>>
>> Would you please provide your cache configurations?
>>
>>
>> On Wed, Nov 9, 2016 at 5:37 PM, Isaeed Mohanna  wrote:
>>
>>> Hi
>>> I have an Ignite 1.7.0 cluster with 3 nodes running. I have 3
>>> PARTITIONED
>>> ATOMIC caches and 2 REPLICATED ATOMIC caches. Most of these caches are
>>> populated with event data, so each cache entry is short-lived: it is
>>> inserted,
>>> processed later by some task, and removed, so the caches are pretty much
>>> very
>>> dynamic.
>>> Recently the load on our system has increased (more events were received
>>> and
>>> generated) and we started experiencing out-of-memory failures once in a
>>> while
>>> (every several days, depending on machine size).
>>> I have created several heap dumps and noticed that the largest retained
>>> objects
>>> in memory are of the following classes: GridDhtLocalPartition,
>>> ConcurrentHashMap8, ConcurrentHashMap8$Node[].
>>> I can see that GridDhtLocalPartition has a ConcurrentHashMap8, so most
>>> likely
>>> all three reference the same thing.
>>> My question is what this class is and why it retains memory; entries in
>>> my
>>> caches are usually short-lived (several minutes in most caches), so I
>>> would
>>> expect the memory to be released. Any hints on how to continue my
>>> investigation would be great.
>>> Thanks
>>>
>>>
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.705
>>> 18.x6.nabble.com/Cache-Memory-Behavior-GridDhtLocalPartition-tp8835.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>


Re: Ignite Cache setIndexedTypes Question

2016-11-11 Thread vdpyatkov
Hi,

It should not be very slow.
If you do not use an index, then SQL will work slowly.

You can try to use QueryEntity [1] without indexes on fields.

How many indexes do you use?
Could you please provide these classes?

Please properly subscribe to the user list so that we can see your questions
as soon as possible and provide answers on them quicker. All you need to do
is send an email to "user-subscr...@ignite.apache.org" and follow the simple
instructions in the reply.

[1]:
https://apacheignite.readme.io/docs/sql-queries#configuring-sql-indexes-using-queryentity
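
For example, a minimal sketch of exposing fields to SQL without indexes (the
type and field names are illustrative):

import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");

QueryEntity qe = new QueryEntity(Long.class.getName(), Person.class.getName());

LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("name", String.class.getName());
fields.put("age", Integer.class.getName());
qe.setFields(fields);

// No qe.setIndexes(...) call: the fields are queryable via SQL but
// unindexed, which keeps data loading fast (queries will scan).

ccfg.setQueryEntities(Collections.singletonList(qe));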


junyoung.kang wrote
> Hello
> 
> In my test I put data (data size is 500,000,000) into the cache,
> 
> and I use this data from Apache Zeppelin via SQL.
> 
> But with the CacheConfiguration's setIndexedTypes(key, value) set, it is
> very, very slow.
> 
> So I removed setIndexedTypes (in the CacheConfiguration), but then I can't
> use SQL.
> 
> How can I resolve this problem?
> 
> In other words, I want to put data with high performance when using
> setIndexedTypes, or I want to use SQL without setIndexedTypes.







Re: Visor console

2016-11-11 Thread Alexey Kuznetsov
Hi Paolo!

Yes, it is possible.

See attached pom.xml and VisorStartup.java example.
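
A minimal sketch of what such a launcher presumably looks like (VisorConsole
is the same entry point that bin/ignitevisorcmd uses; the visor-console
dependency from the pom below must be on the classpath):

public class VisorStartup {
    // Delegates to the Scala-based console shipped in ignite-visor-console.
    public static void main(String[] args) throws Exception {
        org.apache.ignite.visor.commands.VisorConsole.main(args);
    }
}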


On Fri, Nov 11, 2016 at 4:59 AM, Paolo Di Tommaso  wrote:

> Hi,
>
> Is it possible to deploy the visor console in an embedded manner? I mean
> just including the visor dependencies in an application classpath and
> launching it?
>
>
> Is there any example of that?
>
>
> Cheers,
> Paolo
>
>



-- 
Alexey Kuznetsov




<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.apache.ignite</groupId>
    <artifactId>My_cluster-project</artifactId>
    <version>1.6.9</version>

    <repositories>
        <repository>
            <id>GridGain External Repository</id>
            <url>http://www.gridgainsystems.com/nexus/content/repositories/external</url>
        </repository>
    </repositories>

    <dependencies>
        <dependency>
            <groupId>org.apache.ignite</groupId>
            <artifactId>ignite-core</artifactId>
            <version>1.6.9</version>
        </dependency>

        <dependency>
            <groupId>org.apache.ignite</groupId>
            <artifactId>ignite-ssh</artifactId>
            <version>1.6.9</version>
        </dependency>

        <dependency>
            <groupId>org.apache.ignite</groupId>
            <artifactId>ignite-spring</artifactId>
            <version>1.6.9</version>
        </dependency>

        <dependency>
            <groupId>org.apache.ignite</groupId>
            <artifactId>ignite-visor-console</artifactId>
            <version>1.6.9</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>4.1.0.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-beans</artifactId>
            <version>4.1.0.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>4.1.0.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-expression</artifactId>
            <version>4.1.0.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.11.7</version>
        </dependency>

        <dependency>
            <groupId>jline</groupId>
            <artifactId>jline</artifactId>
            <version>2.12.1</version>
        </dependency>
    </dependencies>

    <build>
        <resources>
            <resource>
                <directory>src/main/java</directory>
                <excludes>
                    <exclude>**/*.java</exclude>
                </excludes>
            </resource>
            <resource>
                <directory>src/main/resources</directory>
            </resource>
        </resources>

        <plugins>
            <plugin>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>
                    <execution>
                        <id>copy-libs</id>
                        <phase>test-compile</phase>
                        <goals>
                            <goal>copy-dependencies</goal>
                        </goals>
                        <configuration>
                            <includeGroupIds>org.apache.ignite,org.gridgain</includeGroupIds>
                            <outputDirectory>target/libs</outputDirectory>
                            <includeScope>compile</includeScope>
                            <excludeTransitive>true</excludeTransitive>
                        </configuration>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.1</version>
                <configuration>
                    <source>1.7</source>
                    <target>1.7</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

[Attachment: VisorStartup.java]


Re: DataStreamer is closed

2016-11-11 Thread Anil
Hi Anton,

Sounds perfect!

#1 - I will reproduce it and share the logs with you; I have a project
commitment for the coming Monday.
#2 - I will work on this from the coming Tuesday.

Thanks

On 11 November 2016 at 14:25, Anton Vinogradov 
wrote:

> Anil,
>
>
>> I suspect there is a problem when node rejoins the cluster and streamer
>> is already closed and not recreated. Correct ?
>
>
> Correct, this seems to be linked somehow. I need logs and sourcess to tell
> more.
>
> I had to implement my own kafka streamer because of
>> https://issues.apache.org/jira/browse/IGNITE-4140
>
>
> I'd like to propose you to refactor streamer according to this issue and
> contribute solution. I can help you with tips and review.
> Sounds good?
>
> On Fri, Nov 11, 2016 at 11:41 AM, Anil  wrote:
>
>> HI Anton,
>>
>> Thanks for responding. i will check if i can reproduce with issue with
>> reproducer.
>>
>> I had to implement my own kafka streamer because of
>> https://issues.apache.org/jira/browse/IGNITE-4140
>>
>> I suspect there is a problem when node rejoins the cluster and streamer
>> is already closed and not recreated. Correct ?
>>
>> In the above case, kafka streamer tries to getStreamer and push the data
>> but streamer is not available.
>>
>> Thanks.
>>
>>
>>
>> On 11 November 2016 at 14:00, Anton Vinogradov  wrote:
>>
>>> Anil,
>>>
>>> Unfortunately,
>>>   at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
>>> heDataStreamer.java:149)
>>> does not fits on attached sources.
>>>
>>> But,
>>> java.lang.IllegalStateException: Cache has been closed or destroyed:
>>> PERSON_CACHE
>>> is a reason of closed datastreamer.
>>>
>>> It it possible to write reproducible example or to attach both (full,
>>> all) logs and sourcess?
>>>
>>>
>>> BTW, we already have Kafka streamer, why you decided to reimplement it?
>>>
>>>
>>>
>>> On Wed, Nov 9, 2016 at 5:39 PM, Anil  wrote:
>>>
 Would there be any issues because of size of data ?
 i loaded around 80 gb on 4 node cluster. each node is of 8 CPU and 32
 GB RAM configuration.

 and cache configuration -

 CacheConfiguration<String, Person> pConfig = new
 CacheConfiguration<>();
 pConfig.setName("Person_Cache");
 pConfig.setIndexedTypes(String.class, Person.class);
 pConfig.setBackups(1);
 pConfig.setCacheMode(CacheMode.PARTITIONED);
 pConfig.setCopyOnRead(false);
 pConfig.setSwapEnabled(true);
 pConfig.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
 pConfig.setSqlOnheapRowCacheSize(100_000);
 pConfig.setOffHeapMaxMemory(10 * 1024 * 1024 * 1024);
 pConfig.setStartSize(200);
 pConfig.setStatisticsEnabled(true);

 Thanks for your help.

 On 9 November 2016 at 19:56, Anil  wrote:

> HI,
>
> Data streamer closed exception is very frequent. I did not see any
> explicit errors/exception about data streamer close. the excption i see
> only when message is getting added.
>
> I have 4 node ignite cluster and each node have consumer to connection
> and push the message received to streamer.
>
> What if the node is down and re-joined when message is getting added
> cache.
>
> Following is the exception from logs -
>
> 2016-11-09 05:55:55 ERROR pool-6-thread-1 KafkaCacheDataStreamer:146 -
> Exception while adding to streamer
> java.lang.IllegalStateException: Data streamer has been closed.
> at org.apache.ignite.internal.pro
> cessors.datastreamer.DataStreamerImpl.enterBusy(DataStreamer
> Impl.java:360)
> at org.apache.ignite.internal.pro
> cessors.datastreamer.DataStreamerImpl.addData(DataStreamerIm
> pl.java:507)
> at org.apache.ignite.internal.pro
> cessors.datastreamer.DataStreamerImpl.addData(DataStreamerIm
> pl.java:498)
> at com.test.cs.cache.KafkaCacheDa
> taStreamer.addMessage(KafkaCacheDataStreamer.java:140)
> at com.test.cs.cache.KafkaCacheDa
> taStreamer$1.run(KafkaCacheDataStreamer.java:197)
> at java.util.concurrent.Executors
> $RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoo
> lExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoo
> lExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 2016-11-09 05:55:55 ERROR pool-6-thread-1 KafkaCacheDataStreamer:200 -
> Message is ignored due to an error 
> [msg=MessageAndMetadata(TestTopic,1,Message(magic
> = 0, attributes = 0, crc = 2111790081, key = null, payload =
> java.nio.HeapByteBuffer[pos=0 lim=1155 cap=1155]),2034,kafka.serializ
> er.StringDecoder@3f77f0b,kafka.serializer.StringDecoder@67fd2da0)]
> 

Re: DataStreamer is closed

2016-11-11 Thread Anton Vinogradov
Anil,


> I suspect there is a problem when the node rejoins the cluster and the
> streamer is already closed and not recreated. Correct?


Correct, this seems to be linked somehow. I need logs and sources to tell
more.

> I had to implement my own kafka streamer because of
> https://issues.apache.org/jira/browse/IGNITE-4140


I'd like to propose that you refactor the streamer according to this issue and
contribute the solution. I can help you with tips and review.
Sounds good?

On Fri, Nov 11, 2016 at 11:41 AM, Anil  wrote:

> HI Anton,
>
> Thanks for responding. i will check if i can reproduce with issue with
> reproducer.
>
> I had to implement my own kafka streamer because of
> https://issues.apache.org/jira/browse/IGNITE-4140
>
> I suspect there is a problem when node rejoins the cluster and streamer is
> already closed and not recreated. Correct ?
>
> In the above case, kafka streamer tries to getStreamer and push the data
> but streamer is not available.
>
> Thanks.
>
>
>
> On 11 November 2016 at 14:00, Anton Vinogradov  wrote:
>
>> Anil,
>>
>> Unfortunately,
>>   at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
>> heDataStreamer.java:149)
>> does not fits on attached sources.
>>
>> But,
>> java.lang.IllegalStateException: Cache has been closed or destroyed:
>> PERSON_CACHE
>> is a reason of closed datastreamer.
>>
>> It it possible to write reproducible example or to attach both (full,
>> all) logs and sourcess?
>>
>>
>> BTW, we already have Kafka streamer, why you decided to reimplement it?
>>
>>
>>
>> On Wed, Nov 9, 2016 at 5:39 PM, Anil  wrote:
>>
>>> Would there be any issues because of size of data ?
>>> i loaded around 80 gb on 4 node cluster. each node is of 8 CPU and 32 GB
>>> RAM configuration.
>>>
>>> and cache configuration -
>>>
>>> CacheConfiguration<String, Person> pConfig = new
>>> CacheConfiguration<>();
>>> pConfig.setName("Person_Cache");
>>> pConfig.setIndexedTypes(String.class, Person.class);
>>> pConfig.setBackups(1);
>>> pConfig.setCacheMode(CacheMode.PARTITIONED);
>>> pConfig.setCopyOnRead(false);
>>> pConfig.setSwapEnabled(true);
>>> pConfig.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
>>> pConfig.setSqlOnheapRowCacheSize(100_000);
>>> pConfig.setOffHeapMaxMemory(10 * 1024 * 1024 * 1024);
>>> pConfig.setStartSize(200);
>>> pConfig.setStatisticsEnabled(true);
>>>
>>> Thanks for your help.
>>>
>>> On 9 November 2016 at 19:56, Anil  wrote:
>>>
 HI,

 Data streamer closed exception is very frequent. I did not see any
 explicit errors/exception about data streamer close. the excption i see
 only when message is getting added.

 I have 4 node ignite cluster and each node have consumer to connection
 and push the message received to streamer.

 What if the node is down and re-joined when message is getting added
 cache.

 Following is the exception from logs -

 2016-11-09 05:55:55 ERROR pool-6-thread-1 KafkaCacheDataStreamer:146 -
 Exception while adding to streamer
 java.lang.IllegalStateException: Data streamer has been closed.
 at org.apache.ignite.internal.processors.datastreamer.DataStrea
 merImpl.enterBusy(DataStreamerImpl.java:360)
 at org.apache.ignite.internal.processors.datastreamer.DataStrea
 merImpl.addData(DataStreamerImpl.java:507)
 at org.apache.ignite.internal.processors.datastreamer.DataStrea
 merImpl.addData(DataStreamerImpl.java:498)
 at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
 heDataStreamer.java:140)
 at com.test.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCacheDat
 aStreamer.java:197)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executor
 s.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
 Executor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
 lExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
 2016-11-09 05:55:55 ERROR pool-6-thread-1 KafkaCacheDataStreamer:200 -
 Message is ignored due to an error 
 [msg=MessageAndMetadata(TestTopic,1,Message(magic
 = 0, attributes = 0, crc = 2111790081, key = null, payload =
 java.nio.HeapByteBuffer[pos=0 lim=1155 cap=1155]),2034,kafka.serializ
 er.StringDecoder@3f77f0b,kafka.serializer.StringDecoder@67fd2da0)]
 java.lang.IllegalStateException: Cache has been closed or destroyed:
 PERSON_CACHE
 at org.apache.ignite.internal.processors.cache.GridCacheGateway
 .enter(GridCacheGateway.java:160)
 at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
 .onEnter(IgniteCacheProxy.java:2103)
 at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
 

Re: DataStreamer is closed

2016-11-11 Thread Anil
Hi Anton,

Thanks for responding. I will check if I can reproduce the issue with a
reproducer.

I had to implement my own kafka streamer because of
https://issues.apache.org/jira/browse/IGNITE-4140

I suspect there is a problem when the node rejoins the cluster and the
streamer is already closed and not recreated. Correct?

In that case, the kafka streamer tries to get the streamer and push the data,
but the streamer is not available.
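
A minimal recovery sketch of what I mean (cache name from the configuration
above; the single retry is illustrative and only papers over the underlying
issue):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

class StreamerHolder {
    private final Ignite ignite;
    private IgniteDataStreamer<String, Person> streamer;

    StreamerHolder(Ignite ignite) {
        this.ignite = ignite;
        this.streamer = ignite.dataStreamer("Person_Cache");
    }

    synchronized void add(String key, Person val) {
        try {
            streamer.addData(key, val);
        }
        catch (IllegalStateException closed) {
            // The old streamer was closed under us (e.g. after a rejoin);
            // open a new one and retry once.
            streamer = ignite.dataStreamer("Person_Cache");
            streamer.addData(key, val);
        }
    }
}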

Thanks.



On 11 November 2016 at 14:00, Anton Vinogradov  wrote:

> Anil,
>
> Unfortunately,
>   at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
> heDataStreamer.java:149)
> does not fits on attached sources.
>
> But,
> java.lang.IllegalStateException: Cache has been closed or destroyed:
> PERSON_CACHE
> is a reason of closed datastreamer.
>
> It it possible to write reproducible example or to attach both (full, all)
> logs and sourcess?
>
>
> BTW, we already have Kafka streamer, why you decided to reimplement it?
>
>
>
> On Wed, Nov 9, 2016 at 5:39 PM, Anil  wrote:
>
>> Would there be any issues because of size of data ?
>> i loaded around 80 gb on 4 node cluster. each node is of 8 CPU and 32 GB
>> RAM configuration.
>>
>> and cache configuration -
>>
>> CacheConfiguration<String, Person> pConfig = new
>> CacheConfiguration<>();
>> pConfig.setName("Person_Cache");
>> pConfig.setIndexedTypes(String.class, Person.class);
>> pConfig.setBackups(1);
>> pConfig.setCacheMode(CacheMode.PARTITIONED);
>> pConfig.setCopyOnRead(false);
>> pConfig.setSwapEnabled(true);
>> pConfig.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
>> pConfig.setSqlOnheapRowCacheSize(100_000);
>> pConfig.setOffHeapMaxMemory(10 * 1024 * 1024 * 1024);
>> pConfig.setStartSize(200);
>> pConfig.setStatisticsEnabled(true);
>>
>> Thanks for your help.
>>
>> On 9 November 2016 at 19:56, Anil  wrote:
>>
>>> HI,
>>>
>>> Data streamer closed exception is very frequent. I did not see any
>>> explicit errors/exception about data streamer close. the excption i see
>>> only when message is getting added.
>>>
>>> I have 4 node ignite cluster and each node have consumer to connection
>>> and push the message received to streamer.
>>>
>>> What if the node is down and re-joined when message is getting added
>>> cache.
>>>
>>> Following is the exception from logs -
>>>
>>> 2016-11-09 05:55:55 ERROR pool-6-thread-1 KafkaCacheDataStreamer:146 -
>>> Exception while adding to streamer
>>> java.lang.IllegalStateException: Data streamer has been closed.
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl.enterBusy(DataStreamerImpl.java:360)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl.addData(DataStreamerImpl.java:507)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl.addData(DataStreamerImpl.java:498)
>>> at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
>>> heDataStreamer.java:140)
>>> at com.test.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCacheDat
>>> aStreamer.java:197)
>>> at java.util.concurrent.Executors$RunnableAdapter.call(Executor
>>> s.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>>> Executor.java:1142)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>>> lExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>> 2016-11-09 05:55:55 ERROR pool-6-thread-1 KafkaCacheDataStreamer:200 -
>>> Message is ignored due to an error 
>>> [msg=MessageAndMetadata(TestTopic,1,Message(magic
>>> = 0, attributes = 0, crc = 2111790081, key = null, payload =
>>> java.nio.HeapByteBuffer[pos=0 lim=1155 cap=1155]),2034,kafka.serializ
>>> er.StringDecoder@3f77f0b,kafka.serializer.StringDecoder@67fd2da0)]
>>> java.lang.IllegalStateException: Cache has been closed or destroyed:
>>> PERSON_CACHE
>>> at org.apache.ignite.internal.processors.cache.GridCacheGateway
>>> .enter(GridCacheGateway.java:160)
>>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>>> .onEnter(IgniteCacheProxy.java:2103)
>>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>>> .size(IgniteCacheProxy.java:826)
>>> at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
>>> heDataStreamer.java:149)
>>> at com.test.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCacheDat
>>> aStreamer.java:197)
>>> at java.util.concurrent.Executors$RunnableAdapter.call(Executor
>>> s.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>>> Executor.java:1142)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>>> lExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>>
>>> I have attached 

Re: java.lang.ClassNotFoundException: Failed to peer load class

2016-11-11 Thread Dmitriy Karachentsev
Hi Alex.
It looks like a problem on the client, because it was disconnected from the
server after trying to retrieve the class. Please check whether there are any
other errors on the client.

Could you, please, repeat your test with -DIGNITE_QUIET=false and provide
full client and server logs?

Thanks!

On Wed, Nov 9, 2016 at 4:38 PM, alex  wrote:

> Sorry for that. My last post was not added to the mailing list, so I posted
> it again and subscribed to the mailing list.
> And the new one gives more detailed information.
> Thanks for replying, @vdpyatkov
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/java-lang-ClassNotFoundException-Failed-
> to-peer-load-class-tp8811p8831.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: DataStreamer is closed

2016-11-11 Thread Anton Vinogradov
Anil,

Unfortunately,
  at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(
KafkaCacheDataStreamer.java:149)
does not fit the attached sources.

But,
java.lang.IllegalStateException: Cache has been closed or destroyed:
PERSON_CACHE
is the reason for the closed data streamer.

Is it possible to write a reproducible example, or to attach both (full, all)
logs and sources?


BTW, we already have a Kafka streamer; why did you decide to reimplement it?



On Wed, Nov 9, 2016 at 5:39 PM, Anil  wrote:

> Would there be any issues because of size of data ?
> i loaded around 80 gb on 4 node cluster. each node is of 8 CPU and 32 GB
> RAM configuration.
>
> and cache configuration -
>
> CacheConfiguration<String, Person> pConfig = new
> CacheConfiguration<>();
> pConfig.setName("Person_Cache");
> pConfig.setIndexedTypes(String.class, Person.class);
> pConfig.setBackups(1);
> pConfig.setCacheMode(CacheMode.PARTITIONED);
> pConfig.setCopyOnRead(false);
> pConfig.setSwapEnabled(true);
> pConfig.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
> pConfig.setSqlOnheapRowCacheSize(100_000);
> pConfig.setOffHeapMaxMemory(10 * 1024 * 1024 * 1024);
> pConfig.setStartSize(200);
> pConfig.setStatisticsEnabled(true);
>
> Thanks for your help.
>
> On 9 November 2016 at 19:56, Anil  wrote:
>
>> HI,
>>
>> Data streamer closed exception is very frequent. I did not see any
>> explicit errors/exception about data streamer close. the excption i see
>> only when message is getting added.
>>
>> I have 4 node ignite cluster and each node have consumer to connection
>> and push the message received to streamer.
>>
>> What if the node is down and re-joined when message is getting added
>> cache.
>>
>> Following is the exception from logs -
>>
>> 2016-11-09 05:55:55 ERROR pool-6-thread-1 KafkaCacheDataStreamer:146 -
>> Exception while adding to streamer
>> java.lang.IllegalStateException: Data streamer has been closed.
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.enterBusy(DataStreamerImpl.java:360)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.addData(DataStreamerImpl.java:507)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.addData(DataStreamerImpl.java:498)
>> at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
>> heDataStreamer.java:140)
>> at com.test.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCacheDat
>> aStreamer.java:197)
>> at java.util.concurrent.Executors$RunnableAdapter.call(
>> Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> Executor.java:1142)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>> lExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>> 2016-11-09 05:55:55 ERROR pool-6-thread-1 KafkaCacheDataStreamer:200 -
>> Message is ignored due to an error 
>> [msg=MessageAndMetadata(TestTopic,1,Message(magic
>> = 0, attributes = 0, crc = 2111790081, key = null, payload =
>> java.nio.HeapByteBuffer[pos=0 lim=1155 cap=1155]),2034,kafka.serializ
>> er.StringDecoder@3f77f0b,kafka.serializer.StringDecoder@67fd2da0)]
>> java.lang.IllegalStateException: Cache has been closed or destroyed:
>> PERSON_CACHE
>> at org.apache.ignite.internal.processors.cache.GridCacheGateway
>> .enter(GridCacheGateway.java:160)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>> .onEnter(IgniteCacheProxy.java:2103)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>> .size(IgniteCacheProxy.java:826)
>> at com.test.cs.cache.KafkaCacheDataStreamer.addMessage(KafkaCac
>> heDataStreamer.java:149)
>> at com.test.cs.cache.KafkaCacheDataStreamer$1.run(KafkaCacheDat
>> aStreamer.java:197)
>> at java.util.concurrent.Executors$RunnableAdapter.call(
>> Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> Executor.java:1142)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>> lExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> I have attached the KafkaCacheDataStreamer class and let me know if you
>> need any additional details. thanks.
>>
>>
>