Re: Invalid message type: -84 error

2017-01-04 Thread Nikolai Tikhonov
I see that ports 47100 to 47109 are listed in the IP finder. By default, those
addresses are used by CommunicationSpi. Could you leave only one port range?



192.168.10.231:47500..47509



>  However, it still takes a very long time to connect.

It's related to a Windows-specific issue: connection attempts to a vacant port
take a long time to fail. To avoid it, decrease the port range. Also, if you
start the node under PowerShell, could you set the Java file encoding
(-Dfile.encoding=UTF-8)?
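
For example, a minimal sketch (the address is taken from your config, the rest
is assumed) of keeping only the single discovery port range in the IP finder:

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Collections.singletonList("192.168.10.231:47500..47509"));

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setIpFinder(ipFinder);

// Communication ports (47100..47109) should not be listed in the IP finder.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(discoSpi);

Ignition.start(cfg);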


On Wed, Jan 4, 2017 at 4:01 PM, mark_balmer <
mark.bal...@moodinternational.com> wrote:

> An update on this, it seems I can only get the exception if I have the
> second
> node in the cluster. If I just run the Java code against a single server
> node it doesn't throw the error. However, it still takes a very long time to
> connect.
>
> Here is the output from the topology change in the console. It seems a bit
> odd that it's assigning 10 CPUs for such a simple hello world operation!
>
> [12:56:41] Topology snapshot [ver=6, servers=1, clients=0, CPUs=2,
> heap=1.0GB]
> [12:58:34] Topology snapshot [ver=7, servers=1, clients=1, CPUs=10,
> heap=4.5GB]
> [12:58:35] Topology snapshot [ver=8, servers=1, clients=0, CPUs=2,
> heap=1.0GB]
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Invalid-message-type-84-error-tp9869p9872.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Invalid message type: -84 error

2017-01-04 Thread Nikolai Tikhonov
Hi,

Could you provide a reproducible example?

On Wed, Jan 4, 2017 at 3:12 PM, mark_balmer <
mark.bal...@moodinternational.com> wrote:

> I'm evaluating Ignite and am currently getting the exception Invalid message
> type: -84 when the topology changes on the Ignite cluster. I'm using the
> default-config.xml on Ignite version 1.7. I've set up 2 servers to each have
> an Ignite server node on them, and the nodes are "seeing" each other (after
> a very large exception message is shown in the console; see the exception
> message below).
>
> The other time I get this message is by running the example "Hello World"
> java code. Again, the code does actually run fine and return the Hello
> World
> from the cache; however, it takes a very long time to connect to the Ignite
> cluster (approx. 30 secs) and I get the same huge list of exceptions (as
> below) and then the cache returns the correct results.
>
> Can anyone help me with this, as I don't have a clue why it's happening?
>
>
> Exception messages:
>
> [12:01:17,551][SEVERE][grid-nio-worker-0-#37%null%][TcpCommunicationSpi]
> Closing NIO session because of unhandled exception.
> class org.apache.ignite.internal.util.nio.GridNioException: Invalid
> message
> type: -84
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.
> processSelectedKeysOptimized(GridNioServer.java:1595)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.
> bodyInternal(GridNioServer.java:1516)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$
> AbstractNioClientWorker.body(GridNioServer.java:1289)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: class org.apache.ignite.IgniteException: Invalid message type:
> -84
> at
> org.apache.ignite.internal.managers.communication.
> GridIoMessageFactory.create(GridIoMessageFactory.java:775)
> at
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$5.create(
> TcpCommunicationSpi.java:1614)
> at
> org.apache.ignite.internal.util.nio.GridDirectParser.
> decode(GridDirectParser.java:76)
> at
> org.apache.ignite.internal.util.nio.GridNioCodecFilter.onMessageReceived(
> GridNioCodecFilter.java:104)
> at
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.
> proceedMessageReceived(GridNioFilterAdapter.java:107)
> at
> org.apache.ignite.internal.util.nio.GridConnectionBytesVerifyFilte
> r.onMessageReceived(GridConnectionBytesVerifyFilter.java:113)
> at
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.
> proceedMessageReceived(GridNioFilterAdapter.java:107)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$
> HeadFilter.onMessageReceived(GridNioServer.java:2332)
> at
> org.apache.ignite.internal.util.nio.GridNioFilterChain.onMessageReceived(
> GridNioFilterChain.java:173)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.
> processRead(GridNioServer.java:918)
> at
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.
> processSelectedKeysOptimized(GridNioServer.java:1583)
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Invalid-message-type-84-error-tp9869.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: NearCache can be used through ODBC interface

2017-01-04 Thread Nikolai Tikhonov
SQL queries are distributed across the whole cluster, so a near cache is not
needed for them. For queries, records are read on the data nodes (from the
regular cache, not the near cache) in any case, so data in a near cache will
not provide any performance benefit.

On Wed, Jan 4, 2017 at 2:43 PM, Navneet Kumar 
wrote:

> In that case I cannot read the records from near cache using the Near
> cache.
> Is it true?
>
>
>
> --
> View this message in context: http://apache-ignite-users.705
> 18.x6.nabble.com/NearCache-can-be-used-through-ODBC-interfac
> e-tp9859p9865.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: NearCache can be used through ODBC interface

2017-01-04 Thread Nikolai Tikhonov
Hi Kumar!

You can use the ODBC interface for querying any cache. But keep in mind that
the Ignite SQL engine sends the request to all nodes and the data will be taken
from the regular cache (not the near cache).

Thanks,
Nikolay.

On Wed, Jan 4, 2017 at 11:31 AM, Navneet Kumar  wrote:

> Hi Val,
> LocalCache works well with the ODBC interface. But is there any issue using
> the NearCache through the ODBC interface? Please give me a descriptive idea
> about how to use it via the ODBC interface.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/NearCache-can-be-used-through-ODBC-
> interface-tp9859.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Error with ignite-rest-http

2017-01-03 Thread Nikolai Tikhonov
Sorry, I missed that you don't use Ignite as a Maven dependency in your
project but run it as a standalone server. Could you please show which
modules are in your C:\Users\D-NX29AE\Project\Software\shielding-gridgain-
enterprise-fabric-7.5.26\libs folder? (output of the dir command in the console)

On Tue, Jan 3, 2017 at 3:34 PM, Gaurav Bajaj  wrote:

> Hello Nikolai,
>
> Sorry I am confused, where do I run this command? On my IntelliJ workspace?
>
> On Tue, Jan 3, 2017 at 1:22 PM, Nikolai Tikhonov 
> wrote:
>
>> Hi,
>>
>> It's really like on jar hell. You can use maven dependency tree which
>> allows to find conflict:
>>
>> *mvn dependency:tree -Dverbose -Dincludes=jetty-server*
>>
>> On Tue, Jan 3, 2017 at 2:30 PM, dkarachentsev > > wrote:
>>
>>> It looks like you have another version of Jetty in classpath. Do you have
>>> IGNITE_HOME environment variable set?
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.705
>>> 18.x6.nabble.com/Error-with-ignite-rest-http-tp9835p9838.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>


Re: Error with ignite-rest-http

2017-01-03 Thread Nikolai Tikhonov
Hi,

It really looks like jar hell. You can use the Maven dependency tree, which
allows you to find the conflict:

*mvn dependency:tree -Dverbose -Dincludes=jetty-server*

On Tue, Jan 3, 2017 at 2:30 PM, dkarachentsev 
wrote:

> It looks like you have another version of Jetty in classpath. Do you have
> IGNITE_HOME environment variable set?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Error-with-ignite-rest-http-tp9835p9838.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: class org.apache.ignite.binary.BinaryObjectException: Wrong value has been set

2017-01-03 Thread Nikolai Tikhonov
Hi,

Is it possible that you build your object with the binary builder and set a
null value for the "product" field? In that case metadata is created where the
field type is registered as Object, and when a new object is then created with
a non-null value (for example a String), you get this exception.
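
For example, a minimal sketch (the type and field names are taken from your
exception, the rest — ignite, cache, key — is assumed) of passing the field
type explicitly so that a null value does not register the field as Object:

BinaryObjectBuilder builder = ignite.binary().builder("streams");

// Keeps the field type as String in the metadata even when the value is null.
builder.setField("product", null, String.class);

// builder.setField("product", null); // would register the field type as Object

BinaryObject obj = builder.build();
cache.withKeepBinary().put(key, obj);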

Thanks,
Nikolay

On Tue, Jan 3, 2017 at 10:56 AM, Shawn Du  wrote:

> Hi,
>
>
>
> I met very strange issues.  With the same code, on one machine, it works
> fine, but on another cluster, it always failed with exception:
>
>
>
> java.lang.RuntimeException: class 
> org.apache.ignite.binary.BinaryObjectException:
> Wrong value has been set [typeName=streams, fieldName=product,
> fieldType=Object, assignedValueType=String] at
>
> I met this issue before, I remembered that I fixed it by renaming a class
> field.  But this time, I can’t, for we create it dynamically with binary
> object.
>
>
>
> Please help.
>
>
>
> Thanks
>
> Shawn
>
>
>
>
>
>
>


Re: Affinity

2017-01-02 Thread Nikolai Tikhonov
Hi Anil,
It seems that it will not work correctly, but I'm not 100% sure. Let's ask our
SQL guru.

Sergi,
Could you please look at the following query? (We have two simple tables,
Person and PersonDetail, collocated by equivalentId.)

SELECT p.name as name, dupPerson.dupCount as count, pd.startDate as sdt, pd.endDate as edt
FROM PERSON_CACHE.PERSON p
JOIN (select equivalentId, count(*) dupCount from PERSON_CACHE.PERSON group by equivalentId) dupPerson
  on p.equivalentId = dupPerson.equivalentId -- to get the number of persons with the same equivalentId
JOIN DETAILS_CACHE.PersonDetail pd on p.equivalentId = pd.equivalentId
JOIN (select equivalentId, max(enddate) as enddate from DETAILS_CACHE.PersonDetail group by equivalentId) maxPd
  on p.equivalentId = maxPd.equivalentId and maxPd.endDate = pd.endDate
WHERE p.personId = '100'

On Mon, Jan 2, 2017 at 4:06 PM, Anil  wrote:

> Hi,
>
> I did not use group by query to determine the duplicate count or latest
> details as sub query. i used it as join.
>
> if group by query works in ignite , my join query also should work.
>
> I am not sure if IGNITE-3860 relates to my query. Correct me if I am
> wrong.
>
> Thanks.
>


Re: Affinity

2017-01-02 Thread Nikolai Tikhonov
I see in your code that you implemented collocation correctly: Person and
PersonDetail entries with the same equivalentId are mapped to the same
partition. But I don't know what you want to get with the SQL (I don't know
your use case). Does the SQL return incorrect data?

On Mon, Jan 2, 2017 at 3:21 PM, Anil  wrote:

> Hi Nikolay,
>
> If i am not wrong, unit test case worked. i am able to see the count from
> the group by query. Am I missing anything ?
>
> Thanks.
>
> On 2 January 2017 at 17:43, Nikolai Tikhonov  wrote:
>
>> Hi Anil!
>>
>> In your case we faced with the following issue:
>> https://issues.apache.org/jira/browse/IGNITE-3860. It's mean that
>>  ignite will not execute any distributed joins inside of this subquery. For
>> more details about the issue you can read the following thread:
>> http://apache-ignite-developers.2346864.n4.nabble.co
>> m/SELECT-subqueries-in-DML-statements-td13298.html
>>
>> Thanks,
>> Nikolay
>>
>> On Mon, Jan 2, 2017 at 2:21 PM, Anil  wrote:
>>
>>> attached the files. thanks.
>>>
>>>
>>> On 2 January 2017 at 12:07, Anil  wrote:
>>>
>>>> Hi Val,
>>>>
>>>> I created sample unit test and it is working i guess. Not sure why it
>>>> is not working on actual cluster. i will take a look.
>>>>
>>>> Can you please check if that test is correct or not ? thanks.
>>>>
>>>> On 30 December 2016 at 01:24, vkulichenko <
>>>> valentin.kuliche...@gmail.com> wrote:
>>>>
>>>>> Anil,
>>>>>
>>>>> Can you create a unit test that will demonstrate the problem?
>>>>>
>>>>> -Val
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> View this message in context: http://apache-ignite-users.705
>>>>> 18.x6.nabble.com/Affinity-tp9744p9803.html
>>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>>>
>>>>
>>>>
>>>
>>
>


Re: Affinity

2017-01-02 Thread Nikolai Tikhonov
Hi Anil!

In your case we are facing the following issue:
https://issues.apache.org/jira/browse/IGNITE-3860. It means that Ignite will
not execute any distributed joins inside this subquery. For more details about
the issue you can read the following thread:
http://apache-ignite-developers.2346864.n4.nabble.com/SELECT-subqueries-in-DML-statements-td13298.html

Thanks,
Nikolay

On Mon, Jan 2, 2017 at 2:21 PM, Anil  wrote:

> attached the files. thanks.
>
>
> On 2 January 2017 at 12:07, Anil  wrote:
>
>> Hi Val,
>>
>> I created sample unit test and it is working i guess. Not sure why it is
>> not working on actual cluster. i will take a look.
>>
>> Can you please check if that test is correct or not ? thanks.
>>
>> On 30 December 2016 at 01:24, vkulichenko 
>> wrote:
>>
>>> Anil,
>>>
>>> Can you create a unit test that will demonstrate the problem?
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.705
>>> 18.x6.nabble.com/Affinity-tp9744p9803.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>


Re: Issue while using Affinity function for mapping keys with nodes

2016-12-16 Thread Nikolai Tikhonov
Hi!

In your case the key and value classes are not deployed on the server nodes.
You need to deploy the classes on all nodes or use IgniteCache.withKeepBinary().
Also, you don't need to implement the visitUsingMapKeysToNodes methods. You can
use the IgniteCompute#affinityRun or IgniteCompute#affinityCall methods; they
work correctly with a custom affinity function.
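
For example, a minimal sketch (the cache name and key are just examples, and it
assumes the closure class itself is available on the servers):

Ignite ignite = Ignition.ignite();

// Read values in binary form so the value class does not have to be on the servers.
IgniteCache<Integer, BinaryObject> cache = ignite.cache("myCache").withKeepBinary();
BinaryObject val = cache.get(42);

// Run a closure on the node that owns the key instead of mapping keys to nodes manually.
ignite.compute().affinityRun("myCache", 42, () -> {
    System.out.println("Running on the primary node for key 42");
});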


Re: Not able to join cluster with Zookeeper based IP finder

2016-12-14 Thread Nikolai Tikhonov
I understand your problem, and it seems we have got to the bottom of this
issue. Your implementation is incorrect: the address resolver is invoked not
only for the local address but also for external addresses. You need to change
the logic from

AddressResolver addressResolver = (InetSocketAddress address) ->
Collections.singleton(externalAddress);

to something like this:

public class AddressResolverImpl implements AddressResolver {
    /** Mapping of internal addresses to external addresses. */
    private static Map<InetSocketAddress, Collection<InetSocketAddress>> maps = new HashMap<>();

    static {
        maps.put(new InetSocketAddress("192.168.0.1", 47500),
            F.asList(new InetSocketAddress("10.0.0.1", 31183)));
        maps.put(new InetSocketAddress("192.168.0.2", 47500),
            F.asList(new InetSocketAddress("10.0.0.2", 30112)));
    }

    /** {@inheritDoc} */
    @Override public Collection<InetSocketAddress> getExternalAddresses(InetSocketAddress addr)
        throws IgniteCheckedException {
        return maps.get(addr);
    }
}
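
The resolver is then set on the node configuration, a minimal sketch:

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setAddressResolver(new AddressResolverImpl());
Ignition.start(cfg);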



On Wed, Dec 14, 2016 at 5:43 PM, ghughal  wrote:

> Nikolai Tikhonov-2 wrote
> > Hi!
> >
> > It's right way to use AddressResolver for deployment in docker. Could you
> > please share your address resolver implementation?
>
> Here's code we are using to configure AddressResolver. Like I mentioned
> earlier, it's not that this code is not working. It works fine when we run
> single instance. The problem appears when we try to start multiple instance
> AT THE SAME TIME using marathon.
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Not-able-to-join-cluster-with-
> Zookeeper-based-IP-finder-tp9311p9532.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Not able to join cluster with Zookeeper based IP finder

2016-12-14 Thread Nikolai Tikhonov
Hi!

Using an AddressResolver is the right way for a deployment in Docker. Could
you please share your address resolver implementation?

On Sat, Dec 10, 2016 at 9:41 AM, Yakov Zhdanov  wrote:

> Nikolay, can you please join this thread and point out how to build
> cluster using docker?
>
> --Yakov
>
> 2016-12-10 5:48 GMT+07:00 ghughal :
>
>> Yakov - I'll try to see if I can remove AddressResolver. The main reason
>> for
>> adding it was because we are running it inside docker container and if we
>> run docker in BRIDGE mode then the IP address of host is not available to
>> ignite running inside docker container.
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Not-able-to-join-cluster-with-Zookeeper-
>> based-IP-finder-tp9311p9467.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: Encryption of Data at REST in Apache Ignite

2016-12-13 Thread Nikolai Tikhonov
Hi Sridhar!

You can enable SSL in ConnectorConfiguration. Please look at the
ConnectorConfiguration#setSslEnabled and ConnectorConfiguration#setSslFactory
methods.

Also, Ignite provides SSL/TLS security for communication between nodes. See
https://apacheignite.readme.io/docs/ssltls
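
A minimal sketch (the key store paths and passwords are placeholders; note this
secures data in transit, not the stored data itself):

SslContextFactory sslFactory = new SslContextFactory();
sslFactory.setKeyStoreFilePath("keystore/node.jks");
sslFactory.setKeyStorePassword("123456".toCharArray());
sslFactory.setTrustStoreFilePath("keystore/trust.jks");
sslFactory.setTrustStorePassword("123456".toCharArray());

IgniteConfiguration cfg = new IgniteConfiguration();

// SSL/TLS between cluster nodes.
cfg.setSslContextFactory(sslFactory);

// SSL on the REST connector (plus the connector SSL factory, see the methods above).
ConnectorConfiguration connCfg = new ConnectorConfiguration();
connCfg.setSslEnabled(true);
cfg.setConnectorConfiguration(connCfg);

Ignition.start(cfg);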

Thanks,
Nikolay.

On Tue, Dec 13, 2016 at 2:56 PM, voli.sri  wrote:

> Hi,
>
> Does apache ignite provide for a way to support encryption of data at REST.
>
> I am looking for a way to transparently encrypt the stored data so as to
> ensure the integrity of data.
>
> I believe such a solution should also have the option to decrypt and return
> the data when required.
>
> I haven't found anything other using SSL/HTTPS in securing the network
> connections between nodes in the cluster.
>
> Has anyone used something similar. Can you please help me on this.
>
>
> Thanks,
> Sridhar
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Encyrption-of-Data-at-REST-in-
> Apache-Ignite-tp9508.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Cluster can not let more than one client to do continuous queries

2016-12-02 Thread Nikolai Tikhonov
Hi,

You start the grid with peer class loading disabled. In this case you need to
deploy the CacheEntryEventFilterFactory class on all nodes in the topology.
Please make sure that all nodes have the jar with the factory in their
classpath, or set peerClassLoadingEnabled to true.
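
A minimal sketch of the programmatic equivalent (the XML property maps to the
same IgniteConfiguration setter):

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPeerClassLoadingEnabled(true); // lets the filter factory class be loaded from peers
Ignition.start(cfg);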



On Mon, Nov 28, 2016 at 10:47 AM, ght230  wrote:

> I had modified the ignite config as following, but it did not work.
>
> http://www.springframework.org/schema/beans";
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>xsi:schemaLocation="
> http://www.springframework.org/schema/beans
> http://www.springframework.org/schema/beans/spring-beans.xsd";>
>  class="org.apache.ignite.configuration.IgniteConfiguration">
> 
> 
> 
> 
> 
>  value="querycache" />
> 
> 
> 
> 
> 
>
> 
>  class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
> 
>  />
>  value="false" />
>  value="5000" />
>  value="32" />
>  value="#{64 * 1024}" />
>  value="#{64 * 1024}" />
>  value="48100" />
>  value="256" />
>  value="1" />
>  />
>  value="2048" />
>  value="0" />
> 
> 
> 
>  class="org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi">
>  />
> 
> 
> 
> 
> 
>  value="20"/>
>  />
>
>  value="3" />
>  />
>  />
>  value="3" />
>  value="5" />
>  value="10" />
> 
>
>
>  class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.
> TcpDiscoveryVmIpFinder">
>
> 
> 
> 
> 192.168.37.103:47500..47520
> 
> 
> 
> 
> 
> 
> 
> 
>
>
> Following is the log of the second client:
> [14:34:28,565][WARN ][pub-#26%null%][GridDiagnostic] Initial heap size is
> 126MB (should be no less than 512MB, use -Xms512m -Xmx512m).
> [14:34:30,308][INFO ][main][IgniteKernal] Non-loopback local IPs:
> 192.168.37.103, 2001:0:b499:3d88:3016:3975:3f57:da98,
> fe80:0:0:0:3016:3975:3f57:da98%13, fe80:0:0:0:602e:de10:7049:3e4%14,
> fe80:0:0:0:615f:2201:b3fc:7555%19, fe80:0:0:0:e805:e97b:b4ab:3310%17
> [14:34:30,308][INFO ][main][IgniteKernal] Enabled local MACs:
> 00E0, 00FF3E61AAFB, 54EE758A7FE6, C8FF2861A4FB
> [14:34:30,308][INFO ][main][IgniteKernal] Set
> localHost@IgniteConfiguration='192.168.37.103' because of
> localIpStartsWith='192.168.37' match.
> [14:34:30,308][INFO ][main][IgniteKernal] Set
> localAddress@TcpCommunicationSpi='192.168.37.103' because of
> localIpStartsWith='192.168.37' match.
> [14:34:30,308][INFO ][main][IgniteKernal] Set
> localAddress@TcpDiscoverySpi='192.168.37.103' because of
> localIpStartsWith='192.168.37' match.
> [14:34:30,351][INFO ][main][IgnitePluginProcessor] Configured plugins:
> [14:34:30,351][INFO ][main][IgnitePluginProcessor]   ^-- None
> [14:34:30,351][INFO ][main][IgnitePluginProcessor]
> [14:34:30,392][WARN ][main][TcpCommunicationSpi] Failure detection timeout
> will be ignored (one of SPI parameters has been set explicitly)
> [14:34:32,074][INFO ][main][TcpCommunicationSpi] Successfully bound to TCP
> port [port=47102, locHost=/192.168.37.103]
> [14:34:32,120][WARN ][main][NoopCheckpointSpi] Checkpoints are disabled (to
> enable configure any GridCheckpointSpi implementation)
> [14:34:32,176][WARN ][main][GridCollisionManager] Collision resolution is
> disabled (all jobs will be activated upon arrival).
> [14:34:32,182][WARN ][main][NoopSwapSpaceSpi] Swap space is disabled. To
> enable use FileSwapSpaceSpi.
> [14:34:32,184][INFO ][main][IgniteKernal] Secur

Re: CacheContinuousQuery did not work after the second server node joined the topology.

2016-10-13 Thread Nikolai Tikhonov
Hi Lin,

In your case the autoUnsubscribe flag should be set to false.

Could you describe how the performance changed after you enabled cache events?

Thanks,
Nikolay

On Mon, Oct 10, 2016 at 6:59 AM, Lin  wrote:

> Hi Nikolay,
>
> I have a requirement on CQ to implement some functionality like an event
> listener. The client initializes and adds one listener to the cluster, and
> hopes to receive the expected CacheEntryEvents persistently, regardless of
> nodes leaving or joining.
>
> Firstly, I implemented this feature with Ignite.events, but the
> performance was unacceptable.
>
> Any advices are welcome.
>
> Lin.
>
>
> ------ Original --
> *From: * "Nikolai Tikhonov";;
> *Date: * Fri, Oct 7, 2016 09:34 PM
> *To: * "user";
> *Subject: * Re: CacheContinuousQuery did not work after the second
> servernodejoinned into the topology.
>
> Hi Lin!
>
> It's bug. I've create ticket and you can track progress there
> https://issues.apache.org/jira/browse/IGNITE-4047. How workaround you can
> start CQ with setAutoUnsubscribe(true).
>
> BTW: Why you use CQ with auto unsubscribe false?
>
> Thanks,
> Nikolay
>
> On Fri, Sep 30, 2016 at 7:18 AM, Lin  wrote:
>
>> Hi Vladislav,
>>
>> Thank you for your response. I can reproduce this issue with the maven
>> project you gave.
>>
>> My problems is that: after the second server node joinned into the
>> topology, I put some data into the cache, the result is that the CQ query
>> works in the first and second server nodes (the remote filter procuduced
>> the system output as expceted), but the CQ query client node was not
>> working as expected ( the CacheEntryUpdatedListener was not trigged any
>> more).
>>
>> I have modified the pom with my enviroment (only some modification about
>> package versions), and add some shell script in windows to reproduce the
>> issue easily.
>> My enviroment is ignite ver. 1.6.0#20160518-sha1:0b22c45b, the details
>> can be found in the log file in "log/s1.debug.log" which was produced with
>> the "-X" parameter in maven (see script server.bat).
>>
>> Here is the steps about how to reproduce the issue in my envrioment.
>> 1. mvn compile, the produce the target classes and ignite-*.xml.
>> 2. the first test
>> 2.1 run the server.bat to start the first server node, the console
>> outputs were piped into s1.log.
>> 2.2 run the CQClient.bat to create a client with cq query, when the
>> CacheContinuousQueryEvent is received, it will produce outputs like
>> `
>> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
>> [evtType=CREATED, key=5, newVal=0, oldVal=null]
>> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=1, oldVal=0]
>> `
>> in the client node, and the server node will produce outputs like
>> `
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=CREATED, key=5, newVal=0, oldVal=null], with ret true
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=1, oldVal=0], with ret true
>> `
>> 2.3 run the DataClient.bat to put 2 kv pairs( (5, 0), (5,1)) into given
>> cache and exit. This will cause the server1 producing outputs from remote
>> filter
>> `
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=CREATED, key=5, newVal=0, oldVal=null], with ret true
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=1, oldVal=0], with ret true
>> `
>> and cause the CQ client producing outputs from CacheEntryUpdatedListener
>> in the client,
>> `
>> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
>> [evtType=CREATED, key=5, newVal=0, oldVal=null]
>> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=1, oldVal=0]
>> `
>>
>>
>> 3. continue to start the second test, and the issue is occurred,
>> 3.1 run the server.bat to start the second server node, and piped its
>> output into s2.log.
>> 3.2 run the DataClient.bat to put the same 2 kv pairs into cache, and in
>> server1 and server2's outputs, the remote filters output are the same,
>> `
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=0, oldVal=1], with ret true
>> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
>> [evtType=UPDATED, key=5, newVal=1, oldVal=0], with ret true
>> `
>>

Re: CacheContinuousQuery did not work after the second server node joined the topology.

2016-10-07 Thread Nikolai Tikhonov
Hi Lin!

It's a bug. I've created a ticket, and you can track the progress here:
https://issues.apache.org/jira/browse/IGNITE-4047. As a workaround, you can
start the CQ with setAutoUnsubscribe(true).
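
A minimal sketch of the workaround (the cache name and value types are just
examples):

IgniteCache<Integer, Integer> cache = Ignition.ignite().cache("myCache");

ContinuousQuery<Integer, Integer> qry = new ContinuousQuery<>();
qry.setAutoUnsubscribe(true); // instead of false
qry.setLocalListener(evts -> evts.forEach(e ->
    System.out.println("Updated: " + e.getKey() + " -> " + e.getValue())));

QueryCursor<Cache.Entry<Integer, Integer>> cur = cache.query(qry);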

BTW: why do you use the CQ with auto-unsubscribe set to false?

Thanks,
Nikolay

On Fri, Sep 30, 2016 at 7:18 AM, Lin  wrote:

> Hi Vladislav,
>
> Thank you for your response. I can reproduce this issue with the maven
> project you gave.
>
> My problems is that: after the second server node joinned into the
> topology, I put some data into the cache, the result is that the CQ query
> works in the first and second server nodes (the remote filter procuduced
> the system output as expceted), but the CQ query client node was not
> working as expected ( the CacheEntryUpdatedListener was not trigged any
> more).
>
> I have modified the pom with my enviroment (only some modification about
> package versions), and add some shell script in windows to reproduce the
> issue easily.
> My enviroment is ignite ver. 1.6.0#20160518-sha1:0b22c45b, the details
> can be found in the log file in "log/s1.debug.log" which was produced with
> the "-X" parameter in maven (see script server.bat).
>
> Here is the steps about how to reproduce the issue in my envrioment.
> 1. mvn compile, the produce the target classes and ignite-*.xml.
> 2. the first test
> 2.1 run the server.bat to start the first server node, the console outputs
> were piped into s1.log.
> 2.2 run the CQClient.bat to create a client with cq query, when the
> CacheContinuousQueryEvent is received, it will produce outputs like
> `
> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
> [evtType=CREATED, key=5, newVal=0, oldVal=null]
> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
> [evtType=UPDATED, key=5, newVal=1, oldVal=0]
> `
> in the client node, and the server node will produce outputs like
> `
> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
> [evtType=CREATED, key=5, newVal=0, oldVal=null], with ret true
> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
> [evtType=UPDATED, key=5, newVal=1, oldVal=0], with ret true
> `
> 2.3 run the DataClient.bat to put 2 kv pairs( (5, 0), (5,1)) into given
> cache and exit. This will cause the server1 producing outputs from remote
> filter
> `
> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
> [evtType=CREATED, key=5, newVal=0, oldVal=null], with ret true
> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
> [evtType=UPDATED, key=5, newVal=1, oldVal=0], with ret true
> `
> and cause the CQ client producing outputs from CacheEntryUpdatedListener
> in the client,
> `
> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
> [evtType=CREATED, key=5, newVal=0, oldVal=null]
> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
> [evtType=UPDATED, key=5, newVal=1, oldVal=0]
> `
>
>
> 3. continue to start the second test, and the issue is occurred,
> 3.1 run the server.bat to start the second server node, and piped its
> output into s2.log.
> 3.2 run the DataClient.bat to put the same 2 kv pairs into cache, and in
> server1 and server2's outputs, the remote filters output are the same,
> `
> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
> [evtType=UPDATED, key=5, newVal=0, oldVal=1], with ret true
> CacheEntryEventRemoteFilter.evaluate CacheContinuousQueryEvent
> [evtType=UPDATED, key=5, newVal=1, oldVal=0], with ret true
> `
> but in the CQClient's output, there is nothing, the expected output should
> be something like
> `
> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
> [evtType=UPDATED, key=5, newVal=0, oldVal=1]
> sys-#5%null% receive CacheEntryEvent CacheContinuousQueryEvent
> [evtType=UPDATED, key=5, newVal=1, oldVal=0]
> `
> but not.
>
> It looks like that the remote filter is initialized in the server2 node
> with method org.apache.ignite.internal.processors.cache.query.continuous.
> CacheContinuousQueryHandlerV2#readExternal,
> but the links between remote filter and local listener is broken? The
> CacheContinuousQueryEvent didn't pass to the client.
>
> And in the meanwhile, why the two server node process the same data?
>
> Hope for your help.
>
>
> Lin.
>


Re: Apache Ignite cluster freeze after a period of time

2016-09-14 Thread Nikolai Tikhonov
Hi,

As I see from the logs, you are performing putAll from many threads, and I
think you got a deadlock because the locks on the keys are acquired in random
order. For example: the first thread tries to lock K1 and then K2, while the
second thread locks K2 and then K1. Please make sure that you pass a sorted map
to putAll (for example a TreeMap instead of a HashMap).
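
A minimal sketch (the cache name and values are just examples):

IgniteCache<Integer, String> cache = Ignition.ignite().cache("myCache");

Map<Integer, String> batch = new TreeMap<>(); // sorted by key, unlike HashMap
batch.put(2, "b");
batch.put(1, "a");
batch.put(3, "c");

// Every thread now locks the keys in the same (ascending) order, avoiding the deadlock.
cache.putAll(batch);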

On Wed, Sep 14, 2016 at 7:53 PM, qwertywx  wrote:

> Hello,
>
> I got the thread dump but in there I found something strange.  dump.log
> 
>
> It seems there is a deadlock as the dump says. But I do not know the reason
> of it :(
>
> Any ideas?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Apache-Ignite-cluster-freeze-after-a-
> period-of-time-tp7726p7749.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Query does not include objects added into Cache from within a transaction

2016-09-14 Thread Nikolai Tikhonov
Hi,

I'm not sure that a workaround exists without additional effort.
BTW, the correct link to the ticket is
https://issues.apache.org/jira/browse/IGNITE-3478

On Tue, Sep 13, 2016 at 9:43 PM, vkulichenko 
wrote:

> If you're executing a query from the same transaction where you updated the
> data, I'm pretty sure you can find a workaround, because you know
> everything
> about the updates made within the transaction. Transactional SQL feature is
> mostly designed to avoid dirty reads in case a query transaction and update
> transaction are two separate concurrent transactions.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Query-does-not-include-objects-
> added-into-Cache-from-within-a-transaction-tp7651p7717.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Atomic Long

2016-09-14 Thread Nikolai Tikhonov
Hi,

I'm using a wrapper to start the nodes as service.
>

Could you check that the wrapper doesn't stop this thread? Also could you
share full logs?


> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-Atomic-Long-tp7706p7720.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Yarn Ignite Container Automatically exit when other yarn application running

2016-09-07 Thread Nikolai Tikhonov
And could you clarify which version of Hadoop you are using?

On Wed, Sep 7, 2016 at 3:05 PM, Nikolai Tikhonov 
wrote:

> Hi,
>
> Did you use kerberos authentication for YARN?
>
> On Fri, Sep 2, 2016 at 3:46 AM, percent620  wrote:
>
>> but from yarn contains, i can't find any errors.
>>
>>
>> sometimes the yarn am ignite was shutdown down(and sometimes restart a new
>> AM, i don't know why)?
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Yarn-Ignite-Container-Automatically-exit-
>> when-other-yarn-application-running-tp7335p7469.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: Yarn Ignite Container Automatically exit when other yarn application running

2016-09-07 Thread Nikolai Tikhonov
Hi,

Did you use Kerberos authentication for YARN?

On Fri, Sep 2, 2016 at 3:46 AM, percent620  wrote:

> but from the yarn containers, i can't find any errors.
>
>
> sometimes the yarn AM for ignite was shut down (and sometimes a new AM
> restarts, i don't know why)?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Yarn-Ignite-Container-Automatically-exit-when-other-
> yarn-application-running-tp7335p7469.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: why allocated containers much more than INGNITE_NODE_COUNT?

2016-09-07 Thread Nikolai Tikhonov
Val, Shirely,

Yes, our YARN integration does not support the Windows environment. I've
created a ticket for it: https://issues.apache.org/jira/browse/IGNITE-3850

On Wed, Sep 7, 2016 at 5:23 AM, vkulichenko 
wrote:

> Nikolai,
>
> Is this a bug? Should we create a ticket?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/why-allocated-containers-much-
> more-than-INGNITE-NODE-COUNT-tp7226p7562.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: why allocated containers much more than INGNITE_NODE_COUNT?

2016-08-31 Thread Nikolai Tikhonov
Hi,

Could you try to run examples over YARN (for example
http://wiki.apache.org/hadoop/WordCount)? If it works could you share your
ignite-yarn configuration file?

On Tue, Aug 30, 2016 at 12:19 PM, shirely  wrote:

> hi,
> Thanks for your advise. I found that launch container always failed.
> because
> there is a syntax error in launcontainer.cmd.  In launchcontainer.cmd,
> there
> are some lines with syntax error:
>
> @set IGNITE_RELEASES_DIR=/ignite/releases/
> @if %errorlevel% neq 0 exit /b %errorlevel%
> @set ComSpec=C:\Windows\system32\cmd.exe
> @if %errorlevel% neq 0 exit /b %errorlevel%
> *@set
> =D:=D:\data\yarn\nm-local-dir\usercache\SYSTEM\appcache\
> application_1471608576294_12568\container_e48_
> 1471608576294_12568_01_01*
> @if %errorlevel% neq 0 exit /b %errorlevel%
> @set HADOOP_LOGLEVEL=INFO
> @if %errorlevel% neq 0 exit /b %errorlevel%
>
> Has Anyone else met the problem before? I'm really confused. What makes the
> line like this "set
> =D:=D:\data\yarn\nm-local-dir\usercache\SYSTEM\appcache\
> application_1471608576294_12568\container_e48_
> 1471608576294_12568_01_01",
> Acutally, the path is not valid in executor, only valid in
> driver.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/why-allocated-containers-much-
> more-than-INGNITE-NODE-COUNT-tp7226p7402.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Spring application context resource is not injected exception while starting ignite in jdbc driver mode

2016-08-26 Thread Nikolai Tikhonov
Andrey,

It's to guarantee consistency. We should update the entries in the store in the
same transaction as in Ignite.

On Fri, Aug 26, 2016 at 2:01 PM, Andrey Gura  wrote:

> Val,
>
> Why we need store on client node in case of partitioned or replicated
> cache?
>
> On Fri, Aug 26, 2016 at 4:53 AM, vkulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>> Hi,
>>
>> This happens because JDBC driver tries to initialize the store. This is
>> needed for regular client nodes, but for the driver this doesn't make much
>> sense. I created a ticket: https://issues.apache.org/jira
>> /browse/IGNITE-3771
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Spring-application-context-resource-is-not-
>> injected-exception-while-starting-ignite-in-jdbc-driver
>> -me-tp7299p7328.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Andrey Gura
> GridGain Systems, Inc.
> www.gridgain.com
>


Re: why allocated containers much more than INGNITE_NODE_COUNT?

2016-08-26 Thread Nikolai Tikhonov
Hi,

It looks like your containers failed and Ignite YARN tries to start new
containers. Could you share your configuration and the logs from the containers?

On Tue, Aug 23, 2016 at 5:18 AM, shirely  wrote:

> well, I tried to integrate ignite with yarn and set IGNITE_NODE_COUNT
> equals
> 2, but when running the ignite yarn application, I found the total
> allocated
> containers is 156 and kept increasing. I'm really confused, what is the
> relationship between ignite node and allocated containers?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/why-allocated-containers-much-
> more-than-INGNITE-NODE-COUNT-tp7226.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Yarn Ignite Container Automatically exit when other yarn application running

2016-08-26 Thread Nikolai Tikhonov
Hi,

Could you please show the logs from the container that failed?

On Fri, Aug 26, 2016 at 11:36 AM, percent620  wrote:

> Hello,
>
> I faced important issues. I have deployed yarn ignite application
> successfully. everything is okay.
>
>
> But today, when others running spark job on yarn(this job can't contain
> ignite),and faced error message as below
>
> *16/08/26 16:23:52 ERROR YarnScheduler: Lost executor 1 on : Container
> marked as failed: container_1455892346017_5494_01_02 on host:
> vmsecdomain010194054060.cm10. Exit status: -100. Diagnostics: Container
> released on a *lost* node
> *16/08/26 16:23:52 ERROR YarnScheduler: Lost an executor 1 (already
> removed): Pending loss reason.
>
> this is contain is yarn ignite container. I saw yarn contain logs and found
> that lost 1 electors on ignite.
>
>
> Why? Can anyone help me?
>
>
> Regarding yarn ignite integration, I use static ip discovery. I have
> specified all the workers ip ignite configuration, is this lead to error?
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Yarn-Ignite-Container-Automatically-exit-when-other-
> yarn-application-running-tp7335.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Yarn deployment for memory capacity make a bigger than before: Urgent!!!

2016-08-24 Thread Nikolai Tikhonov
Hi,

As I see from your logs, the server nodes (which are probably deployed by YARN)
consume 1 GB of memory per node. But the client node consumes more memory, as
far as I can see 4 GB. You can decrease the memory consumption using the -Xmx
and -Xms JVM options for the client nodes.

On Wed, Aug 24, 2016 at 5:28 PM, percent620  wrote:

> Here is my detailed steps
> 1)root@sparkup1 config]# cat default-config.xml
> 
> http://www.springframework.org/schema/beans";
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
> xmlns:util="http://www.springframework.org/schema/util";
>xsi:schemaLocation="http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans-4.1.xsd";>
> 
> 
>   class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>
> class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.
> TcpDiscoveryVmIpFinder">
> 
>
>
> 172.16.186.200:47500..47509
>
> 172.16.186.201:47500..47509
>
> 172.16.186.202:47500..47509
>
> 
>
> 
>  
>
>   
> 
> [root@sparkup1 config]#
>
> 2)yarn contains log as below
>
> i started 3 ignite node and for every is 1024G
>
>
> $cat cluster.properties
> # The number of nodes in the cluster.
> IGNITE_NODE_COUNT=3
> # The number of CPU Cores for each Apache Ignite node.
> IGNITE_RUN_CPU_PER_NODE=1
> # The number of Megabytes of RAM for each Apache Ignite node.
> IGNITE_MEMORY_PER_NODE=1024
> # The version ignite which will be run on nodes.
> IGNITE_VERSION=1.0.6
> IGNITE_WORK_DIR=/u01/yueyi/apache-ignite-hadoop-1.6.0-bin
> IGNITE_XML_CONFIG=/ignite/releases/apache-ignite-hadoop-
> 1.6.0-bin/config/default-config.xml
> IGNITE_RELEASES_DIR=/ignite/releases/
> IGNITE_USERS_LIBS=/ignite/releases/apache-ignite-hadoop-1.6.0-bin/libs/
> #IGNITE_HOSTNAME_CONSTRAINT=vmsecdomain010194070026.cm10
> IGNITE_PATH=/ignite/releases/
>
>
> [root@sparkup3 config]# tail -f
> /usr/hadoop-2.4.1/logs/userlogs/application_1472047995043_0001/container_
> 1472047995043_0001_01_03/stdout
> [07:13:45] Configured plugins:
> [07:13:45]   ^-- None
> [07:13:45]
> [07:13:46] Security status [authentication=off, tls/ssl=off]
> [07:13:47] To start Console Management & Monitoring run
> ignitevisorcmd.{sh|bat}
> [07:13:47]
> [07:13:47] Ignite node started OK (id=20fb73be)
> [07:13:47] Topology snapshot [ver=1, servers=1, clients=0, CPUs=1,
> heap=1.0GB]
> [07:13:48] Topology snapshot [ver=2, servers=2, clients=0, CPUs=2,
> heap=2.0GB]
> [07:13:50] Topology snapshot [ver=3, servers=3, clients=0, CPUs=3,
> heap=3.0GB]
> ==
> Thanks is ok for above steps
>
>
>
> 3)
> spark-submit *--driver-memory 4G* --class com.ignite.testIgniteSharedRDD
> --master yarn --executor-cores 2 --executor-memory 1000m --num-executors 2
> --conf spark.rdd.compress=false --conf spark.shuffle.compress=false --conf
> spark.broadcast.compress=false
> /root/limu/ignite/spark-project-jar-with-dependencies.jar
>
>
> 4)Yarn logs become is
> [07:13:46] Security status [authentication=off, tls/ssl=off]
> [07:13:47] To start Console Management & Monitoring run
> ignitevisorcmd.{sh|bat}
> [07:13:47]
> [07:13:47] Ignite node started OK (id=20fb73be)
> [07:13:47] Topology snapshot [ver=1, servers=1, clients=0, CPUs=1,
> heap=1.0GB]
> [07:13:48] Topology snapshot [ver=2, servers=2, clients=0, CPUs=2,
> heap=2.0GB]
> [07:13:50] Topology snapshot [ver=3, servers=3, clients=0, CPUs=3,
> heap=3.0GB]
> *[07:16:54] Topology snapshot [ver=4, servers=3, clients=1, CPUs=3,
> heap=7.0GB] correct
>
> *
> /[07:17:06] Topology snapshot [ver=5, servers=4, clients=1, CPUs=3,
> heap=8.0GB]
> [07:17:07] Topology snapshot [ver=6, servers=5, clients=1, CPUs=3,
> heap=9.0GB]/
>
> is not correct, why become 5 servers and 9GB memory
>
>
>
> details log
> nohup: ignoring input
> 16/08/24 07:16:17 INFO spark.SparkContext: Running Spark version 1.6.1
> 16/08/24 07:16:18 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> 16/08/24 07:16:18 INFO spark.SecurityManager: Changing view acls to: root
> 16/08/24 07:16:18 INFO spark.SecurityManager: Changing modify acls to: root
> 16/08/24 07:16:18 INFO spark.SecurityManager: SecurityManager:
> authentication disabled; ui acls disabled; users with view permissions:
> Set(root); users with modify permissions: Set(root)
> 16/08/24 07:16:19 INFO util.Utils: Successfully started service
> 'sparkDriver' on port 56368.
> 16/08/24 07:16:20 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 16/08/24 07:16:20 INFO Remoting: Starting remoting
> 16/08/24 

Re: Yarn deployment for static TcpDiscoverySpi issues:Urgent In Production

2016-08-22 Thread Nikolai Tikhonov
Hi,

As I see in your logs, your "file:/usr/apache-ignite-
fabric-1.6.0-bin/config/default-config.xml" is incorrect:

invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 6;
columnNumber: 71; cvc-elt.1: Cannot find the declaration of element 'beans'.
at
org.springframework.beans.factory.xml.XmlBeanDefinitionReader.
doLoadBeanDefinitions(XmlBeanDefinitionReader.java:398)
at
org.springframework.beans.factory.xml.XmlBeanDefinitionReader.
loadBeanDefinitions(XmlBeanDefinitionReader.java:335)
at
org.springframework.beans.factory.xml.XmlBeanDefinitionReader.
loadBeanDefinitions(XmlBeanDefinitionReader.java:303)
at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.
applicationContext(IgniteSpringHelperImpl.java:379)
... 19 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 6; columnNumber: 71;
cvc-elt.1: Cannot find the declaration of element 'beans'.

Could you share this file?

On Mon, Aug 22, 2016 at 3:45 PM, percent620  wrote:

> Hello,
>
> Everything is okay for me to integrate ignite with yarn on *Multicast Based
> Discovery* in my local spark and yarn cluster , but in our production env,
> some of ports could't be opened .So, I need to specify a static ip address
> to discovery each other.
>
> but when running my configuration and encountered the following issue. List
> my detailed steps as below.
>
> 1、config/default-config.mxl
> 
> http://www.springframework.org/schema/beans";
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>xsi:schemaLocation="
>http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans.xsd";>
> 
> 
>   class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>
> class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.
> TcpDiscoveryVmIpFinder">
> 
>
>
> 172.16.186.200:47500..47509
>
> 172.16.186.201:47500..47509
>
> 172.16.186.202:47500..47509
>
> 
>
> 
>  
>
>   
> 
>
> 2、my java code on idea
> ackage com.ignite
> import org.apache.ignite.spark._
> import org.apache.ignite.configuration._
> import org.apache.spark.{SparkConf, SparkContext}
> /**
>   * Created by limu on 2016/8/14.
>   */
> object testIgniteSharedRDD {
>   def main(args: Array[String]): Unit = {
> val conf = new SparkConf().setAppName("testIgniteSharedRDD")
> val sc = new SparkContext(conf)
>
>   /*  val cfg = new IgniteConfiguration()
> cfg.setIgniteHome("/usr/apache-ignite-fabric-1.6.0-bin")
> */
> //val ic = new IgniteContext[Integer, Integer](sc, () => new
> IgniteConfiguration())
>   val ic = new IgniteContext[Integer, Integer](sc,
> "/usr/apache-ignite-fabric-1.6.0-bin/config/default-config.xml")
> val sharedRDD = ic.fromCache("sharedIgniteRDD-ling-sha111o")
> println("original.sharedCounter=> " + sharedRDD.count())
>
> sharedRDD.savePairs(sc.parallelize(1 to 77000, 10).map(i => (new
> Integer(i), new Integer(i
> println("final.sharedCounter=> " + sharedRDD.count())
>
> println("final.condition.couner=> " + sharedRDD.filter(_._2 >
> 21000).count )
>   }
>
>
> 3、Yarn container logs
> Logs for container_1471869381289_0001_01_01
>
> About Apache Hadoop
> ResourceManager
>
> RM Home
>
> NodeManager
> Tools
>
>
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/usr/hadoop-2.4.1/share/hadoop/common/lib/slf4j-
> log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/hadoop-2.4.1/tmp/nm-local-dir/usercache/
> root/appcache/application_1471869381289_0001/filecache/
> 10/ignite-yarn.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 16/08/22 05:38:52 INFO impl.ContainerManagementProtocolProxy:
> yarn.client.max-nodemanagers-proxies : 500
> 16/08/22 05:38:52 INFO client.RMProxy: Connecting to ResourceManager at
> sparkup1/172.16.186.200:8030
> Aug 22, 2016 5:38:53 AM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Application master registered.
> Aug 22, 2016 5:38:53 AM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Making request. Memory: 1,908, cpu 1.
> Aug 22, 2016 5:38:53 AM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Making request. Memory: 1,908, cpu 1.
> Aug 22, 2016 5:38:53 AM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Making request. Memory: 1,908, cpu 1.
> 16/08/22 05:38:54 INFO im

Re: ignition on yarn taking up all memory

2016-08-19 Thread Nikolai Tikhonov
Hi,
Ignite running over YARN should occupy the amount of memory set in the
IGNITE_MEMORY_PER_NODE property. Could you please share your config file?

On Fri, Aug 19, 2016 at 9:24 AM, prasanth  wrote:

> I started ignite on yarn and then tried to submit a sample spark job. The
> ignition job took all available memory and so spark job was in "accepted"
> state forever. Spark job never had enough resources to run.
>
> So, I copied the following file to hdfs path /tmp/ignite and gave
> IGNITE_XML_CONFIG=/tmp/ignite/cache-settings.xml and then started ignite.
> It
> still takes up all available cluster memory (which is less around 8.5 GB).
>
> How can I manage the memory that Ignite takes up, so that I can start other
> jobs?
>
> 
>
> http://www.springframework.org/schema/beans";
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>xsi:schemaLocation="
>http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans.xsd";>
>
>  class="org.apache.ignite.configuration.IgniteConfiguration">
>   
> 
>
>  class="org.apache.ignite.configuration.CacheConfiguration">
>
> 
> 
> 
> 
> 
> 
> 
> 
> 
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/ignition-on-yarn-taking-up-all-memory-tp7161.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-12 Thread Nikolai Tikhonov
Hi,

Could you make sure that you have access from the "driver" machine to the
machines on which YARN is running? Can you ping them?

On Fri, Aug 12, 2016 at 11:26 AM, percent620  wrote:

> scala> [14:52:04] New version is available at ignite.apache.org: 1.7.0
> 16/08/12 14:52:46 ERROR TcpDiscoverySpi: Failed to reconnect to cluster
> (consider increasing 'networkTimeout' configuration property)
> [networkTimeout=5000]
> 16/08/12 15:37:54 ERROR GridClockSyncProcessor: Failed to send time sync
> snapshot to remote node (did not leave grid?)
> [nodeId=20d8035e-abec-44ca-a5c0-7e3308984d83,
> msg=GridClockDeltaSnapshotMessage [snapVer=GridClockDeltaVersion [ver=34,
> topVer=13], deltas={3286d19e-72d5-4353-86d2-03ffdb6c4733=0,
> ac2c7723-fa93-49d5-92c3-1d815b6b178b=0,
> 7d7790d2-a67f-4d76-b36f-47dc08025594=0,
> 3cbcbe73-f29d-4051-952a-9ba7b80cf1c3=0,
> 20d8035e-abec-44ca-a5c0-7e3308984d83=0,
> f94defdc-ca9b-450b-91c0-6a6b26f5d553=0}], err=Failed to send message (node
> may have left the grid or TCP connection cannot be established due to
> firewall issues) [node=TcpDiscoveryNode
> [id=20d8035e-abec-44ca-a5c0-7e3308984d83, addrs=[XXX, 127.0.0.1],
> sockAddrs=[/Z:47500, /XXX:47500, /127.0.0.1:47500],
> discPort=47500,
> order=1, intOrder=1, lastExchangeTime=1470987075851, loc=false,
> ver=1.6.0#20160518-sha1:0b22c45b, isClient=false], topic=TOPIC_TIME_SYNC,
> msg=GridClockDeltaSnapshotMessage [snapVer=GridClockDeltaVersion [ver=34,
> topVer=13], deltas={3286d19e-72d5-4353-86d2-03ffdb6c4733=0,
> ac2c7723-fa93-49d5-92c3-1d815b6b178b=0,
> 7d7790d2-a67f-4d76-b36f-47dc08025594=0,
> 3cbcbe73-f29d-4051-952a-9ba7b80cf1c3=0,
> 20d8035e-abec-44ca-a5c0-7e3308984d83=0,
> f94defdc-ca9b-450b-91c0-6a6b26f5d553=0}], policy=2]]
> 16/08/12 15:38:02 ERROR TcpDiscoverySpi: Failed to reconnect to cluster
> (consider increasing 'networkTimeout' configuration property)
> [networkTimeout=5000]
> 16/08/12 15:50:02 ERROR TcpDiscoverySpi: Failed to reconnect to cluster
> (consider increasing 'networkTimeout' configuration property)
> [networkTimeout=5000]
> Exception in thread "ignite-update-notifier-timer" class
> org.apache.ignite.IgniteClientDisconnectedException: Client node
> disconnected: null
> at
> org.apache.ignite.internal.GridKernalGatewayImpl.readLock(
> GridKernalGatewayImpl.java:87)
> at
> org.apache.ignite.internal.cluster.ClusterGroupAdapter.
> guard(ClusterGroupAdapter.java:170)
> at
> org.apache.ignite.internal.cluster.ClusterGroupAdapter.
> nodes(ClusterGroupAdapter.java:288)
> at
> org.apache.ignite.internal.processors.cluster.ClusterProcessor$
> UpdateNotifierTimerTask.safeRun(ClusterProcessor.java:224)
> at org.apache.ignite.internal.util.GridTimerTask.run(
> GridTimerTask.java:34)
> at java.util.TimerThread.mainLoop(Timer.java:555)
> at java.util.TimerThread.run(Timer.java:505)
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp6910p7010.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread Nikolai Tikhonov
>
> 3) I'm running spark-shell on driver machine NOT yarn cluster?
>

ignite-spark starts an Ignite client node, which should have direct access to
the YARN Ignite cluster (network access, open ports, etc.).


Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread Nikolai Tikhonov
Could you show the logs from the containers? Does spark-shell just hang or does
it print some logs? Also, are you sure that you have direct access to the
machines on which the YARN cluster is running?

On Thu, Aug 11, 2016 at 3:43 PM, percent620  wrote:

> Hello, Nikolai,
> 1、
> Just updated configuration default-config.xm on
> /u01/XXX/apache-ignite-fabric-1.6.0-bin/config/default-config.xm
>
> 2)./hdfs dfs -put
> /u01/XXX/apache-ignite-fabric-1.6.0-bin/config/default-config.xm
> /ignite/release16/apache-ignite-fabric-1.6.0-bin/config/
>
>
>
> 3)
> scala> import org.apache.ignite.spark._
> import org.apache.ignite.spark._
>
> scala> import org.apache.ignite.configuration._
> import org.apache.ignite.configuration._
>
> scala> val ic = new IgniteContext[Integer, Integer](sc,
> "/u01/XXX/apache-ignite-fabric-1.6.0-bin/config/default-config.xml")
>
>
> it also hanging, 
>
> Can I miss some files to be changed? thanks!!!
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-for-Spark-on-YARN-Deployment-tp6910p6974.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread Nikolai Tikhonov
Hi,

You can use an HDFS path such as
hdfs://your_host:9000/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml,
or you can just copy this config file to the local disk. ;) You should use the
same configuration, but it can be different files with the same content.

On Thu, Aug 11, 2016 at 3:14 PM, percent620  wrote:

> Thanks Nikolai very much.
>
> As your request, and changed configuration
> /ignite/release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml
>
> but this file is hfs file
>
> 1)
> $./hdfs dfs -text
> /ignite/release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/u01/hadoop-2.6.0-cdh5.5.0/share/hadoop/common/
> lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/u01/hadoop-2.6.0-cdh5.5.0/share/hadoop/common/
> lib/tachyon-client-0.9.0-SNAPSHOT-jar-with-dependencies.jar!/org/slf4j/
> impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 
>
>
>
> http://www.springframework.org/schema/beans";
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>xsi:schemaLocation="
>http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans.xsd";>
>
>
>
>   
>   
>class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>   
>class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.
> TcpDiscoveryMulticastIpFinder"/>
>
> 
>
> 
>
> 
>
> 
> 
>
>
> 2) get the following error message as below
>
> scala> val ic = new IgniteContext[Integer, Integer](sc,
> "/ignite/release16/apache-ignite-fabric-1.6.0-bin/
> config/default-config.xml")
> class org.apache.ignite.IgniteCheckedException: Spring XML configuration
> path is invalid:
> /ignite/release16/apache-ignite-fabric-1.6.0-bin/
> config/default-config.xml.
> Note that this path should be either absolute or a relative local file
> system path, relative to META-INF in classpath or valid URL to IGNITE_HOME.
> at
> org.apache.ignite.internal.util.IgniteUtils.resolveSpringUrl(IgniteUtils.
> java:3580)
> at
> org.apache.ignite.internal.IgnitionEx.loadConfigurations(
> IgnitionEx.java:678)
> at
> org.apache.ignite.internal.IgnitionEx.loadConfiguration(
> IgnitionEx.java:717)
> at
> org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(
> IgniteContext.scala:85)
> at
> org.apache.ignite.spark.IgniteContext$$anonfun$$lessinit$greater$2.apply(
> IgniteContext.scala:85)
> at org.apache.ignite.spark.Once.apply(IgniteContext.scala:198)
> at org.apache.ignite.spark.IgniteContext.ignite(
> IgniteContext.scala:138)
> at org.apache.ignite.spark.IgniteContext.(
> IgniteContext.scala:59)
> at org.apache.ignite.spark.IgniteContext.(
> IgniteContext.scala:85)
> at
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.
> (:33)
> at
> $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<
> init>(:38)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(<
> console>:40)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(<
> console>:42)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:44)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:46)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:48)
> at $iwC$$iwC$$iwC$$iwC$$iwC.(:50)
> at $iwC$$iwC$$iwC$$iwC.(:52)
> at $iwC$$iwC$$iwC.(:54)
> at $iwC$$iwC.(:56)
> at $iwC.(:58)
> at (:60)
> at .(:64)
> at .()
> at .(:7)
> at .()
> at $print()
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
> at
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
> at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(
> SparkIMain.scala:840)
> at org.apache.spark.repl.SparkIMain.interpret(
> SparkIMain.scala:871)
> at org.apache.spark.repl.SparkIMain.interpret(
> SparkIMain.scala:819)
> at org.apache.spark.repl.SparkILoop.reallyInterpret$1(
> SparkILoop.scala:857)
> at
> org.apache.spark.repl.SparkILoop.interpretStartingWith(
> SparkILoop.scala:902)
> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
> at org.apache.spark.repl.SparkILoop.processLine$

Re: Re:Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread Nikolai Tikhonov
Great! The Ignite YARN cluster started successfully, but the ignite spark shell
doesn't see the server nodes. You need to configure the IP finder: by default the
YARN cluster uses VmIpFinder, while ignite-spark uses MulticastIpFinder. Could you
change the configuration for the YARN cluster (IGNITE_XML_CONFIG=/ignite/
release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml) to the
following?



<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"/>
                </property>
            </bean>
        </property>
    </bean>
</beans>

and also start IgniteContext with the same configuration?

val ic = new IgniteContext[Integer, Integer](sc, "path to configuration file")
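
On the driver side, the same discovery settings could presumably also be supplied
programmatically instead of pointing at the XML file. This is only a sketch, assuming
the multicast finder above is what the cluster ends up using:

import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.IgniteContext
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder

// Build the same discovery configuration in code instead of Spring XML.
val ic = new IgniteContext[Integer, Integer](sc, () => {
  val cfg = new IgniteConfiguration()
  val spi = new TcpDiscoverySpi()
  spi.setIpFinder(new TcpDiscoveryMulticastIpFinder()) // matches the ignite-spark default
  cfg.setDiscoverySpi(spi)
  cfg
})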


On Thu, Aug 11, 2016 at 1:54 PM, percent620  wrote:

> 1、Adjusted cluster.properties as below
> $cat cluster16.properties
> # The number of nodes in the cluster.
> IGNITE_NODE_COUNT=4
> # The number of CPU Cores for each Apache Ignite node.
> IGNITE_RUN_CPU_PER_NODE=1
> # The number of Megabytes of RAM for each Apache Ignite node.
> IGNITE_MEMORY_PER_NODE=2048
> # The version ignite which will be run on nodes.
> IGNITE_VERSION=1.6.0
> IGNITE_WORK_DIR=/u01/yueyi/apache-ignite-fabric-1.6.0-bin/
> IGNITE_XML_CONFIG=/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/
> default-config.xml
> IGNITE_RELEASES_DIR=/ignite/release16/
> #IGNITE_USERS_LIBS=/u01/yueyi/apache-ignite-fabric-1.6.0-bin/libs/
> #IGNITE_HOSTNAME_CONSTRAINT=vmsecdomain010194070026.cm10
> IGNITE_PATH=/ignite/release16/apache-ignite-fabric-1.6.0-bin.zip
>
> 2、hdfs directory
> $./hdfs dfs -ls /ignite/release16
> drwxr-xr-x   - hbase hbase  0 2016-08-11 12:44
> /ignite/release16/apache-ignite-fabric-1.6.0-bin
> -rw-r--r--   3 hbase hbase  175866626 2016-08-11 18:31
> /ignite/release16/apache-ignite-fabric-1.6.0-bin.zip
>
> 3、yarn console
> INFO: Application master registered.
> Aug 11, 2016 6:32:57 PM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Making request. Memory: 2,432, cpu 1.
> Aug 11, 2016 6:32:57 PM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Making request. Memory: 2,432, cpu 1.
> Aug 11, 2016 6:32:57 PM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Making request. Memory: 2,432, cpu 1.
> Aug 11, 2016 6:32:57 PM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Making request. Memory: 2,432, cpu 1.
> 16/08/11 18:32:57 INFO impl.AMRMClientImpl: Received new token for :
> xx1:29077
> 16/08/11 18:32:57 INFO impl.AMRMClientImpl: Received new token for :
> xxx2:57492
> 16/08/11 18:32:57 INFO impl.AMRMClientImpl: Received new token for :
> xxx3:59929
> 16/08/11 18:32:57 INFO impl.AMRMClientImpl: Received new token for :
> xx4:23159
>
> 5、container logs as below
> 1)
> [18:33:13] Security status [authentication=off, tls/ssl=off]
> [18:33:18] To start Console Management & Monitoring run
> ignitevisorcmd.{sh|bat}
> [18:33:18]
> [18:33:18] Ignite node started OK (id=a060a3ee)
> [18:33:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72,
> heap=6.0GB]
> [18:33:19] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
> heap=8.0GB]
>
> 2)[18:33:13] Security status [authentication=off, tls/ssl=off]
> [18:33:19] To start Console Management & Monitoring run
> ignitevisorcmd.{sh|bat}
> [18:33:19]
> [18:33:19] Ignite node started OK (id=5c8dfd50)
> [18:33:19] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
> heap=8.0GB]
>
> 3)[18:33:12] Ignite node started OK (id=4e75d238)
> [18:33:12] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24,
> heap=2.0GB]
> [18:33:14] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48,
> heap=4.0GB]
> [18:33:17] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72,
> heap=6.0GB]
> [18:33:19] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
> heap=8.0GB]
>
> 4)[18:33:14] Ignite node started OK (id=250fcb93)
> [18:33:14] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48,
> heap=4.0GB]
> [18:33:17] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72,
> heap=6.0GB]
> [18:33:19] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
> heap=8.0GB]
>
> faced issues=
>
> 1、spark-shell test
> ./spark-shell --jars
> /u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/ignite-core-1.
> 6.0.jar,/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/
> ignite-spark/ignite-spark-1.6.0.jar,/u01/xxx/apache-ignite-
> hadoop-1.6.0-bin/libs/cache-api-1.0.0.jar,/u01/xxx/apache-
> ignite-hadoop-1.6.0-bin/libs/ignite-log4j/ignite-log4j-1.6.
> 0.jar,/u01/xxx/apache-ignite-hadoop-1.6.0-bin/libs/ignite-
> log4j/log4j-1.2.17.jar
> --packages
> org.apache.ignite:ignite-spark:1.6.0,org.apache.ignite:ignite-spring:1.6.0
>
> 2、SQL context available as sqlContext.
>
> scala> import org.apache.ignite.spark._
> import org.apache.ignite.spark._
>
> scala> import org.apache.ignite.configuration._
> import org.apache.ignite.conf

Re: Why there are no official 1.6.0 and 1.7.0 ignite docker images on Docker Hub?

2016-08-11 Thread Nikolai Tikhonov
Hi,

I've updated the Apache Ignite images on Docker Hub. Thank you for your
attention.
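
For anyone finding this thread later, the version-tagged images should be pullable in
the usual way (the exact tag names are an assumption, please check the tags page linked
below):

docker pull apacheignite/ignite:1.6.0
docker pull apacheignite/ignite:1.7.0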

On Sun, Aug 7, 2016 at 12:18 AM, zshamrock 
wrote:

> Why are there no official 1.6.0 and 1.7.0 ignite docker images on Docker Hub
> https://hub.docker.com/r/apacheignite/ignite/tags/?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Why-there-are-no-official-1-6-0-and-1-7-
> 0-ignite-docker-images-on-Docker-Hub-tp6832.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite for Spark on YARN Deployment

2016-08-11 Thread Nikolai Tikhonov
Hi,

Could you please provide the logs from the containers?
Also, the IGNITE_PATH property is incorrect. The property should contain the path to
the Apache Ignite zip archive. For example:
/ignite/apache-ignite-fabric-1.7.0-bin.zip.
Also, IGNITE_USERS_LIBS is not needed here. That property is only used when you
want to deploy your own libs to the cluster.
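
As a sketch, the relevant lines of cluster.properties would then look something like
this (the paths reuse the placeholders from this thread):

IGNITE_XML_CONFIG=/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/default-config.xml
IGNITE_RELEASES_DIR=/ignite/release16/
# IGNITE_PATH must point at the Ignite zip archive in HDFS, not at a directory:
IGNITE_PATH=/ignite/release16/apache-ignite-fabric-1.6.0-bin.zip
# IGNITE_USERS_LIBS is only needed for deploying your own libraries, so remove it here.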

On Thu, Aug 11, 2016 at 7:54 AM, percent620  wrote:

> Thanks for vkulichenko's quick response.
>
> Here are my detailed steps for deploying and integrating Spark with
> Ignite, as below.
>
> 1、Followed these guidelines about how to deploy the ignite-yarn application.
>
> It was successful and the log is displayed ok
> ==
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/
> common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/
> StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/
> appcache/application_1455892346017_5077/filecache/
> 10/ignite-yarn.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 16/08/10 22:54:57 INFO impl.ContainerManagementProtocolProxy:
> yarn.client.max-cached-nodemanagers-proxies : 0
> Aug 10, 2016 10:54:58 PM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Application master registered.
> Aug 10, 2016 10:54:58 PM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Making request. Memory: 2,432, cpu 1.
> Aug 10, 2016 10:54:58 PM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Making request. Memory: 2,432, cpu 1.
> 16/08/10 22:54:59 INFO impl.AMRMClientImpl: Received new token for :
> vmsecdomain010194062066.cm10:61362
> 16/08/10 22:54:59 INFO impl.AMRMClientImpl: Received new token for :
> vmsecdomain010194062042.cm10:42077
> Aug 10, 2016 10:54:59 PM org.apache.ignite.yarn.ApplicationMaster
> onContainersAllocated
> INFO: Launching container: container_1455892346017_5077_02_02.
> 16/08/10 22:54:59 INFO impl.ContainerManagementProtocolProxy: Opening
> proxy
> : vmsecdomain010194062066.cm10:61362
> Aug 10, 2016 10:54:59 PM org.apache.ignite.yarn.ApplicationMaster
> onContainersAllocated
> INFO: Launching container: container_1455892346017_5077_02_03.
> 16/08/10 22:54:59 INFO impl.ContainerManagementProtocolProxy: Opening
> proxy
> : vmsecdomain010194062042.cm10:42077
> Aug 10, 2016 10:55:08 PM org.apache.ignite.yarn.ApplicationMaster
> onContainersCompleted
> INFO: Container completed. Container id:
> container_1455892346017_5077_02_02. State: COMPLETE.
> Aug 10, 2016 10:55:09 PM org.apache.ignite.yarn.ApplicationMaster
> onContainersCompleted
> INFO: Container completed. Container id:
> container_1455892346017_5077_02_03. State: COMPLETE.
>
>
> 2、downloaded this apache-ignite-fabric-1.6.0-bin.zip and unzip this file
> to
> the /u01/XXX/apache-ignite-fabric-1.6.0-bin directory.
>
> 3、cluster16.properties content is as below
> $cat cluster16.properties
> # The number of nodes in the cluster.
> IGNITE_NODE_COUNT=2
> # The number of CPU Cores for each Apache Ignite node.
> IGNITE_RUN_CPU_PER_NODE=1
> # The number of Megabytes of RAM for each Apache Ignite node.
> IGNITE_MEMORY_PER_NODE=2048
> # The version ignite which will be run on nodes.
> IGNITE_VERSION=1.6.0
> IGNITE_WORK_DIR=/u01/XXX/apache-ignite-fabric-1.6.0-bin/
> IGNITE_XML_CONFIG=/ignite/release16/apache-ignite-fabric-1.6.0-bin/config/
> default-config.xml
> IGNITE_RELEASES_DIR=/ignite/release16/
> IGNITE_USERS_LIBS=/u01/XXX/apache-ignite-fabric-1.6.0-bin/libs/
> IGNITE_PATH=/ignite/release16/
>
>
> hdfs directory is as below
> ===
> ./hdfs dfs -ls /ignite/
> drwxr-xr-x   - hbase hbase  0 2016-08-10 17:07 /ignite/release16
> drwxr-xr-x   - hbase hbase  0 2016-08-02 06:25 /ignite/releases
> drwxr-xr-x   - hbase hbase  0 2016-08-10 17:24 /ignite/workdir
> -rw-r--r--   3 hbase hbase   27710331 2016-08-10 17:04 /ignite/yarn
> =
> $./hdfs dfs -ls /ignite/release16
> drwxr-xr-x   - hbase hbase  0 2016-08-11 12:44
> /ignite/release16/apache-ignite-fabric-1.6.0-bin
>
> =
>
> 4、Running spark code on yarn and the code is as below
> val igniteContext = new IgniteContext[String, BaseLine](sc,() => new
> IgniteConfiguration())
>
>
> the code is hanging here and I think that this client can't connect to the
> server
>
> 5、From the yarn console I found the following error message
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/u01/hbase/hadoop-2.5.0-cdh5.3.0/share/hadoop/
> common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/
> StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/u01/hbase/tmp/nm-local-dir/usercache/hbase/
> appcache/application_1455892346017_5077/filec

Re: yarn deployment

2016-08-11 Thread Nikolai Tikhonov
Val,

Yes, you are right. For example: /ignite/apache-ignite-fabric-1.7.0-bin.zip

On Thu, Aug 11, 2016 at 2:16 AM, vkulichenko 
wrote:

> To my knowledge you can upload the ZIP file and provide the path to it. No
> need to unzip it manually.
>
> Nikolai, can you please confirm?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/yarn-deployment-tp6843p6939.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Client Reconnect Lifecycle with Continuous Queries

2016-08-10 Thread Nikolai Tikhonov
Hi,

In your case, a client node will deploy the continuous query on new nodes which
join the topology. As long as you don't stop the CQ (by invoking the close() method)
and the node which started the CQ stays in the topology, you will keep getting events.
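
A minimal sketch of that lifecycle (the cache name, value types and config path are
made up for illustration, they are not taken from your gist):

import javax.cache.event.{CacheEntryEvent, CacheEntryUpdatedListener}
import org.apache.ignite.Ignition
import org.apache.ignite.cache.query.ContinuousQuery
import scala.collection.JavaConverters._

val ignite = Ignition.start("path/to/your-config.xml")
val cache = ignite.getOrCreateCache[Integer, String]("myCache")

val qry = new ContinuousQuery[Integer, String]()
qry.setLocalListener(new CacheEntryUpdatedListener[Integer, String] {
  override def onUpdated(evts: java.lang.Iterable[CacheEntryEvent[_ <: Integer, _ <: String]]): Unit =
    for (e <- evts.asScala) println(s"updated: ${e.getKey} -> ${e.getValue}")
})

// Events keep arriving (including from nodes that join later) until the cursor
// is closed or the node that started the query leaves the topology.
val cur = cache.query(qry)
// ... later, to stop receiving events:
cur.close()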

On Thu, Aug 4, 2016 at 4:11 PM, barrettbnr  wrote:

> I'm trying to understand the lifecycle of reconnecting
>
> I have created a gist with testcase and some logging output
>
> https://gist.github.com/bearrito/a2aed9e3e8e06799d3f5b27fc997aaa6
>
> My question is why does the client still receive the cache put event even
> after it has been disconnected and then receives the reconnect event ?
>
> This is evidenced by the line that says: GOT POST-RECONNECT:
> 98cb5181-f06a-4f29-8683-a2fb4429b8be
>
> From the documentation it seems that I should not have to get a new cache
> instance, but rather should be able to use, the previous cache instance? It
> also seems like my previous continuous query should not have continued
> working.
>
> Is there a difference between a network disconnect and reconnecting after
> the grid node has called closed?
>
> I'm mainly interested in answering how to reconnect my continuous queries
> after I've received a CLIENT_NODE_RECONNECTED event.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Client-Reconnect-Lifecycle-with-
> Continuous-Queries-tp6763.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: yarn deployment

2016-08-09 Thread Nikolai Tikhonov
Hi,

YARN could not download the Ignite build. It might be related to the
configuration of your internet access.

You can configure the IGNITE_PATH property. In this case ignite-yarn will take
the Apache Ignite build from HDFS [1].

[1] http://apacheignite.gridgain.org/v1.6/docs/yarn-deployment
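
For example (a sketch that reuses paths from other threads on this list, not a
verified sequence):

# Upload the Ignite build to HDFS once:
hdfs dfs -put apache-ignite-fabric-1.6.0-bin.zip /ignite/releases/

# Then point the YARN deployment at that archive in cluster.properties:
IGNITE_PATH=/ignite/releases/apache-ignite-fabric-1.6.0-bin.zip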

On Tue, Aug 9, 2016 at 10:54 PM, prasanth  wrote:

> bump...
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/yarn-deployment-tp6843p6885.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite for Spark on YARN Deployment

2016-06-10 Thread Nikolai Tikhonov
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:117)
>>  at 
>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
>>  at 
>> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848)
>>  at 
>> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777)
>>  at 
>> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
>>  at 
>> com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:243)
>>  at 
>> com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:347)
>>  at 
>> org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:76)
>>  at 
>> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadDocument(XmlBeanDefinitionReader.java:428)
>>  at 
>> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:390)
>>  ... 12 more
>> Failed to start grid: Failed to instantiate Spring XML application context 
>> [springUrl=file:/disk2/hadoop/yarn/local/usercache/hongmei/appcache/application_1464374946035_32114/container_e24_1464374946035_32114_01_14/./ignite-config.xml/,
>>  err=Line 1 in XML document from URL 
>> [file:/disk2/hadoop/yarn/local/usercache/hongmei/appcache/application_1464374946035_32114/container_e24_1464374946035_32114_01_14/./ignite-config.xml/]
>>  is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber: 
>> 1; columnNumber: 1; Content is not allowed in prolog.]
>>
>>
>>
>> Thank you very much!
>>
>> Hongmei
>>
>>
>> On Jun 10, 2016, at 5:56 AM, Nikolai Tikhonov 
>> wrote:
>>
>> Hi Hongmei Zong!
>>
>> Could you show logs from other containers (container_e24_1464374946035_
>> 29722_01_15) which was completed?
>>
>> On Thu, Jun 9, 2016 at 6:29 PM, Hongmei Zong  wrote:
>>
>>> Hi Nikolay,
>>>
>>> After I changed the value of IGNITE_XML_CONFIG=/user/hongmei/ignite/config/
>>>  (an HDFS path), Ignite YARN is running now. I use the Hadoop UI console to
>>> check the log of the application; attached is the *stderr* log
>>> information
>>> about containers:
>>>
>>> It looks like the containers are allocated and then completed! The
>>> *stderr* log is very long and the container IDs run from X01_001 to
>>> XX01_013582. Finally all these containers are completed.
>>>
>>> I have no idea, is there anything not right?
>>>
>>> There is no information in *stdout* log.
>>>
>>> Thank you!
>>>
>>> Hongmei
>>> Logs for container_e24_1464374946035_29722_01_01
>>>
>>> SLF4J: Class path contains multiple SLF4J bindings.
>>> SLF4J: Found binding in 
>>> [jar:file:/usr/hdp/2.3.4.0-3485/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: Found binding in 
>>> [jar:file:/disk2/hadoop/yarn/local/usercache/hongmei/appcache/application_1464374946035_29722/filecache/10/ignite-yarn.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
>>> explanation.
>>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>>> 16/06/09 10:48:16 INFO impl.ContainerManagementProtocolProxy: 
>>> yarn.client.max-cached-nodemanagers-proxies : 0
>>> 16/06/09 10:48:16 INFO client.ConfiguredRMFailoverProxyProvider: Failing 
>>> over to rm2
>>> Jun 09, 2016 10:48:16 AM org.apache.ignite.yarn.ApplicationMaster run
>>> INFO: Application master registered.
>>> Jun 09, 2016 10:48:16 AM org.apache.ignite.yarn.ApplicationMaster run
>>> INFO: Making request. Memory: 2,432, cpu 1.
>>> Jun 09, 2016 10:48:16 AM org.apache.ignite.yarn.ApplicationMaster run
>>> INFO: Making request. Memory: 2,432, cpu 1.
>>> 16/06/09 10:48:17 INFO impl.AMRMClientImpl: Received new token for : 
>>> c5hdp108.c5.runwaynine.com:45454
>>> 16/06/09 10:48:17 INFO impl.AMRMClientImpl: Received new token for : 
>>> c5hdp111.c5.runwaynine.com:45454
>>> Jun 09, 2016 10:48:17 AM org.apache.ignite.yarn.ApplicationMaster 
>>> onContainersAllocated
>>> INFO: Launching container: container_e24_1464374946035_29722_01_02.
>>> 16/06/09 10:48:17 INFO 

Re: Ignite for Spark on YARN Deployment

2016-06-10 Thread Nikolai Tikhonov
374946035_29722_01_21. State: COMPLETE.
> Jun 09, 2016 10:48:33 AM org.apache.ignite.yarn.ApplicationMaster 
> onContainersCompleted
> INFO: Container completed. Container id: 
> container_e24_1464374946035_29722_01_14. State: COMPLETE.
> Jun 09, 2016 10:48:33 AM org.apache.ignite.yarn.ApplicationMaster 
> onContainersCompleted
> INFO: Container completed. Container id: 
> container_e24_1464374946035_29722_01_15. State: COMPLETE.
> Jun 09, 2016 10:48:33 AM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Making request. Memory: 2,432, cpu 1.
> Jun 09, 2016 10:48:33 AM org.apache.ignite.yarn.ApplicationMaster run
> INFO: Making request. Memory: 2,432, cpu 1.
> Jun 09, 2016 10:48:34 AM org.apache.ignite.yarn.ApplicationMaster 
> onContainersAllocated
>
>
> On Thu, Jun 9, 2016 at 10:21 AM, Nikolay Tikhonov 
> wrote:
>
>> You set a wrong value for the IGNITE_XML_CONFIG property. The property should
>> contain the path to the ignite configuration file. For example:
>>
>> IGNITE_XML_CONFIG=/u/hongmei/apache-ignite/config/default-config.xml
>>
>> I think you can comment out this line in the property file and ignite will start
>> with the default configuration.
>>
>> On Thu, Jun 9, 2016 at 5:08 PM, Hongmei Zong  wrote:
>>
>>> Hi nikolai,
>>>
>>> Thank you very much for prompt reply!
>>>
>>> I did not find the ignite-config.xml file under my ignite home directory(
>>>  /u/hongmei/apache-ignite/  ).
>>>
>>> I find a "default-config.xml" at the path:
>>> /u/hongmei/apache-ignite/config/default-config.xml
>>>
>>> 
>>>
>>>
>>> 
>>>
>>>
>>> <beans xmlns="http://www.springframework.org/schema/beans"
>>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>        xsi:schemaLocation="
>>>        http://www.springframework.org/schema/beans
>>>        http://www.springframework.org/schema/beans/spring-beans.xsd">
>>>
>>>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>>>         <property name="discoverySpi">
>>>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>>>                 <property name="ipFinder">
>>>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>>                         <property name="addresses">
>>>                             <list>
>>>                                 <value>c5hdpe001.c5.runwaynine.com:47500..47509</value>
>>>                                 <value>c5hdpe002.c5.runwaynine.com:47500..47509</value>
>>>                                 <value>c5hdpe003.c5.runwaynine.com:47500..47509</value>
>>>                             </list>
>>>                         </property>
>>>                     </bean>
>>>                 </property>
>>>             </bean>
>>>         </property>
>>>     </bean>
>>> </beans>
>>>
>>>
>>>
>>> The "cluster.properties" at the path:
>>> /u/hongmei/apache-ignite/config/cluster.properties
>>>
>>> # The number of nodes in the cluster.
>>>
>>> IGNITE_NODE_COUNT=2
>>>
>>>
>>> # The number of CPU Cores for each Apache Ignite node.
>>>
>>> IGNITE_RUN_CPU_PER_NODE=1
>>>
>>>
>>> # The number of Megabytes of RAM for each Apache Ignite node.
>>>
>>> IGNITE_MEMORY_PER_NODE=2048
>>>
>>>
>>> # The version ignite which will be run on nodes.
>>>
>>> IGNITE_VERSION=1.6.0
>>>
>>>
>>> # The hdfs directory which will be used for saving Apache Ignite
>>> disbributives.
>>>
>>> IGNITE_RELEASES_DIR=/u

Re: Ignite for Spark on YARN Deployment

2016-06-09 Thread Nikolai Tikhonov
finitionReader.java:303)
> at
> org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:379)
> ... 9 more
> *Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 1;
> Content is not allowed in prolog.*
> at
> com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:198)
> at
> com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:177)
> at
> com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:441)
> at
> com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:368)
> at
> com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1436)
> at
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:999)
> at
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:606)
> at
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:117)
> at
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
> at
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:848)
> at
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:777)
> at
> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:141)
> at
> com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:243)
> at
> com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:347)
> at
> org.springframework.beans.factory.xml.DefaultDocumentLoader.loadDocument(DefaultDocumentLoader.java:76)
> at
> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadDocument(XmlBeanDefinitionReader.java:428)
> at
> org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:390)
> ... 12 more
> *Failed to start grid: Failed to instantiate Spring XML application
> context 
> *[springUrl=file:/disk/12/hadoop/yarn/local/usercache/hongmei/appcache/application_1464374946035_27403/container_e24_1464374946035_27403_01_110493/./ignite-config.xml/,
> err=Line 1 in XML document from URL
> [file:/disk/12/hadoop/yarn/local/usercache/hongmei/appcache/application_1464374946035_27403/container_e24_1464374946035_27403_01_110493/./ignite-config.xml/]
> is invalid; nested exception is org.xml.sax.SAXParseException; lineNumber:
> 1; columnNumber: 1; Content is not allowed in prolog.]
>
>
> Hongmei
>
>
> On Thu, Jun 9, 2016 at 6:04 AM, Nikolai Tikhonov 
> wrote:
>
>> *your_address1:47500..47510,your_address2:47500..47510
>>> and your_address3:47500..47510 are the YARN master_host address, right?*
>>>
>>
>> No, this addresses hosts on which deploy YARN cluster. For example, you
>> have YARN cluster which contains two servers: 10.0.0.1 and 10.0.0.2. In
>> this case you will have the following configuration:
>>
>> ipFinder.setAddresses(Arrays.asList("10.0.0.1:47500..47510",
>> "10.0.0.2:47500..47510"));
>>
>>
>


Re: Ignite for Spark on YARN Deployment

2016-06-09 Thread Nikolai Tikhonov
>
> *your_address1:47500..47510,your_address2:47500..47510
> and your_address3:47500..47510 are the YARN master_host address, right?*
>

No, these are the addresses of the hosts on which the YARN cluster is deployed.
For example, you have a YARN cluster which contains two servers: 10.0.0.1 and
10.0.0.2. In this case you will have the following configuration:

ipFinder.setAddresses(Arrays.asList("10.0.0.1:47500..47510", "10.0.0.2:47500..47510"));


Re: Ignite for Spark on YARN Deployment

2016-06-08 Thread Nikolai Tikhonov
Hi Hongmei Zong!

The client node which is started from IgniteContext can't find the server nodes. By
default the ignite integration with YARN uses TcpDiscoveryVmIpFinder (if you
don't use another ip finder in your configuration).
In that case you should set the ip finder in IgniteContext. The following
code snippet shows how to do it in Java:

IgniteConfiguration cfg = new IgniteConfiguration();

TcpDiscoverySpi spi = new TcpDiscoverySpi();

TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("your_address1:47500..47510",
"your_address2:47500..47510",
"your_address3:47500..47510"));

spi.setIpFinder(ipFinder);

cfg.setDiscoverySpi(spi);
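
From the spark-shell the resulting configuration would then be passed into the
IgniteContext closure. A Scala sketch with the same placeholder addresses:

import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.IgniteContext
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder

val ic = new IgniteContext[Integer, Integer](sc, () => {
  val cfg = new IgniteConfiguration()
  val spi = new TcpDiscoverySpi()
  val ipFinder = new TcpDiscoveryVmIpFinder()
  ipFinder.setAddresses(java.util.Arrays.asList(
    "your_address1:47500..47510",
    "your_address2:47500..47510",
    "your_address3:47500..47510"))
  spi.setIpFinder(ipFinder)
  cfg.setDiscoverySpi(spi)
  cfg
})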


On Wed, Jun 8, 2016 at 6:31 PM, Hongmei Zong  wrote:

> Hi Denis,
>
> I tried testing Ignite as the following steps:
>
> Background information:
> 1. Our Spark is running on YARN deployment; There are three Master hosts
> and many Worker nodes and three client nodes in the Spark clusters.
> 2. I installed Ignite on one of the client nodes and can launch
> Ignite-shell locally on this client node. Since it is for testing purposes, I did
> not install Ignite on any master nodes or worker nodes.
> 3. I logged into the client node on which Ignite was installed and launched
> the Ignite YARN application using the following command:
>
> hadoop jar
> /u/hongmei/apache-ignite/libs/optional/ignite-yarn/ignite-yarn-1.6.0.jar
> /u/hongmei/apache-ignite/libs/optional/ignite-yarn/ignite-yarn-1.6.0.jar
> /u/hongmei/apache-ignite/config/cluster.properties
>
> I open the UI console for the Ignite YARN application, and it shows that
> Ignite YARN is running, Containers, CPU Cores, Memory are all allocated by
> YARN. The screenshot is as follows:
>
> application_1464374946035_26956
> 
> hongmei ignition YARN default Wed Jun 8 10:21:04 -0400 2016 N/A RUNNING
> UNDEFINED 12 12 34816 9.6 3.8
> ApplicationMaster
> 
>
> 4. I open another terminal, log into the same client node and run the
> Ignite for Spark Shell command as follows:
> The Spark shell started
>
> /usr/bin/spark-shell --jars /u/hongmei/apache-ignite/libs/
> ignite-core-1.6.0.jar,/u/hongmei/apache-ignite/libs/optional/ignite-spark/
> ignite-spark-1.6.0.jar,/u/hongmei/apache-ignite/libs/cache-api-1.0.0.jar,
> /u/hongmei/apache-ignite/libs/optional/ignite-log4j/
> ignite-log4j-1.6.0.jar,
> /u/hongmei/apache-ignite/libs/optional/ignite-log4j/log4j-1.2.17.jar
> --packages
> org.apache.ignite:ignite-spark:1.6.0,org.apache.ignite:ignite-spring:1.6.0
>
> The Spark shell launched successfully and I use these two commands
> to import the ignite spark files:
>
> import org.apache.ignite.spark._import org.apache.ignite.configuration._
>
>
> Next I create an instance of Ignite context as the following syntax:
>
> val ic = new IgniteContext[Integer, Integer](sc, () => new 
> IgniteConfiguration())
>
>
> I got the following message, so I am stuck at this point:
>
> 16/06/08 10:28:27 WARN TcpDiscoverySpi: IP finder returned empty addresses
> list. Please check IP finder configuration and make sure multicast works on
> your network. Will retry every 2 secs.
>
> So I am stuck at this point.
>
> The other scenario is when I run the command listed in Step 4: the Spark shell
> cannot be launched successfully; it has the status of “Accept” but never gets a
> chance to run.
>
> Any good suggestions?? Is there anything wrong with my test procedure??
> Thanks in advance!
>
> Mei
>
>
> On Jun 7, 2016, at 10:26 AM, Denis Magda  wrote:
>
> Hi,
>
> I’m not an expert in this area however have you tried to specify a Spark
> master like the following documentation says?
>
> https://apacheignite-fs.readme.io/docs/testing-integration-with-spark-shell#working-with-spark-shell
>
> If you did try please share the full logs, someone from the community will
> respond.
>
> —
> Denis
>
> On Jun 6, 2016, at 5:58 PM, Hongmei Zong  wrote:
>
> Hi there,
>
> I would like to use "Ignite for Spark" to save the states of Spark jobs in
> memory so those states can be used by later jobs. For Shared Deployment,
> the document only offers two ways to deploy an Ignite cluster. The first is the
> standalone deployment, the second is MESOS deployment. But our Spark clusters
> are running on YARN. My question is: is it possible to run Ignite for Spark
> on a YARN deployment???
>
> I downloaded and installed Ignite on my machine. Next, I referenced the
> link
> below for YARN Deployment.
> http://apacheignite.gridgain.org/docs/yarn-deployment
>
> I created the cluster.properties file and ran the application using the
> command:hadoop jar
> /u/hongmei/apache-ignite/libs/optional/ignite-yarn/ignite-yarn-1.6.0.jar
> /u/hongmei/apache-ignite/libs/optional/ignite-yarn/ignite-yarn-1.6.0.jar
> /u/hongmei/apache-ignite/config/cluster.properties
>
> From the YARN console, the YARN ignite application works ok. It shows
> running

Re: Using Ignite within within networks without internet egress

2016-06-07 Thread Nikolai Tikhonov
Hi, Haithem Turki!

If your yarn cluster is running in a network without internet access, you can use
the IGNITE_PATH property. The property allows using an Apache Ignite build from
HDFS. Also, the error message isn't clear, so I've created ticket [1] for this
issue.

1. https://issues.apache.org/jira/browse/IGNITE-3268

On Tue, May 31, 2016 at 11:58 PM, Haithem Turki 
wrote:

> Hello,
>
> When deploying Ignite on YARN in a network without outbound internet
> access, I run into the following issue on startup:
>
> Exception in thread "main" java.lang.RuntimeException: Failed update
> ignite.
> at
> org.apache.ignite.yarn.IgniteProvider.updateIgnite(IgniteProvider.java:243)
> at org.apache.ignite.yarn.IgniteProvider.getIgnite(IgniteProvider.java:93)
> at
> org.apache.ignite.yarn.IgniteYarnClient.getIgnite(IgniteYarnClient.java:194)
> at org.apache.ignite.yarn.IgniteYarnClient.main(IgniteYarnClient.java:84)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: java.net.ConnectException: Connection timed out
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at java.net.Socket.connect(Socket.java:528)
> at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
> at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
> at sun.net.www.http.HttpClient.(HttpClient.java:211)
> at sun.net.www.http.HttpClient.New(HttpClient.java:308)
> at sun.net.www.http.HttpClient.New(HttpClient.java:326)
> at
> sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:997)
> at
> sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:933)
> at
> sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:851)
> at
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301)
> at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:468)
> at
> org.apache.ignite.yarn.IgniteProvider.updateIgnite(IgniteProvider.java:220)
> ... 9 more
>
> Is there a supported way of disabling the download?
>
> As a separate bug, even with internet access, you run into the following
> issue during startup:
>
> Exception in thread "main" java.io.FileNotFoundException: File
> ignite-releases/gridgain-community-fabric-1.5.22.zip does not exist
>
> at
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:534)
>
> at
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
>
> at
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
>
> at
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:424)
>
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:340)
>
> at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1944)
>
> at
> org.apache.ignite.yarn.utils.IgniteYarnUtils.copyLocalToHdfs(IgniteYarnUtils.java:86)
>
> at org.apache.ignite.yarn.IgniteProvider.getIgnite(IgniteProvider.java:105)
>
> at
> org.apache.ignite.yarn.IgniteYarnClient.getIgnite(IgniteYarnClient.java:194)
>
> at org.apache.ignite.yarn.IgniteYarnClient.main(IgniteYarnClient.java:84)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:606)
>
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
>
> Renaming the downloaded jar from
> ignite-releases/gridgain-professional-fabric-1.5.22.zip to
> gnite-releases/gridgain-community-fabric-1.5.22.zip seems to solve the
> issue.
>
> - Haithem
>


Re: IGFS YARN setup

2016-05-30 Thread Nikolai Tikhonov
Hi, Haithem Turki!

* Seems like dynamic allocation isn't supported? Wanted to get a sense of
>> whether this was in the roadmap
>>
>
Could you please describe in more detail what you want from dynamic
allocation?


> * Since YARN allocates containers at random it's pretty onerous to figure
>> out which hostnames have Ignite nodes running on them and specifying those
>> in the URL. For now I have TCP enabled (Ignite doesn't seem to die on port
>> conflicts if multiple nodes are running on the same machine) and I guess I
>> can set up a reverse proxy so that I can point towards a stable URL but
>> it's not great / doesn't scale well so I was wondering if there were other
>> suggestions on how to configure discovery (maybe spin up a local node
>> outside of YARN that leverages the cluster discovery?)
>>
>
I've created a ticket and you can track its status there [1]. For now I don't see a
solution which looks more elegant than the one you describe. Yes, you can start an
ignite node outside of the YARN cluster and use it as a stable URL.

[1]  https://issues.apache.org/jira/browse/IGNITE-3214


Re: Exception in Kerberos Yarn cluster

2015-11-16 Thread Nikolai Tikhonov
Hi,

I've created a JIRA issue and think that this improvement will be released in
1.6. You can track its status here:
https://issues.apache.org/jira/browse/IGNITE-1922



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Exception-in-Kerberos-Yarn-cluster-tp1950p1967.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

