Re: Re: about mr accelerator question.

2016-03-22 Thread l...@runstone.com
I am glad to tell you that the problem has been solved, thanks a lot. However, the
performance improved by only 300%; are there any other good configuration ideas?
Another problem is that I am not able to track the jobs the way I can with the YARN
framework, so I cannot count the jobs or view the state of those that have finished.
Do you have any suggestions?

The Ignite config (only fragments of the Spring XML survive in the list archive):

    Spring file for Ignite node configuration with IGFS and Apache Hadoop
    map-reduce support enabled.
    Ignite node will start with this configuration by default.

    TCP discovery addresses: *.*.*.*, *.*.*.*:47500..47509

l...@runstone.com 
北京润通丰华科技有限公司
李宜明 like wind exist
Tel: 13811682465
 
From: Vladimir Ozerov
Date: 2016-03-17 13:37
To: user
Subject: Re: about mr accelerator question.
Hi,
 
The fact that you can work with a 29G cluster with only 8G of memory might be
caused by the following things:
1) Your job doesn't use all the data from the cluster and hence only part of it
is cached. This is the most likely case.
2) You have an eviction policy configured for the IGFS data cache.
3) Or maybe you use offheap memory.
Please provide the full XML configuration and we will be able to understand it.
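
As an illustration of point 2, an eviction policy can be configured on a cache
roughly like this (a minimal Java sketch; the cache name "igfs-data", the class
name and the 100,000-entry limit are placeholders, not values from your setup):

import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class DataCacheEvictionSketch {
    public static CacheConfiguration<Object, Object> dataCacheCfg() {
        // Placeholder cache name.
        CacheConfiguration<Object, Object> cfg = new CacheConfiguration<>("igfs-data");

        // Evict least-recently-used entries once the cache holds more than 100,000 of them.
        cfg.setEvictionPolicy(new LruEvictionPolicy<Object, Object>(100_000));

        return cfg;
    }
}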
 
Anyway, your initial question was about out-of-memory. Could you provide the
exact error message? Is it about heap memory or maybe PermGen?
 
As for execution time, this depends on your workload. If there are lots of map
tasks and very active work with data, you will see an improvement in speed. If
there are lots of file system operations (e.g. mkdirs, move, etc.) and very few
map jobs, chances are there will be no speedup at all. Provide more details on
the job you test and the type of data you use and we will be able to give you
more ideas on what to do.
 
Vladimir.
 
 
--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/about-mr-accelerator-question-tp3502p3552.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite server stop unexpectedly

2016-03-22 Thread 张鹏鹏
I am just learning Ignite, so maybe this is a dumb question.

I want to test the partitioned cache mode, so I start three Ignite nodes on
three servers. This is the config:






(Only fragments of the Spring XML survive in the list archive.)

    TCP discovery addresses:
    10.20.30.91
    10.20.30.92
    10.20.30.93


I start the nodes using this command:

./ignite.sh ../../ignite-config.xml >> /opt/ignite.log &



I start a client node to write some data to the server nodes, using simple code
copied from the docs. Then I kill one server node and start the client again. I
find that one of the remaining Ignite servers stops unexpectedly.

Sometimes I find exceptions like this:


   [10:35:40,259][SEVERE][tcp-disco-msg-worker-#2%null%][TcpDiscoverySpi]
TcpDiscoverSpi's message worker thread failed abnormally. Stopping the node
in order to prevent cluster wide instability.
java.lang.InterruptedException
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
at
java.util.concurrent.LinkedBlockingDeque.pollFirst(LinkedBlockingDeque.java:522)
at
java.util.concurrent.LinkedBlockingDeque.poll(LinkedBlockingDeque.java:684)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:5779)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2161)
at
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
[10:35:40] Topology snapshot [ver=9, servers=1, clients=0, CPUs=12,
heap=1.0GB]
[10:35:40] Ignite node stopped OK [uptime=00:07:53:07]

Sometimes I just find one line:
   Ignite node stopped OK


Re: SpiQuery fails with exception

2016-03-22 Thread vkulichenko
Kamil,

Can you please share your thoughts in the ticket?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SpiQuery-fails-with-exception-tp3615p3627.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: SpiQuery fails with exception

2016-03-22 Thread knowak
Hi Val -

There are a few more potential issues we noticed when using Spi indexing in
1.5.0.final:

- method query() in the IndexingSpi interface returns
Iterator<Cache.Entry<?, ?>> but elements of type Map.Entry are expected
further down the stack (i.e. in IgniteCacheProxy:528). We had to create a type
that extends both Map.Entry and Cache.Entry to get it working (a sketch of such
an adapter follows this list).

- when IndexingSpi.store() throws an exception on commit, the error is not
propagated to the client node and the transaction ends up in state COMMITTED as
if no error occurred. On the server node, though, the transaction state gets
marked as UNKNOWN.

- setters in SpiQuery return the SqlQuery type (rather than SpiQuery), which
means they can't be used in a method-chaining manner.
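
For illustration, such an adapter could look roughly like this (a sketch; the
class name is made up and this is not our exact implementation):

import java.util.Map;
import javax.cache.Cache;

// Wraps a key/value pair so that the same element satisfies both the
// Cache.Entry type returned by IndexingSpi.query() and the Map.Entry
// cast performed further down the stack.
public class CacheMapEntry<K, V> implements Cache.Entry<K, V>, Map.Entry<K, V> {
    private final K key;
    private V val;

    public CacheMapEntry(K key, V val) {
        this.key = key;
        this.val = val;
    }

    @Override public K getKey() { return key; }

    @Override public V getValue() { return val; }

    // Required by Map.Entry.
    @Override public V setValue(V val) {
        V old = this.val;
        this.val = val;
        return old;
    }

    // Required by Cache.Entry.
    @Override public <T> T unwrap(Class<T> clazz) {
        if (clazz.isAssignableFrom(getClass()))
            return clazz.cast(this);

        throw new IllegalArgumentException("Cannot unwrap to " + clazz);
    }
}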

Kamil



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SpiQuery-fails-with-exception-tp3615p3626.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: SpiQuery fails with exception

2016-03-22 Thread vkulichenko
Kamil,

I reproduced the issue and created a ticket:
https://issues.apache.org/jira/browse/IGNITE-2881. Someone in the community
will pick it up and fix it.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SpiQuery-fails-with-exception-tp3615p3625.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: How to filter result of query by row number?

2016-03-22 Thread vkulichenko
Hi Nam,

Can you please properly subscribe to the mailing list so that the community
receives email notifications? Here is the instruction:
http://apache-ignite-users.70518.x6.nabble.com/mailing_list/MailingListOptions.jtp?forum=1


Nam Nguyen wrote
> In my case, I want to load the data by paging, but I cannot find any
> solution.
> 
> For example: I have 10 rows in the result of a query, but I only want to get
> rows 7 to 8.

You can use the 'LIMIT ... OFFSET ...' clause for this. In your example the
query would look like this:

SELECT ... LIMIT 1 OFFSET 7
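
To run it from Java you can use SqlFieldsQuery, roughly like this (a sketch; the
cache variable, table name "Person" and column "name" are placeholders for your
own schema):

import java.util.List;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class PagingSketch {
    // Assumes a cache whose value type ("Person" here) is indexed for SQL queries.
    static void printPage(IgniteCache<?, ?> cache) {
        // OFFSET skips the first 7 rows, LIMIT caps the result at one row.
        // Paging is only deterministic when the query has an ORDER BY.
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT name FROM Person ORDER BY name LIMIT 1 OFFSET 7");

        try (QueryCursor<List<?>> cur = cache.query(qry)) {
            for (List<?> row : cur)
                System.out.println(row.get(0));
        }
    }
}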

Let me know if this works for you.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-filter-result-of-query-by-row-number-tp3621p3624.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Is there a way to get original object reference from IgniteCache?

2016-03-22 Thread vkulichenko
Jimmy,

Ignite is a distributed system and the approach you're describing doesn't make
much sense for it. If the value is fetched from a remote node, you will always
get a copy. If you get the value locally, you can force Ignite to return the
stored instance by setting the CacheConfiguration.setCopyOnRead(false) property,
but this should be used only in read-only scenarios. It's not safe to modify
this instance, because the serialized form will not be updated until you call
cache.put(), so anyone who reads it will potentially get the old value.
Additionally, it can be serialized concurrently, which can cause data
corruption.
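
For reference, the flag is set on the cache configuration, roughly like this (a
sketch; the cache name and key/value types are placeholders, and again this is
only safe for data that is never modified after being read):

import org.apache.ignite.configuration.CacheConfiguration;

public class CopyOnReadSketch {
    public static CacheConfiguration<Long, Object> cacheCfg() {
        // Placeholder cache name and key/value types.
        CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("userProfiles");

        // Local reads return the stored instance instead of a copy.
        cfg.setCopyOnRead(false);

        return cfg;
    }
}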

I understand that this is a big change, but it looks like you will have to
revisit your architecture and make sure that you use the Ignite API properly.

Makes sense?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-there-a-way-to-get-original-object-reference-from-IgniteCache-tp3611p3623.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Hibernate connection inspite of jdbc?

2016-03-22 Thread vkulichenko
Hi Ravi,

Do you have the IGNITE_HOME environment variable set? If so, please make sure it
points to the correct folder and that it's the same for all participating
processes.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Hibernate-connection-inspite-of-jdbc-tp3412p3622.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: Is there a way to get original object reference from IgniteCache?

2016-03-22 Thread Zhao, Jimmy
Val, thank you for the reply. Although we can get the cache anywhere as you
mentioned, our problem is that UserProfile is not visible to the classes that
will change the User object. BTW, we are trying to integrate Ignite into lots of
our existing applications, and this would result in massive changes. So the best
way for us would be to get the original UserProfile object reference, rather
than a copy of the serialized data.

Jimmy

-Original Message-
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: Monday, March 21, 2016 5:24 PM
To: user@ignite.apache.org
Subject: Re: Is there a way to get original object reference from IgniteCache?

Hi Jimmy,

Ignite stores data in serialized form, i.e. the object that you put is
serialized and saved as a byte array. Having said that, you have to use
IgniteCache.put() to update the cache.

Note that you can always acquire the Ignite instance using the Ignition.ignite()
method. It's static, so you can do this anywhere in the code. Will this work
for you?
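
For example, roughly (a sketch; the cache name "userProfiles" is a placeholder
and Object stands in for your own value class):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class CacheUpdateSketch {
    static void updateProfile(String userId, Object updatedProfile) {
        // Static access to the Ignite instance that was started in this JVM.
        Ignite ignite = Ignition.ignite();

        IgniteCache<String, Object> cache = ignite.cache("userProfiles");

        // The change becomes visible to other readers only after put()
        // re-serializes and stores the value.
        cache.put(userId, updatedProfile);
    }
}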

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Is-there-a-way-to-get-original-object-reference-from-IgniteCache-tp3611p3613.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
E-mail correspondence to and from this address may be subject to the
North Carolina public records laws and if so, may be disclosed.


Re: SpiQuery fails with exception

2016-03-22 Thread knowak
Hi Val,

The exception is thrown every time we run a query or, more specifically, when we
start to iterate over the resulting QueryCursor.

Please find below server and client node code snippets with the configuration
and the SPI query.

// server node
Ignite igniteServerNode = Ignition.start(new IgniteConfiguration()
   (...)
   .setIndexingSpi(new CustomIndexSpi()) // returns an iterator
                                         // (ArrayList.iterator()) on query
   .setCacheConfiguration(
   new CacheConfiguration(Person.class.getSimpleName())
   (...)
   .setCacheMode(CacheMode.PARTITIONED)
   .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)));

// start server node as a separate process


// client node
Ignite igniteClientNode = Ignition.start(new IgniteConfiguration()
   (...)
   .setClientMode(true));

IgniteCache personCache =
   igniteClientNode.getOrCreateCache(Person.class.getSimpleName());
QueryCursor<Cache.Entry<?, ?>> cursor = personCache.query(new
   SpiQuery().setArgs("argument1"));

cursor.forEach(person -> { // throws exception
   LOG.info("... found person {}");
});


Additionally, I am adding the client-side stack trace (the one from the previous
post was raised in the server process).

Exception in thread "main" javax.cache.CacheException: class
org.apache.ignite.IgniteCheckedException: Query execution failed:
GridCacheQueryBean [qry=GridCacheQueryAdapter [type=SPI, clsName=null,
clause=null, filter=null, part=null, incMeta=false,
metrics=GridCacheQueryMetricsAdapter [minTime=0, maxTime=0, sumTime=0,
avgTime=0.0, execs=0, completed=0, fails=0], pageSize=1024, timeout=0,
keepAll=true, incBackups=false, dedup=false, prj=null, keepBinary=false,
subjId=80183557-7672-43e4-90e9-71df4af08b4f, taskHash=0], rdc=null,
trans=null]
   at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1618)
   at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryFutureAdapter.next(GridCacheQueryFutureAdapter.java:181)
   at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy$5.onHasNext(IgniteCacheProxy.java:528)
   at
org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
   at
org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
   at java.lang.Iterable.forEach(Iterable.java:74)
   (…)


Thanks,
Kamil



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SpiQuery-fails-with-exception-tp3615p3619.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Hibernate connection inspite of jdbc?

2016-03-22 Thread Ravi Puri
Yes, it's the same issue.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Hibernate-connection-inspite-of-jdbc-tp3412p3618.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.