Re: HDP, Hive + Ignite

2017-05-15 Thread Alena Melnikova
Hi Ivan,
TEZ was on 6 data nodes. So you're right, I can't reliably estimate the
performance of Ignite MR.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/HDP-Hive-Ignite-tp12195p12868.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: HDP, Hive + Ignite

2017-05-15 Thread Ivan V.
Alena,
wrt 1.: saying 80 vs. 23 sec, do you compare Ignite MR on 1 node vs. Tez
on *1* node as well?

On Mon, May 15, 2017 at 5:03 PM, Alena Melnikova  wrote:

> Ivan,
>
> 1. In my environment Ignite MR works correctly only on one node and it
> works
> slower than TEZ (80 sec vs 23 sec). I guess because of one ignite node. On
> multi node cluster result was incorrect.
>
> 2. "Do I correctly understand that Ignite MR was not used in that
> experiment?"
> Yes, it was TEZ+IGFS.
>
>
>
>


Re: HDP, Hive + Ignite

2017-05-15 Thread Alena Melnikova
Ivan,

1. In my environment Ignite MR works correctly only on one node, and it works
more slowly than TEZ (80 sec vs 23 sec), I guess because of the single Ignite
node. On a multi-node cluster the result was incorrect.

2. "Do I correctly understand that Ignite MR was not used in that
experiment?"
Yes, it was TEZ+IGFS.





Re: HDP, Hive + Ignite

2017-05-15 Thread Evgeniy Stanilovskiy

Ivan, what kind of tests did you run? Please show the SQL queries.
1-node tests for distributed computing look weird.

My observations (on very simplified 1-node environment) show that
Ignite-MR is ~10% faster than TEZ under equal conditions.


On Mon, May 15, 2017 at 1:52 PM, Ivan V.   
wrote:
Hi, Alena, regarding "1) Ignite MR works slower than Hive on TEZ, but
faster than Hive on MR." -- as far as I remember, you have observed
incorrect results with Ignite MR, and we didn't find the reason, just
abandoned that. Performance measurements don't have much sense until
we have correct query results. So, I would say that there we just
don't have results we can trust.

Regarding "Out of memory: Kill process" -- this means that the Ignite node
process requested so much memory that the OS failed to give it. This may be
investigated further -- all the memory limits set for the Ignite node
should be checked and compared to the real memory physically
available on the host. Do I correctly understand that Ignite MR was not
used in that experiment?


On Mon, May 15, 2017 at 9:32 AM, Alena Melnikova  wrote:

Hi Ivan,

You're right. In kernel log there is message: "Out of memory: Kill process
19988 (java)"

Let me sum up, please, correct me if I'm wrong.
If we use Hive + Tez we don't need Hadoop Accelerator because:
1) Ignite MR works slower than Hive on TEZ, but faster than Hive on MR.
2) TEZ+HDFS and TEZ+IGFS work at the same speed. Although TEZ+IGFS can be
faster in queries with intensive I/O (need to test).

Many thanks for your patience and prompt help.
I'm going to try Ignite + Spark, I'll open new topic)




Re: HDP, Hive + Ignite

2017-05-15 Thread Ivan V.
My observations (on very simplified 1-node environment) show that Ignite-MR
is ~10% faster than TEZ under equal conditions.

On Mon, May 15, 2017 at 1:52 PM, Ivan V.  wrote:

> Hi, Alena, regarding "1) Ignite MR works slower than Hive on TEZ, but
> faster than Hive on MR." -- as far as I remember, you have observed
> incorrect results with Ignite MR, and we didn't find the reason, just
> abandoned that. Performance measurements don't have much sense until we
> have correct query results. So, I would say that there we just don't have
> results we can trust.
>
> Regarding "Out of memory: Kill process" -- this means that the Ignite node
> process requested so much memory that the OS failed to give it. This may be
> investigated further -- all the memory limits set for Ignite node should be
> checked and compared to the real memory physically available on the host.
> Do I correctly understand that Ignite MR was not used in that experiment?
>
> On Mon, May 15, 2017 at 9:32 AM, Alena Melnikova  wrote:
>
>> Hi Ivan,
>>
>> You're right. In kernel log there is message: "Out of memory: Kill process
>> 19988 (java)"
>>
>> Let me sum up, please, correct me if I'm wrong.
>> If we use Hive + Tez we don't need Hadoop Accelerator because:
>> 1) Ignite MR works slower than Hive on TEZ, but faster than Hive on MR.
>> 2) TEZ+HDFS and TEZ+IGFS work at the same speed. Although TEZ+IGFS can be
>> faster in queries with intensive I/O (need to test).
>>
>> Many thanks for your patience and prompt help.
>> I'm going to try Ignite + Spark, I'll open new topic)
>>
>>
>>
>>
>
>


Re: HDP, Hive + Ignite

2017-05-15 Thread Ivan V.
Hi, Alena, regarding "1) Ignite MR works slower than Hive on TEZ, but
faster than Hive on MR." -- as far as I remember, you have observed
incorrect results with Ignite MR, and we didn't find the reason, just
abandoned that. Performance measurements don't have much sense until we
have correct query results. So, I would say that there we just don't have
results we can trust.

Regarding "Out of memory: Kill process" -- this means that the Ignite node
process requested so much memory that the OS failed to give it. This may be
investigated further -- all the memory limits set for Ignite node should be
checked and compared to the real memory physically available on the host.
Do I correctly understand that Ignite MR was not used in that experiment?

On Mon, May 15, 2017 at 9:32 AM, Alena Melnikova  wrote:

> Hi Ivan,
>
> You're right. In kernel log there is message: "Out of memory: Kill process
> 19988 (java)"
>
> Let me sum up, please, correct me if I'm wrong.
> If we use Hive + Tez we don't need Hadoop Accelerator because:
> 1) Ignite MR works slower than Hive on TEZ, but faster than Hive on MR.
> 2) TEZ+HDFS and TEZ+IGFS work at the same speed. Although TEZ+IGFS can be
> faster in queries with intensive I/O (need to test).
>
> Many thanks for your patience and prompt help.
> I'm going to try Ignite + Spark, I'll open new topic)
>
>
>
>


Re: HDP, Hive + Ignite

2017-05-14 Thread Alena Melnikova
Hi Ivan,

You're right. In the kernel log there is the message: "Out of memory: Kill
process 19988 (java)"

Let me sum up, please, correct me if I'm wrong.
If we use Hive + Tez we don't need Hadoop Accelerator because:
1) Ignite MR works slower than Hive on TEZ, but faster than Hive on MR.
2) TEZ+HDFS and TEZ+IGFS work at the same speed. Although TEZ+IGFS can be
faster in queries with intensive I/O (need to test).

Many thanks for your patience and prompt help.
I'm going to try Ignite + Spark, I'll open new topic)





Re: HDP, Hive + Ignite

2017-05-13 Thread Ivan V.
Alena,
regarding comparison of your Hive query on TEZ+HDFS vs. TEZ+IGFS: my
experiments show the same results (~58 sec on average) for both; at least
they are not distinguishable within the dispersion.
I suppose in this use case fetching the table data takes negligible time
compared to the overall task processing time.
(I used primary IGFS mode and explicitly loaded the data to avoid any
cold-start effects from non-cached data.)
IGFS may give a noticeable speedup for tasks that really involve heavy disk
I/O, and this one does not seem to be such a task. Also please note that
some file data is cached in memory by the operating system, so even if you
read from disk you are frequently reading from memory, in fact.

On Fri, May 12, 2017 at 3:52 PM, Ivan Veselovsky 
wrote:

> Alena,
> as I understand, the message "19988 Killed "$JAVA"" means that the Ignite
> node process was killed by the operating system. Can you please see the
> kernel log -- what does it say near the node crash time?
>
>
>
>
>
>


Re: HDP, Hive + Ignite

2017-05-12 Thread Ivan Veselovsky
Alena, 
as I understand, the message "19988 Killed "$JAVA"" means that the Ignite
node process was killed by the operating system. Can you please see the
kernel log -- what does it say near the node crash time? 







Re: HDP, Hive + Ignite

2017-05-11 Thread Alena Melnikova
Hi Ivan,

Yes, it helps to avoid NPEs!



Though, from time to time one node dies. Usually this is the node that I
specify when I start the beeline: 
beeline  --hiveconf fs.default.name=igfs://dev-dn1:1050
/home/ignite/apache-ignite-hadoop-1.9.0-bin/bin/ignite.sh: line 170: 19988
Killed "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON}
-DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp
"${CP}" ${MAIN_CLASS} "${CONFIG}"
Full log in previous post (ignite-node-dn1_1.log).

However, average execution time on TEZ (table in HDFS) and TEZ (table in
IGFS) is comparable:
TEZ: 215 sec (6 nodes)
TEZ+IGFS: 207 sec (6 nodes)
I'm waiting for the results of your tests.








Re: HDP, Hive + Ignite

2017-05-11 Thread Ivan Veselovsky
As a workaround for IGNITE-4862, the property
FileSystemConfiguration#perNodeParallelBatchCount can be set to 1.
Setting FileSystemConfiguration#prefetchBlocks to 0 should also help.
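In Spring XML terms this workaround might look roughly as follows. This is
only a sketch: the two property names come from this message, while the bean
layout follows the usual shape of the default-config.xml mentioned later in
the thread, and the IGFS name is a placeholder.

```xml
<!-- Sketch: inside the IgniteConfiguration bean of default-config.xml.
     perNodeParallelBatchCount=1 and prefetchBlocks=0 are the suggested
     IGNITE-4862 workaround; the "igfs" name is a placeholder. -->
<property name="fileSystemConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
            <property name="name" value="igfs"/>
            <property name="perNodeParallelBatchCount" value="1"/>
            <property name="prefetchBlocks" value="0"/>
        </bean>
    </list>
</property>
```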





Re: HDP, Hive + Ignite

2017-05-11 Thread Evgeniy Stanilovskiy

I keep in mind a ticket for TEZ experiments/investigation.


Evgeniy, sure, IGNITE-4862, the link is above.

On Thu, May 11, 2017 at 11:07 AM, Evgeniy Stanilovskiy  
 wrote:

Ivan, do we have appropriate jira ticket?

4. No. We only start experimenting with Tez -- I'm currently setting
it up in my environment to investigate the problems.

Re: HDP, Hive + Ignite

2017-05-11 Thread Evgeniy Stanilovskiy

Ivan, do we have an appropriate Jira ticket?

4. No. We only start experimenting with Tez -- I'm currently setting it
up in my environment to investigate the problems.

Re: HDP, Hive + Ignite

2017-05-10 Thread Ivan Veselovsky
Alena, regarding NPEs in Ignite node logs, this seems to be
https://issues.apache.org/jira/browse/IGNITE-4862 , fixed, but not yet
merged.





Re: HDP, Hive + Ignite

2017-05-10 Thread Alena Melnikova
Hi Ivan,

1. I tried to run an analytical query on a table created in IGFS. Here are a
couple of examples of the errors:
beeline_output_1.log
ignite-node-dn1_1.log
beeline_output_2.log
ignite-node-dn1_2.log

4. We are looking forward to the results of your experiments.

P.S. I sent the email again on 5th May.





Re: HDP, Hive + Ignite

2017-05-05 Thread Ivan V.
1. Please attach full logs.

3. I might suspect the property "shared.classloader", but if it is
definitely set to 'false', and there is no error in the logs, I have no
other ideas at the moment.

4. No. We only start experimenting with Tez -- I'm currently setting it up
in my environment to investigate the problems.

p.s. No, I did not. Can you please send it again.


On Fri, May 5, 2017 at 3:39 PM, Alena Melnikova  wrote:

> Hi Ivan,
>
> 1. I still continue to experiment with the table created in IGFS. Currently
> it works if query is executed only once, then either the ignite node fails
> or error: Exception in thread "igfs-#60%null%"
> java.lang.NullPointerException.
>
> 3. You're right! I didn't restart Visor. What's more, I run one Visor as
> root (and forgot about it) and one as user ignite. After killing all of
> them
> new topology starts from ver=1.
> But it doesn't help for correct Ignite MR. Topology is correct, there is no
> any error in logs, but the result is wrong. To be honest, I decided to stop
> experiments with IgniteMR and focus on IGFS+TEZ or Spark.
> beeline_ignite_mr.log
> ignite-node-dn1.log
> ignite-node-dn2.log
>
> 4. I think Evgeniy said about comparison TEZ vs TEZ+IGFS. So I join the
> question:
> did you conduct some tests Ignite + TEZ?
>
> p.s. Ivan, did you get my email about Hadoop meetup? I sent it couple of
> days ago.
>
>
>
>


Re: HDP, Hive + Ignite

2017-05-05 Thread Alena Melnikova
Hi Ivan,

1. I am still experimenting with the table created in IGFS. Currently it
works only if the query is executed once; after that either the Ignite node
fails or I get the error: Exception in thread "igfs-#60%null%"
java.lang.NullPointerException.

3. You're right! I didn't restart Visor. What's more, I ran one Visor as
root (and forgot about it) and one as user ignite. After killing all of
them, the new topology starts from ver=1.
But it doesn't make Ignite MR correct: the topology is correct and there are
no errors in the logs, but the result is wrong. To be honest, I decided to
stop experimenting with Ignite MR and focus on IGFS+TEZ or Spark.
beeline_ignite_mr.log
ignite-node-dn1.log
ignite-node-dn2.log

4. I think Evgeniy was asking about a comparison of TEZ vs TEZ+IGFS. So I
join the question: did you run any tests with Ignite + TEZ?

P.S. Ivan, did you get my email about the Hadoop meetup? I sent it a couple
of days ago.





Re: HDP, Hive + Ignite

2017-05-04 Thread Ivan Veselovsky
Hi, Alena, 

3. Looks like we have an answer as to why the initial topology version is so
high: you possibly do not restart the Visor process, is that true? If so,
please start the next experiment with all nodes stopped, as well as the
Visor process. After that the initial topology version should start at 1; it
is not persisted anywhere.
We should make sure each new started server joins successfully on the 1st
attempt, and that no "Node left topology" message ever appears. If it does,
we need to investigate why before further experiments.

4. Ignite MR also performs all intermediate operations in memory, so I don't
see any obvious reason why an Ignite MR vs. TEZ comparison would be
senseless. I suppose the above results (23 sec on Tez vs. 80 sec on Ignite
MR) can be explained by the fact that Ignite was running in 1-node mode,
while Tez was using several (6?) nodes.





Re: HDP, Hive + Ignite

2017-05-04 Thread Alena Melnikova
Hi Ivan,

1. Need more time for experiments... 

3. Yes, logs are full. I started every node with this command:
$IGNITE_HOME/bin/ignite.sh -v -J"-Xms10g -Xmx10g -XX:MaxMetaspaceSize=4g"
2>&1 | tee
/home/ignite/apache-ignite-hadoop-1.9.0-bin/work/log/ignite-node-dnX.log
I thought ver=72 because I did 72 attempts))
I don't know how to reset this counter. I stop Ignite nodes with Ctrl-C or
Ctrl-Z (then kill the PID), or in Visor with kill -k.

Look, there is no Ignite process, but now ver=109:
*[ignite@dev-dn1 ~]$ ps -ef | grep ignite*
ignite    6796 21325  0 15:04 pts/5    00:00:00 ps -ef
ignite    6797 21325  0 15:04 pts/5    00:00:00 grep ignite
root     21324 21290  0 May02 pts/5    00:00:00 su - ignite
ignite   21325 21324  0 May02 pts/5    00:00:00 -bash
root     27287 17525  0 May02 pts/1    00:00:00 su - ignite
ignite   27288 27287  0 May02 pts/1    00:00:00 -bash
*[ignite@dev-dn1 ~]$ $IGNITE_HOME/bin/ignite.sh -v -J"-Xms10g -Xmx10g
-XX:MaxMetaspaceSize=4g" 2>&1 | tee
/home/ignite/apache-ignite-hadoop-1.9.0-bin/work/log/ignite-node.log*
Ignite Command Line Startup, ver. 1.9.0#20170302-sha1:a8169d0a
2017 Copyright(C) Apache Software Foundation

[15:04:08,630][INFO ][main][IgniteKernal] 

>>>    __________  ________________
>>>   /  _/ ___/ |/ /  _/_  __/ __/
>>>  _/ // (7 7    // /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/
>>> 
>>> ver. 1.9.0#20170302-sha1:a8169d0a
>>> 2017 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

[15:04:08,630][INFO ][main][IgniteKernal] Config URL:
file:/home/ignite/apache-ignite-hadoop-1.9.0-bin/config/default-config.xml
[15:04:08,630][INFO ][main][IgniteKernal] Daemon mode: off
[15:04:08,631][INFO ][main][IgniteKernal] OS: Linux 2.6.32-696.el6.x86_64
amd64
[15:04:08,631][INFO ][main][IgniteKernal] OS user: ignite
[15:04:08,631][INFO ][main][IgniteKernal] PID: 6891
[15:04:08,631][INFO ][main][IgniteKernal] Language runtime: Java Platform
API Specification ver. 1.8
[15:04:08,631][INFO ][main][IgniteKernal] VM information: Java(TM) SE
Runtime Environment 1.8.0_101-b13 Oracle Corporation Java HotSpot(TM) 64-Bit
Server VM 25.101-b13
[15:04:08,633][INFO ][main][IgniteKernal] VM total memory: 9.6GB
[15:04:08,633][INFO ][main][IgniteKernal] Remote Management [restart: on,
REST: on, JMX (remote: on, port: 49199, auth: off, ssl: off)]
[15:04:08,633][INFO ][main][IgniteKernal]
IGNITE_HOME=/home/ignite/apache-ignite-hadoop-1.9.0-bin
[15:04:08,633][INFO ][main][IgniteKernal] VM arguments: [-Xms1g, -Xmx1g,
-XX:+AggressiveOpts, -XX:MaxMetaspaceSize=256m,
-Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/,
-DIGNITE_QUIET=false,
-DIGNITE_SUCCESS_FILE=/home/ignite/apache-ignite-hadoop-1.9.0-bin/work/ignite_success_6a954010-244e-42bb-9cf7-b4fbbf39519a,
-Dcom.sun.management.jmxremote, -Dcom.sun.management.jmxremote.port=49199,
-Dcom.sun.management.jmxremote.authenticate=false,
-Dcom.sun.management.jmxremote.ssl=false,
-DIGNITE_HOME=/home/ignite/apache-ignite-hadoop-1.9.0-bin,
-DIGNITE_PROG_NAME=/home/ignite/apache-ignite-hadoop-1.9.0-bin/bin/ignite.sh,
-Xms10g, -Xmx10g, -XX:MaxMetaspaceSize=4g]
[15:04:08,634][INFO ][main][IgniteKernal] Configured caches
['ignite-marshaller-sys-cache', 'ignite-sys-cache',
'ignite-hadoop-mr-sys-cache', 'ignite-atomics-sys-cache', 'igfs-meta',
'igfs-data']
[15:04:08,638][INFO ][main][IgniteKernal] 3-rd party licenses can be found
at: /home/ignite/apache-ignite-hadoop-1.9.0-bin/libs/licenses
[15:04:08,725][INFO ][main][IgnitePluginProcessor] Configured plugins:
[15:04:08,725][INFO ][main][IgnitePluginProcessor]   ^-- None
[15:04:08,725][INFO ][main][IgnitePluginProcessor] 
[15:04:08,786][INFO ][main][TcpCommunicationSpi] Successfully bound
communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0,
selectorsCnt=4, selectorSpins=0, pairedConn=false]
[15:04:08,790][WARN ][main][TcpCommunicationSpi] Message queue limit is set
to 0 which may lead to potential OOMEs when running cache operations in
FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and
receiver sides.
[15:04:08,810][WARN ][main][NoopCheckpointSpi] Checkpoints are disabled (to
enable configure any GridCheckpointSpi implementation)
[15:04:08,842][WARN ][main][GridCollisionManager] Collision resolution is
disabled (all jobs will be activated upon arrival).
[15:04:08,846][WARN ][main][NoopSwapSpaceSpi] Swap space is disabled. To
enable use FileSwapSpaceSpi.
[15:04:08,847][INFO ][main][IgniteKernal] Security status
[authentication=off, tls/ssl=off]
[15:04:09,292][INFO ][main][GridTcpRestProtocol] Command protocol
successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
[15:04:09,718][INFO ][main][IpcServerTcpEndpoint] IPC server loopback
endpoint started [port=10500]
[15:04:09,720][INFO ][main][IpcServerTcpEndpoint] IPC server loopback
endpoint started [port=11400]
[15:04:09,729][INFO ][main][HadoopProcessor] HADOOP_HOME is set to
/usr/hdp/current/hadoop-client
[15:04:09,730][INFO ][

Re: HDP, Hive + Ignite

2017-05-04 Thread Ivan Veselovsky
Alena, I suppose the incorrect results in your environment may be a
consequence of topology troubles. In any case, to get stable and
reproducible results you need a stable Ignite cluster topology. To achieve
that I would recommend the following steps:
1) kill all the Ignite processes on all the nodes (you may see them with "ps
-ef | grep ignite" in Unix shell).
2) start 1st Ignite node (preferably with "-v" option, and with a dedicated
console, redirecting the output to a file: "./ignite.sh -v ... |& tee
mylogfile " ) -- find first "Topology snapshot" line in the log. It should
say "Topology snapshot [ver=1, servers=1, clients=0, CPUs=..." . If topology
version is different from 1, that means something is wrong, possibly there
is a stale Ignite process this one attempts to join.
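The two steps can be sketched as a shell session. Treat it as a sketch only:
the paths, the log file name, and the pkill pattern are placeholders, and
the grep pattern matches the "Topology snapshot" line quoted above.

```shell
# Step 1: kill any stale Ignite processes on every node
# (placeholder pattern, per the "ps -ef | grep ignite" hint above).
pkill -f 'ignite.sh' || true

# Step 2: start the first node verbosely, teeing output to a log file.
"$IGNITE_HOME/bin/ignite.sh" -v 2>&1 | tee mylogfile &

# Then check that the first topology snapshot reports ver=1, servers=1.
grep -m1 'Topology snapshot' mylogfile
```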





Re: HDP, Hive + Ignite

2017-05-03 Thread Ivan Veselovsky
WRT item 2: I cannot reproduce the issue yet. Each time I get correct data:
OK
2017-03-15  36564815
2017-03-16  36872463
2017-03-17  36900812
2017-03-18  36904198
2017-03-19  3630
2017-03-20  37029921
Time taken: 69.603 seconds, Fetched: 6 row(s)






Re: HDP, Hive + Ignite

2017-05-03 Thread Ivan Veselovsky
Hi, Alena,

1. E.g. can you explicitly specify igfs:// as the table data location, like
create table .. stored as orc location 'igfs://./path/test_ignite';  
?

2. Ok, thanks, will try to reproduce this using the provided data.

3. Here is something very strange. Are these logs full and do they reflect
cluster startup from the beginning (from the state when no node is running)
? For example, it is unclear, why the topology version is 72 at the moment
of the 1st node start: 

[13:11:10,358][INFO ][main][GridDiscoveryManager] Topology snapshot
[*ver=72*, servers=1, clients=0, CPUs=8, heap=10.0GB]

4. Can you please specify more exactly which of Evgeniy's comments you're
referring to?

Regards, 
Ivan Veselovsky.





Re: HDP, Hive + Ignite

2017-05-03 Thread Alena Melnikova
1. How can I explicitly load a Hive table into IGFS without using the Java
API? (I don't know Java.)
I use DUAL_SYNC. Here is my config:
default-config.xml

2. I attach sample data (test_ignite.rar). These are ORC files for a
partitioned Hive table.
create table test_ignite (column1 double) partitioned by (calday string)
stored as orc location '/path/test_ignite';
alter table test_ignite add partition (calday='2017-03-15');
alter table test_ignite add partition (calday='2017-03-16');
alter table test_ignite add partition (calday='2017-03-17');
alter table test_ignite add partition (calday='2017-03-18');
alter table test_ignite add partition (calday='2017-03-19');
alter table test_ignite add partition (calday='2017-03-20');

select calday, count(*) from test_ignite where calday between '2017-03-15'
and '2017-03-20' group by calday order by calday;
Correct result on one ignite node:
+-------------+-----------+
|   calday    |    _c1    |
+-------------+-----------+
| 2017-03-15  | 36564815  |
| 2017-03-16  | 36872463  |
| 2017-03-17  | 36900812  |
| 2017-03-18  | 36904198  |
| 2017-03-19  | 3630      |
| 2017-03-20  | 37029921  |
+-------------+-----------+
6 rows selected (49.88 seconds)

Wrong result on two ignite nodes:
+-------------+-----------+
|   calday    |    _c1    |
+-------------+-----------+
| 2017-03-16  | 24582164  |
| 2017-03-17  | 12301380  |
| 2017-03-18  | 36904198  |
| 2017-03-19  | 12332322  |
+-------------+-----------+
4 rows selected (45.199 seconds)
test_ignite.rar

3. I started ignite nodes sequentially on 6 servers (dn1, dn2, dn3, dn4,
dn5, dn6). They formed 3 clusters:
dn1-dn3-dn6
dn2-dn4
dn5
ignite-node-dn1.log
ignite-node-dn2.log
ignite-node-dn3.log
ignite-node-dn4.log
ignite-node-dn5.log
ignite-node-dn6.log


4. As regards Evgeniy's comment, it sounds reasonable, but I'm trying to
cache some hot Hive tables so that different users' queries run faster
because they don't need to read the same data from disk. I still hope this
is possible)







Re: HDP, Hive + Ignite

2017-05-02 Thread Ivan Veselovsky
1. Please make sure IGFS is really used: e.g. you may explicitly locate some
table data on IGFS and run the queries on it. IGFS statistics can partially
be observed through Visor.
Also please note that upon node start IGFS is empty. In the dual modes it
caches the data upon file reading; in primary mode you need to put some data
onto the file system before you can use it. So a data-read performance boost
can be seen only when some data is already cached in IGFS and is read from
there rather than from disk.
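A non-Java way to get data into IGFS up front is the standard Hadoop
filesystem CLI pointed at the igfs:// URI. The host, port, and paths below
are placeholders borrowed from other messages in this thread, so treat this
as an unverified sketch rather than a tested recipe:

```shell
# PRIMARY mode: copy the table's files into IGFS before querying.
hadoop fs -cp hdfs:///path/test_ignite igfs://dev-dn1:10500/path/test_ignite

# DUAL_SYNC / DUAL_ASYNC modes: a full read through IGFS caches the blocks.
hadoop fs -cat igfs://dev-dn1:10500/path/test_ignite/* > /dev/null

# Verify the files are visible through IGFS.
hadoop fs -ls igfs://dev-dn1:10500/path/test_ignite
```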

2. Can you specify the data and the query, so that we can reproduce the
issue? (E.g. you can use some publicly available sample data from Hive
examples.)

3. No. The nodes should connect without additional effort. Can you please
attach full logs of all nodes where this situation happens?





Re: HDP, Hive + Ignite

2017-05-02 Thread Alena Melnikova
Hi Ivan,

I have some progress)

*1. TEZ on Ignite (with IGFS, without Ignite MR)*
I could run Hive queries on TEZ and Ignite with the following settings:
$IGNITE_HOME/bin/ignite.sh -v -J"-Xms10g -Xmx10g -XX:MaxMetaspaceSize=4g"
(every server has 16 GB of RAM)
beeline  --hiveconf fs.default.name=igfs://dev-dn1:10500 --hiveconf
ignite.job.shared.classloader=false
set tez.use.cluster.hadoop-libs = true; (to avoid
"java.lang.ClassNotFoundException: Class
org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem not found")
ignite.job.shared.classloader = false; 
hive.rpc.query.plan = true;
hive.execution.engine = tez;
select calday, count(*) from price.toprice where calday between '2017-03-01'
and '2017-03-21' group by calday order by calday;

I ran this query 8 times on TEZ+Ignite and 8 times just on TEZ (without
IGFS), threw out the best and worst results and calculated the average.
Results are:
Average execution time TEZ+Ignite: 25 sec
Average execution time just TEZ: 23 sec

Then I ran a more complex analytical query with joins under the same
conditions. Results are:
Average execution time TEZ+Ignite: 312 sec
Average execution time just TEZ: 313 sec

Results are mostly identical, so I guess IGFS is not used.
Maybe I should explicitly tell Hive to cache data in IGFS?
Is there any way to tell that Ignite is being used besides measuring
execution time?


*2. Ignite MR (with IGFS, with Ignite MR)*
I could run Hive queries on Ignite MR with the following settings:
$IGNITE_HOME/bin/ignite.sh -v -J"-Xms10g -Xmx10g -XX:MaxMetaspaceSize=4g"
(every server has RAM 16Gb )
beeline  --hiveconf fs.default.name=igfs://dev-dn1:10500 --hiveconf
ignite.job.shared.classloader=false
ignite.job.shared.classloader = false; 
mapreduce.jobtracker.address=dev-dn1.co.vectis.local:11211;
hive.rpc.query.plan = true;
hive.execution.engine = mr;
select calday, count(*) from price.toprice where calday between '2017-03-01'
and '2017-03-21' group by calday order by calday;

If I use one Ignite node it returns the correct answer, but much more
slowly: 80 sec vs 23 sec on TEZ.
If I run this query on two or more nodes, the result is incorrect. As far as
I can see there are no errors in the logs.
What is wrong?
ignite-node-dn1.log
ignite-node-dn2.log

3. When I start Ignite nodes on different servers, sometimes they do not see
each other. I have to restart a node a few times; after that they connect
into one cluster. Is that normal?






Re: HDP, Hive + Ignite

2017-04-28 Thread Ivan Veselovsky
Yes, we did some experiments with Hive over Ignite on HDP distributions; in
basic experiments everything worked without critical issues.





Re: HDP, Hive + Ignite

2017-04-28 Thread Ivan Veselovsky
1. The observed "java.lang.ClassNotFoundException: Class
org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem not found" suggests
that TEZ does not have IgniteHadoopFileSystem in its classpath. Please check
how TEZ composes the classpath and whether it adds all the libs from
/usr/hdp/2.6.0.3-8/hadoop/lib.

2. Can you please provide full logs: all Ignite logs and the client console
log.

3. Please make sure the option --hiveconf ignite.job.shared.classloader=false
really takes effect: e.g. execute the "set ignite.job.shared.classloader"
query in Hive: what does it print?
It is very important to have this option set to false to ensure correct Hive
execution; otherwise very bad consequences like
https://issues.apache.org/jira/browse/IGNITE-4720 and
https://issues.apache.org/jira/browse/IGNITE-5044 are possible.

The link you're referring to is about executing Hive tasks over TEZ, not
over Ignite MR, so I have some doubt that the "vectorized execution"
property advice is relevant there. Also note that the advice mentions 2
properties: "set hive.vectorized.execution.enabled=false;
set hive.vectorized.execution.reduce.enabled=false;"

About OutOfMem: please see the Ignite logs (run ignite.sh with "-v") -- how
is memory utilized on the Ignite nodes? What error happens first?
In Java 8 the metaspace size is configured with the
"-XX:MaxMetaspaceSize=[g|m|k]" option.

Also please note that the "hive.rpc.query.plan=true" property is *required*
to execute Hive queries on Ignite MR successfully.
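Putting the requirements from this message together, a beeline invocation
for Hive over Ignite MR might look like the sketch below. The igfs host and
port are taken from other messages in this thread, so treat this as an
assumption-laden example, not a verified command line:

```shell
# Sketch: beeline with the settings this thread says Ignite MR needs.
# dev-dn1:10500 is a placeholder endpoint from elsewhere in the thread.
beeline --hiveconf fs.default.name=igfs://dev-dn1:10500 \
        --hiveconf ignite.job.shared.classloader=false \
        --hiveconf hive.rpc.query.plan=true \
        --hiveconf hive.execution.engine=mr
```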






Re: HDP, Hive + Ignite

2017-04-28 Thread Alena Melnikova
Hi Ivan,

Many thanks!

1. *I ran a Hive query with IGFS, without Ignite MapReduce (on TEZ)*
beeline  --hiveconf fs.default.name=igfs://dev-dn1:10500
set hive.execution.engine = tez;

*Errors in Hive log:*
2017-04-28 11:38:16,409 [INFO] [main] |service.AbstractService|: Service
org.apache.tez.dag.app.DAGAppMaster failed in state INITED; cause:
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class
org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem not found
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2228)
at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2780)
at 
org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2793)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
at 
org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2829)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2811)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:390)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at
org.apache.tez.common.TezCommonUtils.getTezBaseStagingPath(TezCommonUtils.java:86)
at
org.apache.tez.common.TezCommonUtils.getTezSystemStagingPath(TezCommonUtils.java:145)
at 
org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:427)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.tez.dag.app.DAGAppMaster$7.run(DAGAppMaster.java:2389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at
org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2386)
at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2190)
... 17 more

*Errors in Ignite log*
Exception in thread "igfs-#64%null%" java.lang.NullPointerException
at
org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsSecondaryFileSystemPositionedReadable.read(HadoopIgfsSecondaryFileSystemPositionedReadable.java:104)
at
org.apache.ignite.internal.processors.igfs.IgfsLazySecondaryFileSystemPositionedReadable.read(IgfsLazySecondaryFileSystemPositionedReadable.java:64)
at
org.apache.ignite.internal.processors.igfs.IgfsDataManager.secondaryDataBlock(IgfsDataManager.java:405)
at
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:343)
at
org.apache.ignite.internal.processors.igfs.IgfsDataManager$4.applyx(IgfsDataManager.java:332)
at
org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
at
org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
at
org.apache.ignite.internal.util.future.GridFutureChainListener.access$000(GridFutureChainListener.java:30)
at
org.apache.ignite.internal.util.future.GridFutureChainListener$1.run(GridFutureChainListener.java:65)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

Jars (ignite-core-1.9.0.jar, ignite-hadoop-1.9.0.jar,
ignite-shmem-1.0.0.jar) are available on every cluster node in
/usr/hdp/2.6.0.3-8/hadoop/lib.

Environment variables are set on every cluster node:
export JAVA_HOME=/usr/java/jdk1.8.0_101
export IGNITE_HOME=/home/ignite/apache-ignite-hadoop-1.9.0-bin
export HADOOP_HOME=/usr/hdp/current/hadoop-client
export HADOOP_COMMON_HOME=/usr/hdp/2.6.0.3-8/hadoop
export HADOOP_HDFS_HOME=/usr/hdp/current/hadoop-hdfs-client/
export HADOOP_MAPRED_HOME=/usr/hdp/current/hadoop-mapreduce-client/

What could be wrong?


2.  *I run a Hive query with Ignite MR but without IGFS*
beeline
set mapreduce.jobtracker.address=dev-dn1.co.vectis.local:11211;
set hive.execution.engine=mr;

*Error*
][FATAL][Hadoop-task-2ccf111e-16a9-4f1d-9b40-84166f5bc7d7_1-MAP-2-0-#68%null%][ExecMapper]
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while
processing row 
at
org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:52)

Then I set hive.vectorized.execution.enabled = false;

*Error*
][ERROR][Hadoop-task-2ccf111e-16a9-4f1d-9b40-84166f5bc7d7_2-MAP-1-0-#199%null%][HadoopRunnableTask]
Task execution failed.
class org.apache.ignite.IgniteCheckedException: class
org.apache.ignite.IgniteCheckedException: Error in configuring object
at
org.apache.ignite.internal.processors.hadoop.impl.v1.HadoopV1MapTask.run(HadoopV1MapTask.java:128)

org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while
processing row {…}
at 
org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:565)


3.   * I run Hive query with Ignite MapReduce and 

Re: HDP, Hive + Ignite

2017-04-27 Thread Ivan Veselovsky
Hi, Alena
 
1. The logs you have attached show some errors, but, in fact, I cannot deal
with them until a way to reproduce the problem is known.

2. Here I mean that IGFS (a write-through cache built upon another file
system) and the Ignite map-reduce engine (the jobtracker on port 11211) are 2
independent things, and each can be used without the other. That
means you can use IGFS without Ignite map-reduce, and Ignite map-reduce
without IGFS. If you experience some problem using them both, one idea to
track down the issue is to try the same job (1) without Ignite at all, (2)
with IGFS but without Ignite map-reduce, (3) without IGFS but with the Ignite
job-tracker. This may help to understand which subsystem causes the problem.
A similar approach can be used when trying to speed up a task.

3. This option should be a property of the Hadoop job, so it can either be set
in the global Hadoop configuration, or set for a concrete Hadoop job, e.g.
hadoop jar
./hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi
-Dignite.job.shared.classloader=false 5 10
Notice that the -Dkey=value pairs must be placed between the program
name (pi) and its arguments (5, 10).
In case of Hive similar properties can be passed in using --hiveconf hive
client option, like "hive ... --hiveconf ignite.job.shared.classloader=false
... " .

4. You see an OOME (OutOfMemoryError) related to insufficient heap. Heap size
is set via the JVM -Xms (initial) and -Xmx (maximum) options. The assumed way to
configure those options for Ignite is to use the -J prefix: anything following
-J will be passed to the Ignite JVM, e.g. the command "./ignite.sh -v -J-Xms4g
-J-Xmx4g" gives the Ignite JVM 4g of initial and 4g of max heap.
Off-heap memory parameters are managed differently, in Ignite's XML
config.

5. It is very problematic to provide all possible values, since one
configuration property may be set to many different value beans, and each of
them has its own properties. E.g. the
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#ipFinder property (at
the end of your default-config.xml) is set to an
"org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder"
there, but there are also ~10 other implementations of
org.apache.ignite.spi.discovery.tcp.ipfinder.TcpDiscoveryIpFinder, and they
have different properties.
Moreover, one can create one's own implementation with ad-hoc properties.
So a file "with all possible values" is really hard or impossible to
create.
Printing the default config with all the values explicitly shown is
doable; you can submit it as a feature request.

6. IGFS can be used as a standalone file system (aka PRIMARY mode), or as a
caching layer on top of another file system (DUAL_SYNC, DUAL_ASYNC modes).
Regarding high availability: IGFS is not highly available. In a dual mode it
will re-read data from the underlying file system layer on failure; in primary
mode a data loss is possible. The problem with starting HDFS arises only when
the global configs seen by the HDFS daemons specify IGFS as the default file
system -- they should not.
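As a sketch, the mode is selected via the defaultMode property of FileSystemConfiguration in default-config.xml (bean and property names per the Ignite 1.9 API; the secondary file system and other required settings are omitted here):

```xml
<!-- Fragment inside IgniteConfiguration; not a complete config. -->
<property name="fileSystemConfiguration">
  <list>
    <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
      <!-- PRIMARY = standalone; DUAL_SYNC / DUAL_ASYNC = cache over HDFS -->
      <property name="defaultMode" value="DUAL_SYNC"/>
      <!-- name, ipcEndpointConfiguration, secondaryFileSystem, ... omitted -->
    </bean>
  </list>
</property>
```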

7. A Hive query will be transformed into a number of map-reduce tasks, and each
task will run on the Ignite execution engine. Ignite map-reduce is designed in
such a way that it does not "spill" intermediate data between map and reduce --
it stores it all in memory (mainly off-heap, and this is not the caches'
off-heap you configure in default-config.xml). It is difficult to give an exact
answer to your question in theory; the exact limit is better found
experimentally. A very rough estimation is that 1.5x to 2x the file data size
(uncompressed, 9G) should fit in memory across all nodes. Note that the
off-heap memory used by Ignite map-reduce is not limited in the configuration.
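To make that estimate concrete, a back-of-envelope calculation for the numbers in this thread (9 GB uncompressed; 6 worker nodes, per the node count mentioned elsewhere in the thread; the 1.5x-2x factor is the rough estimate above, not a guarantee):

```shell
# Rough sizing only: 1.5x-2x of uncompressed data spread over the nodes.
UNCOMPRESSED_GB=9
NODES=6
LOW=$(( UNCOMPRESSED_GB * 15 / 10 / NODES ))   # 1.5x factor, integer GB per node
HIGH=$(( UNCOMPRESSED_GB * 2 / NODES ))        # 2x factor, integer GB per node
echo "rough extra memory per node: ${LOW}-${HIGH} GB"
```

With these inputs it prints `rough extra memory per node: 2-3 GB`, i.e. each node would need a few GB of headroom on top of its normal usage.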

8. You can monitor resources utilized by Ignite using
(1) its own diagnostic printouts (they appear periodically in its console
output when run with the "-v" option);
(2) any Java monitoring tool, such as JConsole, VisualVM,
Java Flight Recorder, etc.
But, what is important, the off-heap memory used in map-reduce will not be
shown in any of the tools listed above. It can be seen only as the total amount
of memory used by the Ignite java process -- you can use a native (your OS
specific) process monitoring tool for that.
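For instance, on Linux the total resident memory of a process can be read from /proc. This sketch uses the current shell's own PID purely for illustration; substitute the Ignite node's PID:

```shell
# RSS covers heap *and* off-heap, unlike JMX-based heap views (Linux /proc).
PID=$$                                    # replace with the Ignite node's PID
RSS_KB=$(awk '/^VmRSS:/ {print $2}' "/proc/${PID}/status")
echo "process ${PID} resident set: ${RSS_KB} kB"
```

Watching this number during a job run shows the off-heap growth that JConsole and similar tools miss.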





Re: HDP, Hive + Ignite

2017-04-26 Thread Alena Melnikova
Hi Ivan,


> Run Ignite nodes with -v option and see console logs of the nodes

Unfortunately, I can’t run again, it fails. I tried to run on one node with
–v option. Logs in attachment.
Attachments: beeline.output, default-config.xml, ignite-node.log


> use IGFS w/o Ignite Map-Reduce, Ignite Map-Reduce w/o IGFS

Sorry, I don’t understand, I'm newbie. Could you explain it more detailed?


> set  -Dignite.job.shared.classloader=false

Where I can set it? I don’t know and don't use Java.


> Simplest solution is to give Ignite more memory

How can I know how much memory Ignite uses?
Currently every cluster node has 8 CPU and 16Gb RAM.
I tried to add these parameters to default-config.xml:
/
/
But then OOM errors arose, so I removed it from config.


> In code you can create bean objects  (like new FileSystemConfiguration() )
> , then take the values  via getters

To my shame I don’t know Java. Although I guess that I'm not the only one
who does not know Java, but is attracted by Ignite. So I think
default-config.xml with all possible parameters would be really useful.


> what do you mean "cluster is HA"?

High-availability cluster. I think if you explain "use IGFS w/o Ignite
Map-Reduce, Ignite Map-Reduce w/o IGFS" a little bit more, I'll stop generating
silly questions.


> This question does not have simple answer: what is done with these data? 

Suppose this file is a Hive table (40 mln records, 30 columns, ORC 500MB, csv
9Gb), and we want to calculate some aggregation (SUM, AVG) grouped by 3-5
columns. How will Ignite utilize memory for that?


> 1 Ignite node per host is okay, if the node can utilize all the host
> resources (memory & cpu) 

How can I find out that ignite node utilizes all the host resources?

Thanks for your help!








Re: HDP, Hive + Ignite

2017-04-25 Thread Ivan Veselovsky
Hi, Alena, there are several different requests, let's try to separate them.

A1. Wrong Hive query results: 
Is this use case easily reproducible? It now appears it is not. Please try
to track it down as far as possible:
- Run Ignite nodes with -v option and see console logs of the nodes: are
there any errors there? If there are any errors, query result in most the
cases is not predictable.
- use IGFS w/o Ignite Map-Reduce, Ignite Map-Reduce w/o IGFS. 
- try to run the query in 1 node instead of several nodes.
- See https://issues.apache.org/jira/browse/IGNITE-4720 , set 
-Dignite.job.shared.classloader=false.

A2. Running on Ignite is twice as slow as on TEZ.
Need more info on Ignite configuration .

A3. OOME
What are the JVM options the Ignite nodes are running with? The simplest
solution is to give Ignite more memory.

A4. NPE
Please provide full logs of all Ignite nodes.

Further questions:
1. In code you can create bean objects (like new FileSystemConfiguration()),
then take the values via getters. But it may be simpler to look at the
source code.
2. I don't quite understand what you mean by "cluster is HA". Setting
"fs.default.name" in the global config will prevent HDFS from starting, yes,
but in non-global configs you can use an igfs:// value.
3. No, currently Ignite does not provide such functionality.
4. Ignite implements generic map-reduce task execution; it will work with
any input format, whatever is configured.
5. This question does not have a simple answer: what is done with these data?
Are they only searched for something (like wordcount), or sorted (like in
terasort)? Also it should be noted that some data are stored on-heap, while
some are stored off-heap. These 2 types of used memory are limited and
observed differently.
6. I suppose, 1 Ignite node per host is okay, if the node can utilize all
the host resources (memory & cpu).
7. Yes. Ignite does not give MR task execution correctness guarantees if a
node is stopped or crashed.





Re: HDP, Hive + Ignite

2017-04-25 Thread Alena Melnikova
Hi Ivan,

Thanks a lot for the very useful guidelines! Using your explanations I could run
Ignite Map-Reduce on my cluster, but only twice (the next attempts failed with
various errors), and the result was unexpected.
I ran a simple query
select calday, count(*) from price.toprice where calday between
'2017-03-20' and '2017-03-21' group by calday order by calday;
and it works twice as slowly as on TEZ (40 sec vs 22 sec), and what is more,
the query result is wrong. The expected result is 37 mln records per day, but
Ignite MR calculated only 12 mln records per day. *Why?*

I run one Ignite node on every cluster node, and I set in Ambari
mapreduce.jobtracker.address=127.0.0.1:11211;
But the query runs only when I set in beeline
set mapreduce.jobtracker.address=dev-dn1:11211;

My next attempts to run the same query failed with different errors:
•   Caused by: java.lang.OutOfMemoryError: Metaspace
•   java.lang.OutOfMemoryError: GC overhead limit exceeded
•   [ERROR][pool-2-thread-1][HadoopJobTracker] Unhandled exception while
processing event. java.lang.NullPointerException
*How can I tune Ignite MR so that it works stably?*

If you do not mind I ask a few more questions here:
1.  Where can I find an example of default-config.xml with all possible
parameters and their default values? I just copy-pasted an example from this
forum, but it is incomplete and contains some mistakes.
2.  Am I right that the IGFS can only be used if the cluster is not HA?
Otherwise, HDFS does not start because fs.default.name=igfs://myhost:10500/
3.  When I run a Hive query with Ignite MR, it is not a YARN application. Is it
possible to run Hive queries on Ignite as YARN applications?
4.  Does Ignite take advantage of the ORC files (built-in indexes,
statistics)?
5.  Suppose compressed ORC file is 500Mb, the same uncompressed csv-file is
9Gb. How much memory Ignite needs to work with this file?
6.  What is the right approach for Ignite MR: run one Ignite node on every
cluster node, or run as many Ignite nodes on one cluster node as possible?
7.  I stopped one Ignite node during the query execution and then the job
(query execution) failed. Is that normal?

Thanks in advance!






Re: HDP, Hive + Ignite

2017-04-24 Thread Ivan V.
p.s. Please use a HadoopFileSystemFactory in the secondary file system config,
as described here:
https://apacheignite-fs.readme.io/docs/installing-on-hortonworks-hdp ;
the constructor org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem(2)
is deprecated.



Re: HDP, Hive + Ignite

2017-04-24 Thread Ivan V.
Hi, Alena,
First of all, Hadoop Accelerator consists of 2 parts that are independent
and can be used one without the other: (1) IGFS and (2) map-reduce
execution engine.

IGFS is not used in your case because the default file system in your cluster
is still hdfs:// (specified by the global property "fs.default.name").
The 2 properties you set (*.igfs.impl=..) define the IGFS implementation
classes, but they come into play only when the igfs:// scheme is encountered.
The idea to set fs.default.name=igfs://myhost:10500/ is not as good as it may
appear, because the HDFS daemons (namenode, datanode) cannot run with such a
property value, while you probably need HDFS as the underlying (secondary)
file system.

So, to use IGFS you should either use an explicit URI with the igfs:// scheme,
as you do in your example above ("hadoop fs -ls igfs:///user/hive"), or try to
instruct Hive to use igfs as the default file system, like this:
hive-1.2/bin/beeline \
--hiveconf fs.default.name=igfs://myhost:10500/ \
--hiveconf hive.rpc.query.plan=true \
--hiveconf mapreduce.framework.name=ignite \
--hiveconf mapreduce.jobtracker.address=myhost:11211 -u jdbc:hive2://
127.0.0.1:1

Also, in order to use the Ignite Map-Reduce engine with Hive, in HDP 2.4+ the
Hive execution engine (the "hive.execution.engine" property) should explicitly
be set to "mr", because the default value is different.

On Mon, Apr 24, 2017 at 3:09 PM,  wrote:

> Hi,
>
> I have a cluster HDP 2.6 (High Available, 8 nodes) and like to try using
> Hive+Orc+Tez with Ignite. I guess I should use IGFS as a cache layer for HDFS.
> I installed Hadoop Accelerator  1.9 on all cluster nodes and run one
> ignite-node on every cluster node.
>
> I added these settings using Ambari  and then restarted HDFS, MapReduce,
> Yarn, Hive.
> HDFS, add 2 new properties to Custom core-site
> fs.igfs.impl=org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem
> fs.AbstractFileSystem.igfs.impl=org.apache.ignite.hadoop.
> fs.v2.IgniteHadoopFileSystem
>
> Mapred, Custom mapred-site
> mapreduce.framework.name=ignite
> mapreduce.jobtracker.address=dev-nn1:11211
>
> Hive, Custom hive-site
> hive.rpc.query.plan=true
>
> Now I can get access to HDFS through IGFS
> hadoop fs -ls igfs:///user/hive
> Found 3 items
> drwx--  - hive hdfs  0 2017-04-19 21:00
> igfs:///user/hive/.Trash
> drwxr-xr-x  - hive hdfs  0 2017-04-19 10:07
> igfs:///user/hive/.hiveJars
> drwx--  - hive hdfs  0 2017-04-22 14:27
> igfs:///user/hive/.staging
>
> I thought that Hive would read data from HDFS the first time and then read
> the same data from IGFS.
> But when I run HIVE (cli or beeline) it still reads data from HDFS (I
> tried a few times), in igniteVisor "Avg. free heap" remains the same
> before/during/after running query (about 80%).
> What is wrong? Maybe I should load data to IGFS manually for every query?