Hi Klaus,
Thanks for letting us know about the request. As Roman mentioned, an
official Docker image of Apache Hadoop is not currently available. I
will raise this request for discussion.
Thanks,
- Tsuyoshi
On Tue, Jul 19, 2016 at 12:29 PM, Deepak Vohra
wrote:
> A custom
Hi,
To get the entire log, the yarn logs command can help you:
yarn logs -applicationId application_1448325816071_0002
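Since the aggregated log can be large, it often helps to redirect it to a file. A small sketch that only builds and prints the command (using the application ID from above) rather than running it:

```shell
APP_ID=application_1448325816071_0002
# Build the full command; redirecting to a file helps when the log is large.
CMD="yarn logs -applicationId $APP_ID > ${APP_ID}.log"
echo "$CMD"
```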
Thanks,
- Tsuyoshi
On Tue, Nov 24, 2015 at 7:43 PM, Namikaze Minato wrote:
> Have a look at the logs for your attempt_1448325816071_0002_m_03_0.
>
>
Tue, Oct 6, 2015 at 9:35 AM, Tsuyoshi Ozawa <oz...@apache.org> wrote:
> Hi committers and users of the Hadoop stack,
>
> I’ll share the current status of JDK-8 support here. We can take a
> two-step approach to support JDK-8 - runtime-level support and
> source-level support.
>
Hi committers and users of the Hadoop stack,
I’ll share the current status of JDK-8 support here. We can take a
two-step approach to support JDK-8 - runtime-level support and
source-level support.
About runtime-level support, I’ve tested the Hadoop stack with JDK-8,
e.g. MapReduce, Spark, Tez, and Flink, on
Please send an email to user-unsubscr...@hadoop.apache.org to
unsubscribe from the hadoop-user mailing list, instead of sending the
message to user@hadoop.apache.org.
Thanks,
- Tsuyoshi
On Mon, Sep 21, 2015 at 4:48 PM, EUGEO 2015 wrote:
> I don't understand what I need to do in order to
Hi Matteo,
It depends on your configuration: yarn-site.xml (the NodeManager's
memory capacity) and the container requests made by Spark
(spark.yarn.am.memory and the executors' memory), assuming you don't
use Dominant Resource Fairness.
Could you share them?
Thanks,
- Tsuyoshi
On Wed, Sep 2, 2015 at 7:25 AM, Matteo
Hi Shushant,
If you use the fair scheduler, you can restrict the number of AMs by configuring the queue:
* maxRunningApps: limit the number of apps from the queue to run at once
* maxAMShare: limit the fraction of the queue's fair share that can be
used to run application masters. This property can only be
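As a sketch, both limits go in the fair scheduler's allocation file (fair-scheduler.xml); the queue name and values below are hypothetical:

```xml
<?xml version="1.0"?>
<allocations>
  <queue name="analytics">
    <!-- At most 20 apps from this queue run concurrently. -->
    <maxRunningApps>20</maxRunningApps>
    <!-- At most 30% of this queue's fair share may be used by AMs. -->
    <maxAMShare>0.3</maxAMShare>
  </queue>
</allocations>
```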
Sometimes the compiled native libraries included in the tarball don't
work correctly; how about recompiling the library in your environment?
Thanks,
- Tsuyoshi
On Tue, Mar 24, 2015 at 6:18 PM, 王鹏飞 wpf5...@gmail.com wrote:
I noticed a map-reduce job encountered an
I think it would be better to send your question to Cloudera's mailing list.
Thanks,
- Tsuyoshi
On Tue, Mar 17, 2015 at 11:44 AM, kumar jayapal kjayapa...@gmail.com wrote:
Hello,
May I know how to stop Oozie flows in CDH5 using CM?
Can you please give me some link to learn more about it?
I think ZooKeeper can handle thousands of updates,
I meant thousands of updates per second.
Thanks,
- Tsuyoshi
On Fri, Feb 13, 2015 at 3:59 PM, Tsuyoshi Ozawa oz...@apache.org wrote:
Hi Suma,
I think ZooKeeper can handle thousands of updates, so thousands of
jobs can be launched
Hi Suma,
I think ZooKeeper can handle thousands of updates, so thousands of
jobs can be launched at the same time.
More jobs can be running at the same time, since the number of updates
against ZooKeeper is less than the number of jobs. Please feel free to
ask us if you face scalability or
Hi,
Are you using ZKFC? If the answer is positive, could you share the
configuration files for ZooKeeper?
Thanks,
- Tsuyoshi
On Wed, Dec 3, 2014 at 2:15 AM, mail list louis.hust...@gmail.com wrote:
Hi,all
we are testing QJM NameNode HA, and when the active NameNode goes down, it costs
about 5
Hi Abhishek,
Welcome to Hadoop!
The Hadoop community maintains the documentation for the community
edition of Hadoop. You can read it on the website:
http://hadoop.apache.org/docs/current/
If you'd like to learn about Hadoop in more detail, Hadoop: The Definitive
Guide by Tom White is a good book for
Hi,
Could you share the following configurations? The failures can be
caused by out-of-memory errors on the mapper side.
yarn.app.mapreduce.am.resource.mb
mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
mapreduce.map.java.opts
mapreduce.reduce.java.opts
On Wed, Nov 19, 2014 at 12:23 AM, francexo83
Hi Jakub,
You have 2 options:
1. Turning off the virtual memory check, as you mentioned.
2. Making yarn.nodemanager.vmem-pmem-ratio larger.
Option 1 is a reasonable choice if you cannot predict virtual memory
usage in advance, or if you have no way to check your applications'
virtual memory usage.
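For reference, a yarn-site.xml sketch of the two options (you would normally pick one; the ratio value here is illustrative):

```xml
<!-- Option 1: disable the virtual memory check entirely. -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<!-- Option 2: allow more virtual memory per MB of physical memory
     (the default ratio is 2.1). -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```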
Thanks,
- Tsuyoshi
Hi,
The latest version of the document Sebastiano mentioned is available here:
http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/HadoopStreaming.html
Thanks,
- Tsuyoshi
On Fri, Sep 5, 2014 at 12:39 PM, Andrew Ehrlich and...@aehrlich.com wrote:
Also when you
Hi,
Yes, they can on YARN.
- Tsuyoshi
On Mon, Sep 1, 2014 at 2:32 PM, Adaryl Bob Wakefield, MBA
adaryl.wakefi...@hotmail.com wrote:
Can Tez and MapReduce live together and get along in the same cluster?
B.
--
- Tsuyoshi
Hi,
It looks like a classpath problem on the Spark side.
Thanks,
- Tsuyoshi
On Fri, Aug 29, 2014 at 8:49 AM, arthur.hk.c...@gmail.com
arthur.hk.c...@gmail.com wrote:
Hi,
I use Hadoop 2.4.1, and I got an "org.apache.hadoop.io.compress.SnappyCodec not
found" error:
hadoop checknative
14/08/29 02:54:51
Hi Mark,
Thanks for your report. I also confirmed that we cannot access the
jars of Hadoop 2.5.0.
Karthik, could you check this problem?
Thanks,
- Tsuyoshi
On Thu, Aug 21, 2014 at 2:08 AM, Campbell, Mark mark.campb...@xerox.com wrote:
It seems that all the needed archives (yarn, mapreduce,
Hi,
Please check the value of mapreduce.map.maxattempts and
mapreduce.reduce.maxattempts. If you'd like to ignore the error only
in specific jobs, it's useful to use the -D option to change the
configuration as follows:
bin/hadoop jar job.jar -Dmapreduce.map.maxattempts=10
Thanks,
- Tsuyoshi
On
Hi,
Could you check that the following packages which are mentioned in
BUILDING.txt are installed?
* CMake 2.6 or newer (if compiling native code)
* Zlib devel (if compiling native code)
* openssl devel ( if compiling native hadoop-pipes )
Thanks,
- Tsuyoshi
On Tue, Aug 5, 2014 at 7:13 PM,
Hi,
Unfortunately, sometimes we face unexpected test failures. Please
check whether the problem has already been registered or resolved in Hadoop's
JIRAs.
* https://issues.apache.org/jira/browse/HADOOP
* https://issues.apache.org/jira/browse/HDFS
* https://issues.apache.org/jira/browse/MAPREDUCE
*
Hi Chris MacKenzie,
How about trying the following to identify the cause of your problem?
1. Making both yarn.nodemanager.pmem-check-enabled and
yarn.nodemanager.vmem-check-enabled false
2. Making yarn.nodemanager.pmem-check-enabled true
3. Making yarn.nodemanager.pmem-check-enabled true and
Hi,
To edit the document you mentioned, please edit
hadoop-common/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/WritingYarnApplications.apt.vm
The document in apt.vm is written in APT format, which is described
in the following document:
I added it in the pom.xml file (inside
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-resourcemanager/pom.xml)
mvn package -Pdist,native -Dtar
How about editing hadoop-project/pom.xml and adding your dependency to
it? I think it will work.
Thanks,
- Tsuyoshi
On Wed, Jul
Some tools provide CLIs that don't require creating a jar. For
example, you can use Pig's interactive mode if you'd like to use Pig:
http://pig.apache.org/docs/r0.12.1/start.html#interactive-mode
Hive CLI is one of them:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Cli
Hi Ch,
How about using DistCp?
http://hadoop.apache.org/docs/r1.2.1/distcp2.html
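A sketch of the invocation (the cluster hostnames and paths are hypothetical; the block only prints the command rather than running it):

```shell
# Dry-run sketch: print the DistCp invocation between two clusters.
SRC=hdfs://old-nn:8020/data
DST=hdfs://new-nn:8020/data
# -update copies only files that differ, so an interrupted transfer can be resumed.
echo "hadoop distcp -update $SRC $DST"
```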
Thanks,
- Tsuyoshi
On Wed, Jun 4, 2014 at 5:40 PM, ch huang justlo...@gmail.com wrote:
hi, maillist:
my company signed with a new IDC; I need to move a total of 50 TB of data
from the old Hadoop cluster to the new
Hi Ted and Christian,
In fact, the problem is filed as HADOOP-10510.
https://issues.apache.org/jira/browse/HADOOP-10510
This problem is also reproduced on my local machine, but not in other
environments. Maybe it's an environment-dependent problem. However, I
cannot understand the conditions under which this problem
Hi,
The log messages mean that ProcfsBasedProcessTree is just updating its
internal information via procfs. This redundant log message has been
suppressed since 2.1.0-beta and 0.23.8. For more detail, please check YARN-476.
Thanks,
- Tsuyoshi
On Wed, May 28, 2014 at 2:53 PM, Prashant Kommireddi
prash1...@gmail.com
Hi,
You need to set mapreduce.map.java.opts and mapreduce.reduce.java.opts
instead of mapred.child.java.opts to tune the JVMs when you use MRv2.
Please note that you also need to tune
mapreduce.map.memory.mb and mapreduce.reduce.memory.mb to notify the
ResourceManager of the amount of resources used.
This
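As a sketch, a consistent mapred-site.xml pairing (the values are hypothetical; the heap -Xmx is kept below the container size so the JVM fits in what the NodeManager grants):

```xml
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value> <!-- container size requested from the RM -->
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value> <!-- heap kept below the 2048 MB container -->
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx3276m</value>
</property>
```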
hi,
In addition to that, you need to change the property
yarn.nodemanager.resource.memory-mb in yarn-site.xml to make the NM
recognize the amount of memory available for containers.
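For example, a yarn-site.xml sketch (the 15 GB figure follows the thread; treat it as illustrative):

```xml
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>15000</value> <!-- total memory the NM offers to containers -->
</property>
```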
On May 22, 2014 7:50 PM, ch huang justlo...@gmail.com wrote:
hi,maillist:
I set YARN_NODEMANAGER_HEAPSIZE=15000, so the NM runs in a 15G
for containers.
Vinod
On May 22, 2014, at 8:25 PM, Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com
wrote:
hi,
In addition to that, you need to change the property
yarn.nodemanager.resource.memory-mb in yarn-site.xml to make the NM
recognize the amount of memory available for containers.
On May 22, 2014 7:50 PM, ch huang justlo
Hi,
As Oleg mentioned, container reuse is currently done at the
ApplicationMaster level. Tez's ApplicationMaster is one example.
Thanks,
- Tsuyoshi
On Thu, Apr 24, 2014 at 2:03 AM, Oleg Zhurakousky
oleg.zhurakou...@gmail.com wrote:
While YARN-373 addresses a bit of a different problem the use case
Hi,
There are some known issues and blockers of 2.4.1 release.
You can check them with the following JIRA query:
(project = "Hadoop Common" OR project = "Hadoop HDFS" OR project =
"Hadoop YARN" OR project = "Hadoop Map/Reduce") AND "Target Version/s"
= 2.4.1 AND priority = Blocker
YARN-1929 and
Hi Steve,
I've filed the problem as HADOOP-10051.
https://issues.apache.org/jira/browse/HADOOP-10051
Can someone answer this problem?
Thanks,
Tsuyoshi
On Fri, Mar 14, 2014 at 2:53 PM, Steve Lewis lordjoe2...@gmail.com wrote:
To run Hadoop 2.0 you need to build winutils.exe and hadoop.dll
I am
Hi Anand and YARN developers,
I found that UnixShellScriptBuilder#command just concatenates the
commands with spaces, not with ';'.
Therefore, you need to append ';' after each command you'd like to execute.
class UnixShellScriptBuilder {
  @Override
  public void command(List<String> command) {
Instead of Ted's approach, it's also useful to use the surefire plugin
when you debug tests.
mvn test -Dmaven.surefire.debug -Dtest=TestClassName
This command waits for a debugger to attach on port 5005 by default, so you
can attach via Eclipse's debugger. Then the test runs and you can use
the debugger. I
Hi,
How about checking the value of mapreduce.map.java.opts? Are your JVMs
launched with the heap size you expect?
On Thu, Oct 24, 2013 at 11:31 AM, Manu Zhang owenzhang1...@gmail.com wrote:
Just confirmed the problem still exists even though the mapred-site.xml
files on all nodes have the same configuration
Hi,
One point in addition to Arun's comment: the docs Arun pointed to are
being updated now. Please check this JIRA:
https://issues.apache.org/jira/browse/HADOOP-10050
Thanks, Tsuyoshi
On Fri, Oct 18, 2013 at 2:00 PM, Arun C Murthy a...@hortonworks.com wrote:
Try this?
Hi,
Could you check the environment variables (e.g.
HADOOP_COMMON_HOME, HADOOP_HDFS_HOME, HADOOP_MAPRED_HOME, HADOOP_CONF_DIR)
and send us the contents of etc/yarn-site.conf? In my environment, I
cannot reproduce your problem with the 2.2.0 tarball.
Thanks, Tsuyoshi
On Thu, Oct 17, 2013 at 10:18 AM,
specifically? At what times would one prefer the other?
On Fri, Jul 26, 2013 at 11:43 AM, Tsuyoshi OZAWA
ozawa.tsuyo...@gmail.com wrote:
Hi,
Now, Apache Mesos, a distributed resource manager, is a top-level
Apache project. Meanwhile, as you know, Hadoop has its own resource
manager, YARN. IMHO, we
be confusing.
It's very easy for a good technology to get starved because no one asks how
to combine these features into the framework.
On Jul 29, 2013, at 9:58 AM, Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com wrote:
I thought some high availability and resource isolation features in
Mesos are more
Hi,
Now, Apache Mesos, a distributed resource manager, is a top-level
Apache project. Meanwhile, as you know, Hadoop has its own resource
manager, YARN. IMHO, we should make the resource manager pluggable in
MRv2, because each has its own strengths that users of MapReduce would
like to use. I think this work
Hi Oleg,
Speculative tasks are launched as TaskAttempts in MR jobs.
And if no reducer class is set, MR launches the default reducer
class (IdentityReducer).
Thanks,
Tsuyoshi
On Sun, Dec 9, 2012 at 11:53 PM, Oleg Zhurakousky
oleg.zhurakou...@gmail.com wrote:
I'm studying the user logs on the two