Hi, I have not seen that error before. I'd add the '-x' option to the sh
command in the pom to get diagnostics.
<exec executable="${shell-executable}"
      dir="${project.build.directory}"
      failonerror="true">
  <arg line="./dist-copynativelibs.sh"/>
</exec>
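One hedged way to apply the '-x' suggestion (a sketch against the snippet above; your pom layout may differ slightly) is to pass -x through to the shell ahead of the script name:

```xml
<exec executable="${shell-executable}"
      dir="${project.build.directory}"
      failonerror="true">
  <!-- -x makes the shell echo each command as it runs,
       so the failing step shows up in the build log -->
  <arg line="-x ./dist-copynativelibs.sh"/>
</exec>
```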
I am sorry you are
Hello all. I have an interesting case where we lose data in the event of a
flume crash, which is easily reproducible when we kill -9 the flume agent.
I believe that this may be because the Flume Sink is issuing a commit before
it actually completes the fs sync. If this is the case then the
+1
Thanks
+Vinod
On Jun 4, 2015, at 1:05 PM, Subramaniam V K
subru...@gmail.com wrote:
Hi Vinod,
Thanks for organizing the BoF meetup.
We have put up an initial proposal
at YARN-2915 (https://issues.apache.org/jira/browse/YARN-2915) on Federating YARN
to make it
Sorry, missed responding to this. No registration is required, the conference
will have ended by then.
Thanks
+Vinod
On Jun 3, 2015, at 10:53 AM, Karthik Kambatla
ka...@cloudera.com wrote:
Also, are Hadoop summit registrations required to attend the BoF?
On Wed, Jun
Hi Sandeep,
Thanks for your instruction.
However since I ran the build and tests inside a docker container, and I
am not the root user of the docker container, I cannot edit /etc/hosts
directly. But I can change /etc/hosts by adding some options in the
docker command before starting the
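In case it helps others on the list, one such option (a sketch; the hostname, IP, and image name are illustrative placeholders) is docker's --add-host flag, which appends an "IP hostname" entry to the container's /etc/hosts at startup, so no root access inside the container is needed:

```
docker run --add-host=namenode.example.com:10.0.0.5 \
  -it hadoop-build-image /bin/bash
```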
It seems the parameter mapreduce.map.memory.mb is parsed on the client side.
2015-06-07 15:05 GMT+08:00 J. Rottinghuis jrottingh...@gmail.com:
On each node you can configure how much memory is available for containers
to run.
On the other hand, for each application you can configure how large
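To make that concrete (a sketch only; the values are illustrative, not recommendations), the per-node container memory is a NodeManager setting, while the per-application map container size is a job setting on the submitting client:

```xml
<!-- yarn-site.xml on each node: total memory available for containers -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>

<!-- mapred-site.xml on the client (or -D on the command line):
     memory requested per map container -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
```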
Thank you first! It really works!
But now I've got a new problem, maybe raised by dist-copynativelibs.sh.
I see that dist-copynativelibs.sh has been generated, as well as
hadoop-common-2.7.0.jar and hadoop-common-2.7.0-tests.jar. I am sure I have
sh, mkdir, rm, cp, tar, and gzip in the PATH environment variable.
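A quick way to double-check that claim (a small sketch; the tool list mirrors the ones named above) is to probe each one with command -v and report anything missing:

```shell
# Check that each tool dist-copynativelibs.sh depends on is resolvable on PATH.
# Prints "missing: <tool>" for any absent tool, otherwise "all tools found".
missing=0
for t in sh mkdir rm cp tar gzip; do
  if ! command -v "$t" >/dev/null 2>&1; then
    echo "missing: $t"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "all tools found"
```

Running this inside the same shell (or container) that Maven uses rules out PATH differences between your login shell and the build environment.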
Hello,
Apologies if I'm posting to the wrong group; however, I thought this was the
right audience for the research study I'm conducting, hence this post.
I'm a user experience researcher @ CA Technologies (http://www.ca.com). I'm
running a research study to understand the role of Hadoop / Big Data