Mainly @steveloughran: Is it safe to say that *old* fs semantics are in
the FSContract tests, and *new* fs semantics in the FSMainOps tests?
I ask this because it seems that you had tests in your Swift filesystem tests
which used the FSContract libs, as well as the FSMainOps..
Not sure why you need
Hi YouPeng, thanks for your advice. I have read the docs and configured the
parameters as follows:
Physical server: 8-core CPU, 16 GB memory.
For YARN:
yarn.nodemanager.resource.memory-mb set to 12 GB, keeping 4 GB for the OS.
yarn.scheduler.minimum-allocation-mb set to 2048 MB as the minimum
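Spelled out as yarn-site.xml entries, the two settings above would look like this (values taken from the sizing in the message, with 12 GB expressed in MB):

```xml
<!-- yarn-site.xml: values from the sizing above (12 GB = 12288 MB) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>12288</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>
```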
Hi
If I understood you correctly, you would like to run your AM with the YARN
Client from the shell, as opposed to running the Driver like in MRv1. But it's
the same thing (more or less). In the example you provided
(org.apache.hadoop.yarn.applications.DistributedShell) the Client.class is
the driver. However
Ok, thank you.
Regards
Veera Prasad Nallamilli
Sr. Systems Consultant - Americas Energy Group
Direct: +1 (281) 414-7230
veeraprasad.nallami...@openlink.com
vpra...@olf.com
www.openlink.com
New York * London * Houston * Berlin * Vienna * Sydney * São Paulo * Singapore
* Toronto *
Hi all,
In the hadoop-3.0.0-SNAPSHOT
I set the option below, hoping that it will throttle a container that
over-utilizes its resources.
<property>
  <name>yarn.nodemanager.container-monitor.resource-calculator.class</name>
  <value>org.apache.hadoop.yarn.util.LinuxResourceCalculatorPlugin</value>
</property>
So I tried the deprecated parameter mapred.tasktracker.expiry.interval in
my configuration and voilà, it works!
Hansi, this is exactly the one parameter that I told you about in a
previous post ;)
The heap of the application master is controlled
via yarn.app.mapreduce.am.command-opts, and its default value is -Xmx1024m (
http://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
).
yarn.scheduler.minimum-allocation-mb is completely different
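To override that default, the AM heap can be raised in mapred-site.xml; a minimal sketch (the -Xmx2048m value here is a made-up example, not from the thread):

```xml
<!-- mapred-site.xml: -Xmx2048m is a hypothetical value -->
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx2048m</value>
</property>
```

Keep yarn.app.mapreduce.am.resource.mb large enough to contain whatever heap you pick.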
One blog post is here:
http://grepalex.com/2012/10/20/hadoop-unit-testing-with-minimrcluster/
When I was playing with MiniDFSCluster and MiniMRCluster, I was using them
via HBaseTestingUtility (it can take a configuration object in a
constructor
The SequenceFile.Reader will work perfectly! (I should have seen that.)
As always, thanks Harsh
On Thu, Dec 5, 2013 at 2:22 AM, Harsh J ha...@cloudera.com wrote:
If you're looking for file header/contents based inspection, you could
download the file and run the Linux utility 'file' on the
MiniDFSCluster is used everywhere in HDFS's unit tests. You can easily
find examples in the source code of HDFS (e.g.,
org.apache.hadoop.hdfs.TestDFSMkdirs). You can also test simple HA
setup using MiniDFSCluster (e.g.,
org.apache.hadoop.hdfs.server.namenode.ha.TestHASafeMode).
On Thu, Dec 5,
I am running it remotely from the same PC where the HDP sandbox is installed.
This is NOT a Hadoop job yet. It is a simple HBase client. The exception is
thrown when creating HBaseAdmin based on the configuration.
The reason I am asking the question on the Hadoop user list is because the
Hi
Have you spread your config over your cluster?
And have you taken a look at whether the failing containers are concentrated
on particular nodes?
regards
2013/12/5 panfei cnwe...@gmail.com
Hi YouPeng, thanks for your advice. I have read the docs and configure the
parameters as follows:
Physical
hi, maillist:
are there any variables that can control it?
Hi, All
Is there a way to write files into remote HDFS on Linux using C# on
Windows? We want to use HDFS as data storage.
We know there is an HDFS Java API, but not C#. We tried SAMBA for file sharing
and FUSE for mounting HDFS. It worked if we simply copied files to HDFS, but
if we open a filestream
You can take a look at this parameter. This will control the number of jobs a
user can initialize.
mapred.capacity-scheduler.queue.default.maximum-initialized-jobs-per-user = ….
On Dec 5, 2013, at 5:33 PM, ch huang justlo...@gmail.com wrote:
hi,maillist:
any variables can
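As a capacity-scheduler.xml fragment, the parameter above would be set like this (the value 5 is a made-up example; only the parameter name comes from the message):

```xml
<!-- capacity-scheduler.xml: the value 5 is hypothetical -->
<property>
  <name>mapred.capacity-scheduler.queue.default.maximum-initialized-jobs-per-user</name>
  <value>5</value>
</property>
```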
Hi,
I encountered a problem with YARN's fair scheduler. The thing is, I first set
up a queue by configuring fair-scheduler.xml as below. Next I try to submit a
job to that queue by setting the queue name via mapreduce.job.queuename=amelie.
fair-scheduler.xml:
<allocations>
  <queue name="amelie">
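For reference, a complete minimal allocation file along these lines might look like the following; only the queue name comes from the original message, and the minResources and weight values are hypothetical:

```xml
<allocations>
  <queue name="amelie">
    <minResources>2048 mb, 2 vcores</minResources>
    <weight>1.0</weight>
  </queue>
</allocations>
```

A job would then be submitted with -Dmapreduce.job.queuename=amelie, as described above.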
hi, maillist:
I try to use terasort to benchmark my cluster. When I run it,
I found that terasort tries to read the partition file from the local
filesystem, not HDFS. I see a partition file in HDFS; when I copy this file
into the local filesystem and run terasort again, it works fine, but it runs on local
Hello
I am following the Hadoop single-node cluster tutorial and I am able to
test the word-count MapReduce program. It's working fine.
I would like to know
how to monitor when shuffle-phase network traffic occurs, via Wireshark or
some other means.
Please guide me.
Thanks
Abdul Navaz
Graduate student
I searched the code; only the file
src/hadoop-mapreduce1-project/src/contrib/capacity-scheduler/src/test/org/apache/hadoop/mapred/TestCapacitySchedulerConf.java
has the variables. I did see it on
You can try using WebHDFS.
Thanks,
+Vinod
On Thu, Dec 5, 2013 at 6:04 PM, Fengyun RAO raofeng...@gmail.com wrote:
Hi, All
Is there a way to write files into remote HDFS on Linux using C# on
Windows? We want to use HDFS as data storage.
We know there is HDFS java API, but not C#. We tried
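For what it's worth, the WebHDFS suggestion above is plain HTTP REST, so any language with an HTTP client can use it, C# included. File creation is a two-step protocol: a PUT to the NameNode answers with a 307 redirect naming a DataNode, and the file bytes are then PUT to that redirect location. A minimal Python sketch of building the step-1 CREATE URL (the host, port, path, and user names here are hypothetical):

```python
def webhdfs_create_url(host: str, port: int, path: str, user: str) -> str:
    """Build the step-1 WebHDFS CREATE URL.

    The NameNode responds to a PUT on this URL with a 307 redirect
    naming a DataNode; step 2 PUTs the file bytes to that location.
    """
    return f"http://{host}:{port}/webhdfs/v1{path}?op=CREATE&user.name={user}"

# Hypothetical cluster coordinates, for illustration only.
print(webhdfs_create_url("namenode", 50070, "/user/rao/data.bin", "rao"))
# → http://namenode:50070/webhdfs/v1/user/rao/data.bin?op=CREATE&user.name=rao
```

The same pattern with op=APPEND covers adding to an existing file, which is the closest WebHDFS gets to a writable stream.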
Hi Ch huang,
Please check whether all datanodes in your cluster have enough disk space
and that the number of non-decommissioned nodes is non-zero.
Thanks and regards,
Vinayakumar B
From: ch huang [mailto:justlo...@gmail.com]
Sent: 06 December 2013 07:14
To: user@hadoop.apache.org
Subject: error
hi, Abdul Navaz:
Assign the shuffle port on each NM using the option mapreduce.shuffle.port in
mapred-site.xml, then
monitor this port using tcpdump or Wireshark. Hope this info can help you.
On Fri, Dec 6, 2013 at 11:22 AM, navaz navaz@gmail.com wrote:
Hello
I am following the tutorial hadoop on
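As a mapred-site.xml fragment, the suggestion above would look like this (13562 is, as far as I recall, the default shuffle port; any free port works):

```xml
<property>
  <name>mapreduce.shuffle.port</name>
  <value>13562</value>
</property>
```

You could then run something like `tcpdump -i any port 13562` on the NodeManager host to watch the shuffle traffic.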
Something looks really bad on your cluster. The JVM's heap size is 200MB
but its virtual memory has ballooned to a monstrous 332GB. Does that ring
any bell? Can you run regular java applications on this node? This doesn't
seem related to YARN per se.
+Vinod
Hortonworks Inc.
hi:
you are right, my DN disk was full. I deleted some files and now it works.
Thanks!
On Fri, Dec 6, 2013 at 11:28 AM, Vinayakumar B vinayakuma...@huawei.comwrote:
Hi Ch huang,
Please check whether all datanodes in your cluster have enough disk
space and number non-decommissioned nodes
I have a bunch of SequenceFiles which I'd like to convert to HFiles. How do
I do that?
Table definition:
CREATE TABLE kvpair (
  id STRING,
  arrstr ARRAY<STRING>,
  arrmap ARRAY<MAP<STRING, STRING>>
)
ROW FORMAT SERDE 'com.cloudera.hive.serde.JSONSerDe';
## com.cloudera.hive.serde.JSONSerDe is a SerDe that can handle complex
Thanks!
I tried WebHDFS, which also works well if I copy local files to HDFS, but I
still can't find a way to open a filestream and write to it.
2013/12/6 Vinod Kumar Vavilapalli vino...@hortonworks.com
You can try using WebHDFS.
Thanks,
+Vinod
On Thu, Dec 5, 2013 at 6:04 PM, Fengyun RAO
Hi Arun,
I have copied a shell script to HDFS and am trying to execute it in
containers. How do I specify my shell script's path in the setCommands() call
on ContainerLaunchContext? I am doing it this way:
String shellScriptPath =
    "hdfs://isredeng:8020/user/kbonagir/KKDummy/list.ksh";
Add this file to the files to be localized (LocalResource request), and
then refer to it as ./list.ksh. While adding it to the LocalResource, specify
the path which you have mentioned.
On Thu, Dec 5, 2013 at 10:40 PM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
Hi Arun,
I have