Hi,
On Mon, Jul 14, 2014 at 7:50 PM, Adam Kawa kawa.a...@gmail.com wrote:
It sounds like a JobTracker setting, so a restart looks to be required.
ok.
You can verify it in pseudo-distributed mode by setting it to a very low
value, restarting the JT, and seeing if you get the exception that prints
Hi,
I am planning to use Hadoop 2.4.1 for my work, and was wondering if it is
recommended for production? If not, should I use the 2.2.0 GA release?
WDYT?
Regards,
Shani Ranasinghe.
The real cause is the IOException. The PriviledgedActionException is a
generic exception. Other file writes succeed in the same directory with the
same user.
On Tue, Jul 15, 2014 at 4:59 AM, Yanbo Liang yanboha...@gmail.com wrote:
Maybe the user 'test' does not have write permission.
You
Federation is just a namenode namespace management capability. It is
designed to control namenode management and to provide scalability for the
namenode. I don't think it imposes any security restrictions on accessing
the HDFS filesystem. I guess this link would help with your question 2 -
Hi
There are four conditions that can exclude a DN. I feel you hit one of the
following, most likely (ii) or (iii).
i) Check whether the node is (being) decommissioned.
--- Can check from the Namenode UI or by executing hdfs dfsadmin -report
ii) Check the remaining capacity of the target machine.
--- Can check
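A minimal sketch of the first two checks above, as a predicate over a DataNode report. This is illustrative only (the function and field names here are mine, not from the Hadoop source, and the remaining two conditions are cut off in the original message, so they are not modeled):

```python
# Hypothetical sketch: the first two DN-exclusion conditions as a predicate.
# Field names ("decommissioned", "remaining") are assumptions, not Hadoop API.

DEFAULT_BLOCK_SIZE = 128 * 1024 * 1024  # bytes

def is_good_target(dn, block_size=DEFAULT_BLOCK_SIZE):
    # (i) exclude nodes that are decommissioned or being decommissioned
    if dn["decommissioned"]:
        return False
    # (ii) exclude nodes without enough remaining capacity for one more block
    if dn["remaining"] < block_size:
        return False
    return True
```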
Hi Praveenesh,
Thank you for pointing this out. Will go through this.
On Tue, Jul 15, 2014 at 3:36 PM, praveenesh kumar praveen...@gmail.com
wrote:
Federation is just a namenode namespace management capability. It is
designed to control namenode management and to provide scalability for
All,
I am running a small cluster with hadoop-2.2.0 installed on an NFS
shared directory. Since all nodes can access it, I do not want to enable
log aggregation.
My understanding was that if aggregation wasn't enabled, the 'yarn logs'
command would just look in the $HADOOP_HOME/logs/userlogs
IMHO,
$ yarn logs looks for aggregated logs at a remote location.
2014-07-15 16:49 GMT+02:00 Brian C. Huffman bhuff...@etinternational.com:
All,
I am running a small cluster with hadoop-2.2.0 installed on an NFS shared
directory. Since all nodes can access, I do not want to enable log
For example, the remote location is configured
via yarn.nodemanager.remote-app-log-dir and defaults to /tmp/logs.
This is why you see:
*Logs not available at
/tmp/logs/hadoop/logs/application_1405396841766_0003.*
PS.
The full path is configured via $
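To make the layout Adam describes concrete, here is a small sketch of how the aggregated-log path is composed. The defaults below are assumptions read off the path in the error message, corresponding to yarn.nodemanager.remote-app-log-dir and yarn.nodemanager.remote-app-log-dir-suffix:

```python
def aggregated_log_dir(user, app_id,
                       remote_app_log_dir="/tmp/logs",  # yarn.nodemanager.remote-app-log-dir
                       suffix="logs"):                  # yarn.nodemanager.remote-app-log-dir-suffix
    """Mirror the layout YARN uses for aggregated logs:
    {remote-app-log-dir}/{user}/{suffix}/{appId}."""
    return "/".join([remote_app_log_dir, user, suffix, app_id])
```

For the path in the message above, aggregated_log_dir("hadoop", "application_1405396841766_0003") yields /tmp/logs/hadoop/logs/application_1405396841766_0003.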
HBase is not hardcoded to HDFS: it works on any file system that implements the
FileSystem interface; we've run it on GlusterFS, for example. I assume some
have also run it on S3 and other alternative file systems.
** However **
For best performance, direct block I/O hooks on HDFS can boost
HBase will take advantage of HDFS specific features if they are available
but can run on anything that has a Hadoop FileSystem driver. Gluster is an
option. Maybe Lustre and Ceph also.
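As a sketch of what "anything with a Hadoop FileSystem driver" means in practice: the URI scheme of hbase.rootdir in hbase-site.xml selects the filesystem implementation. The path below is a placeholder; file:// (the local filesystem, as used in standalone mode) is just one example of a non-HDFS scheme:

```xml
<!-- hbase-site.xml: the scheme of hbase.rootdir selects the FileSystem
     implementation. The path here is a placeholder. -->
<property>
  <name>hbase.rootdir</name>
  <value>file:///data/hbase</value>
</property>
```

Swapping in another driver (e.g. a Gluster connector) would just mean using that driver's URI scheme here, assuming its jar is on the classpath.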
If you plan on dedicating storage to Cassandra, then you don't have to
worry about managing a distributed
Our Utility companies have several BI projects starting up designed to use
Hadoop. Where can I find information on Hadoop Security Best Practices?
Thanks in advance for your time and consideration.
Rich Corrigan, CISSP-ISSMP, C|EH
Information Security Principal
(O) 858-503-5092
(M)
Hey everyone, I got the following error with the oiv tool.
Please help.
ERROR
hdfs oiv -i /home/hduser/fsimage1234 -o /home/hduser/interpret.txt
Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/commons/cli/ParseException
at
Which version of Hadoop are you using?
If it is 2.4.x or above, use hdfs oiv -i <input> -o <output>;
otherwise use hadoop oiv -i <input> -o <output>.
On Tue, Jul 15, 2014 at 12:51 PM, Ashish Dobhal dobhalashish...@gmail.com
wrote:
Hey everyone i got the following error with the oiv tool.
Please help
Hey Anandha,
I am using Hadoop 2.3.0; when using the hadoop command I got the following
error.
bin/hadoop oiv -i /home/hduser/fsimage1234 -o /home/hduser/please.txt
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
log4j:ERROR Could not
Adam is right.
The yarn logs command only works when log-aggregation is enabled. It's not
easy, but possible, to make it work when aggregation is disabled.
+Vinod
Hortonworks Inc.
http://hortonworks.com/
On Tue, Jul 15, 2014 at 10:03 AM, Brian C. Huffman
bhuff...@etinternational.com wrote:
Hi,
Request your help with:
1) Submitting the MapReduce job to a remote JobTracker; I am getting an
UnknownHostException.
2) Which configuration files are used for submitting a MapReduce job?
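On (2), a minimal sketch assuming a classic MRv1 (JobTracker) setup, since the question mentions one: the client-side core-site.xml and mapred-site.xml must name the remote NameNode and JobTracker. Hostnames and ports below are placeholders; an UnknownHostException during submission usually means one of these hostnames does not resolve from the client.

```xml
<!-- core-site.xml (client side); hostname is a placeholder -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode-host:9000</value>
</property>

<!-- mapred-site.xml (client side); hostname is a placeholder -->
<property>
  <name>mapred.job.tracker</name>
  <value>jobtracker-host:9001</value>
</property>
```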