pointers to solve this
issue.
Thanks
Divye Sheth
http://blog.cloudera.com/blog/2012/03/hbase-hadoop-xceivers/
Also make sure you do not have a leak in your code where file descriptors are
left open.
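A quick way to check (the PID below is a placeholder for your datanode or task JVM) is to count descriptors while the job runs:
lsof -p <pid> | wc -l
If that count keeps climbing under a steady workload, some stream or connection is probably never being closed.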
Thanks
Divye Sheth
On Mon, Apr 28, 2014 at 9:09 PM, Wright, Eric
ewri...@soleocommunications.com wrote:
Currently, we are working on setting up a production hadoop
this server as dead, though no corresponding values (DFS Used, Non-DFS Used,
etc.) would be shown.
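(For reference, a quick way to see how the namenode currently classifies the node is hadoop dfsadmin -report; the summary counts live and dead datanodes, and the namenode web UI on port 50070 lists the dead ones explicitly.)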
Thanks
Divye Sheth
On Mon, Apr 14, 2014 at 12:39 PM, Sandeep Nemuri nhsande...@gmail.com wrote:
You can add the hostname/IP to the exclude file and run this command:
hadoop dfsadmin -refreshNodes
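For reference, a minimal setup (the file path is only an example) looks like this in hdfs-site.xml:
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
Put one hostname or IP per line into that file and then run:
hadoop dfsadmin -refreshNodes
The node should first show up as decommissioning and only later as decommissioned.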
On Mon
the ulimit set to a high number, as hadoop tends to open too many files.
Check ulimit -n; if the value is 1024, increase it to a higher number, usually
32K or 64K. Again, a restart of the cluster would be required.
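On most Linux distributions this is set in /etc/security/limits.conf; the user names below are placeholders for whichever accounts run your datanode/tasktracker:
hdfs    -    nofile    65536
mapred  -    nofile    65536
Log in again so the new limit takes effect, confirm with ulimit -n, and then restart the cluster.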
Thanks
Divye Sheth
On Mon, Apr 14, 2014 at 11:10 AM, Stanley Shi s...@gopivotal.com
-default.xml for the exact property name.
Sent from my phone.
Thanks
Divye Sheth
On Apr 12, 2014 8:33 PM, Viswanathan J jayamviswanat...@gmail.com wrote:
Hi,
I'm using Hadoop v1.2.1 and it has been running fine for a long time (3
months) without any issues.
Suddenly I got the below error in the JobTracker and jobs
Sorry for the error. Did not have a proper look at the logs. This seems to
be a JT issue. Ignore the previous email.
Thanks
Divye Sheth
On Apr 14, 2014 6:06 PM, divye sheth divs.sh...@gmail.com wrote:
This usually occurs when the task takes more memory and exceeds its heap
space. You can
to run my WordCount MRV1
example but not the code that I have written for a use case.
Thanks
Divye Sheth
On Tue, Apr 8, 2014 at 6:12 PM, Kavali, Devaraj devaraj.kav...@intel.com wrote:
As per the given exception stack trace, it is trying to use local file
system. Can you check whether you have
fairly new to the coding
aspect of map-reduce.
Thanks
Divye Sheth
On Tue, Apr 8, 2014 at 7:43 PM, divye sheth divs.sh...@gmail.com wrote:
Hi,
I saw that pretty much right after sending the email. I verified the properties
file and it has all the correct properties, even the mapred.framework.name is
You can execute this command on any machine where you have set the
HADOOP_MAPRED_HOME
Thanks
Divye Sheth
On Fri, Mar 28, 2014 at 12:31 PM, Avinash Kujur avin...@gmail.com wrote:
Can we execute the above command anywhere, or do I need to execute it in
any particular directory?
Thanks
Hi Avinash,
You can execute the export command on any one machine in the cluster for now.
Once you have executed it, i.e. export
HADOOP_MAPRED_HOME=/path/to/your/hadoop/installation, you can then execute
the mapred job -list command from that same machine.
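In other words, something along these lines (the path is only an example, point it at wherever your hadoop tarball is extracted):
export HADOOP_MAPRED_HOME=/opt/hadoop
mapred job -list
If mapred is not on your PATH, invoke it as $HADOOP_MAPRED_HOME/bin/mapred job -list.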
Thanks
Divye Sheth
Which version of hadoop are you using? AFAIK the hadoop mapred home is the
directory where hadoop is installed, or in other words untarred.
Thanks
Divye Sheth
On Mar 28, 2014 10:43 AM, Avinash Kujur avin...@gmail.com wrote:
Hi,
when I am trying to execute this command:
hadoop job -history ~/1
Hi Haihong,
Please check out the link below, I believe it should solve your problem.
http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits
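The usual fix, roughly what that link suggests, is to raise the container sizes and keep the JVM heap a bit below them, e.g. in mapred-site.xml (the numbers are only examples, tune them to your nodes):
<property><name>mapreduce.map.memory.mb</name><value>2048</value></property>
<property><name>mapreduce.map.java.opts</name><value>-Xmx1600m</value></property>
<property><name>mapreduce.reduce.memory.mb</name><value>4096</value></property>
<property><name>mapreduce.reduce.java.opts</name><value>-Xmx3200m</value></property>
Keep each -Xmx below the corresponding memory.mb so the container is not killed for exceeding its limit.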
Thanks
Divye Sheth
On Wed, Mar 12, 2014 at 11:33 AM, haihong lu ung3...@gmail.com wrote:
Thanks, even if I had added
here.
Thanks
Divye Sheth
settled.
Thanks
Divye Sheth
On Fri, Mar 7, 2014 at 6:33 PM, divye sheth divs.sh...@gmail.com wrote:
Hi,
I am using hadoop 0.20.2 and have decommissioned one of my servers. It has
around 300 GB of data which, if I am correct, will be distributed across the
other machines, and slowly all the blocks
this? Data loss is a NO NO for me.
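(For reference, one way to track the progress is hadoop dfsadmin -report: the node should show something like Decommission Status : Decommission in progress and flip to Decommissioned once every block has been re-replicated, and only then is it safe to pull the machine out.)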
Thanks
Divye Sheth
On Wed, Mar 5, 2014 at 1:28 PM, Azuryy Yu azury...@gmail.com wrote:
Hi,
That will probably break something if you apply the patch from 2.x to 0.20.x,
but it depends.
AFAIK, the Balancer had a major refactor in HDFSv2, so you'd better fix
.
Background: our cluster is not balanced and the balancer is very slow, so I
wrote this tool to move blocks from one node to another node.
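(For anyone hitting the same thing: the stock balancer's transfer rate is capped by dfs.balance.bandwidthPerSec, which defaults to 1 MB/s; raising it in hdfs-site.xml, e.g. to 10485760 for 10 MB/s, and rerunning hadoop balancer -threshold 5 can speed it up considerably. Older versions need a datanode restart to pick the new value up.)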
On Wed, Mar 5, 2014 at 4:06 PM, divye sheth divs.sh...@gmail.com wrote:
I won't be in a position to fix that depending on HDFS-1804 as we are
upgrading to CDH4
such a scenario, is it not hadoop's job to balance the
disk utilization between multiple disks on a single datanode?
Thanks
Divye Sheth
You can force the namenode to leave safemode:
hadoop dfsadmin -safemode leave
Then run hadoop fsck.
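For example (you can narrow the path down to the directory you care about):
hadoop fsck / -files -blocks -locations
This reports missing, corrupt and under-replicated blocks, which is usually what kept the namenode in safemode in the first place.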
Thanks
Divye Sheth
On Mar 4, 2014 10:03 PM, John Lilley john.lil...@redpoint.net wrote:
More information from the NameNode log. I don't understand... it is
saying that I cannot delete
Sheth
On Wed, Mar 5, 2014 at 11:28 AM, Harsh J ha...@cloudera.com wrote:
You're probably looking for
https://issues.apache.org/jira/browse/HDFS-1804
On Tue, Mar 4, 2014 at 5:54 AM, divye sheth divs.sh...@gmail.com wrote:
Hi,
I am new to the mailing list.
I am using Hadoop 0.20.2
I would recommend stopping the cluster and then starting the daemons one by
one:
1. stop-dfs.sh
2. hadoop-daemon.sh start namenode
3. hadoop-daemon.sh start datanode
This will show startup errors, if any; also verify that the datanode is able
to communicate with the namenode.
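If the datanode still does not show up, the quickest checks (the log location depends on your install) are its log and the cluster report:
tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
hadoop dfsadmin -report
The report should list the node as live; if it is missing, the log usually shows a connection problem or a namespaceID mismatch.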
Thanks
Divye Sheth