Re: Error While copying file from local to dfs

2016-03-04 Thread Vinodh Nagaraj
Hi All,

Please help me.

Thanks & Regards,
Vinodh.N


Push jars into usercache without running an MR job

2016-03-04 Thread Shashank Prabhakara
Hi all,

I have some job dependency jars that need to be pushed to all the node
managers' usercache ahead of the job that will use them. I understand that
the job will distribute the jars at runtime, but if possible I want to have
it done beforehand.
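
One pattern I am considering (paths and jar names hypothetical) is to stage
the jars on HDFS once, world-readable, so that YARN localizes them with
PUBLIC visibility into each node manager's shared filecache on first use,
and later jobs reuse the cached copies instead of re-downloading them.
Strictly speaking this warms the public filecache rather than the per-user
usercache, and the first localization still happens at job time:

    hadoop fs -mkdir -p /apps/shared-libs
    hadoop fs -put dep-a-1.0.jar dep-b-2.3.jar /apps/shared-libs/
    hadoop fs -chmod -R a+rX /apps/shared-libs   # world-readable => PUBLIC visibility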

Thanks in advance.

Regards,
Shashank


RE: change HDFS disks on each node

2016-03-04 Thread Joseph Naegele
Thanks Anu, that makes sense. Assuming that works, allow me to complicate the 
problem a bit further:

Say I didn't use LVM and I needed to move from 2x3 TB volumes to 6x1 TB 
volumes. The problem now is that the contents of each 3TB volume won't fit on a 
single 1TB volume. Is this still possible?

Thanks again,
Joe


Re: change HDFS disks on each node

2016-03-04 Thread Anu Engineer
Yes, it is possible, but if someone asked me to do it, I would say no.

May I suggest that you decommission one node, let the data migrate to the
other machines, replace the disks, bring that machine back in, and let the
cluster heal itself? It is not as efficient as moving the data around
yourself, but it is safer and far simpler than doing it manually.
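
A rough sketch of that cycle, assuming dfs.hosts.exclude in hdfs-site.xml
points at /etc/hadoop/dfs.exclude and dn1.example.com is the node being
swapped (both names hypothetical):

    echo "dn1.example.com" >> /etc/hadoop/dfs.exclude
    hdfs dfsadmin -refreshNodes    # the NameNode begins draining the node
    hdfs dfsadmin -report          # wait until it shows "Decommissioned"
    # swap the disks, remove the host from dfs.exclude, then:
    hdfs dfsadmin -refreshNodes    # the node rejoins and replication settles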

Thanks
Anu


Re: change HDFS disks on each node

2016-03-04 Thread Anu Engineer
Hi Joe,

As long as you copy all the data from the old disk without altering the
paths (i.e., the structure and layout of the data directories) and you use
the same version of the DataNode software, it should work.

Here is the Apache FAQ entry indicating that this works:
http://wiki.apache.org/hadoop/FAQ#On_an_individual_data_node.2C_how_do_you_balance_the_blocks_on_the_disk.3F

In your case, all you need to do is:

1. Add the new hard disk.
2. Mount the new hard disk and copy the data directories to it.
3. Remove the old disk, remount the new drive at the old mount point, and
make sure that your data directories point to the new location.

That should do the trick. 
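
A minimal sketch of those steps, assuming the old volume is mounted at
/data1, the new disk appears as /dev/xvdf, and dfs.datanode.data.dir points
at /data1/dfs/dn (all names hypothetical):

    sbin/hadoop-daemon.sh stop datanode
    mount /dev/xvdf /mnt/new1
    rsync -a /data1/dfs/ /mnt/new1/dfs/    # -a preserves layout, owners, perms
    umount /data1 && umount /mnt/new1
    mount /dev/xvdf /data1                 # new disk now lives at the old path
    sbin/hadoop-daemon.sh start datanode   # config unchanged, blocks intact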

As usual, any advice from a user group carries the risk of data loss, so
please be gentle with your old disk(s) until you are absolutely sure the new
disks are perfectly functional :)

Thanks
Anu


change HDFS disks on each node

2016-03-04 Thread Joseph Naegele
Hi all,

Each of our N datanodes has two 3 TB disks attached. I want to attach new
*replacement* storage to each node, move the HDFS contents to the new
storage, and remove the old volumes. We're using Hadoop 2.7.1.

1. What is the simplest, correct way to do this? Does hot-swapping move data
from old disks to new disks? I am able to stop the cluster completely.
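
From what I've read, the 2.x hot-swap feature only adds or removes volumes
on a live DataNode and does not migrate blocks between disks; a sketch,
assuming a hypothetical host and the default DataNode IPC port 50020, after
editing dfs.datanode.data.dir in hdfs-site.xml:

    hdfs dfsadmin -reconfig datanode dn1.example.com:50020 start
    hdfs dfsadmin -reconfig datanode dn1.example.com:50020 status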

2. Is it reasonable to use LVM to create expandable logical volumes? We're
using AWS and contemplating switching from SSDs to magnetic storage, which
is limited to 1TB volumes.
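
For concreteness, the kind of online expansion I have in mind, with
hypothetical device and volume-group names and assuming ext4:

    pvcreate /dev/xvdg                # initialize a new 1 TB volume
    vgextend hdfs-vg /dev/xvdg        # add it to the volume group
    lvextend -l +100%FREE /dev/hdfs-vg/data
    resize2fs /dev/hdfs-vg/data       # grow the filesystem while mounted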

Thanks,
Joe


Hadoop 2.8 Release Date

2016-03-04 Thread Benjamin Kim
I have a general question about Hadoop 2.8. Is it being prepped for release
anytime soon? I am awaiting HADOOP-5732, which brings native SFTP support.

Thanks,
Ben


Error While copying file from local to dfs

2016-03-04 Thread Vinodh Nagaraj
Hi All,

I am a newbie to Hadoop.

I installed Hadoop 2.7.1 on a 32-bit Windows 7 machine for learning
purposes.

I can execute start-all.cmd successfully.

When I execute jps, I get the output below.
28544 NameNode
35728
36308 DataNode
43828 Jps
40688 NodeManager
33820 ResourceManager

My configuration files are:

core-site.xml
-------------
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.219.149.100:50075/</value>
    <description>NameNode URI</description>
  </property>
</configuration>



hdfs-site.xml
-------------
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>D:\Hadoop_TEST\Hadoop\Data</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>D:\Hadoop_TEST\Hadoop\Secondary</value>
  </property>
  <property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
  </property>
</configuration>

I tried to copy a text file from my local drive to the HDFS file system,
but I got the error below.

D:\Hadoop_TEST\Hadoop\ts>hadoop fs -copyFromLocal 4300.txt hdfs://10.219.149.100:50010/a.txt
copyFromLocal: End of File Exception between local host is:
"PC205172/10.219.149.100"; destination host is: "PC205172.cts.com":50010; :
java.io.EOFException; For more details see:
http://wiki.apache.org/hadoop/EOFException


Please share your suggestions.

How do I verify whether I have installed Hadoop properly?
How do I find the DataNode location, DataNode port, and other details with
an hdfs or hadoop command?
How do I find the NameNode location, NameNode port, and its configuration
details (e.g., how many replicas) with an hdfs or hadoop command?
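
A sketch of stock commands that cover these checks (all standard hdfs CLI
subcommands in 2.7.x):

    hdfs getconf -confKey fs.defaultFS       # NameNode URI that clients use
    hdfs getconf -namenodes                  # NameNode host(s)
    hdfs getconf -confKey dfs.replication    # configured replication factor
    hdfs dfsadmin -report                    # live DataNodes: address, port, capacity
    hdfs fsck / -files -blocks -locations    # block and replica locations per file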

Thanks & Regards,
Vinodh.N