General Query : Master & DataNode start

2017-01-02 Thread Renjith Gk
Hi All,

As part of my lab exercise I am doing a self-study of Hadoop. I have cloned
the Master to two DataNodes, as DataNode 1 and DataNode 2.

Two queries from real scenarios:

1. If the Master is up and running but the DataNodes are stopped/suspended,
will there be any communication channel from the DataNodes to the Master,
and will replication work?

2. Is there a need to run start-dfs.sh or start-all.sh on the master as
well as on the datanodes? What I currently run is sketched below.
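
A minimal sketch, assuming a stock Apache Hadoop setup with passwordless
SSH from the master and the datanode hostnames listed in
etc/hadoop/workers (called slaves on older releases):

# Run on the master only: start-dfs.sh reads the workers file and
# SSHes into each listed host to start its DataNode daemon.
$HADOOP_HOME/sbin/start-dfs.sh

# Check which DataNodes have registered with the NameNode.
hdfs dfsadmin -report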

Thanks,
Renjith


Re: Mismatch in length of source:

2017-01-02 Thread Ulul

Hi

I can't remember the exact error message, but distcp consistently fails
when trying to copy open files. Is that your case?


A workaround is to take a snapshot prior to copying.
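
Something along these lines; the paths and snapshot name are hypothetical:

# Allow snapshots on the source directory (requires admin rights).
hdfs dfsadmin -allowSnapshot /data/src

# Create a read-only, point-in-time snapshot of the directory.
hdfs dfs -createSnapshot /data/src s0

# Copy from the snapshot path so open files cannot change mid-copy.
hadoop distcp hdfs://ip1/data/src/.snapshot/s0 hdfs://nameservice1/data/dst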

Ulul


On 31/12/2016 19:25, Aditya exalter wrote:

Hi All,
  A very happy new year to ALL.

I am facing an issue while executing distcp between two different clusters:

Caused by: java.io.IOException: Mismatch in length of 
source:hdfs://ip1/xx/x and 
target:hdfs://nameservice1/xx/.distcp.tmp.attempt_1483200922993_0056_m_11_2


I tried using -pb and -skipcrccheck:

hadoop distcp -pb -skipcrccheck -update hdfs://ip1/xx/x hdfs:////

hadoop distcp -pb hdfs://ip1/xx/x hdfs:////

hadoop distcp -skipcrccheck -update hdfs://ip1/xx/x hdfs:////

but nothing seems to be working. Any solutions please?


Regards,
Aditya.






Re: HDFS client with Knox

2017-01-02 Thread Philippe Kernévez
Hi all,

Sorry for the poor quality of that post.
I accidentally sent a draft during my holidays (over a bad connection). I
will send a detailed message (in English) as soon as I can, and I will
update it with your answers.

Regards,
Philippe

2017-01-01 15:46 GMT+01:00 Ted Yu:

> Can you phrase your post in English?
>
> 2017-01-01 4:22 GMT-08:00 Philippe Kernévez:
>
>> Now that Knox is in place, I would like to use it, in particular from an
>> HDFS client.
>> I can do the following (it works):
>> a) HDFS over RPC against my active name node: "hdfs dfs -ls /apps"
>> b) HDFS over WebHDFS against my active name node: hdfs dfs -ls
>> webhdfs://node1:50070/apps
>> c) curl against WebHDFS directly: curl
>> "http://node1:50070/webhdfs/v1/apps?op=LISTSTATUS"
>> d) curl against Knox: curl -u admin:admin-password -k
>> "https://node1:8443/gateway/default/webhdfs/v1/apps?op=LISTSTATUS"
>>
>> But how do I do this through Knox? I have two problems:
>> a) How do I do basic authentication? I cannot find a way to pass a
>> login/password to the command (it is either without security, or with
>> Kerberos).
>> b) How do I indicate that the protocol runs over SSL? webhdfss does not
>> seem to exist...
>>
>> --
>> Philippe Kernévez
>> Directeur technique (Suisse),
>> pkerne...@octo.com
>> +41 79 888 33 32
>>
>> Retrouvez OCTO sur OCTO Talk : http://blog.octo.com
>> OCTO Technology http://www.octo.com
>


-- 
Philippe Kernévez
Directeur technique (Suisse),
pkerne...@octo.com
+41 79 888 33 32

Retrouvez OCTO sur OCTO Talk : http://blog.octo.com
OCTO Technology http://www.octo.com


Re: Heartbeat between RM and AM

2017-01-02 Thread Sunil Govind
Hi

If you are asking about the allocation-request heartbeat calls from the AM
to the RM, that interval is mostly driven at the application level (it is
not a YARN-wide config). For example, in MapReduce the following config is
used for this:
yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms
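
A minimal sketch of overriding it per job, assuming the job's driver uses
Tool/GenericOptionsParser so -D properties are honoured (the jar and class
names are hypothetical):

# Make the MR AM heartbeat to the RM every 500 ms (default is 1000 ms).
hadoop jar my-app.jar com.example.MyJob \
  -Dyarn.app.mapreduce.am.scheduler.heartbeat.interval-ms=500 \
  /input /output

The same property can also be set cluster-wide in mapred-site.xml.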

Thanks
Sunil


On Sat, Dec 31, 2016 at 8:20 AM Sultan Alamro wrote:

> Hi all,
>
> Can anyone tell me how I can modify the heartbeat between the RM and the AM?
> I need to add new requests from the RM to the AM.
>
> These requests are basically values calculated by the RM, to be used by
> the AM online.
>
> Thanks,
> Sultan
>


Re: HDFS client with Knox

2017-01-02 Thread Ulul

Hi

Philippe is asking about using basic auth with Knox, and about using SSL
with webhdfs.


a) That is exactly what you do in your d): you pass the
admin:admin-password credentials to Knox, basic auth being curl's default.
If your question is about Knox authenticating to HDFS, that's normal:
HDFS has only "no security" or Kerberos.


b) You can find swebhdfs at
http://hortonworks.com/blog/deploying-https-hdfs/ but I'd say it's more
experimental than production grade. The idea behind Knox is that you have
an SSL-encrypted stream between your client and Knox, as you did in d),
and then clear streams between Knox and the HDFS servers, the cluster
being protected by a firewall of some kind. Please note that Knox creates
a bottleneck through which all the data flows, so don't use it for massive
data transfers.
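
For completeness, a minimal sketch of using swebhdfs directly, assuming
HTTPS is enabled on the NameNode (dfs.http.policy set to HTTPS_ONLY or
HTTP_AND_HTTPS) and that it listens on the default 50470 port:

# Same listing as webhdfs://, but over SSL; the client must trust the
# NameNode's certificate (truststore configuration).
hdfs dfs -ls swebhdfs://node1:50470/apps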


Ulul


On 01/01/2017 15:46, Ted Yu wrote:

Can you phrase your post in English?

2017-01-01 4:22 GMT-08:00 Philippe Kernévez:


Now that Knox is in place, I would like to use it, in particular from an
HDFS client.
I can do the following (it works):
a) HDFS over RPC against my active name node: "hdfs dfs -ls /apps"
b) HDFS over WebHDFS against my active name node: hdfs dfs -ls
webhdfs://node1:50070/apps
c) curl against WebHDFS directly: curl
"http://node1:50070/webhdfs/v1/apps?op=LISTSTATUS"
d) curl against Knox: curl -u admin:admin-password -k
"https://node1:8443/gateway/default/webhdfs/v1/apps?op=LISTSTATUS"

But how do I do this through Knox? I have two problems:
a) How do I do basic authentication? I cannot find a way to pass a
login/password to the command (it is either without security, or with
Kerberos).
b) How do I indicate that the protocol runs over SSL? webhdfss does not
seem to exist...

-- 
Philippe Kernévez
Directeur technique (Suisse),
pkerne...@octo.com
+41 79 888 33 32

Retrouvez OCTO sur OCTO Talk : http://blog.octo.com
OCTO Technology http://www.octo.com