Hi,
I have HDP-2.2.4.2-2 running with Sqoop 1.4.5.2.2.4.2-2.
From the Hortonworks Sqoop doc
(http://docs.hortonworks.com/HDPDocuments/HDP1/HDP-1.2.4/bk_dataintegration/content/ch_using-sqoop-intro.html),
I reached the Microsoft doc
(https://www.microsoft.com/en-us/download/details.aspx?id=27584).
$Counter
    BYTESCOPIED=41194
    BYTESEXPECTED=41194
    COPY=1
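For context, a typical Sqoop import from SQL Server looks roughly like the following (this is a generic sketch, assuming the thread is about the SQL Server connector; the hostname, database, table, credentials and target directory are placeholders, not taken from this thread):

```shell
# Hypothetical example: import one SQL Server table into HDFS via Sqoop.
# All connection details below are placeholders.
sqoop import \
  --connect "jdbc:sqlserver://dbhost:1433;databaseName=mydb" \
  --username myuser -P \
  --table mytable \
  --target-dir /user/omkar/mytable \
  --num-mappers 4
```

On success, Sqoop prints job counters similar to the ones quoted above.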
Regards,
Omkar Joshi
From: Joshi Omkar
Sent: den 15 maj 2015 13:37
To: user@ambari.apache.org
Subject: FW: Change in the DataNode directories !!!
The background is in the trailing mail.
I went
Hi,
I have a 9-node HDP-2.2.4.2-2 cluster running.
There are multiple users who want to upload files from their
Windows/Linux desktops onto HDFS, either via tools like WinSCP or by mapping
HDFS as a network drive.
I think 'HDFS NFS Gateway' is the way to go but I couldn't find a way to
etc.)? Any more 'clean-ups' or config
changes required? Is there some doc available (I couldn't find one)?
2. After I start data loading, how can I verify point 1?
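On the NFS Gateway question: once the gateway service is running, a Linux client can typically mount HDFS like this (a sketch assuming a gateway host named nfs-gateway-host and a local mount point /hdfs_mount, both placeholders; the HDFS gateway only supports NFSv3):

```shell
# Mount HDFS via the NFS Gateway (host and mount point are placeholders).
mkdir -p /hdfs_mount
mount -t nfs -o vers=3,proto=tcp,nolock,sync nfs-gateway-host:/ /hdfs_mount
```

After mounting, a plain `ls /hdfs_mount` should list the HDFS root, which is one way to verify the setup before users start loading data.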
Regards,
Omkar Joshi
From: Joshi Omkar
Sent: den 8 maj 2015 15:08
To: user@ambari.apache.org
Subject: Change in the DataNode
: org.apache.hadoop.security.AccessControlException on hive client
You need to have a user home directory on HDFS. Log in as the HDFS user and create
a home dir for root:
# su - hdfs
$ hdfs dfs -mkdir /user/root
$ hdfs dfs -chown root:root /user/root
Thanks,
Olivier
From: Joshi Omkar
Reply-To: user
Hi,
I have a 9-node (1 NN + 8 DN) HDP cluster running which has the Hive client on two
of the nodes.
I tried to connect to the Hive CLI on one of the nodes and got the following
exception:
(I tried to check if there is a security config in Hive by logging into Ambari
and going to the Host -
Hi,
I have 600GB X 8 disks on each machine that can be used for HDP.
1 disk is used for /root, /home etc., so I'm now left with 7 disks.
If I understand correctly from the HDP recos
(http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.7/bk_cluster-planning-guide/content/file_system.html),
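As a rough sanity check on the resulting capacity (a sketch that assumes the 8 DataNodes mentioned elsewhere in this thread and the default replication factor of 3; it ignores the space HDFS reserves for non-DFS use):

```shell
# 7 data disks x 600 GB per node, 8 DataNodes, replication factor 3.
echo $((7 * 600))           # raw GB per node
echo $((7 * 600 * 8))       # raw GB across the cluster
echo $((7 * 600 * 8 / 3))   # approximate usable GB after 3x replication
```

This prints 4200, 33600 and 11200 respectively, i.e. roughly 11 TB of usable space before any reserved-space settings.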
Hi,
I have installed the HDP 2.2 with Ambari 2.0.
I'm facing a big issue - I had kept the DataNode directories as:
/nsr/hadoop/hdfs/data,/opt/hadoop/hdfs/data,/usr/hadoop/hdfs/data,/usr/local/hadoop/hdfs/data,/var/hadoop/hdfs/data
Now, I want several disks to be mounted on each node and I
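Once the new disks are mounted, the change boils down to pointing dfs.datanode.data.dir at the new mount points, via Ambari's DataNode directories field (which writes hdfs-site.xml). A sketch with hypothetical mount points, not the ones from this cluster:

```xml
<!-- hdfs-site.xml fragment; /grid/0, /grid/1, /grid/2 are placeholder mounts -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/grid/0/hadoop/hdfs/data,/grid/1/hadoop/hdfs/data,/grid/2/hadoop/hdfs/data</value>
</property>
```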
Hi,
I'm trying to install an 8-node HDP cluster using Ambari 2.0.
In the 'Advanced Repository Option', I had kept the public repo URL, as all
hosts have Internet connectivity.
Two attempts failed, with several nodes reporting failures with the
same msg. in stderr:
Python script has been
While I didn't try any custom rpm, I got a similar error with the local
repo and did the following steps (you need to first establish whether the issue
is really with the custom rpm or elsewhere):
1. From the logs, find the exact command and the rpm it's failing on (in your case,
I guess it's
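The check in step 1 can be run by hand on a failing node to take Ambari out of the picture and see yum's real error (the package name below is an example taken from later in this thread):

```shell
# Re-run the failing install manually on the affected node.
yum clean all
yum repolist enabled          # confirm the HDP repo is actually visible
yum -y install hadoop_2_2_*-yarn
```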
I have HDP 2.2 running on 9 nodes, and the Ambari version is 1.7.0.
One of the machines is designated to run services like Nagios etc., but it also
has the Hive Metastore, HiveServer2, MySQL Server and WebHCat Server; however, the
Hive and HCat services are failing to start.
[root@l1041lab ~]#
/2.2.0.0/repodata/repomd.xml
Instructions for building the local repos:
http://docs.hortonworks.com/HDPDocuments/Ambari-1.7.0.0/AMBARI_DOC_SUITE/index.html#Item2.6
From: Joshi Omkar omkar.jo...@scania.com
Reply-To: user@ambari.apache.org
user
locate the packages but the .repo files look
right, and you have the repositories.
What happens if from command line you run:
yum -y install hadoop_2_2_*-yarn
From: Joshi Omkar omkar.jo...@scania.com
Reply-To: user@ambari.apache.org
user