Thanks, Mr. Chandrashekhar.
HDFS splits input data sets into blocks of 128 MB by default and
replicates each block with a default replication factor of 3. It also balances
load by transferring tasks from failed or busy nodes to free or active nodes.
Can we manage how much data and load should be assigned to which node?
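(Editor's note: for reference, the defaults mentioned above are per-cluster settings, overridable per file at write time; a minimal sketch of the relevant hdfs-site.xml properties:)

```xml
<!-- hdfs-site.xml: the two defaults referenced above -->
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value> <!-- 128 MB default block size -->
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value> <!-- default replication factor -->
</property>
```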
Hi
Can we connect C with HDFS using the Cloudera Hadoop distribution?
--
*Thanks & Regards *
*Unmesha Sreeveni U.B*
*Hadoop, Bigdata Developer*
*Centre for Cyber Security | Amrita Vishwa Vidyapeetham*
http://www.unmeshasreeveni.blogspot.in/
Google:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/LibHdfs.html
--
Alexander Alten-Lorenz
m: wget.n...@gmail.com
b: mapredit.blogspot.com
> On May 4, 2015, at 10:57 AM, unmesha sreeveni wrote:
>
> Hi
> Can we connect C with HDFS using the Cloudera Hadoop distribution?
Thanks, Alex.
I have gone through that page, but when I checked my Cloudera distribution
I was not able to find those folders. That's why I posted here. I don't know
if I made a mistake.
On Mon, May 4, 2015 at 2:40 PM, Alexander Alten-Lorenz
wrote:
> Google:
>
> http://hadoop.apache.org/docs/current
That depends on the installation source (rpm, tgz or parcels). Usually, when
you use parcels, libhdfs.so* should be within /opt/cloudera/parcels/CDH/lib64/
(or similar). Or just use the Linux "locate" command (locate libhdfs.so*) to
find the library.
--
Alexander Alten-Lorenz
m: wget.n...@gmail.com
b: mapredit.blogspot.com
Thanks
Did it.
http://unmeshasreeveni.blogspot.in/2015/05/hadoop-word-count-using-c-hadoop.html
On Mon, May 4, 2015 at 3:43 PM, Alexander Alten-Lorenz
wrote:
> That depends on the installation source (rpm, tgz or parcels). Usually,
> when you use parcels, libhdfs.so* should be within
> /opt/cloudera/parcels/CDH/lib64/ (or similar).
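(Editor's note: since this thread resolved around libhdfs, here is a minimal sketch of writing a file to HDFS from C. It is not runnable without a cluster: it assumes a working Hadoop/CDH install with hdfs.h on the include path, libhdfs.so on the link path, and the Hadoop jars on CLASSPATH. The path /tmp/testfile.txt is just an example.)

```c
/* Minimal libhdfs sketch: connect, write one line, disconnect.
 * Compile (paths are assumptions, adjust for your install):
 *   gcc hdfs_write.c -I$HADOOP_HOME/include \
 *       -L$HADOOP_HOME/lib/native -lhdfs -o hdfs_write
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>   /* O_WRONLY, O_CREAT for hdfsOpenFile flags */
#include "hdfs.h"

int main(void) {
    /* "default" picks up fs.defaultFS from the cluster configuration. */
    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs) {
        fprintf(stderr, "failed to connect to HDFS\n");
        return 1;
    }

    const char *path = "/tmp/testfile.txt";  /* example path */
    /* bufferSize, replication, blocksize of 0 mean "use defaults". */
    hdfsFile out = hdfsOpenFile(fs, path, O_WRONLY | O_CREAT, 0, 0, 0);
    if (!out) {
        fprintf(stderr, "failed to open %s for writing\n", path);
        hdfsDisconnect(fs);
        return 1;
    }

    const char *msg = "hello from libhdfs\n";
    hdfsWrite(fs, out, (void *)msg, strlen(msg));
    hdfsCloseFile(fs, out);
    hdfsDisconnect(fs);
    return 0;
}
```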
Hi
If your quoted fields may contain commas, you must use RegexSerDe to parse
each line into fields, with one regex capture group per column:
create table foo(c0 string, c1 string, c2 string, c3 string, c4 string, c5
string, c6 string, c7 string)
row format serde 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
with serdeproperties ("input.regex" = "...");
Hi,
when I execute
cat /proc/cpuinfo | grep ^processor | wc -l
I get 2. Do I need to specify this value in yarn.nodemanager.resource.cpu-vcores,
or is there some kind of ratio between pcores and vcores?
I found yarn.nodemanager.vcores-pcores-ratio, but it seems that it is
deprecated, since I cannot
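(Editor's note: a node advertises its vcore count to the ResourceManager via yarn-site.xml; a minimal sketch, assuming you want one vcore per physical core as counted above:)

```xml
<!-- yarn-site.xml on each NodeManager -->
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>2</value> <!-- match the physical core count; raise to oversubscribe -->
</property>
```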
I would also suggest taking a look at
https://issues.apache.org/jira/browse/HDFS-6994. I have been using libhdfs3
for POCs in the past few months, and I highly recommend it. The only drawback
is that libhdfs3 has not been formally committed into hadoop/hdfs yet.
If you only want to play with HDFS, using t
Follow-up: this is indeed a YARN bug and I've filed a JIRA, which has garnered
a lot of attention and a patch.
John
From: John Lilley [mailto:john.lil...@redpoint.net]
Sent: Friday, April 17, 2015 1:01 PM
To: 'user@hadoop.apache.org'
Subject: Error in YARN localization with Active Directory user