http://www.unmeshasreeveni.blogspot.in/2014/09/what-do-you-think-of-these-three.html
--
*Thanks & Regards*
*Unmesha Sreeveni U.B*
*Hadoop, Bigdata Developer*
*Center for Cyber Security | Amrita Vishwa Vidyapeetham*
http://www.unmeshasreeveni.blogspot.in/
Hi
For the 5th question, can it be Sqoop?
On Mon, Oct 6, 2014 at 1:24 PM, unmesha sreeveni unmeshab...@gmail.com
wrote:
Yes
On Mon, Oct 6, 2014 at 1:22 PM, Santosh Kumar skumar.bigd...@hotmail.com
wrote:
Are you preparing for the Cloudera certification exam?
Thanks and Regards,
Santosh
Hi
How much data is the wordcount job processing?
What is the disk space (df -h) available on the node where it always fails?
The point I don't understand is why it uses only one datanode's disk space.
For reducer tasks, containers can be allocated on any node. I
think, in your
For question 3 answer should be B and for question 4 answer should be D.
Thanks,
Adarsh D
Consultant - BigData and Cloud
http://in.linkedin.com/in/adarshdeshratnam
On Mon, Oct 6, 2014 at 2:25 PM, unmesha sreeveni unmeshab...@gmail.com
wrote:
Hi
What about the last one, the 5th? The answer is correct: Pig, isn't it?
On Mon, Oct 6, 2014 at 4:29 PM, adarsh deshratnam
adarsh.deshrat...@gmail.com wrote:
For question 3 answer should be B and for question 4 answer should be D.
Thanks,
Adarsh D
Consultant - BigData and Cloud
Does it work with a small table? I prefer to use hftp instead of webhdfs.
From: Brian Jeltema [mailto:brian.jelt...@digitalenvoy.net]
Sent: Friday, October 03, 2014 11:01 AM
To: user@hadoop.apache.org
Subject: ExportSnapshot webhdfs problems
I posted this on users@hbase,
All,
I have a small hadoop cluster (2.5.0) with 4 datanodes and 3 data disks
per node. Lately some of the volumes have been filling, but instead of
moving to other configured volumes that *have* free space, it's giving
errors in the datanode logs:
2014-10-03 11:52:44,989 ERROR
Thank you.
Hi Experts,
We have a use case which needs to log a user into Kerberized Hadoop
using the Kerberos user's name and password.
I have searched around and only found that
1) one can log in a user from the ticket cache (this is the default), or
2) log in a user from that user's keytab file, e.g.
You may find this approach interesting.
https://issues.apache.org/jira/browse/HADOOP-10342
The idea is that you pre-authenticate using JAAS/krb5 or something in your
application and then leverage the resulting Java Subject to assert the
authenticated identity.
On Mon, Oct 6, 2014 at 1:51 PM,
Hi List
I have a Hadoop 2.5 namenode communicating with a single datanode. When I
run start-hdfs.sh on the namenode, I see the datanode process initially
start up on the node, then fail with the following exception:
---
2014-10-06 21:12:39,835 FATAL
Hello,
I have 8 datanodes, each having a storage capacity of only 3GB. I am
running wordcount on a 1GB text file.
Initially df -h shows 2.8GB used after the HDFS write. When shuffling starts
it goes on consuming the disk space of only one node. I think it is the
reducer. Finally df -h shows
Hi Larry,
Thanks! This is exactly the approach I am looking for. Currently
I am using Hadoop 2.3.0; it seems this API,
UserGroupInformation.getUGIFromSubject(subject), is only available from
Hadoop 3.0.0, which is not released yet. So when can I expect
to get the downloadable for Hadoop
Well, it seems to be committed to branch-2 - so I assume it will make it
into the next 2.x release.
On Mon, Oct 6, 2014 at 2:51 PM, Xiaohua Chen xiaohua.c...@gmail.com wrote:
Hi Larry,
Thanks! This is exactly the approach I am looking for. Currently
I am using Hadoop 2.3.0; it seems this
Larry,
Thanks and you have a nice day!
Best regards,
Sophia
On Mon, Oct 6, 2014 at 12:08 PM, Larry McCay lmc...@hortonworks.com wrote:
Well, it seems to be committed to branch-2 - so I assume it will make it
into the next 2.x release.
On Mon, Oct 6, 2014 at 2:51 PM, Xiaohua Chen
Nice!
mapred.reduce.tasks affects the job (the group of tasks), so it should be
at least equal to mapred.tasktracker.reduce.tasks.maximum * number of
nodes.
With your setup you allow each of your 7 tasktrackers to launch 8
reducers (that would be 56), but you limit the total number of reducers
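The slot arithmetic above can be sketched as follows (the tasktracker and slot counts are the ones from this thread; the helper name is made up for illustration):

```python
# Total reduce slots available across an MRv1 cluster: each tasktracker
# can run at most mapred.tasktracker.reduce.tasks.maximum reducers at once.
def cluster_reduce_slots(num_tasktrackers, max_reduce_per_tt):
    return num_tasktrackers * max_reduce_per_tt

# 7 tasktrackers x 8 reduce slots each, as in the thread:
slots = cluster_reduce_slots(7, 8)
print(slots)  # 56

# mapred.reduce.tasks should be at least this value if the job is to
# keep every slot busy; setting it lower caps the job's parallelism.
```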
Hello
Did you check that you don't have job.setNumReduceTasks(1); in your job
driver?
And you should check the number of slots available on the jobtracker web
interface
Ulul
On 06/10/2014 20:34, Abdul Navaz wrote:
Hello,
I have 8 Datanodes and each having storage capacity of only 3GB. I
Hi,
I'm trying to figure out the ideal settings for using ext4 on
Hadoop cluster datanodes. The Hadoop site recommends choosing the nodelalloc
option in the fstab. Is that still a preferred option?
I read elsewhere to disable the ext4 journal and use data=writeback.
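For reference, a hedged sketch of what such an fstab entry might look like; the device, mount point, and exact option mix are placeholders to adapt, not a recommendation from this thread:

```
# Delayed allocation disabled, as the Hadoop docs suggest:
/dev/sdb1  /data/1  ext4  defaults,noatime,nodelalloc  0 0
# The journal-relaxed alternative mentioned above (weaker crash safety):
# /dev/sdb1  /data/1  ext4  defaults,noatime,data=writeback  0 0
```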
Hi
No, Pig is a data manipulation language for data already in Hadoop.
The question is about importing data from an OLTP DB (e.g. Oracle, MySQL...)
to Hadoop, which is what Sqoop is for (SQL to Hadoop).
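For illustration, a minimal Sqoop import might look like this; the connection string, credentials, table name, and paths are placeholders, not details from this thread:

```shell
# Pull one OLTP table from MySQL into HDFS (all values are placeholders).
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /user/etl/orders \
  --num-mappers 4
```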
I'm not certain certification guys are happy with their exam questions
ending up on blogs and
For filesystem creation, we use the following with mkfs.ext4
mkfs.ext4 -T largefile -m 1 -O dir_index,extent,sparse_super -L $HDFS_LABEL /dev/${DEV}1
By default, mkfs creates way too many inodes, so we tune it a bit with the
largefile option, which modifies the inode_ratio. This gives us ~2
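As a rough sketch of that inode math (assuming the stock mke2fs.conf ratio for -T largefile of one inode per 1 MiB; check /etc/mke2fs.conf on your distribution):

```python
# Approximate number of inodes mkfs will create for a given disk size.
LARGEFILE_INODE_RATIO = 1048576  # 1 MiB of disk per inode (-T largefile)

def inode_count(disk_bytes, inode_ratio=LARGEFILE_INODE_RATIO):
    return disk_bytes // inode_ratio

# A 2 TiB data disk:
print(inode_count(2 * 1024**4))  # 2097152, i.e. ~2M inodes
```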
I agree with the answers suggested above.
3. B
4. D
5. C
On Mon, Oct 6, 2014 at 2:58 PM, Ulul had...@ulul.org wrote:
Hi
No, Pig is a data manipulation language for data already in Hadoop.
The question is about importing data from OLTP DB (eg Oracle, MySQL...) to
Hadoop, this is what Sqoop
What I feel is that for question 5,
it says the weblogs are already in HDFS (so there is no need to import
anything). Also these are log files, NOT database files with a specific
schema. So I think
Pig is the best way to access and process this data.
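As a sketch of that approach, a small Pig script over weblogs already in HDFS might look like this; the path and field layout are invented for illustration:

```pig
-- Hypothetical weblog layout: ip, timestamp, url, tab-separated.
logs   = LOAD '/logs/weblogs' USING PigStorage('\t')
         AS (ip:chararray, ts:chararray, url:chararray);
by_url = GROUP logs BY url;
hits   = FOREACH by_url GENERATE group AS url, COUNT(logs) AS n;
STORE hits INTO '/logs/url_counts';
```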
On Tue, Oct 7, 2014 at 4:10 AM, Pradeep
Hi Pradeep
You are right. Updated the right answers in the blog.
This may help anyone thinking about investing in that particular test
package.
On Tue, Oct 7, 2014 at 9:25 AM, Pradeep Gollakota pradeep...@gmail.com
wrote:
That's not exactly what the question is asking for... It's saying that