You need to set up a local realm on your KDC (Linux) and run commands on
your Windows AD to add this realm as a trusted realm of your AD realm.
After this, you need to modify your /etc/krb5.conf to include this local realm
as a trusted realm of your AD realm.
And then you should be all set.
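For reference, a minimal sketch of what those /etc/krb5.conf additions could look like. The realm and KDC names follow the ksetup commands quoted later in this thread (HADOOP.REALM, mitkdc.hadoop.realm); the AD realm name and its KDC host (AD.REALM, addc.ad.realm) are placeholders:

    [realms]
        HADOOP.REALM = {
            kdc = mitkdc.hadoop.realm
        }
        AD.REALM = {
            kdc = addc.ad.realm
        }

    [capaths]
        AD.REALM = {
            HADOOP.REALM = .
        }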
Krb5 looks good.
Can you also share the commands you ran on your Windows AD?
On Jul 25, 2012, at 8:27 AM, Ivan Frain ivan.fr...@gmail.com wrote:
Thanks for your answer.
I think I already did what you propose. Some comments inline below.
2012/7/25 Mapred Learn
of the netdom trust).
- ksetup /addkdc HADOOP.REALM mitkdc.hadoop.realm
- ksetup /SetEncTypeAttr HADOOP.REALM RC4-HMAC-MD5
What do you think?
Hi Vikas,
One basic way would be to get the stdout after job submission and parse it to
find the job id.
Then run the kill command whenever you need it.
-JJ
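A rough sketch of that approach (the jar, class, and paths are placeholders; the pattern assumes job ids in the usual job_<timestamp>_<seq> form):

    # submit the job, capturing the job id printed during submission
    JOBID=$(hadoop jar myjob.jar MyMain input/ output/ 2>&1 \
        | grep -o 'job_[0-9]*_[0-9]*' | head -n 1)

    # later, kill it whenever needed
    hadoop job -kill "$JOBID"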
On Jun 27, 2012, at 7:43 AM, hadoop hadooph...@gmail.com wrote:
Hi Folks,
I am using a Java client to run queries on
Hi Yong,
Could you send steps to set it up on Mac?
Looks like there's no documentation about it.
On Fri, Jun 8, 2012 at 9:21 PM, Yongwei Xing jdxyw2...@gmail.com wrote:
Hello
I am trying to compile the Hadoop native library on Mac OS.
My Mac OS X is 10.7.4. My Hadoop is 1.0.3
I have
Yes, the user submitting a job needs to have an account on all the nodes.
On Jun 7, 2012, at 6:20 AM, Koert Kuipers ko...@tresata.com wrote:
With Kerberos enabled, a MapReduce job runs as the user that submitted it.
Does this mean the user that submitted the job needs to have
, Mapred Learn mapred.le...@gmail.com wrote:
Hi Harsh,
Could you show one sample of how to do this?
I have not seen/written any mapper code where people use a log4j logger or a
log4j file to set the log level.
Thanks in advance
-JJ
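A minimal sketch of one way to do it with the old (mapred) API; nothing here is from the thread, and the mapper.log.level property name is made up for illustration. The idea is to give the mapper its own log4j Logger and take its level from a job property, so debug logging can be toggled per job with -Dmapper.log.level=DEBUG instead of recompiling:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;
    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class MyMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

      private static final Logger LOG = Logger.getLogger(MyMapper.class);

      @Override
      public void configure(JobConf job) {
        // "mapper.log.level" is a made-up property name for this sketch
        LOG.setLevel(Level.toLevel(job.get("mapper.log.level", "INFO")));
      }

      public void map(LongWritable key, Text value,
          OutputCollector<Text, Text> out, Reporter reporter) throws IOException {
        LOG.debug("Processing record: " + value); // emitted only at DEBUG
        out.collect(new Text("k"), value);
      }
    }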
On Thu, May 3, 2012 at 4:32 PM, Harsh J ha
Check your number of blocks in the cluster.
This also indicates that your datanodes are fuller than they should be.
Try deleting unnecessary files to free up blocks.
On Fri, May 4, 2012 at 7:40 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
Please see:
config and wouldn't get affected by the toggling of (i).
The feature from (i) is already available in CDH 4.0.0-b2, btw.
On Fri, May 4, 2012 at 4:58 AM, Mapred Learn mapred.le...@gmail.com
wrote:
Hi Harsh,
Does doing (ii) mess up the Hadoop (i) level?
Or does it happen in both
();
logger.info("Exiting application.");
}
}
- and will also avoid
changing Hadoop's own Child log levels, unlike the (1) method.
On Fri, Apr 20, 2012 at 8:47 PM, Mapred Learn mapred.le...@gmail.com
wrote:
Hi,
I am trying to find out the best way to add debugging to MapReduce code.
I have System.out.println() statements that I keep
at 4:58 AM, Mapred Learn mapred.le...@gmail.com
wrote:
Hi Harsh,
Does doing (ii) mess up the Hadoop (i) level?
Or does it happen in both the options anyway?
Thanks,
-JJ
On Fri, Apr 20, 2012 at 8:28 AM, Harsh J ha...@cloudera.com wrote:
Yes this is possible, and there's
Hi,
I am trying to find out the best way to add debugging to MapReduce code.
I have System.out.println() statements that I keep commenting and
uncommenting so as not to increase the stdout size.
But the problem is that any time I need to debug, I have to re-compile.
Is there a way I can define log levels using log4j?
Do you have these ports open amongst the datanodes and the namenode?
On Mar 23, 2012, at 4:04 PM, Eric Schwartz sc...@csail.mit.edu wrote:
Howdy,
I'm working on getting a Kerberized Hadoop cluster up and running. I'm using
version 1.0.1, as packaged for 64-bit Debian
Did you try /usr/bin/hadoop instead of hadoop?
How many Java versions do you have on your box?
On Mar 16, 2012, at 10:53 AM, Sujit Dhamale sujitdhamal...@gmail.com wrote:
Hi,
I am not able to create a directory in the Hadoop file system;
while putting a file, only the directory is created.
You need to set ulimit -n to a bigger value on the datanodes and restart them.
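For example, assuming the datanode runs as the hdfs user, one common way is a line like this in /etc/security/limits.conf on each datanode (the 32768 value is illustrative), followed by a datanode restart:

    hdfs  -  nofile  32768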
On Jan 26, 2012, at 6:06 AM, Idris Ali psychid...@gmail.com wrote:
Hi Mark,
On a lighter note, what is the count of xceivers (the
dfs.datanode.max.xceivers property in hdfs-site.xml)?
Thanks,
-idris
Can you share your create table command?
On Jan 26, 2012, at 2:21 PM, rk vishu talk2had...@gmail.com wrote:
I did specify the first column in the table creation.
On Thu, Jan 26, 2012 at 2:15 PM, Mapred Learn mapred.le...@gmail.com wrote:
In your external table creation:
by '\n'
stored as sequencefile
location '/xyz/mytable/';
LOAD DATA INPATH '/tmp/mymapredout/part-*' INTO TABLE stg.my_tab;
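A complete version of that pattern might look like the following; the column list and the field delimiter are illustrative, not from the thread:

    create external table stg.my_tab (col1 string, col2 string)
    row format delimited
    fields terminated by '\t'
    lines terminated by '\n'
    stored as sequencefile
    location '/xyz/mytable/';

    LOAD DATA INPATH '/tmp/mymapredout/part-*' INTO TABLE stg.my_tab;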
This is my question too.
What if I want the output to be in the same order as the input, without using reducers?
Thanks,
JJ
On Jan 19, 2012, at 12:19 PM, Ronald Petty ronald.pe...@gmail.com wrote:
Daniel,
Can you provide a concrete example of what you mean by output to be in an
mapred.input.format.class = org.apache.hadoop.mapred.lib.CombineFileInputFormat
-D mapred.max.split.size=1073741824
-D mapred.reduce.tasks=0
Hope it helps!
Regards,
Bejoy.K.S
On Wed, Dec 21, 2011 at 7:15 AM, Mapred Learn mapred.le...@gmail.com
wrote:
Hi Shevek/others,
I tried this.
First job created about
you've paid that extra cost, you might as well reconsider
your downstream process and the reason for this subdivision.
S.
On 27 October 2011 23:07, Mapred Learn mapred.le...@gmail.com wrote:
Hi Shevek,
Thanks for the explanation!
Can you point me to some documentation for specifying
from the first job in chunks of X bytes, and just writes them out. Use an
IdentityMapper and set the split size. I have not tried this at home.
S.
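A sketch of that second pass with the new (mapreduce) API, assuming sequence-file input as discussed later in this thread; the class name, paths, key/value types, and the 1 GB figure are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

    public class Rechunk {
      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "rechunk");
        job.setJarByClass(Rechunk.class);
        job.setMapperClass(Mapper.class);     // stock Mapper is the identity
        job.setNumReduceTasks(0);             // map output goes straight to files
        job.setInputFormatClass(SequenceFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        job.setOutputKeyClass(Text.class);    // match the real key type
        job.setOutputValueClass(Text.class);  // match the real value type
        FileInputFormat.setMaxInputSplitSize(job, 1024L * 1024 * 1024); // ~1 GB
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }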
On 26 October 2011 07:03, Mapred Learn mapred.le...@gmail.com wrote:
Hi,
I am trying to create output files of fixed size by using:
-Dmapred.max.split.size=6442450812 (6 GB)
But the problem is that the input data size and metadata vary, and I have
to adjust the above value manually to achieve a fixed size.
Is there a way I can programmatically
Hi,
I have the same question regarding the documentation, and:
is there something like this for memory and CPU utilization also?
Thanks,
JJ
On Oct 19, 2011, at 5:00 PM, Rajiv Chittajallu raj...@yahoo-inc.com wrote:
ivan.nov...@emc.com wrote on 10/18/11 at 09:23:50 -0700:
What is mapred.child.java.opts set to in your server (tasktracker)
configuration?
You need to set this to a bigger value, like 1 GB or so.
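For example, in mapred-site.xml on the tasktrackers (the 1 GB value is illustrative):

    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx1024m</value>
    </property>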
On Sep 19, 2011, at 5:43 AM, john smith js1987.sm...@gmail.com wrote:
Hey guys,
I am running Hive and I am trying to join two
Did you try using the -files option in your hadoop jar command, as:
/usr/bin/hadoop jar <jar name> <main class name> -files <absolute path of
file to be added to distributed cache> <input dir> <output dir>
On Fri, Jul 29, 2011 at 11:05 AM, Roger Chen rogc...@ucdavis.edu wrote:
Slight modification: I now
OK, for accessing it in mapper code, you can do something like:
On Fri, Jul 29, 2011 at 11:09 AM, Mapred Learn mapred.le...@gmail.com wrote:
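The sample that followed did not survive the digest, so here is a plausible sketch rather than the original. With -files, the file is symlinked into each task's working directory, so the mapper can open it by its bare name; lookup.txt and the join logic are placeholders:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class CacheAwareMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

      private final Map<String, String> lookup = new HashMap<String, String>();

      @Override
      public void configure(JobConf job) {
        // -files symlinks the file into the task's working directory,
        // so it can be opened by its bare name
        try {
          BufferedReader in = new BufferedReader(new FileReader("lookup.txt"));
          String line;
          while ((line = in.readLine()) != null) {
            String[] parts = line.split("\t", 2);
            if (parts.length == 2) lookup.put(parts[0], parts[1]);
          }
          in.close();
        } catch (IOException e) {
          throw new RuntimeException("Failed to read cached file", e);
        }
      }

      public void map(LongWritable key, Text value,
          OutputCollector<Text, Text> out, Reporter reporter) throws IOException {
        String extra = lookup.get(value.toString()); // join against cached file
        out.collect(new Text(value), new Text(extra == null ? "" : extra));
      }
    }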
I hope my previous reply helps...
On Fri, Jul 29, 2011 at 11:11 AM, Roger Chen rogc...@ucdavis.edu wrote:
After moving it to the distributed cache, how would I call it within my
MapReduce program?
On Fri, Jul 29, 2011 at 11:09 AM, Mapred Learn mapred.le...@gmail.com
wrote:
Did you try
Did you try playing with mapred.child.ulimit along with java.opts?
On Jun 18, 2011, at 9:55 AM, Ken Williams zoo9...@hotmail.com wrote:
Hi All,
I'm having a problem running a job on Hadoop. Using Mahout, I've been able to
run several Bayesian classifiers and train and
Try using the complete path to where your hadoop binary is present, e.g.
/usr/bin/hadoop instead of hadoop...
On Tue, May 31, 2011 at 3:56 PM, neeral beladia neeral_bela...@yahoo.com wrote:
Hi,
I am not sure if this question has been asked. It's more of a hadoop fs
question. I am trying to
Oops, reading again: the command is working.
What is the exact string that you have in cmdStr?
On Tue, May 31, 2011 at 4:51 PM, Mapred Learn mapred.le...@gmail.com wrote:
Try using the complete path to where your hadoop binary is present, e.g.
/usr/bin/hadoop instead of hadoop...
On Tue
-copyFromLocal $FILE $DEST_PATH
done
If doing this via the Java API, then, yes you will have to use multiple
threads.
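A sketch of that multi-threaded Java-API route; the class name, the fixed pool size, and the argument layout (destination dir first, then local files) are assumptions:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ParallelUpload {
      public static void main(String[] args) throws Exception {
        final FileSystem fs = FileSystem.get(new Configuration());
        final Path dest = new Path(args[0]);        // HDFS destination dir
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 1; i < args.length; i++) {     // remaining args: local files
          final Path src = new Path(args[i]);
          pool.submit(new Runnable() {
            public void run() {
              try {
                fs.copyFromLocalFile(src, dest);    // one file per thread
              } catch (Exception e) {
                e.printStackTrace();
              }
            }
          });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
      }
    }

Note that each individual file still goes over the write pipeline serially; the threads only overlap uploads of different files.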
On Wed, May 18, 2011 at 1:04 AM, Mapred Learn mapred.le...@gmail.com
wrote:
Thanks Harsh!
That means basically both the APIs as well as the hadoop client commands allow
only serial
Do you have the right permissions on the new dirs?
Try stopping and starting the cluster...
-JJ
On May 24, 2011, at 9:13 PM, Mark question markq2...@gmail.com wrote:
Well, you're right ... moving it to hdfs-site.xml had an effect at least.
But now I'm hitting the namespace incompatible error:
WARN
ha...@cloudera.com wrote:
Hello,
Adding to Joey's response, copyFromLocal's current implementation is serial
given a list of files.
On Wed, May 18, 2011 at 9:57 AM, Mapred Learn mapred.le...@gmail.com
wrote:
Thanks Joey!
I will try to find out about copyFromLocal. Looks like the Hadoop APIs
Hi,
My question is: when I run a command from an hdfs client, e.g. hadoop fs
-copyFromLocal, or create a sequence file writer in Java code and append
key/values to it through the Hadoop APIs, does it internally transfer/write data
to HDFS serially or in parallel?
Thanks in advance,
-JJ
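For reference, a minimal sketch of the sequence-file-writer path mentioned above; the output path and the Text/Text key/value types are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SeqWrite {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Writer writer = SequenceFile.createWriter(
            fs, conf, new Path("/tmp/out.seq"), Text.class, Text.class);
        try {
          writer.append(new Text("key1"), new Text("value1")); // one record
        } finally {
          writer.close();
        }
      }
    }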
of a file in Hadoop.
Doing copyFromLocal could write multiple files in parallel (I'm not
sure if it does or not), but a single file would be written serially.
-Joey
Hi,
I get an error like:
java.lang.NullPointerException
at org.apache.hadoop.io.serializer.SerializationFactory.getSerializer(SerializationFactory.java:73)
at
Hi,
I have a use case to upload gzipped text files of sizes ranging from 10-30
GB to HDFS.
We have decided on the sequence file format as the format on HDFS.
I have some doubts/questions regarding it:
i) What should be the optimal size for a sequence file, considering the input
text files range from 10-30