org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(String,
String), as suggested in one of the trailing mails.
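For anyone searching the archives, a minimal sketch of the keytab login (the
principal and keytab path below are placeholders; substitute your own):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLogin {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The UGI must see Kerberos enabled in its configuration.
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    // Placeholder principal and keytab path.
    UserGroupInformation.loginUserFromKeytab(
        "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab");
    System.out.println("Logged in as: "
        + UserGroupInformation.getLoginUser().getUserName());
  }
}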
Cheers,
Subroto Sanyal
On Mon, Nov 2, 2015 at 3:13 PM, Vishwakarma, Chhaya <
chhaya.vishwaka...@thinkbiganalytics.com> wrote:
> Code is successfully authenticating to Kerberos but when I try to run any
> hdfs command I
Hi Matthew
You can check if you are hitting into:
https://issues.apache.org/jira/browse/HADOOP-10786
Cheers,
Subroto Sanyal
On Thu, Oct 8, 2015 at 5:11 PM, Matthew Bruce wrote:
> Hello Hadoop Users,
>
>
>
> We have been doing java upgrade testing in one of our Hadoop lab
>
Hello,
While running a lot of jobs on a YARN cluster, I noticed the following, which
looked a little unusual to me:
[screenshot: ResourceManager UI showing VCores Used greater than VCores Total]
VCores Used > VCores Total
The Hadoop version used here is: 2.6.0.2.2.0.0-2041
Is it a bug in YARN (scheduler/UI)?
Cheers,
Subroto Sanyal
Send a mail to user-unsubscr...@hadoop.apache.org
Cheers,
Subroto Sanyal
On 19 Aug 2014, at 13:40, Vasantha Kumar Kannaki Kaliappan
wrote:
> unsubscribe
Hi Susheel,
Thanks for your input. I did build the libs for 64-bit, but the problem was
still there.
The problem is resolved now, though.
I had to configure the property: yarn.app.mapreduce.am.env
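For reference, a minimal sketch of setting it from the job driver (the
LD_LIBRARY_PATH value is an example; point it at your own native lib folder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = new Configuration();
// Let the MR ApplicationMaster find the native libraries.
// Example path only; use your own native lib folder.
conf.set("yarn.app.mapreduce.am.env",
    "LD_LIBRARY_PATH=/opt/hadoop/lib/native");
Job job = Job.getInstance(conf, "native-lib-test");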
Cheers,
Subroto Sanyal
On 13 Aug 2014, at 10:39, Susheel Kumar Gadalay wrote:
> This messag
comes up when starting any Hadoop daemons.
Do I need to pass any specific configuration so that the child JVM is able to
pick up the native lib folder?
Cheers,
Subroto Sanyal
ption)
5) ConnectTimeoutException
Ideally, the code should check the wrapped exception, or the API should throw
IOExceptions like the others do.
Is this bug already known to the community?
Do we have a workaround for this?
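For illustration, the kind of caller-side check I mean (a sketch; the
getFileStatus call and the path are just examples of an operation that may
time out):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.net.ConnectTimeoutException;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
Path path = new Path("/tmp/example");  // example path
try {
  fs.getFileStatus(path);  // example call that may time out
} catch (IOException e) {
  Throwable cause = e.getCause();
  if (e instanceof ConnectTimeoutException
      || cause instanceof ConnectTimeoutException) {
    // Handle or retry the timeout instead of failing on the wrapper.
  } else {
    throw e;
  }
}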
Cheers,
Subroto Sanyal
:-P
On Dec 9, 2013, at 7:13 AM, Harsh J wrote:
> Wrong list, dial 911 (or whatever your country uses)?
>
> On Mon, Dec 9, 2013 at 9:25 AM, busybody wrote:
>> Help
>
>
>
> --
> Harsh J
Hi Ranjini,
A good example to look into:
http://www.undercloud.org/?p=408
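A rough sketch of the mapper side (assuming an XML-aware InputFormat like the
one in the link hands each <record>...</record> chunk to the mapper; the tag
names and column family are made up for illustration):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class XmlToHBaseMapper
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    String xml = value.toString();
    // Naive extraction of two hypothetical fields from the record.
    String id = extract(xml, "id");
    String name = extract(xml, "name");
    Put put = new Put(Bytes.toBytes(id));
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("name"), Bytes.toBytes(name));
    context.write(new ImmutableBytesWritable(Bytes.toBytes(id)), put);
  }

  private String extract(String xml, String tag) {
    int start = xml.indexOf("<" + tag + ">") + tag.length() + 2;
    int end = xml.indexOf("</" + tag + ">");
    return xml.substring(start, end);
  }
}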
Cheers,
Subroto Sanyal
On Dec 6, 2013, at 12:02 PM, Ranjini Rathinam wrote:
> Hi,
>
> How to read xml file via mapreduce and load them in hbase and hive using java.
>
> Please provide sample code.
http://wiki.apache.org/hadoop/HowManyMapsAndReduces
Running mappers and reducers in parallel depends on the availability of map
and reduce slots in the cluster.
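As a quick sketch: the number of mappers is driven by the input splits, while
the number of reducers can be set explicitly (the job name and count below are
just examples):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "parallel-example");
// Mappers follow the input splits; reducers are set explicitly.
job.setNumReduceTasks(4);  // example count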
Cheers,
Subroto Sanyal
On Dec 6, 2013, at 12:00 PM, Ranjini Rathinam wrote:
> HI,
>
> How to run more than one mapper and reduce parallelly.?
>
> Please
Hi Shalish,
The client-side conf will take precedence. Further, you can use the FileSystem
API, which can set the block size:
create(Path f, boolean overwrite, int bufferSize, short replication, long blockSize)
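A minimal sketch of that overload (the path, buffer size, replication, and
block size below are example values):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
// 4 KB buffer, replication 3, 128 MB block size (example values).
FSDataOutputStream out = fs.create(
    new Path("/tmp/example.dat"), true, 4096, (short) 3,
    128L * 1024 * 1024);
out.close();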
Cheers,
Subroto Sanyal
On Jul 13, 2013, at 9:10 PM, Shalish VJ wrote:
>
reated with JobConf with which the Job is instantiated.
The same context is being used to call OutputCommitter.setupJob.
Please let me know whether this is a bug or there is some specific intention
behind this.
Cheers,
Subroto Sanyal
Hi Vinod,
Thanks for the suggestion. I have raised the issue:
https://issues.apache.org/jira/browse/MAPREDUCE-5258
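In the meantime, the workaround looks like this in my test teardown (a sketch,
assuming the Hadoop 2 metrics2 API):

import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

// Tear down the singleton between runs so its callback list
// does not keep growing (a workaround, not a fix).
DefaultMetricsSystem.shutdown();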
Cheers,
Subroto Sanyal
On May 17, 2013, at 7:33 PM, Vinod Kumar Vavilapalli wrote:
>
> It's inconvenient, but you can try DefaultMetricsSystem.shutDown(). Please
> see if
creates JobTrackerInstrumentation
and QueueMetrics.
While creating this MetricsSystem, it registers and adds a callback to an
ArrayList, which keeps on growing since the DefaultMetricsSystem is a
singleton. Is this a known bug? Is there any workaround to empty this list?
Cheers,
Subroto Sanyal
Hi Brahma,
I am not very sure about the problem, but the following link may shed some light:
http://stackoverflow.com/questions/8509087/checksum-failed-kerberos-spring-active-directory-2008/13859217#13859217
Cheers,
Subroto Sanyal
On May 8, 2013, at 6:43 AM, Brahma Reddy Battula wrote:
> Caused
Hi George,
Tried as per your suggestion:
hadoop fs -cp "s3n://acessKey:acesssec...@dm.test.bucket/srcData/"
/test/srcData/
Still facing the same problem :-( :
cp: java.io.IOException: mkdirs: Pathname too long. Limit 8000 characters,
1000 levels.
Cheers,
Subroto Sanyal
On Mar 7
org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:908)
One more interesting thing to notice is that the same thing works nicely with
Hadoop 2.0.
Cheers,
Subroto Sanyal
On Mar 6, 2013, at 11:12 AM, Michel Segel wrote:
> Have you tried using dis
Hi Shashwat,
As already mentioned in my mail, setting dfs.client.use.legacy.blockreader to
true fixes the problem.
This looks to be a workaround, or rather disabling a feature.
I would like to know what the exact problem is.
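For anyone hitting the same thing, the client-side workaround looks like this
(a sketch; it falls back to the legacy block reader rather than fixing the
underlying issue):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
// Workaround: fall back to the legacy block reader.
conf.setBoolean("dfs.client.use.legacy.blockreader", true);
FileSystem fs = FileSystem.get(conf);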
Cheers,
Subroto Sanyal
On Mar 5, 2013, at 6:33 PM, shashwat
Hi Julian,
This is from CDH 4.1.2, and I think it's based on Apache Hadoop 2.0.
Cheers,
Subroto Sanyal
On Mar 5, 2013, at 3:50 PM, wrote:
> Hi,
> Which revision of hadoop?
> and what's the situation to report the Exception?
> BRs//Julian
>
>
/srcData (keep
on appending srcData).
Cheers,
Subroto Sanyal
On Mar 5, 2013, at 3:32 PM, wrote:
> Hi Subroto,
>
> I didn't use the s3n filesystem.But from the output "cp:
> java.io.IOException: mkdirs: Pathname too long. Limit 8000 characters, 1000
>
drwxr-xr-x - root supergroup 0 2013-03-05 08:49
/test/srcData/srcData/srcData/srcData/srcData
drwxr-xr-x - root supergroup 0 2013-03-05 08:49
/test/srcData/srcData/srcData/srcData/srcData/srcData
Is there a problem with the s3n filesystem?
Cheers,
Subroto Sanyal
hadoop?
Is there a JIRA ticket for this?
Cheers,
Subroto Sanyal
Hi,
I have a MapReduce job which does some operations reading from HBase tables.
I have configured the cluster in secure mode, including secure HBase.
I am running the job (a classical MR job) from a custom client running under
user "subroto".
The mentioned user has valid pr