I did discuss this with the product owner. It seems it is a business
requirement, and we cannot question the customer on whether their need is
valid (or whether it is legal or ethical).
I was wondering (on a technical level!): can't we incorporate a change in
Hadoop whereby it allows the parent
IMHO, you should discuss this with your product owner / business customer,
since this relates to the requirement. This forum is for purely technical
questions, although I agree that as human beings we need to be sensitive
about these.
You can also try posting to Quora and see what folks think.
Cheers!
On
That's it! Thanks.
John Lilley
From: Chris Nauroth [mailto:cnaur...@hortonworks.com]
Sent: Tuesday, May 24, 2016 10:24 AM
To: John Lilley ; 'user@hadoop.apache.org'
Subject: Re: Filing a JIRA
Something is definitely odd about the UI there.
Hi
I am working on running a long-lived app on a secure YARN cluster. After
some reading in this domain, I want to make sure my understanding of the
life cycle of an app on a Kerberos-enabled YARN cluster is correct, as below.
1. The client runs kinit to log in to the KDC and adds the HDFS delegation tokens to the
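A minimal sketch of that first step, as I understand the usual flow (the principal, keytab path, and file names below are illustrative assumptions, not taken from this thread):

```shell
# Hypothetical client-side flow for a long-lived app on a secure YARN cluster.
# Principal, keytab, and paths are illustrative placeholders.
kinit -kt /etc/security/keytabs/app.keytab app@EXAMPLE.COM

# Fetch an HDFS delegation token (renewable by YARN) into a local file:
hdfs fetchdt --renewer yarn /tmp/app.dt

# Submit the application; the delegation tokens travel with the
# application's credentials to the ApplicationMaster and containers:
yarn jar my-app.jar com.example.AppClient
```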
Hmm... I understand what you are saying, Chris :)
However, it is like that. You are building a knife (Hadoop in this case),
and you would know that someone could be killed by it. Would you build the
knife at all? (So it is about Hadoop also in some sense. But as pointed out
by you, it is not a
Hello Kumar,
I answered at the Stack Overflow link. I'll repeat the same information here
for everyone's benefit.
HDFS implements the POSIX ACL model [1]. The linked documentation explains
that the mask entry is persisted into the group permission bits of the classic
POSIX permission model.
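A hypothetical illustration of that mask behavior (the path and user name are mine, not from the thread):

```shell
# Grant a named user rwx, then restrict the ACL mask to r-x.
hdfs dfs -setfacl -m user:alice:rwx,mask::r-x /data/report

# getfacl shows the mask and the resulting effective permissions,
# e.g. the named-user entry reports "#effective:r-x" alongside "mask::r-x".
hdfs dfs -getfacl /data/report

# In 'ls' output, the mask occupies the classic group permission bits,
# so the file shows group r-x even though the POSIX group entry may differ.
hdfs dfs -ls /data/report
```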
There is also some discussion on that JIRA considering a checksum strategy
independent of block size. I don't think anything was ever implemented
though, and there would be some drawbacks to that approach. Sorry if this
caused confusion.
--Chris Nauroth
On 5/24/16, 9:55 AM, "Dmitry
Hello Deepak,
This is a fascinating question, but it's not the right forum. This list is for
questions on usage of Apache Hadoop. I don't see anything about Hadoop in your
question. Even if the software in question involves Hadoop, it's not a
question about usage. Unfortunately, I'm not
> On 24 May 2016, at 19:53, Chris Nauroth wrote:
>
> Hello Dmitry,
>
> To clarify, the intent of MAPREDUCE-5065 was to message the user that
> using different block sizes on source and destination might cause a
> failure due to checksum mismatch. The message to the user
Hello Dmitry,
To clarify, the intent of MAPREDUCE-5065 was to message the user that
using different block sizes on source and destination might cause a
failure due to checksum mismatch. The message to the user recommends either
the -pb (preserve block size) or -skipCrc (skip checksum validation) as
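For reference, a sketch of those two workarounds as DistCp invocations (cluster addresses are placeholders; note the actual flag is spelled -skipcrccheck on the command line):

```shell
# Preserve the source block size so destination checksums match:
hadoop distcp -pb hdfs://src-nn:8020/data hdfs://dst-nn:8020/data

# Or skip CRC validation entirely (DistCp requires -update with this flag):
hadoop distcp -update -skipcrccheck hdfs://src-nn:8020/data hdfs://dst-nn:8020/data
```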
Something is definitely odd about the UI there. From your second link, can you
try clicking directly on the "Create" button (not the drop-down arrow leading
to "Create Detailed")?
--Chris Nauroth
From: John Lilley
Date: Tuesday, May
I'm still confused. When I go to that link, it redirects to:
https://issues.apache.org/jira/servicedesk/agent/INFRA/queues
When I file a bug using that link, it attempts to file it against Atlas, not
Hadoop.
If I select project "HADOOP" from the drop-down, it takes me to here:
Well, the bank gives out loans to corporate customers and gets a higher rate
of return. However, there is a higher risk to it, and everyone knows about
it in advance. Earlier, the savings and corporate accounts were separate
and not linked to each other (they were separate islands). Now the bank
with
Just out of curiosity, when you say "however it is not completely ethical
to link the corporate account to the savings account of the customer
company", where are you getting the idea that it is not ethical? I am not
saying I think it is or isn't, but I am more curious as to how you arrived
at
It sounds very unethical; if I had the choice, I wouldn't do it. Consider,
however, that if you don't, then someone else with lower moral standards
will do it.
On Tue, May 24, 2016 at 1:59 AM, Deepak Goel wrote:
>
> Hey
>
> Namaskara~Nalama~Guten Tag~Bonjour
>
> (Sorry, as this might not
Hi, Harsh.
The "dfs.namenode.edits.dir" from "hdfs getconf" is
"file:///data1/hadoop/name", the same as "dfs.namenode.name.dir".
However, in the NameNode web UI ("namenode:8088/conf"), neither
"dfs.namenode.edits.dir" nor "dfs.namenode.name.dir" is set.
In both sources, the "hadoop.tmp.dir" is
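A quick way to cross-check the two views (assumes a running cluster; the host name is a placeholder, and I am using the /conf URL as reported above):

```shell
# What the client-side configuration resolves to:
hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.namenode.edits.dir

# Compare with the live daemon's view of its configuration (XML):
curl -s http://namenode:8088/conf | grep -A1 'dfs.namenode.name.dir'
```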