The data is decrypted on the client side after obtaining the DEK from the KMS, *not*
decrypted by the DN.
Right, currently the DEK should be protected by HTTPS on the wire.
If you want to confirm the file is encrypted, one way is to inspect the content of
the file blocks directly.
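A minimal way to run that check, assuming a cluster with HDFS transparent encryption enabled and a KMS key named mykey already created (the zone path, key name, and file names are only placeholders):

```shell
# Create an encryption zone and write a file into it
hdfs crypto -createZone -keyName mykey -path /secure
hdfs dfs -put plain.txt /secure/plain.txt

# Reading through HDFS returns plaintext (decryption happens client side)
hdfs dfs -cat /secure/plain.txt

# Find the block locations, then look at the raw block file on a DataNode:
# its bytes should be ciphertext, not the original text
hdfs fsck /secure/plain.txt -files -blocks -locations
```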
Regards,
Yi Liu
From: Rajesh Kartha
it means that there is no patch
for Hadoop 2.2.0 to use by now, am I right?
2014-11-18 14:48 GMT+08:00 Liu, Yi A
yi.a@intel.com:
HDFS-RAID is not maintained anymore and has been removed from Hadoop. Now we
are redesigning erasure coding in HDFS; refer to
https://issues.apache.org/jira/browse/HDFS-7285
Regards,
Yi Liu
From: Vincent,Wei [mailto:weikun0...@gmail.com]
Sent: Tuesday, November 18, 2014 1:55 PM
To:
It’s still random.
If rack awareness is not enabled, all nodes are in the “default-rack”, and random nodes
are chosen for block replication.
Regards,
Yi Liu
From: SF Hadoop [mailto:sfhad...@gmail.com]
Sent: Friday, October 03, 2014 7:12 AM
To: user@hadoop.apache.org
Subject: Block placement without
You should set hadoop.security.authentication to kerberos in your
core-site.xml. Please refer to
http://hadoop.apache.org/docs/r2.5.1/hadoop-project-dist/hadoop-common/SecureMode.html
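The relevant core-site.xml fragment might look like this (a sketch; the value must be the lowercase string kerberos, and hadoop.security.authorization is shown only as a commonly paired setting):

```xml
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```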
Regards,
Yi Liu
-Original Message-
From: Xiaohua Chen [mailto:xiaohua.c...@gmail.com]
Sent:
You need to authenticate using “kinit” (on Linux) as the user you want to
use. Then you can run your application.
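For example (a sketch; the principal and paths are placeholders for your own realm and user):

```shell
# Obtain a Kerberos ticket for the user
kinit alice@EXAMPLE.COM

# Verify the ticket cache
klist

# Hadoop commands now run as the authenticated user
hdfs dfs -ls /user/alice
```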
For more information, please refer to:
http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/SecureMode.html
Regards,
Yi Liu
From: ch huang
Change Hadoop version : mvn versions:set -DnewVersion=NEWVERSION
Regards,
Yi Liu
From: arthur.hk.c...@gmail.com [mailto:arthur.hk.c...@gmail.com]
Sent: Sunday, September 14, 2014 1:51 PM
To: user@hadoop.apache.org
Cc: arthur.hk.c...@gmail.com
Subject: Re: Hadoop 2.4.1 Compilation, How to
1. If your old cluster is still alive, you can build a new cluster and
use distcp to migrate all data to it.
2. Supposing the master node you mean is the NameNode, the second approach
requires you to copy the fsimage/edit logs from the master node; I think it doesn’t work
for
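The distcp step in option 1 could look like this (a sketch; the NameNode addresses and paths are placeholders):

```shell
# Copy everything under /data from the old cluster to the new one,
# preserving status attributes such as replication and permissions (-p)
hadoop distcp -p hdfs://old-nn:8020/data hdfs://new-nn:8020/data
```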
Hi Zesheng,
I learned from an offline email of yours that your Hadoop version was 2.0.0-alpha,
and you also said “The block is allocated successfully in the NN, but isn’t created
in the DN”.
Yes, we may have this issue in 2.0.0-alpha. I suspect your issue is similar
to HDFS-4516. And can you try
much!
2014-09-10 16:39 GMT+08:00 Zesheng Wu
wuzeshen...@gmail.com:
Thanks Yi, I will look into HDFS-4516.
2014-09-10 15:03 GMT+08:00 Liu, Yi A
yi.a@intel.com:
You can refer to the header file “src/main/native/libhdfs/hdfs.h” to
see the APIs in detail.
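The libhdfs calls declared in hdfs.h can be used roughly like this (a sketch, assuming libhdfs is installed and a NameNode is reachable at localhost:8020; error handling is abbreviated):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include "hdfs.h"   /* src/main/native/libhdfs/hdfs.h */

int main(void) {
    /* Connect to the NameNode */
    hdfsFS fs = hdfsConnect("localhost", 8020);
    if (fs == NULL) {
        fprintf(stderr, "failed to connect\n");
        return 1;
    }

    /* Create a file and write to it */
    hdfsFile out = hdfsOpenFile(fs, "/tmp/libhdfs-demo.txt",
                                O_WRONLY | O_CREAT, 0, 0, 0);
    const char *msg = "hello from libhdfs\n";
    hdfsWrite(fs, out, msg, strlen(msg));
    hdfsCloseFile(fs, out);

    hdfsDisconnect(fs);
    return 0;
}
```

Each libhdfs function mirrors a Java FileSystem operation (hdfsOpenFile with O_WRONLY ≈ FileSystem#create, hdfsWrite ≈ OutputStream#write), which is how the two APIs line up.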
Regards,
Yi Liu
From: Demai Ni [mailto:nid...@gmail.com]
Sent: Thursday, September 04, 2014 5:21 AM
To: user@hadoop.apache.org
Subject: question about matching java API with libHDFS
hi,
Right, please use FileSystem#append
From: Stanley Shi [mailto:s...@pivotal.io]
Sent: Thursday, August 28, 2014 2:18 PM
To: user@hadoop.apache.org
Subject: Re: Appending to HDFS file
You should not use this method:
FSDataOutputStream fp = fs.create(pt, true)
Here's the java doc for this create
For #1, since you still have 2 datanodes alive and the replication factor is 2,
writes will succeed. (Reads will succeed too.)
For #2, now you only have 1 datanode while the replication factor is 2, so the initial
write will succeed, but at some later point pipeline recovery will fail.
Regards,
Yi Liu
Currently Hadoop doesn't officially support Java 8.
Regards,
Yi Liu
From: Ruebenacker, Oliver A [mailto:oliver.ruebenac...@altisource.com]
Sent: Thursday, August 28, 2014 8:46 PM
To: user@hadoop.apache.org
Subject: Hadoop on Windows 8 with Java 8
Hello,
I can't find any information on
The escape character is \, but please enclose the path in single quotes.
For example, /foo/{123} should be '/foo/\{123\}'.
Regards,
Yi Liu
From: varun kumar [mailto:varun@gmail.com]
Sent: Thursday, August 21, 2014 2:21 PM
To: user; praveen...@gmail.com
Subject: Re: Delete a folder name containing *
Make
Which version are you using?
Regards,
Yi Liu
-Original Message-
From: norbi [mailto:no...@rocknob.de]
Sent: Wednesday, August 20, 2014 10:14 PM
To: user@hadoop.apache.org
Subject: hdfs dfsclient, possible to force storage datanode ?
hi list,
we have 52 DNs and more than a hundred clients
Yes, that’s the correct behavior.
You removed the file, but the snapshot is still there and holds a reference to that
file, so the blocks will not be removed. Only after you delete all snapshots and the
original file are the blocks removed.
Keep in mind that blocks on datanodes are not copied for snapshots.
Regards,
Hi Chhaya,
I have looked into Kerberos but it doesn't provide encryption for data
already residing in HDFS.
For encryption of data, I suppose you mean data-at-rest encryption (*not*
encryption of data in transport, which is already supported); this feature is
still under development and close
@hadoop.apache.org
Subject: RE: Implementing security in hadoop
Yeah, you are right, I need encryption for data at rest
From: Liu, Yi A [mailto:yi.a@intel.com]
Sent: Wednesday, August 13, 2014 12:46 PM
To: user@hadoop.apache.org
Subject: RE: Implementing security in hadoop
Hi
Sure, you can do it. Please follow the quick start:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html
Regards,
Yi Liu
From: harish tangella [mailto:harish.tange...@gmail.com]
Sent: Wednesday, August 13, 2014 8:34 PM
To: user@hadoop.apache.org
Subject:
issuer doesn't exist
Are there any other security tools for the same?
From: Liu, Yi A [mailto:yi.a@intel.com]
Sent: Wednesday, August 13, 2014 1:57 PM
To: user@hadoop.apache.org
Subject: RE: Implementing security in hadoop
OK, it's still under development
(https
On Jul 21, 2014, at 9:23 PM, Liu, Yi A
yi.a@intel.com wrote:
Regards,
Yi Liu