I was trying out the transparent data-at-rest encryption and was able to
set up the KMS, encryption zones, etc. and add
files to the zone.
How do I confirm that the files I added to the encryption zone are
encrypted? Is there a way to view
the raw file? An hdfs dfs -cat shows me the actual contents of the file.
You can try looking at it with a user who doesn't have permission to the
folder. An alternative is to find which block backs it on Linux and look at
the block file using cat from a Linux shell.
Olivier
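A sketch of that second approach (the file path, block-file pattern, and DataNode data directory below are illustrative placeholders, not details from this thread):

```shell
# Sketch: inspect the raw on-disk block behind a file in an encryption zone.
# Assumes an encryption zone containing /ez/file.txt and shell access to a
# DataNode; the data directory path is illustrative.
if command -v hdfs >/dev/null; then
  # 1. Find the block ID backing the file (look for blk_... in the output).
  hdfs fsck /ez/file.txt -files -blocks -locations
else
  echo "hdfs not on PATH; skipping cluster step"
fi

# 2. On the DataNode, locate the block file and dump its first bytes.
#    For an encrypted file these bytes will not match the plaintext.
DATA_DIR=/hadoop/hdfs/data
if [ -d "$DATA_DIR" ]; then
  find "$DATA_DIR" -name 'blk_*' -not -name '*.meta' \
    -exec sh -c 'echo "== $1 =="; head -c 32 "$1" | od -An -tx1' _ {} \;
else
  echo "no local DataNode data dir at $DATA_DIR; skipping"
fi
```

If the dumped bytes are unreadable gibberish while hdfs dfs -cat still shows plaintext, the file is encrypted at rest and decrypted client-side.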
From: Rajesh Kartha <karth...@gmail.com>
Reply-To:
How does one go about changing the log verbosity in Hadoop? What
configuration file should I be looking at?
--
Regards,
Jonathan Aquilina
Founder Eagle Eye T
Hi Jonathan,
For the audit log you can look at the log4j.properties file. By default, the
log4j.properties file has the log threshold set to WARN. By setting this
level to INFO, audit logging can be turned on. The following snippet shows
the log4j.properties configuration when HDFS and MapReduce audit logs are enabled.
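As a hedged sketch, the relevant fragment can look like the following (these are the standard HDFS and MapReduce audit logger names from Hadoop's stock log4j.properties; the appender definitions RFAAUDIT/MRAUDIT are assumed to be configured elsewhere in the same file):

```properties
# HDFS audit logging: raise from WARN to INFO to turn it on
hdfs.audit.logger=INFO,RFAAUDIT
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}

# MapReduce audit logging
mapred.audit.logger=INFO,MRAUDIT
log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
```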
Hello!
Could you please explain, why this check exists in class
org.apache.hadoop.hdfs.server.namenode.FSNamesystem, method
recoverLeaseInternal:
Lease leaseFile = leaseManager.getLeaseByPath(src);
if (leaseFile != null && leaseFile.equals(lease)) {
throw new
Hi everyone,
I have a question about the ResourceManager behavior:
when the ResourceManager allocates a container, it takes some time before
the NMToken is sent and then received by the ApplicationMaster.
During this time, it is possible to receive another heartbeat from the AM,
equal to the last
Can you point me to the doc where you read about the default ?
Since the question is Ambari related, you can consider posting on Ambari
mailing list.
Cheers
On Feb 23, 2015, at 11:22 PM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
Ted, thanks for the reply.
But is
Have you added hadoop-hdfs-.jar to your classpath?
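For illustration (the jar location and version below are placeholders for whatever your distribution ships; the original message elides the version):

```shell
# Sketch: make the HDFS client jar visible to Hadoop tooling.
# Path and version are placeholders.
export HADOOP_CLASSPATH="${HADOOP_CLASSPATH:+$HADOOP_CLASSPATH:}/usr/lib/hadoop-hdfs/hadoop-hdfs-VERSION.jar"
if command -v hadoop >/dev/null; then
  hadoop classpath    # verify the jar now appears on the effective classpath
else
  echo "hadoop not on PATH; skipping verification"
fi
```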
From: xeonmailinglist <xeonmailingl...@gmail.com>
Reply-To: user@hadoop.apache.org
Date: Tuesday, February 24, 2015 at 10:57
Hi Drake,
Thanks for the pointer. The AM log indeed has information about remote map
tasks. But I'd like more low-level details, like which node each
map task was scheduled on and how many bytes were read. That should be exactly
in the datanode log, and I saw it for another job. But after I
Hi Dmitry!
I suspect it's because we don't want two streams from the same DFSClient to
write to the same file. The Lease.holder is a simple string which usually
corresponds to DFSClient_someid.
HTH
Ravi.
On Tuesday, February 24, 2015 12:12 AM, Dmitry Simonov
dimmobor...@gmail.com wrote:
Yes, this was the problem. But I have another question:
when I start a remote job, the job will run on the remote resource
manager. But how will the job that is running remotely know where
the map and reduce functions are?
On 24-02-2015 19:03, Xuan Gong wrote:
Have you added
The data is decrypted on the client side after obtaining the DEK from KMS, *not*
decrypted by the DN.
Right, currently the DEK is best protected by HTTPS on the wire.
If you want to confirm the file is encrypted, one way is to look at the content of
the file blocks.
Regards,
Yi Liu
From: Rajesh Kartha
Only one RM will be active at a time; the other is in standby. When you
started the new RM, the configuration files direct the new RM to come up
and take over, and the old primary will go to standby (or should!). Working as
designed, except you will see a slowdown in scheduling. I suspect what you
want
Hi, Igor
The AM logs are in HDFS if you set the log aggregation property. Otherwise,
they are in the container log directory. See this:
http://ko.hortonworks.com/blog/simplifying-user-logs-management-and-access-in-yarn/
Thanks
On Wednesday, February 25, 2015, Igor Bogomolov <igor.bogomo...@gmail.com> wrote:
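With aggregation enabled, the usual way to pull a finished application's logs is the yarn logs command (the application ID below is illustrative, not from this thread):

```shell
# Sketch: fetch aggregated container logs for a finished application.
# Requires yarn.log-aggregation-enable=true; the app ID is a placeholder.
APP_ID=application_1424828000000_0001
if command -v yarn >/dev/null; then
  yarn logs -applicationId "$APP_ID"
else
  echo "yarn not on PATH; skipping"
fi
```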
Thank you Olivier,
I suppose with the first suggestion (locking the dir to be unreadable for
other users) the HDFS permissions would
kick in and prevent an unwarranted user from reading them.
However, I wanted to see the actual encrypted data, so I used the second
approach you suggested. With hadoop
You can set some ACLs on the KMS, so you could have open access from an HDFS
ACL perspective but a restriction on the KMS side.
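A hedged sketch of such a restriction in kms-acls.xml (the key name "mykey" and the user list are illustrative; the value uses Hadoop's usual ACL format of comma-separated users, optionally followed by a space and comma-separated groups):

```xml
<!-- Only the listed users may decrypt EEKs with this key,
     regardless of the HDFS permissions on the encryption zone. -->
<property>
  <name>key.acl.mykey.DECRYPT_EEK</name>
  <value>alice,bob</value>
</property>
```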
Olivier
From: Rajesh Kartha <karth...@gmail.com>
Reply-To: user@hadoop.apache.org
Hi,
In YARN HA for the ResourceManager, I noticed that HA was fine
initially during setup, but after some time I notice that
restarting one resource manager gets the other resource manager
stopped/killed. Below is what I see in the logs on the killed resource manager
instance.
On 2/24/2015 8:56 PM, Liu, Yi A wrote:
The data is decrypted on client side after obtaining DEK from KMS, *not*
decrypted by DN.
My colleague Yi is correct that data is not decrypted by the DN, with one
exception: WebHDFS uses the DN as the proxy, and therefore the DN does
the decryption in
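To illustrate that exception: a WebHDFS read returns decrypted bytes because the DN proxies the request on the client's behalf. The hostname, port, path, and user below are illustrative placeholders:

```shell
# Sketch: read a file in an encryption zone via WebHDFS.
# Because the DN acts as proxy, the bytes come back decrypted.
NN=namenode.example.com:50070
if command -v curl >/dev/null; then
  curl -s -L "http://$NN/webhdfs/v1/ez/file.txt?op=OPEN&user.name=hdfs" \
    || echo "request failed (no cluster reachable)"
else
  echo "curl not on PATH; skipping"
fi
```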