I was trying out Transparent Data Encryption (encryption of data at rest) and was able to
set up the KMS, encryption zones, etc., and add
files to the zone.
How do I confirm that the files I added to the encryption zone are actually
encrypted? Is there a way to view
the raw file? An *hdfs dfs -cat* shows me the actual contents of the file.
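For reference, the Hadoop TDE documentation describes a `/.reserved/raw` virtual path that exposes the raw, still-encrypted bytes to the HDFS superuser. A sketch of the check; the zone path and file name are hypothetical examples:

```shell
# Assumes an encryption zone at /secure (hypothetical path) and superuser access.

# Normal read: goes through the KMS, so you see decrypted plaintext.
hdfs dfs -cat /secure/file.txt

# Raw read: /.reserved/raw bypasses decryption, so the output should be
# ciphertext (unreadable bytes) if encryption is actually in effect.
hdfs dfs -cat /.reserved/raw/secure/file.txt
```

If both commands print the same readable text, the file is not being encrypted.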
Here is an example:
https://adhoop.wordpress.com/2012/02/16/generate-a-list-of-anagrams-round-3/
-Rajesh
On Thu, Feb 19, 2015 at 9:32 PM, Haoming Zhang haoming.zh...@outlook.com
wrote:
Thanks guys,
I will try your solutions later and update the result!
Do you know why the 3 nodes are down? With replication, the copies of data
that were hosted on those failed nodes will not be available there. However, the
data will still be served by the hosts holding the other 2 copies, so I
don't think you need to copy the data again.
Unless for some reason the 3
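To see whether any blocks actually became under-replicated after the node failures, the stock HDFS tools report it directly. A sketch, assuming superuser access on the cluster:

```shell
# Cluster-wide view of live/dead datanodes and capacity
hdfs dfsadmin -report

# Filesystem check: reports under-replicated, missing, and corrupt blocks
hdfs fsck / -blocks -locations | grep -iE 'under.?replicated|missing|corrupt'
```

If `fsck` reports the filesystem as HEALTHY with no under-replicated blocks, re-copying the data is unnecessary.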
who doesn't have permission to
the folder. An alternative is to check which block it is on Linux and
look at the block using cat from a Linux shell.
Olivier
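Olivier's block-level check can be sketched like this; the block ID and datanode data directory are illustrative (your `dfs.datanode.data.dir` will differ):

```shell
# Find the block ID(s) backing the file (run as the HDFS superuser)
hdfs fsck /secure/file.txt -files -blocks -locations

# On a datanode hosting the block, locate the block file and dump it.
# For a file inside an encryption zone this should print ciphertext.
BLOCK=$(find /hadoop/hdfs/data -name 'blk_1073741825' | head -1)  # hypothetical ID and path
cat "$BLOCK"
```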
From: Rajesh Kartha karth...@gmail.com
Reply-To: user@hadoop.apache.org user@hadoop.apache.org
Date: Tuesday, 24 February 2015 19
On Thu, Mar 26, 2015 at 8:46 PM, Rajesh Kartha karth...@gmail.com wrote:
Hello,
I was wondering what the main differences are between LCE and DCE under
'*simple*' Hadoop security.
From my reading, LCE gives:
- granularity to control execution (e.g., banned users, minimum uid)
- the ability to use cgroups to control resources
while DCE uses ulimits.
In both cases the container is executed
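For context, switching a NodeManager to the LCE with its cgroups handler is a yarn-site.xml change. A sketch using the stock property names; verify them against your Hadoop version's documentation:

```xml
<!-- yarn-site.xml: enable the LinuxContainerExecutor -->
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<!-- Use cgroups for resource control (LCE only) -->
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
```

The banned-users and minimum-uid controls mentioned above live in container-executor.cfg (e.g. `banned.users=...`, `min.user.id=1000`), not in yarn-site.xml.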
With the Capacity Scheduler, the other useful parameter would be:
yarn.scheduler.capacity.maximum-applications
http://hadoop.apache.org/docs/r2.6.0/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html
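A sketch of how that parameter appears in capacity-scheduler.xml; 10000 is the documented default, shown here only as an illustration:

```xml
<!-- capacity-scheduler.xml -->
<property>
  <name>yarn.scheduler.capacity.maximum-applications</name>
  <value>10000</value>
  <description>Maximum number of applications that can be pending and running.</description>
</property>
```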
On Thu, Apr 30, 2015 at 11:52 AM, Prashant Kommireddi prash1...@gmail.com
wrote:
Take a look at
Curious, did you check fs.defaultFS in the core-site.xml? Just to make
sure the HDFS port is 9000 and not 8020.
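For reference, that port comes from fs.defaultFS in core-site.xml. A sketch with a hypothetical hostname:

```xml
<!-- core-site.xml; namenode-host is a placeholder -->
<property>
  <name>fs.defaultFS</name>
  <!-- the port here must match what clients try to connect to -->
  <value>hdfs://namenode-host:9000</value>
</property>
```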
-Rajesh
On Thu, Apr 30, 2015 at 4:42 AM, Mahmood Naderan nt_mahm...@yahoo.com
wrote:
I found out that the $JAVA_HOME specified in hadoop-env.sh was different
from java -version in
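A quick way to compare the two settings on a node; these are generic diagnostic commands and the hadoop-env.sh path is an assumption, so adjust it to your install:

```shell
# What Hadoop will use (path assumed; often $HADOOP_CONF_DIR/hadoop-env.sh)
grep -E '^\s*export JAVA_HOME' /etc/hadoop/conf/hadoop-env.sh

# What the shell's `java` actually resolves to
java -version
readlink -f "$(which java)"
```

If the directory printed by `readlink` does not live under the JAVA_HOME from hadoop-env.sh, the two JVMs differ.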
Hi Giri,
Have you tried the COALESCE function:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-ConditionalFunctions
An example for a date column type:
hive> select COALESCE(dt, cast('2014-02-23' as date)) from simpletest;
In any case, maybe good to post the
Hello,
The 3 main settings in hdfs-site.xml are:
- *dfs.name.dir*: directory where the namenode stores its metadata;
default value ${hadoop.tmp.dir}/dfs/name.
- *dfs.data.dir*: directory where HDFS data blocks are stored;
default value ${hadoop.tmp.dir}/dfs/data.
-
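A sketch of those properties in hdfs-site.xml; the paths are illustrative, and note that on Hadoop 2.x the preferred names are dfs.namenode.name.dir and dfs.datanode.data.dir:

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.name.dir</name>            <!-- dfs.namenode.name.dir on 2.x -->
  <value>/var/hadoop/dfs/name</value>  <!-- illustrative path -->
</property>
<property>
  <name>dfs.data.dir</name>            <!-- dfs.datanode.data.dir on 2.x -->
  <value>/var/hadoop/dfs/data</value>  <!-- illustrative path -->
</property>
```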
Wondering if you have used the REST API to submit jobs:
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_APISubmit_Application
There are some issues that I have come across, but it does seem to work.
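The submit flow from that page boils down to two REST calls. A hedged curl sketch; the host, port, and JSON body are placeholders, and the real application object is considerably larger:

```shell
# 1. Ask the ResourceManager for a new application ID
curl -X POST http://rm-host:8088/ws/v1/cluster/apps/new-application

# 2. Submit the application, using the ID from step 1 inside app.json
#    (app.json holds application-id, application-name, am-container-spec, ...)
curl -X POST -H 'Content-Type: application/json' \
     -d @app.json \
     http://rm-host:8088/ws/v1/cluster/apps
```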
Also the message
implemented this
in Ambari, there didn't seem to be one).
Yusaku
From: Rajesh Kartha karth...@gmail.com
Reply-To: user@hadoop.apache.org user@hadoop.apache.org
Date: Thursday, June 25, 2015 4:57 PM
To: user@hadoop.apache.org user@hadoop.apache.org
Subject: DataNode logs have exceptions
Forgot to ask...
- Is this a new behavior in Ambari 2.1? I don't remember seeing it in
Ambari 1.7.
- Does a JIRA already exist for any of the components to track this?
Thanks,
Rajesh
On Thu, Jun 25, 2015 at 5:42 PM, Rajesh Kartha karth...@gmail.com wrote:
Ah... I am using Ambari... so that does
Hello,
I am using a Hadoop 2.7.1 build and noticed a constant flow of exceptions
every 60 seconds in the DataNode log files:
2015-06-25 13:02:36,292 ERROR datanode.DataNode (DataXceiver.java:run(278))
- bdavm063.svl.ibm.com:50010:DataXceiver error processing unknown
operation src: