CVE-2021-33036: Apache Hadoop Privilege escalation vulnerability

2022-06-15 Thread Akira Ajisaka
Severity: Critical Description: In Apache Hadoop 2.2.0 to 2.10.1, 3.0.0-alpha1 to 3.1.4, 3.2.0 to 3.2.2, and 3.3.0 to 3.3.1, a user who can escalate to yarn user can possibly run arbitrary commands as root user. Users should upgrade to Apache Hadoop 2.10.2, 3.2.3, 3.3.2 or higher. Mitigation:

Re: Log4j upgrade to 2.x in hadoop for vulnerability fix

2021-09-15 Thread Akira Ajisaka
Hi Pulkit, Hadoop does not use those log4j network classes unless the user or the administrator explicitly configures them. The issue is tracked by [HADOOP-16206] Migrate from Log4j1 to Log4j2 - ASF JIRA (apache.org) Thanks, Akira On

Re: Hadoop prometheus metrics

2021-08-19 Thread Akira Ajisaka
Hi Shailesh, This feature is not implemented in Hadoop 3.2.1. Please try Hadoop 3.3.1. -Akira On Sat, Aug 14, 2021 at 5:56 PM Shailesh Ligade wrote: > > Hello, > > I am using haddop 3.2.1 and saw the documentation that i can add > > hadoop.prometheus.endpoint.enabled to true in core-site.xml
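For reference, enabling the Prometheus endpoint (available from Hadoop 3.3.x) is a single property in core-site.xml. A sketch based on the property named in the thread; verify the name against your release's core-default.xml:

```xml
<!-- core-site.xml: expose metrics in Prometheus format at /prom
     on the daemons' HTTP servers (NameNode, DataNode, etc.) -->
<property>
  <name>hadoop.prometheus.endpoint.enabled</name>
  <value>true</value>
</property>
```

With this set, metrics should be scrapeable from a daemon's web UI, e.g. the NameNode's `/prom` endpoint.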

[CVE-2020-9492] Apache Hadoop Potential privilege escalation

2021-01-25 Thread Akira Ajisaka
CVE-2020-9492: Apache Hadoop Potential privilege escalation Severity: Important Vendor: The Apache Software Foundation Versions Affected: 3.2.0 to 3.2.1, 3.0.0-alpha1 to 3.1.3, 2.0.0-alpha to 2.10.0 Description: WebHDFS client might send SPNEGO authorization header to remote URL without proper

[CVE-2018-11764] Apache Hadoop Privilege escalation in web endpoint

2020-10-21 Thread Akira Ajisaka
CVE-2018-11764: Apache Hadoop Privilege escalation in web endpoint Severity: Critical Vendor: The Apache Software Foundation Versions affected: 3.0.0-alpha4, 3.0.0-beta1, and 3.0.0 Description: Web endpoint authentication check is broken. Authenticated users may impersonate any user even if no

CVE-2018-11765: Potential information disclosure in Hadoop Web interfaces

2020-09-27 Thread Akira Ajisaka
CVE-2018-11765: Potential information disclosure in Hadoop Web interfaces Severity: Important Vendor: The Apache Software Foundation Versions affected: 3.0.0-alpha2 to 3.0.0, 2.9.0 to 2.9.2, 2.8.0 to 2.8.5 Description: When Kerberos authentication is enabled and SPNEGO through HTTP is not

Re: error in installation of hadoop 2.9.2 on windows 10 (64 bit)

2020-09-09 Thread Akira Ajisaka
(+ user ML) Hi Meghna, Please use user@hadoop.apache.org for end-user questions and discussion. Regards, Akira On Thu, Sep 10, 2020 at 1:31 PM meghnaist wrote: > > -- Forwarded message - > From: meghnaist > Date: Wed, Sep 9, 2020, 11:59 PM > Subject: error in installation of

Re: [DISCUSS] fate of branch-2.9

2020-08-28 Thread Akira Ajisaka
+1 Thanks, Akira On Fri, Aug 28, 2020 at 5:51 PM lisheng.sun08 wrote: > +1 on EOL of branch-2.9. > > Thanks, > Lisheng Sun > > > > Sent from my Mi phone > On Aug 27, 2020 at 1:55 PM, Wei-Chiu Chuang wrote: > > Bump up this thread after 6 months. > > Is anyone still interested in the 2.9 release line? Or are we good

Re: Spark Dataset API for secondary sorting

2019-12-24 Thread Akira Ajisaka
Hi Daniel, This is the user mailing list for Apache Hadoop, not Apache Spark. Please use instead. https://spark.apache.org/community.html -Akira On Tue, Dec 3, 2019 at 1:00 AM Daniel Zhang wrote: > Hi, Spark Users: > > I have a question related to the way I use the spark Dataset API for my >

Re: hadoop java compatability

2019-12-24 Thread Akira Ajisaka
Hi Augustine, Java 11 is not supported even in the latest released version of Apache Hadoop. I hope Apache Hadoop 3.3.0 will support Java 11 (runtime support only), but 3.3.0 is not released yet. (Our company (Yahoo! JAPAN) builds trunk with OpenJDK 8 and runs an HDFS dev cluster with OpenJDK 11 successfully.)

CVE-2018-11768: HDFS FSImage Corruption

2019-10-03 Thread Akira Ajisaka
CVE-2018-11768: HDFS FSImage Corruption Severity: Critical Vendor: The Apache Software Foundation Versions affected: 3.1.0 to 3.1.1, 3.0.0-alpha1 to 3.0.3, 2.9.0 to 2.9.1, 2.0.0-alpha to 2.8.4 Description: There is a mismatch in the size of the fields used to store user/group information

Re: Hadoop storage community online sync

2019-08-20 Thread Akira Ajisaka
Thank you for the information. Now US pacific time is GMT-7, isn't it? -Akira On Tue, Aug 20, 2019 at 6:56 AM Wei-Chiu Chuang wrote: > > For this week, > We will have Konstantin and the LinkedIn folks to discuss a recent project > that's been baking for quite a while. This is an exciting

Re: [DISCUSS] EOL 2.8 or another 2.8.x release?

2019-07-25 Thread Akira Ajisaka
I'm +1 for 1 more release in 2.8.x and declare that 2.8 is EoL. > would be even happier if we could move people to 2.9.x Agreed. -Akira On Thu, Jul 25, 2019 at 10:59 PM Steve Loughran wrote: > > I'm in favour of 1 more release (it fixes the off-by 1 bug in > S3AInputStream HADOOP-16109), but

CVE-2018-8029: Apache Hadoop Privilege escalation vulnerability

2019-05-29 Thread Akira Ajisaka
CVE-2018-8029: Apache Hadoop Privilege escalation vulnerability Severity: Critical Vendor: The Apache Software Foundation Versions Affected: 3.0.0-alpha1 to 3.1.0, 2.9.0 to 2.9.1, 2.2.0 to 2.8.4 Description: A user who can escalate to yarn user can possibly run arbitrary commands as root user.

CVE-2018-11767: Apache Hadoop KMS ACL regression

2019-03-11 Thread Akira Ajisaka
CVE-2018-11767: Apache Hadoop KMS ACL regression Severity: Severe Vendor: The Apache Software Foundation Versions affected: 2.9.0 to 2.9.1, 2.8.3 to 2.8.4, 2.7.5 to 2.7.6 Description: After the security fix for CVE-2017-15713, KMS has an access control regression, blocking users or

CVE-2018-1296: Apache Hadoop HDFS Permissive listXAttr Authorization

2019-01-23 Thread Akira Ajisaka
CVE-2018-1296: Apache Hadoop HDFS Permissive listXAttr Authorization Severity: Important Vendor: The Apache Software Foundation Versions Affected: 3.0.0-alpha1 to 3.0.0, 2.9.0, 2.8.0 to 2.8.3, 2.5.0 to 2.7.5 Description: HDFS exposes extended attribute key/value pairs during listXAttrs,

Re: Document for hadoop 2.7.7 is not publish

2019-01-16 Thread Akira Ajisaka
The documents were uploaded by https://github.com/apache/hadoop-site/commit/a995a20d4f2bc1a0433dcb4e03d04bdfb49b7990 Thanks Steve Loughran for the work. -Akira On Jan 15, 2019 (Tue) 13:16, Akira Ajisaka wrote: > > Thank Dapeng for reporting this. > > Hi Steve, could you upload the docume

Fwd: Document for hadoop 2.7.7 is not publish

2019-01-14 Thread Akira Ajisaka
Thank Dapeng for reporting this. Hi Steve, could you upload the documents of 2.7.7? -Akira -- Forwarded message - From: Dapeng Sun Date: Jan 15, 2019 (Tue) 11:43 Subject: Document for hadoop 2.7.7 is not publish To: Hello, 2.7.7 have been released, but the document is not

Re: Hadoop shows only one live datanode

2018-12-24 Thread Akira Ajisaka
Hi Jérémy, Would you set "dfs.namenode.rpc-address" to "master:9000" in hdfs-site.xml? The NameNode RPC address is "localhost:8020" by default and that's why only the DataNode running on master is registered. DataNodes running on slave1/slave2 want to connect to "localhost:8020" and cannot find
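A minimal sketch of the suggested fix: pin the NameNode RPC address in hdfs-site.xml on every node so all DataNodes register with the same endpoint (the host name and port are the ones from the thread):

```xml
<!-- hdfs-site.xml on all nodes: DataNodes connect to this address
     instead of the localhost default -->
<property>
  <name>dfs.namenode.rpc-address</name>
  <value>master:9000</value>
</property>
```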

CVE-2018-11766: Apache Hadoop privilege escalation vulnerability

2018-11-26 Thread Akira Ajisaka
CVE-2018-11766: Apache Hadoop privilege escalation vulnerability Severity: Critical Vendor: The Apache Software Foundation Versions Affected: Apache Hadoop versions from 2.7.4 to 2.7.6 Description: In Apache Hadoop 2.7.4 to 2.7.6, the security fix for CVE-2016-6811 is incomplete. A user who

CVE-2018-8009: Apache Hadoop distributed cache archive vulnerability

2018-11-21 Thread Akira Ajisaka
CVE-2018-8009: Apache Hadoop distributed cache archive vulnerability Severity: Severe Vendor: The Apache Software Foundation Versions Affected: Hadoop 0.23.0 to 0.23.11 Hadoop 2.0.0-alpha to 2.7.6 Hadoop 2.8.0 to 2.8.4 Hadoop 2.9.0 to 2.9.1 Hadoop 3.0.0-alpha to 3.0.2 Hadoop 3.1.0

[ANNOUNCE] Apache Hadoop 2.9.2 release

2018-11-20 Thread Akira Ajisaka
Hi all, I am pleased to announce that Apache Hadoop 2.9.2 has been released. Apache Hadoop 2.9.2 is the next release in the Apache Hadoop 2.9 line, and includes 204 fixes since the previous Hadoop 2.9.1 release. - For major changes included in the Hadoop 2.9 line, please refer to the Hadoop 2.9.2 main page

CVE-2016-6811: Apache Hadoop Privilege escalation vulnerability

2018-04-30 Thread Akira Ajisaka
CVE-2016-6811: Apache Hadoop Privilege escalation vulnerability Severity: Critical Vendor: The Apache Software Foundation Versions Affected: All the Apache Hadoop versions from 2.2.0 to 2.7.3 Description: A user who can escalate to yarn user can possibly run arbitrary commands as root user.

Re: Error running 2.5.1 HDFS client on Java 10

2018-04-02 Thread Akira Ajisaka
Hi Enrico, Java 10 is not yet supported in Apache Hadoop. https://issues.apache.org/jira/browse/HADOOP-11423 Please use Java 8. Regards, Akira On 2018/03/23 22:22, Enrico Olivelli wrote: Hi, I am trying to move an application to Java 10 but I get this error. I can't find it in JIRA, has

CVE-2017-15718: Apache Hadoop YARN NodeManager vulnerability

2018-01-24 Thread Akira Ajisaka
CVE-2017-15718: Apache Hadoop YARN NodeManager vulnerability Severity: Important Vendor: The Apache Software Foundation Versions Affected: Hadoop 2.7.3, 2.7.4 Description: In Apache Hadoop 2.7.3 and 2.7.4, the security fix for CVE-2016-3086 is incomplete. The YARN NodeManager can leak the

Re: Representing hadoop metrics on ganglia web interface

2017-09-01 Thread Akira Ajisaka
Hi Nishant, Ganglia daemons communicate via multicast by default, and multicast is not supported in AWS EC2. Would you try a unicast setting? Regards, Akira On 2017/08/04 12:37, Nishant Verma wrote: Hello We are supposed to collect hadoop metrics and see the cluster health and performance.

Re: Prime cause of NotEnoughReplicasException

2017-09-01 Thread Akira Ajisaka
Hi Nishant, The debug message shows there are not enough racks configured to satisfy the rack awareness. http://hadoop.apache.org/docs/r3.0.0-alpha4/hadoop-project-dist/hadoop-common/RackAwareness.html If you don't need to place replicas in different racks, you can simply ignore the debug

Re: spark on yarn error -- Please help

2017-09-01 Thread Akira Ajisaka
Hi sidharth, Would you ask Spark related question to the user mailing list of Apache Spark? https://spark.apache.org/community.html Regards, Akira On 2017/08/28 11:49, sidharth kumar wrote: Hi, I have configured apace spark over yarn. I am able to run map reduce job successfully but

Re: Mapreduce example from library isuue

2017-09-01 Thread Akira Ajisaka
Hi Atul, Have you added HADOOP_MAPRED_HOME to yarn.nodemanager.env-whitelist in yarn-site.xml? The document may help: http://hadoop.apache.org/docs/r3.0.0-alpha4/hadoop-project-dist/hadoop-common/SingleCluster.html#YARN_on_a_Single_Node Regards, Akira On 2017/08/29 17:45, Atul Rajan wrote:

Re: Restrict the number of container can be run in Parallel in Yarn?

2017-06-26 Thread Akira Ajisaka
Hi wuchang, If you are using Hadoop 2.7+, you can use the following parameters to limit the number of simultaneously running map/reduce tasks per MapReduce application: * mapreduce.job.running.map.limit (default: 0, for no limit) * mapreduce.job.running.reduce.limit (default: 0, for no limit)
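The two limits can be set per job, either in mapred-site.xml or with -D on job submission. A sketch capping a job at 10 concurrent map tasks and 5 concurrent reduce tasks (the property names and defaults are from the message; the values here are illustrative):

```xml
<!-- mapred-site.xml: 0 means no limit (the default) -->
<property>
  <name>mapreduce.job.running.map.limit</name>
  <value>10</value>
</property>
<property>
  <name>mapreduce.job.running.reduce.limit</name>
  <value>5</value>
</property>
```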

Re: How to Contribute as hadoop admin

2017-06-02 Thread Akira Ajisaka
Hi Sidharth, Please check this wiki: https://cwiki.apache.org/confluence/display/HADOOP/HowToContribute Regards, Akira On 2017/05/31 23:06, Sidharth Kumar wrote: Hi, I have been working as hadoop admin since 2 years, I subscribed to this group 3 months before but since then never able to

Re: Cloudera Manager 5.7.1

2016-06-22 Thread Akira AJISAKA
Hi Chinnappan, Please ask that in the Cloudera Manager community instead of the Apache Hadoop community. https://groups.google.com/a/cloudera.org/group/scm-users -Akira On 6/22/16 17:47, Chinnappan Chandrasekaran wrote: Dear all, I have installed cloudera manager 5.7.1 with 5 node installed

Re: HDFS Federation

2016-04-29 Thread Akira AJISAKA
d, I can view the namenode from http://localhost:50070 Do you know where is the 50070 come form or where to set it? If I configure the Federation in one node, I probably will change this, right? Thanks a lot. On Thu, Apr 28, 2016 at 1:55 AM, Akira AJISAKA <ajisa...@oss.nttdata.co.jp <mailto:ajisa

Re: HDFS Federation

2016-04-27 Thread Akira AJISAKA
Hi Kun, (1) ViewFileSystem is related. https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/ViewFs.html (2) Yes, NameNode Federation is disabled by default. I suppose it is possible to configure NameNode Federation in one node by setting different HTTP/RPC port and

Re: DistCp CRC failure modes

2016-04-27 Thread Akira AJISAKA
Thank you, Elliot! On 4/28/16 03:40, Elliot West wrote: I've raised this as an issue: https://issues.apache.org/jira/browse/HDFS-10338 On Wednesday, 27 April 2016, Elliot West > wrote: Hello, We are using DistCp V2 to replicate data between

Re: Unsubscribe

2016-03-29 Thread Akira AJISAKA
Hi all, In order to unsubscribe from this ML, you need to send an email to user-unsubscr...@hadoop.apache.org. On 3/30/16 09:29, Venky Mullapudi wrote: Unsubscribe - To unsubscribe, e-mail:

The Activities of Apache Hadoop Community 2015

2016-01-24 Thread Akira AJISAKA
Hi folks, We wrote a blog post about the activities of the Apache Hadoop community. http://ajisakaa.blogspot.com/2016/01/the-activities-of-apache-hadoop.html According to the post, the activity of the Apache Hadoop community continued to expand in 2015 as well. We really appreciate the continuous

Re: how to build eclipse plugin for hadoop-2.7.1(work on ubuntu15.10)

2015-12-02 Thread Akira AJISAKA
Hi Pandeng, You can build eclipse project files, or import a Maven project using the existing pom.xml in the tarball. The instruction below is how to build eclipse project files. ---BUILDING.txt Importing projects to eclipse When you import the project to eclipse, install hadoop-maven-plugins at

Re: Unsubscribe

2015-12-02 Thread Akira AJISAKA
Those who want to unsubscribe from this mailing list: 1. Send an e-mail to user-unsubscr...@hadoop.apache.org, and you will receive a confirmation e-mail in a few minutes. 2. Send an e-mail again to the address in the confirmation e-mail. That's all. Please do not send "unsubscribe" to this

Re: Unable to use ./hdfs dfsadmin -report with HDFS Federation

2015-08-24 Thread Akira AJISAKA
If you want to get the summary from a cluster, I'm thinking you can use the -fs option to specify the NameNode of the cluster. The command will be hdfs dfsadmin -fs hdfs://nn1:port -report. dfsadmin -report does not support reporting the summary of all the clusters mounted on a viewfs. Regards,

Re: can't copy files between HDFS to a specific dir

2015-08-18 Thread Akira AJISAKA
Have you created the destination directory /input1? I'm thinking you need to create the directory before distcp. Regards, Akira On 8/18/15 17:29, xeonmailinglist wrote: In my case it has nothing to do with directory permissions. I think |distcp| cannot copy a file into a directory. It just can

Re: “dfs.namenode.service.handler.count” and “dfs.namenode.handler.count”

2015-05-21 Thread Akira AJISAKA
RPC responses from the NameNode can become slow. Timeouts can happen in the worst case. Regards, Akira On 5/20/15 18:14, jason lu wrote: thanks ajisakaa, what will happen if the number of “dfs.namenode.handler.count” is too low to allocate channel to client? thanks.

Re: Hadoop 2.6.0, How to add/remove node to/from running cluster

2015-04-06 Thread Akira AJISAKA
:53 AM, Akira AJISAKA ajisa...@oss.nttdata.co.jp mailto:ajisa...@oss.nttdata.co.jp wrote: Hi Arthur, 1) How to add a new datanode to a running Hadoop 2.6.0 cluster? Just starting a datanode is fine. The datanode will be added to the cluster automatically. 2) How to rebalance

Re: Hadoop 2.6.0, How to add/remove node to/from running cluster

2015-04-04 Thread Akira AJISAKA
Hi Arthur, 1) How to add a new datanode to a running Hadoop 2.6.0 cluster? Just starting a datanode is fine. The datanode will be added to the cluster automatically. 2) How to rebalance the cluster after the new node is added? Please see

Re: The Activities of Apache Hadoop Community

2015-03-03 Thread Akira AJISAKA
The Hadoop community continued to expand in 2014 as well. We hope it will be the same in 2015. Thanks, Akira On 2/13/14 11:20, Akira AJISAKA wrote: Hi all, We collected and analyzed JIRA tickets to investigate the activities of Apache Hadoop Community in 2013. http://ajisakaa.blogspot.com/2014/02

Re: Adding datanodes to Hadoop cluster - Will data redistribute?

2015-02-08 Thread Akira AJISAKA
Hi Manoj, You need to use balancer to re-balance data between nodes. http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Balancer *dfs.datanode.fsdataset.volume.choosing.policy* have options 'Round Robin' or 'Available Space', are there any other

Re: How to debug why example not finishing (or even starting)

2015-01-28 Thread Akira AJISAKA
04:09, Frank Lanitz wrote: On 28.01.2015 at 19:00, Akira AJISAKA wrote: I reproduced the condition. It seems to be a bug. Can you please give the JIRA or describe what was triggering the thing? Cheers, Frank

Re: How to debug why example not finishing (or even starting)

2015-01-28 Thread Akira AJISAKA
Hi Frank, I reproduced the condition. It seems to be a bug. 15/01/28 14:32:15 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String). I found user classes are not set in Grep.java. I'll file a jira and create a patch shortly. Thanks,

Re: How to get hadoop issues data for research?

2014-12-09 Thread Akira AJISAKA
You can use the REST API. Example: https://issues.apache.org/jira/rest/api/2/search?jql=project%20%3D%20HADOOP This general@ mailing list is for announcements and project management. For end-user questions and discussions, please use the user@ mailing list. Regards, Akira (12/9/14, 18:22), zfx wrote:
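The example URL above is just a JQL query percent-encoded into JIRA's REST search endpoint. A small Python sketch that reconstructs it (the endpoint and the jql parameter are from the message; the rest is standard urllib):

```python
from urllib.parse import urlencode, quote

# JIRA's REST search endpoint takes a JQL query in the "jql" parameter.
base = "https://issues.apache.org/jira/rest/api/2/search"
params = {"jql": "project = HADOOP"}

# quote_via=quote encodes spaces as %20 (matching the URL in the message)
# instead of the default "+".
url = base + "?" + urlencode(params, quote_via=quote)
print(url)
```

Paging (`startAt`, `maxResults`) and field selection are further query parameters on the same endpoint.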

Re: YARN Newbie Question in ApplicationMaster

2014-09-26 Thread Akira AJISAKA
Hi Dhanasekaran, 1 . Again submitting mapreduce job, It's create new application master or It's bind with already running application master. Create new MapReduce application master. 2 . when job completed application master process killed by resource manager or it's still alive

Re: Why not remove the block on the disk if the snapshot?

2014-08-20 Thread Akira AJISAKA
The blocks of a file and the blocks of its snapshot are the same (i.e., there is no data copying when creating snapshots). Therefore the blocks are not removed from DataNode disks if a snapshotted file is removed. Thanks, Akira (2014/08/20 15:22), juil cho wrote: hadoop version 2.4.1. I have tested the

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager and JobHistoryServer do not auto-failover to Standby Node

2014-08-05 Thread Akira AJISAKA
You need additional settings to make ResourceManager auto-failover. http://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html JobHistoryServer does not have automatic failover feature. Regards, Akira (2014/08/05 20:15), arthur.hk.c...@gmail.com wrote: Hi I

Re: Rolling upgrades

2014-08-01 Thread Akira AJISAKA
HDFS rolling upgrade is supported only from 2.4 to later releases (2.4+), so a rolling upgrade from 2.2 is not possible. Regards, Akira (2014/08/02 1:05), Pradeep Gollakota wrote: Hi All, Is it possible to do a rolling upgrade from Hadoop 2.2 to 2.4? Thanks, Pradeep

Re: Hadoop(version 2.4.1) is a symbolic link support?

2014-07-14 Thread Akira AJISAKA
Hadoop 2.4.1 doesn't support symbolic links. (2014/07/14 11:34), cho ju il wrote: My hadoop version is 2.4.1. Hdfs(version 2.4.1) is a symbolic link support? How do I create symbolic links?

Re: Issues with documentation on YARN

2014-07-10 Thread Akira AJISAKA
Thanks for the report! You can create an issue at https://issues.apache.org/jira/browse/YARN and submit a patch. The wiki page describes how to contribute to Apache Hadoop. https://wiki.apache.org/hadoop/HowToContribute Thanks, Akira (2014/07/08 19:54), Никитин Константин wrote: Hi! I'm

Re: OIV Tool

2014-07-08 Thread Akira AJISAKA
Hi Ashish, OIV tool (~2.3.0) is compatible with the Hadoop 1.X series. However, OIV tool in 2.4.x is not compatible with previous releases (~2.3.0) since the format of fsimage has been changed. Thanks, Akira (2014/07/09 10:23), Ashish Dobhal wrote: Hey Adam Thanks, Could you tell me if the

Re: heartbeat timeout doesn't work

2014-07-07 Thread Akira AJISAKA
The timeout value is set by the following formula: heartbeatExpireInterval = 2 * (heartbeatRecheckInterval) + 10 * 1000 * (heartbeatIntervalSeconds); Note that heartbeatRecheckInterval is set by dfs.namenode.heartbeat.recheck-interval property (5*60*1000 [msec] by
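Plugging the usual defaults into the formula above (dfs.namenode.heartbeat.recheck-interval = 5 minutes, dfs.heartbeat.interval = 3 seconds; worth confirming against your release's hdfs-default.xml) gives the familiar 10.5-minute expiry:

```python
# Defaults: recheck interval in milliseconds, heartbeat interval in seconds.
heartbeat_recheck_interval_ms = 5 * 60 * 1000  # dfs.namenode.heartbeat.recheck-interval
heartbeat_interval_s = 3                       # dfs.heartbeat.interval

# Formula from the message above.
expire_ms = 2 * heartbeat_recheck_interval_ms + 10 * 1000 * heartbeat_interval_s

print(expire_ms / 60000)  # minutes before a DataNode is marked dead -> 10.5
```

So a DataNode that stops heartbeating is only declared dead after 10.5 minutes by default; to shorten that, lower the recheck interval rather than the heartbeat interval.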

Re: Bugs while installing apache hadoop 2.4.0

2014-07-06 Thread Akira AJISAKA
Did you move your native library to /usr/local/hadoop/lib/native ? Thanks, Akira (2014/07/07 0:59), Ritesh Kumar Singh wrote: My hadoop is still giving the above mentioned error. Please help. On Thu, Jul 3, 2014 at 12:50 PM, Akira AJISAKA ajisa...@oss.nttdata.co.jp mailto:ajisa

Re: Bugs while installing apache hadoop 2.4.0

2014-07-03 Thread Akira AJISAKA
It looks like the native library is not compatible with your environment. You should delete '/usr/local/hadoop/lib/native' directory or compile source code to get your own native library. Thanks, Akira (2014/07/03 15:05), Ritesh Kumar Singh wrote: When I try to start dfs using start-dfs.sh I

Re: Bugs while installing apache hadoop 2.4.0

2014-07-03 Thread Akira AJISAKA
You can download the source code and generate your own native library by $ mvn package -Pdist,native -Dtar -DskipTests You should see the library in 'hadoop-dist/target/hadoop-2.4.0/lib/native' Thanks, Akira (2014/07/03 15:32), Ritesh Kumar Singh wrote: @Akira : if i delete my native

Re: HDFS command giving error

2014-06-30 Thread Akira AJISAKA
'hadoop hdfs dfs -ls' is not correct. You should use 'hdfs dfs -ls path' to access HDFS. For more details, see http://hadoop.apache.org/docs/r2.0.6-alpha/hadoop-project-dist/hadoop-common/FileSystemShell.html FYI: It's preferred to use user@hadoop.apache.org if you have a question.

Re: LVM to JBOD conversion without data loss

2014-05-13 Thread Akira AJISAKA
Hi Bharath, The steps do not look safe to me. Data loss can happen if you reduce the replication and remove a DataNode at the same time. 1) decommission a DataNode (or some DataNodes) 2) change the configuration of the DataNode(s) 3) add the DataNode(s) back to the cluster repeat 1) - 3) for all

Re: hdfs cache

2014-04-21 Thread Akira AJISAKA
Hi, I think you should check the followings: * The setting is already written to /etc/security/limits.conf. * The command is executed by root user. * If you use sshd to connect the server and execute the command, you should make sure you have UsePAM yes in /etc/ssh/sshd_config and restart sshd
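The thread does not show which limit is involved, but for the HDFS centralized cache the usual one is memlock (required by dfs.datanode.max.locked.memory). A hypothetical /etc/security/limits.conf fragment, purely as an illustration of the checklist above:

```
# /etc/security/limits.conf -- allow the hdfs user to lock memory
# for the DataNode cache (illustrative; the actual limit and user
# depend on your deployment)
hdfs  soft  memlock  unlimited
hdfs  hard  memlock  unlimited
```

As the message notes, the new limit only takes effect for sessions that go through PAM, so a restart of sshd (with UsePAM yes) and a fresh login are needed before `ulimit -l` reflects it.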

Re: log4j.appender.DRFA.MaxBackupIndex is not it nonsense!?

2014-04-20 Thread Akira AJISAKA
Hi Eremikhin, Thank you for the detailed information! # 30-day backup # log4j.appender.DRFA.MaxBackupIndex=30 Since the comment is confusing to the user, I think it should be removed. I've filed a jira to track this issue. https://issues.apache.org/jira/browse/HADOOP-10525 Thanks, Akira

Re: Update interval of default counters

2014-04-16 Thread Akira AJISAKA
. I want to get more finer values, instead of directly jumping from 280 to 516. Did that make sense? mapreduce.client.progressmonitor.pollinterval does not seem to effect it. Any workaround ? Thanks, Dharmesh On Tue, Apr 15, 2014 at 7:51 PM, Akira AJISAKA ajisa...@oss.nttdata.co.jpwrote: Moved

Re: Offline image viewer - account for edits ?

2014-04-15 Thread Akira AJISAKA
If you want to parse the edits, please use the Offline Edits Viewer. http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html Thanks, Akira (2014/04/15 16:41), Mingjiang Shi wrote: I think you are right because the the offline image viewer only takes the

Re: Update interval of default counters

2014-04-15 Thread Akira AJISAKA
Moved to user@hadoop.apache.org. You can configure the interval by setting mapreduce.client.progressmonitor.pollinterval parameter. The default value is 1000 ms. For more details, please see
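A sketch of lowering the poll interval from the 1000 ms default to 500 ms in mapred-site.xml (the property name is from the message; it can also be passed with -D on job submission):

```xml
<!-- mapred-site.xml: how often (ms) the client polls the job
     for progress and counter updates -->
<property>
  <name>mapreduce.client.progressmonitor.pollinterval</name>
  <value>500</value>
</property>
```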

Re: Offline image viewer - account for edits ?

2014-04-15 Thread Akira AJISAKA
Yes, I think you are right. (2014/04/16 1:20), Manoj Samel wrote: So, is it correct to say that if one wants to get the latest state of the Name node, the information from imageviewer and from edits viewer has to be combined somehow ? Thanks, On Tue, Apr 15, 2014 at 7:26 AM, Akira AJISAKA

Re: issue with defaut yarn.application.classpath value from yarn-default.xml for hadoop-2.3.0

2014-03-17 Thread Akira AJISAKA
This is intentional. See https://issues.apache.org/jira/browse/YARN-1138 for the detail. If you want to use the default parameter for your application, you should write the same parameter to the config file, or you can use YarnConfiguration.DEFAULT_YARN_APPLICATION_CLASSPATH instead of using

Re: Assertion error while builing hdoop 2.3.0

2014-03-06 Thread Akira AJISAKA
Hi Mahmood, I have downloaded hadoop-2.3.0-src and followed the guide from http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/SingleCluster.html The documentation is still old, and you don't need to compile the source code to build a cluster. I built the latest document

Re: unsubscribe

2014-02-19 Thread Akira AJISAKA
Please send email to user-unsubscr...@hadoop.apache.org See http://hadoop.apache.org/mailing_lists.html#User (2014/02/19 21:51), dexter morgan wrote: -- Forwarded message -- From: *dinesh dakshan* dinesh...@gmail.com mailto:dinesh...@gmail.com Date: Wed, Feb 19, 2014 at 5:40

Re: question about reduce method

2014-02-17 Thread Akira AJISAKA
Moving to user@hadoop.apache.org. If you have a question about this, please reply to user mailing list instead of mapreduce-dev@. Thanks, Akira (2014/02/17 10:06), Akira AJISAKA wrote: I know map method put these text file into map,like follows,right? 001, 35.99 001, 35.99 002, 12.49 004

The Activities of Apache Hadoop Community

2014-02-13 Thread Akira AJISAKA
Hi all, We collected and analyzed JIRA tickets to investigate the activities of Apache Hadoop Community in 2013. http://ajisakaa.blogspot.com/2014/02/the-activities-of-apache-hadoop.html We counted the number of the organizations, the lines of code, and the number of the issues. As a result, we

Re: Hadoop 2.2.0 from source configuration

2013-12-02 Thread Akira AJISAKA
Hi Daniel, I agree with you that the 2.2 documents are very unfriendly. In many documents, the only change from 1.x to 2.2 is the format. There are still many documents to be converted (e.g. Hadoop Streaming). Furthermore, there are a lot of dead links in the documents. I've been trying to fix dead links,