Re: How can I get yarn log with yarn api

2021-11-09 Thread Hariharan
The appattempts API response has a link to the logs. I think that should get you what you need. Thanks, Hariharan On Mon, Nov 8, 2021 at 8:23 AM igyu wrote: > I know
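As a follow-up sketch: the ResourceManager REST endpoint for application attempts is `/ws/v1/cluster/apps/{appid}/appattempts`, and each attempt object in the response carries a logs link (commonly named `logsLink`; verify against your Hadoop version's RM REST docs). A minimal, self-contained example of composing that URL, where the host and port are placeholders:

```java
// Sketch: compose the ResourceManager REST URL for an application's attempts.
// rm.example.com and port 8088 are placeholders; adjust for your cluster.
public class AppAttemptsUrl {

    static String appAttemptsUrl(String rmHost, int port, String appId) {
        return String.format("http://%s:%d/ws/v1/cluster/apps/%s/appattempts",
                rmHost, port, appId);
    }

    public static void main(String[] args) {
        // Fetch this URL with any HTTP client; each attempt in the JSON
        // response should include a link to the container logs.
        System.out.println(appAttemptsUrl("rm.example.com", 8088,
                "application_1436784252938_0022"));
    }
}
```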

How can I get yarn log with yarn api

2021-11-07 Thread igyu
I know "yarn logs -applicationId application_1436784252938_0022", but that is in the shell; I want to build a Java application: EnumSet<YarnApplicationState> enumStates = Sets.newEnumSet(applicationStates, YarnApplicationState.class); List<ApplicationReport> reports = client.getApplications(applicationTypes,

How can I get log with yarn java api

2021-11-03 Thread igyu
YarnClient client = YarnClient.createYarnClient(); Configuration yarnconf = new YarnConfiguration(); yarnconf.addResource(new File("D:\\file\\yarn-site.xml").toURI().toURL()); client.init(yarnconf); client.start(); Set

RE: Can't Change Retention Period for YARN Log Aggregation

2019-11-24 Thread David M
if that doesn’t solve the issue. From: Prabhu Josephraj Sent: Friday, November 22, 2019 12:13 AM To: David M Cc: user@hadoop.apache.org Subject: Re: Can't Change Retention Period for YARN Log Aggregation The deletion service runs as part of MapReduce JobHistoryServer. Can you try restarting

Re: Can't Change Retention Period for YARN Log Aggregation

2019-11-21 Thread Prabhu Josephraj
The deletion service runs as part of MapReduce JobHistoryServer. Can you try restarting it? On Fri, Nov 22, 2019 at 3:42 AM David M wrote: > All, > > > > I have an HDP 2.6.1 cluster where we’ve had > yarn.log-aggregation.retain-seconds set to 30 days for a while, and > everything was working

Can't Change Retention Period for YARN Log Aggregation

2019-11-21 Thread David M
All, I have an HDP 2.6.1 cluster where we've had yarn.log-aggregation.retain-seconds set to 30 days for a while, and everything was working properly. Four days ago we changed the property to 15 days and restarted the services. The check interval is set to the default, so we expected
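For reference, the two properties involved are sketched below (values illustrative); per the reply in this thread, the deletion service runs inside the MapReduce JobHistoryServer, so that daemon is the one to restart after changing them:

```xml
<!-- Sketch: aggregated-log retention settings in yarn-site.xml. -->
<property>
  <name>yarn.log-aggregation.retain-seconds</name>
  <value>1296000</value> <!-- 15 days -->
</property>
<property>
  <!-- How often the deletion check runs; a value <= 0 typically means
       one tenth of the retention time. -->
  <name>yarn.log-aggregation.retain-check-interval-seconds</name>
  <value>-1</value>
</property>
```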

Re: get a spark job full log via yarn api (instead of yarn cli)

2018-11-30 Thread GERARD Nicolas
Your best option if you are using spark is here: https://spark.apache.org/docs/latest/monitoring.html#viewing-after-the-fact By default, you can access the log via: * the web ui * yarn log * or the file system As far as I know, there is no simple rest api call. The cli logs

Re: get a spark job full log via yarn api (instead of yarn cli)

2018-11-27 Thread Lian Jiang
Any idea? Or should I ask another user group? Thanks. On Mon, Nov 26, 2018 at 2:02 PM Lian Jiang wrote: > On HDP3, I cannot get the full log of a failing spark job by using yarn > api: > > curl -k -u guest:"" -X GET https://myhost.com/gateway/ui/resourcemanager/

get a spark job full log via yarn api (instead of yarn cli)

2018-11-26 Thread Lian Jiang
On HDP3, I cannot get the full log of a failing spark job by using the yarn api: curl -k -u guest:"" -X GET https://myhost.com/gateway/ui/resourcemanager/v1/cluster/apps/

Re: Yarn mapreduce Logging : syslog vs stderr log files

2018-03-20 Thread Sultan Alamro
LOG.info("text") -> syslog > On Mar 20, 2018, at 9:02 PM, chandan prakash <chandanbaran...@gmail.com> > wrote: > > Hi All, > Currently my yarn MR job is writing logs to syslog and stderr. > I want to know : > how it is decided which log will go to syslog and whi

Yarn mapreduce Logging : syslog vs stderr log files

2018-03-20 Thread chandan prakash
Hi All, Currently my yarn MR job is writing logs to syslog and stderr. I want to know: 1. How is it decided which logs go to syslog and which go to stderr? 2. Can I redirect logs to syslog instead of stderr? If YES: how? If NO: can we ensure log
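For context on the split: lines written through the Hadoop/log4j logger go to syslog via the container log appender, while the raw stdout/stderr streams of the task JVM are redirected to the stdout/stderr files at container launch. A hedged sketch of the relevant container-log4j.properties wiring (names from memory; verify against your distribution's container-log4j.properties):

```properties
# Sketch: container log appender routing LOG.* output to syslog.
hadoop.root.logger=INFO,CLA
log4j.rootLogger=${hadoop.root.logger}
log4j.appender.CLA=org.apache.hadoop.yarn.ContainerLogAppender
log4j.appender.CLA.containerLogDir=${yarn.app.container.log.dir}
log4j.appender.CLA.containerLogFile=${hadoop.root.logfile}
```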

Re: Strange log on yarn commands

2018-01-27 Thread Soheil Pourbafrani
madmin -replaceLabelsOnNode "datanode1=online" > > it executes the command successfully but prints the following strange log: > INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm2 > > is something wrong in the cluster, or is this normal? >

Strange log on yarn commands

2018-01-27 Thread Soheil Pourbafrani
Hi, I've set up a YARN HA cluster with ids rm1 and rm2; rm2 is the active resourcemanager. When I run yarn commands in the terminal, like: yarn node -list or yarn rmadmin -replaceLabelsOnNode "datanode1=online", it executes the command successfully but prints the following strange

Re: Will Backup Node download image and edits log from NameNode?

2017-09-30 Thread Gurmukh Singh
Your observation is correct; the backup node will also download. If you look at the journey/evolution of hadoop, we had primary, backup-only, checkpointing node and then a generic secondary node. The checkpointing node will do the merge of fsimage and edits On 25/9/17 5:57 pm, Chang.Wu wrote: From the

Will Backup Node download image and edits log from NameNode?

2017-09-25 Thread Chang.Wu
From the official document of the Backup Node, it says: The Backup node does not need to download fsimage and edits files from the active NameNode in order to create a checkpoint, as would be required with a Checkpoint node or Secondary NameNode, since it already has an up-to-date state of the

NodeManager exits without specific log messages.

2017-09-22 Thread Nur Kholis Majid
Hi, one of my NM nodes periodically exits with this error log: https://paste.ee/p/hc104 Does anyone have an idea about this? Thank you. - To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org For additional commands, e-mail: user-h

Re: Access to yarn application log files for different user or group

2017-07-20 Thread Manfred Halper(Arge)
is released hopefully November. Am 19.07.2017 um 12:07 schrieb had...@x5h.eu: > Hello, > > I currently try to access the application log files from yarn > automatically. Yarn produces log files in the folder > /userlogs/applicationid/containerid/*.log. The problem I have is, that

Access to yarn application log files for different user or group

2017-07-19 Thread hadoop
Hello, I am currently trying to access the application log files from yarn automatically. Yarn produces log files in the folder /userlogs/applicationid/containerid/*.log. The problem I have is that the generated application and container directories are constructed with a specific umask (067) that I

Re: Modifying container log granularity at job submission time

2017-05-03 Thread Sunil Govind
son@salesforce.com> wrote: > When I view container logs, I only see "INFO:" log lines. How do I make > the log lines more fine grained? > > I've tried the following, without success: > > Configuration.setStrings(MRJobConfig.MR_AM_LOG_LEVEL, "DEBUG"); > > > Thanks, > Benson >

Modifying container log granularity at job submission time

2017-05-01 Thread Benson Qiu
When I view container logs, I only see "INFO:" log lines. How do I make the log lines more fine grained? I've tried the following, without success: Configuration.setStrings(MRJobConfig.MR_AM_LOG_LEVEL, "DEBUG"); Thanks, Benson
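Note that MR_AM_LOG_LEVEL only affects the ApplicationMaster; the map and reduce task JVMs have their own log-level properties. A sketch of the per-component settings (set per job or in mapred-site.xml):

```xml
<!-- Sketch: per-component MapReduce log levels. -->
<property>
  <name>mapreduce.map.log.level</name>
  <value>DEBUG</value>
</property>
<property>
  <name>mapreduce.reduce.log.level</name>
  <value>DEBUG</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.log.level</name>
  <value>DEBUG</value>
</property>
```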

Access YARN container's log directory from map reduce

2016-11-28 Thread Eqbal
Hi, I would like to be able to access yarn container's log directory from my map reduce implementation. For example if I have the following config yarn.nodemanager.log-dirs=/mnt/yarn/log I want to access the subfolder for the container log that's written to this directory e.g., /mnt/yarn/log
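One approach, under the assumption (worth verifying for your YARN version) that the NodeManager exports the container's log directory in the LOG_DIRS environment variable: read it from inside the task, falling back to composing the conventional `<log-dirs>/<app-id>/<container-id>` layout. A minimal sketch:

```java
public class ContainerLogDir {

    // LOG_DIRS is assumed to be set in each container's environment by the
    // NodeManager. Outside a container (env var absent) we compose the
    // conventional path from the configured log root instead.
    static String containerLogDir(String logRoot, String appId, String containerId) {
        String env = System.getenv("LOG_DIRS");
        return env != null ? env : logRoot + "/" + appId + "/" + containerId;
    }

    public static void main(String[] args) {
        System.out.println(containerLogDir("/mnt/yarn/log",
                "application_1480000000000_0001",
                "container_1480000000000_0001_01_000002"));
    }
}
```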

Re:Re: Fw:Re:How to add custom field to hadoop MR task log?

2016-11-04 Thread Maria
if the log4j.properties file is what I edited, or is there any other way to add the ID to the log message except writing it directly in LOG.info(). Thank you. _Maria At 2016-11-05 11:35:02, "Ravi Prakash" <ravihad...@gmail.com> wrote: Hi Maria! You have to be careful which log4j

Re: Fw:Re:How to add custom field to hadoop MR task log?

2016-11-04 Thread Ravi Prakash
"<ID>" to every > >LOG.info()/LOG.warn() like this: > > > >logger.info(ID + " start map logic"); > >BUT, having every LOG call add "ID" is not wise. > >Or else, does someone know how to modify the mapreduce task ConversionPattern > >configurati

Re:Fw:Re:How to add custom field to hadoop MR task log?

2016-11-04 Thread Maria
At 2016-11-04 17:01:16, "Maria" <linanmengxia...@126.com> wrote: > >I know that a simple way is to write "<ID>" into every >LOG.info()/LOG.warn() ... like this: > >logger.info(ID + " start map logic"); >BUT, having every LOG call add "ID"

Fw:Re:How to add custom field to hadoop MR task log?

2016-11-04 Thread Maria
I know that a simple way is to write "<ID>" into every LOG.info()/LOG.warn() ... like this: logger.info(ID + " start map logic"); BUT, having every LOG call add "ID" is not wise. Or else, does someone know how to modify the mapreduce task ConversionPattern configu

How to add custom field to hadoop MR task log?

2016-11-03 Thread Maria
Hi, dear developers, I'm trying to reconfigure $HADOOP/etc/hadoop/log4j.properties; I want to add an ID to the mapreduce log before the log message, like this: "ID:234521 start map logic". My steps were as follows: (1) In my Mapper class: static Logger logger = LoggerFactory.getLogger(Mapper.class); ..
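One common way to do this without touching every LOG call is log4j's mapped diagnostic context (MDC): put the ID into the MDC once (e.g. in the mapper's setup method) and reference it from the ConversionPattern with %X. A hedged log4j.properties sketch (the appender name is illustrative):

```properties
# Sketch: surface a per-task ID in every log line via MDC.
# In the mapper's setup(): org.apache.log4j.MDC.put("ID", "234521");
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=ID:%X{ID} %d{ISO8601} %p %c: %m%n
```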

Logging garbage collection metrics into yarn nodemanager application log directory for mappers in Hadoop

2016-09-01 Thread Gopi Krishnan
What is the correct way to log garbage collection metrics into the same location where yarn syslog/stderr and stdout logs are located for mappers in Hadoop Yarn. Any insights would be helpful. Here are the settings I tried in mapred-site.xml, but no gc logs were available in the location
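One approach worth trying, under the assumption that your YARN version expands the literal `<LOG_DIR>` token in container launch options to the container's log directory: put the GC flags in the task java.opts so the GC log lands next to syslog/stdout/stderr. A hedged mapred-site.xml sketch (heap size illustrative; the angle brackets must be XML-escaped):

```xml
<!-- Sketch: route mapper GC logs into the container log directory.
     YARN substitutes <LOG_DIR> at container launch (verify for your version). -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m -verbose:gc -Xloggc:&lt;LOG_DIR&gt;/gc.log</value>
</property>
```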

Change log level

2016-04-19 Thread Kun Ren
Hi All, I compiled the source code and used eclipse to remotely debug it. I want to see the debug information in the log, so I changed the log level for some classes; for example, I changed FsShell's log level to DEBUG (from http://localhost:50070/logLevel), then I add

datanode process log Replica not found

2016-01-07 Thread yaoxiaohua
Hi, I grepped for exceptions in the datanode process log and found a lot of "Replica not found" errors. Why did this happen? org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Replica not found for BP-426197605-ip-1406809206259:blk_1237410030_163690921 Thanks Best Regards, Evan

map task frozen from master(s) perspective, but no process is there, and task log reports completion

2015-11-18 Thread Nicolae Marasoiu
top 5 map+reduce tasks running in the current config. I cannot change this while the job is still running, right?) I have found a log of the task showing completion: 2015-11-19 04:01:14,719 INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output 2015-11-19 04:01:14,719

Log "Start loading edits file" on standby node

2015-11-11 Thread Xizhen Du
Hi All, We have a standard active/standby hdfs setup, and everything looks to be working well so far. The question is that, on the standby namenode, in most cases we see it fetch the edits log from 2 journal nodes; but sometimes it loads edits from all 3 journal nodes

Namenode log size keep growing - Hadoop v1.2.1

2015-07-28 Thread Viswanathan J
In our cluster recently we had an issue with the Namenode log file size; it keeps growing with the following type of logs. 2015-07-28 13:37:38,730 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* addToInvalidates: blk_-2946593971266165812 to 192.168.x.x:50010 2015-07-28 13:37:38,730 INFO

Re: how to write custom log loader and store in JSON format

2015-07-05 Thread James Bond
, removing some blacklisted fields (like SSN etc) which we thought was easier to do it in MR than pig. Thanks, Ashwin On Sat, Jul 4, 2015 at 10:50 AM, Divya Gehlot divya.htco...@gmail.com wrote: Hi, I am new to pig and I have a log file in below format (Message,NIL,2015-07-01,22:58:53.66,E

how to write custom log loader and store in JSON format

2015-07-03 Thread Divya Gehlot
Hi, I am new to pig and I have a log file in below format (Message,NIL,2015-07-01,22:58:53.66,E,xx.xxx.x.xxx,12,0xd6,BIZ,Componentname,0,0.0,key_1=valueKEY_2=KEY_3=VALUEKEY_4=AUKEY_5=COMPANYKEY_6=VALUEKEY_7=1222KEY_8=VALUEKEY_9=VALUEKEY_10=VALUEKEY_10=VALUE) for which I need

RE: DataNode not creating/writing log?

2015-06-11 Thread yves callaert
an administrator console, look into the HDFS , Yarn and Mapreduce section. With Regards, Yves From: caesarsa...@mac.com To: user@hadoop.apache.org Subject: DataNode not creating/writing log? Date: Wed, 10 Jun 2015 18:26:37 -0400 Hello, I’m running Hadoop 2.6.0 and while the cluster runs I’ve not seen

DataNode not creating/writing log?

2015-06-10 Thread Caesar Samsi
Hello, I'm running Hadoop 2.6.0 and while the cluster runs I've not seen a log created/written in the expected place. What could cause this? Is it writing to another place? What is the default directory? Thank you, Caesar.

Log aggregation ownership issue with non-HDFS setup

2015-05-19 Thread Shay Rojansky
Hi. I'm trying to set up yarn log aggregation on a cluster using a shared NFS filesystem (no HDFS). The issue is that the user directories that get created under yarn.nodemanager.remote-app-log-dir are owned by the node manager owner, and not by the submitting user (which therefore can't access

Re: Question about log files

2015-04-06 Thread Joep Rottinghuis
to recreate it. Not sure if it's a Log4j problem or a Hadoop one... yanghaogn, which is the *correct* way to delete the Hadoop logs? I didn't find anything better than deleting the file and restarting the service... On Mon, Apr 6, 2015 at 9:27 AM, 杨浩 yangha...@gmail.com wrote: I think the log

Re: Question about log files

2015-04-06 Thread 杨浩
I think the log information is lost; Hadoop is not designed for the case where these files are deleted incorrectly. 2015-04-02 11:45 GMT+08:00 煜 韦 yu20...@hotmail.com: Hi there, If log files are deleted without restarting the service, it seems the logs are lost for later operation

Re: Question about log files

2015-04-06 Thread Fabio C.
anything better than deleting the file and restarting the service... On Mon, Apr 6, 2015 at 9:27 AM, 杨浩 yangha...@gmail.com wrote: I think the log information has lost. the hadoop is not designed for that you deleted these files incorrectly 2015-04-02 11:45 GMT+08:00 煜 韦 yu20...@hotmail.com

Question about log files

2015-04-01 Thread 煜 韦
Hi there, If log files are deleted without restarting the service, it seems the logs are lost for later operations. For example, on the namenode or datanode. Why can't log files be re-created when deleted by mistake or on purpose while the cluster is running? Thanks, Jared

changing log verbosity

2015-02-24 Thread Jonathan Aquilina
How does one go about changing the log verbosity in hadoop? What configuration file should I be looking at? -- Regards, Jonathan Aquilina Founder Eagle Eye T

Re: changing log verbosity

2015-02-24 Thread Ram Kumar
Hi Jonathan, For the audit log you can look at the log4j.properties file. By default, the log4j.properties file has the log threshold set to WARN. By setting this level to INFO, audit logging can be turned on. The following snippet shows the log4j.properties configuration when HDFS and MapReduce audit logs
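For the HDFS audit log specifically, the stock log4j.properties typically wires it like this (a sketch; the appender name after the level is an assumption to check against your distribution's file):

```properties
# Sketch: raise the HDFS audit logger from WARN to INFO to turn on audit logging.
hdfs.audit.logger=INFO,RFAAUDIT
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
```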

Log Aggregation

2015-02-13 Thread Dmitry Sivachenko
reports: Logs not available for attempt_1422914757889_1881_m_00_0. Aggregation may not be complete, Check back later or try the nodemanager at host It seems that it does not depend on log size; it is not so big that it would take 20 minutes to copy to hdfs. Why can this happen? How can I debug

Re: Log Aggregation

2015-02-13 Thread Xuan Gong
Hey, could you check whether the logs appear in HDFS? If not, could you check the NodeManager logs to find out when/whether the log aggregation for that application started? Thanks Xuan Gong On 2/13/15, 6:54 AM, Dmitry Sivachenko trtrmi...@gmail.com wrote: Hello! I am using hadoop-2.4.1

Re: Edits log apply performance

2015-01-19 Thread Chris Nauroth
Hi Daniel, Unfortunately, there likely isn't anything you can do to speed this up. The process of applying edits will be bound by the available read throughput on the edits stream and the CPU needs for processing each transaction. This is inherently a single-threaded process in the current

Re: Edits log apply performance

2015-01-19 Thread Daniel Haviv
Thought so... Thank you for the info and assistance. BR, Daniel On 19 בינו׳ 2015, at 19:48, Chris Nauroth cnaur...@hortonworks.com wrote: Hi Daniel, Unfortunately, there likely isn't anything you can do to speed this up. The process of applying edits will be bound by the available read

Edits log apply performance

2015-01-17 Thread Daniel Haviv
Hi, After restarting the namenode we discovered that there had been no checkpoint for quite a while. We are waiting for all the changes to be applied to the fsimage, but it seems like it will take hours. Is there something we can do to expedite the process? Increase parallelism? Anything at all?
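As the reply notes, the replay itself cannot be sped up; what can be tuned is how large the edits backlog is allowed to grow between checkpoints. A hedged hdfs-site.xml sketch of the checkpoint triggers (values illustrative, property names from Hadoop 2.x):

```xml
<!-- Sketch: checkpoint triggers that bound the edits backlog. -->
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600</value> <!-- seconds between checkpoints -->
</property>
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>1000000</value> <!-- checkpoint early if this many txns accumulate -->
</property>
```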

InputFormat for dealing with log files.

2014-10-05 Thread Guillermo Ortiz
I'd like to know if there's an InputFormat able to deal with log files. The problem I have is that if I have to read a Tomcat log, for example, sometimes the exceptions span several lines, but they should be processed as just one record, I mean all the lines together
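The usual trick for multi-line records such as stack traces is to treat any line that does not start a new record as a continuation of the previous one; a custom RecordReader would apply the same grouping per split. A self-contained sketch of just the grouping logic (the timestamp pattern is an assumption about the log layout):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class LogRecordGrouper {

    // Assumption: a new record starts with a date like "2014-10-05 ...";
    // anything else (e.g. a stack-trace line) continues the previous record.
    private static final Pattern START = Pattern.compile("^\\d{4}-\\d{2}-\\d{2} ");

    static List<String> group(List<String> lines) {
        List<String> records = new ArrayList<>();
        StringBuilder current = null;
        for (String line : lines) {
            if (START.matcher(line).find() || current == null) {
                if (current != null) records.add(current.toString());
                current = new StringBuilder(line);
            } else {
                current.append('\n').append(line);   // continuation line
            }
        }
        if (current != null) records.add(current.toString());
        return records;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList(
            "2014-10-05 12:00:00 INFO ok",
            "2014-10-05 12:00:01 ERROR boom",
            "java.lang.NullPointerException",
            "\tat com.example.Foo.bar(Foo.java:1)");
        // The exception and its stack trace fold into the second record.
        System.out.println(group(lines).size());
    }
}
```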

Re: InputFormat for dealing with log files.

2014-10-05 Thread Ted Yu
Have you read http://blog.rguha.net/?p=293 ? Cheers On Sun, Oct 5, 2014 at 6:24 AM, Guillermo Ortiz gor...@pragsis.com wrote: I'd like to know if there's an InputFormat to be able to deal with log files. The problem that I have it's that if I have to read an Tomcat log for example

Re: InputFormat for dealing with log files.

2014-10-05 Thread Guillermo Ortiz
Subject: Re: InputFormat for dealing with log files. Have you read http://blog.rguha.net/?p=293 ? Cheers On Sun, Oct 5, 2014 at 6:24 AM, Guillermo Ortiz gor...@pragsis.com wrote: I'd like to know if there's an InputFormat to be able to deal with log files. The problem that I have it's

Re: InputFormat for dealing with log files.

2014-10-05 Thread Ted Yu
with log files. Have you read http://blog.rguha.net/?p=293? Cheers On Sun, Oct 5, 2014 at 6:24 AM, Guillermo Ortiz gor...@pragsis.com wrote: I'd like to know if there's an InputFormat to be able to deal with log files. The problem that I have it's that if I have to read an Tomcat log

Cannot find profiling log file

2014-09-23 Thread Jakub Stransky
Hello experienced users, I did try to use profiling of tasks during mapreduce: <property><name>mapreduce.task.profile</name><value>true</value></property> <property><name>mapreduce.task.profile.maps</name><value>0-5</value></property> <property>

RE: Cannot find profiling log file

2014-09-23 Thread Rohith Sharma K S
Hi, Have you enabled log aggregation? 1. If log aggregation is enabled then you can get logs from hdfs at the path below. Both aggregated logs and profiler output will be in the same file: ${yarn.nodemanager.remote-app-log-dir}/${user}/logs/app-id/ If not enabled, then check inside
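For reference, enabling log aggregation takes two yarn-site.xml properties (the HDFS path below is illustrative):

```xml
<!-- Sketch: enabling YARN log aggregation. -->
<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.remote-app-log-dir</name>
  <value>/app-logs</value>
</property>
```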

How to check what is the log directory for container logs

2014-07-31 Thread Krishna Kishore Bonagiri
Hi, Is there a way to check what is the log directory for container logs in my currently running instance of YARN from the command line, I mean using the yarn command or hadoop command or so? Thanks, Kishore

Re: How to check what is the log directory for container logs

2014-07-31 Thread Haiyang Fu
1. Change to the nodemanager log dir according to yarn-site.xml: <property><name>yarn.nodemanager.log-dirs</name><value>/path/to/hdfs/nodemanager_log/</value><description>the directories used by Nodemanagers as log directories</description>

In progress edit log from last run not being played in case of a cluster (HA) restart

2014-07-04 Thread Nitin Goyal
across NN JNs are updated and DN physically delete the blocks). But before the current in-progress edit log segment can be closed, the NN is stopped. Now when the NN is started again, it reads all edit logs from JNs but it does not consider the last in-progress edit log from the last run. Due to this NN

How to interpret the execution log of a Hadoop job

2014-06-29 Thread Freddy Chua
I am wondering is there a documented specification of what each line of the output log represents. Freddy Chua

I am struggling with random job failures with no log traces

2014-05-26 Thread Steve Lewis
For several years I have been using a Hadoop 0.2 cluster successfully. I execute jobs from a remote system, specifying a jar file put together on my local machine. Suddenly all that stopped working. On some machines jobs work and on some they fail. Failures look like the text below and when they

Re: question about search log use yarn command

2014-05-23 Thread Wangda Tan
, if you're not sure about this, you can check out the whole application log with all containers (don't use -containerId and -nodeAddress) and find what you're interested in. Thanks, Wangda On Thu, May 15, 2014 at 4:34 PM, ch huang justlo...@gmail.com wrote: hi,mailist: doc say

Re: Setting debug log level for individual daemons

2014-04-16 Thread Ashwin Shankar
at 2:06 AM, Ashwin Shankar ashwinshanka...@gmail.com wrote: Thanks Gordon and Stanley, but this would require us to bounce the process. Is there a way to change log levels without bouncing the process ? On Tue, Apr 15, 2014 at 3:23 AM, Gordon Wang gw...@gopivotal.com wrote: Put

namenode log Inconsistent size

2014-04-16 Thread 조주일
This occurs when uploading. Are these logs generated in any situation? Is it a dangerous problem? * hadoop version 1.1.2 * namenode log 2014-04-17 09:30:34,280 INFO namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(2374)) - commitBlockSynchronization(lastblock=blk_

Re: Setting debug log level for individual daemons

2014-04-15 Thread Gordon Wang
Put the following line in the log4j setting file. log4j.logger.org.apache.hadoop.yarn.server.resourcemanager=DEBUG,console On Tue, Apr 15, 2014 at 8:33 AM, Ashwin Shankar ashwinshanka...@gmail.comwrote: Hi, How do we set log level to debug for lets say only Resource manager

Re: Setting debug log level for individual daemons

2014-04-15 Thread Ashwin Shankar
Thanks Gordon and Stanley, but this would require us to bounce the process. Is there a way to change log levels without bouncing the process ? On Tue, Apr 15, 2014 at 3:23 AM, Gordon Wang gw...@gopivotal.com wrote: Put the following line in the log4j setting file

Re: Setting debug log level for individual daemons

2014-04-15 Thread Stanley Shi
to bounce the process. Is there a way to change log levels without bouncing the process ? On Tue, Apr 15, 2014 at 3:23 AM, Gordon Wang gw...@gopivotal.com wrote: Put the following line in the log4j setting file. log4j.logger.org.apache.hadoop.yarn.server.resourcemanager=DEBUG,console

Setting debug log level for individual daemons

2014-04-14 Thread Ashwin Shankar
Hi, How do we set the log level to debug for, let's say, only the Resource Manager and not the other hadoop daemons? -- Thanks, Ashwin

Re: Setting debug log level for individual daemons

2014-04-14 Thread Stanley Shi
Add -Dhadoop.root.logger=DEBUG to something like YARN_RESOURCEMANAGER_OPTS in yarn-env.sh On Tuesday, April 15, 2014, Ashwin Shankar ashwinshanka...@gmail.com wrote: Hi, How do we set log level to debug for lets say only Resource manager and not the other hadoop daemons ? -- Thanks

issue of Log aggregation has not completed or is not enabled.

2014-03-18 Thread ch huang
hi, maillist: I tried to look at an application log using the following process: # yarn application -list Application-Id Application-Name User Queue State Final-State Tracking-URL application_1395126130647_0014 select user_id

RE: issue of Log aggregation has not completed or is not enabled.

2014-03-18 Thread Rohith Sharma K S
Just for confirmation: 1. Was the NodeManager restarted after enabling log aggregation? If yes, check the NodeManager startup logs to confirm that the Log Aggregation Service started successfully. Thanks & Regards Rohith Sharma K S From: ch huang [mailto:justlo...@gmail.com] Sent: 18 March 2014 13:09

Re: issue of Log aggregation has not completed or is not enabled.

2014-03-18 Thread Jian He
LogAggregation? If Yes, check for NodeManager start up logs for Log Aggregation Service start is success. Thanks Regards Rohith Sharma K S *From:* ch huang [mailto:justlo...@gmail.com] *Sent:* 18 March 2014 13:09 *To:* user@hadoop.apache.org *Subject:* issue of Log aggregation has

Can a YARN Cient or Application Master determine when log aggregation has completed?

2014-03-10 Thread Geoff Thompson
Hello, Log aggregation is great. However, if a yarn application runs a large number of tasks which generate large logs, it takes some finite amount of time for all of the logs to be collected and written to the HDFS. Currently our client code runs the equivalent of the yarn logs command once

Re: Can a YARN Cient or Application Master determine when log aggregation has completed?

2014-03-10 Thread Zhijie Shen
Hi Geoff, Unfortunately, there's no such API for users to determine whether log aggregation is completed or not, but the issue is being tracked. You can keep an eye on YARN-1279. - Zhijie On Mon, Mar 10, 2014 at 10:18 AM, Geoff Thompson ge...@bearpeak.com wrote: Hello, Log

Re: Can a YARN Cient or Application Master determine when log aggregation has completed?

2014-03-10 Thread Geoff Thompson
Hi Zhijie, Thanks for letting us know this issue has been recognized. Thanks, Geoff On Mar 10, 2014, at 12:09 PM, Zhijie Shen zs...@hortonworks.com wrote: Hi Geoff, Unfortunately, there's no such a API for users to determine whether the log aggregation is completed

Warning in secondary namenode log

2014-03-06 Thread Vimal Jain
Hi, I am setting up 2 node hadoop cluster ( 1.2.1) After formatting the FS and starting namenode,datanode and secondarynamenode , i am getting below warning in SecondaryNameNode logs. *WARN org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Checkpoint Period :3600 secs (60 min)* Please

Re: Warning in secondary namenode log

2014-03-06 Thread Nitin Pawar
You can ignore this on a 2 node cluster. This value is the time it waits between two periodic checkpoints on the secondary namenode. On Thu, Mar 6, 2014 at 4:10 PM, Vimal Jain vkj...@gmail.com wrote: Hi, I am setting up 2 node hadoop cluster ( 1.2.1) After formatting the FS and starting

Meaning of messages in log and debugging

2014-03-04 Thread Yves Weissig
Hello list, I'm currently debugging my Hadoop MR application and I have some general questions about the messages in the log and the debugging process. - What does "Container killed by the ApplicationMaster. Container killed on request. Exit code is 143" mean? What does 143 stand for? - I also see

Re: Meaning of messages in log and debugging

2014-03-04 Thread Zhijie Shen
...@uni-mainz.de wrote: Hello list, I'm currently debugging my Hadoop MR application and I have some general questions to the messages in the log and the debugging process. - What does Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 mean? What does 143

Possible to get counters such as Spill file numbers from mapreduce log?

2014-03-03 Thread Felix . 徐
Hi all, I'm wondering if it is possible to get information such as Spill file numbers, Spill start/ end time for each Mapper from mapreduce's log? Thanks!

Exception in data node log

2014-01-31 Thread Vimal Jain
Hi, I have set up hbase in pseudo distributed mode. I keep on getting below exceptions in data node log. Is it a problem ? ( Hadoop version - 1.1.2 , Hbase version - 0.94.7 ) Please help. java.net.SocketTimeoutException: 48 millis timeout while waiting for channel to be ready for write. ch

RE: YARN log access

2014-01-05 Thread John Lilley
Thanks, I missed the target 2.4.0 release. For 2.2.0, is there any way to reach the individual task container logs? John From: Ted Yu [mailto:yuzhih...@gmail.com] Sent: Saturday, January 04, 2014 10:47 AM To: common-u...@hadoop.apache.org Subject: Re: YARN log access YARN-649 is targeted

Re: YARN log access

2014-01-05 Thread Ted Yu
2.4.0 release. For 2.2.0, is there any way to reach the individual task container logs? John *From:* Ted Yu [mailto:yuzhih...@gmail.com] *Sent:* Saturday, January 04, 2014 10:47 AM *To:* common-u...@hadoop.apache.org *Subject:* Re: YARN log access YARN-649 is targeted at 2.4.0 release

RE: YARN log access

2014-01-04 Thread John Lilley
for querying the logs of a YARN application run? Ideally I would start with the AppID, query the AppMaster log, and then descend into the task logs.

YARN log access

2014-01-03 Thread John Lilley
Is there a programmatic or HTTP interface for querying the logs of a YARN application run? Ideally I would start with the AppID, query the AppMaster log, and then descend into the task logs. Thanks John

Re: any suggestions on IIS log storage and analysis?

2014-01-02 Thread Fengyun RAO
, if some log line is related to another line, e.g. based on sessionId, you can emit the sessionId as the key of your mapper output with the value being on the rows associated with the sessionId, so on the reducer side data from different blocks will be coming together. Of course that is just one

Re: any suggestions on IIS log storage and analysis?

2013-12-31 Thread Peyman Mohajerian
You can run a series of map-reduce jobs on your data, if some log line is related to another line, e.g. based on sessionId, you can emit the sessionId as the key of your mapper output with the value being on the rows associated with the sessionId, so on the reducer side data from different blocks

any suggestions on IIS log storage and analysis?

2013-12-30 Thread Fengyun RAO
Hi, HDFS splits files into blocks, and mapreduce runs a map task for each block. However, fields can change within IIS log files, which means fields in one block may depend on another block, making it unsuitable for a mapreduce job. It seems there should be some preprocessing before storing

Re: any suggestions on IIS log storage and analysis?

2013-12-30 Thread Azuryy Yu
be changed in IIS log files, which means fields in one block may depend on another, and thus make it not suitable for mapreduce job. It seems there should be some preprocess before storing and analyzing the IIS log files. We plan to parse each line to the same fields and store in Avro files

Re: any suggestions on IIS log storage and analysis?

2013-12-30 Thread Fengyun RAO
what do you mean by join the data sets? a fake sample log file: #Software: Microsoft Internet Information Services 7.5 #Version: 1.0 #Date: 2013-07-04 20:00:00 #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status

RE: any suggestions on IIS log storage and analysis?

2013-12-30 Thread java8964
I don't know of any example of IIS log files. But from what you described, it looks like analyzing one line of log data depends on some previous lines' data. You should be clearer about what this dependence is and what you are trying to do. Just based on your questions, you still have different

Re: any suggestions on IIS log storage and analysis?

2013-12-30 Thread Fengyun RAO
java8964 java8...@hotmail.com I don't know any example of IIS log files. But from what you described, it looks like analyzing one line of log data depends on some previous lines data. You should be more clear about what is this dependence and what you are trying to do. Just based on your

RE: any suggestions on IIS log storage and analysis?

2013-12-30 Thread java8964
Google Hadoop WholeFileInputFormat or search it in book Hadoop: The Definitive Guide Yong Date: Tue, 31 Dec 2013 09:39:58 +0800 Subject: Re: any suggestions on IIS log storage and analysis? From: raofeng...@gmail.com To: user@hadoop.apache.org Thanks, Yong! The dependence never cross files

Re: NN stopped and cannot recover with error There appears to be a gap in the edit log

2013-11-27 Thread Adam Kawa
: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE (Offline edits viewer: parses a Hadoop edits log file INPUT_FILE and saves the results). -f,--fix-txids renumbers the transaction IDs in the input, so that there are no gaps or invalid transaction IDs; -r,--recover

Re: How can I see the history log of non-mapreduce job in yarn

2013-11-26 Thread Mayank Bansal
is part of the MR project and only tracks/shows MR jobs)? On Tue, Nov 26, 2013 at 7:50 AM, Jeff Zhang jezh...@gopivotal.com wrote: I have configured the history server of yarn. But it looks like it can only help me to see the history log of mapreduce jobs. I still could not see the logs

Re: How can I see the history log of non-mapreduce job in yarn

2013-11-26 Thread Jeff Zhang
the history server of yarn. But it looks like it can only help me to see the history log of mapreduce jobs. I still could not see the logs of non-mapreduce job. How can I see the history log of non-mapreduce job ? -- Harsh J

How can I see the history log of non-mapreduce job in yarn

2013-11-25 Thread Jeff Zhang
I have configured the history server of yarn. But it looks like it can only help me see the history logs of mapreduce jobs. I still could not see the logs of non-mapreduce jobs. How can I see the history log of a non-mapreduce job?

NN HA : will edit log writes to journal nodes impacts the NN's perforrmace ?

2013-11-14 Thread kumar y
still continue to write its own edit logs to its local disk without waiting for Journal node confirmation, is this right? Or does the NN actually wait for the edit log transactions to be written to all journal nodes?

NN stopped and cannot recover with error There appears to be a gap in the edit log

2013-11-14 Thread Joshua Tu
Hi there, I deployed a single node for testing; today the NN stopped and cannot be started, with error: There appears to be a gap in the edit log. 2013-11-14 15:00:01,431 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 2013-11-14 15:00:01,432

Re: NN stopped and cannot recover with error There appears to be a gap in the edit log

2013-11-14 Thread bharath vissapragada
wrote: Hi there, I deployed a single node for testing; today the NN stopped and cannot be started, with error: There appears to be a gap in the edit log. 2013-11-14 15:00:01,431 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 2013-11-14

RE: NN stopped and cannot recover with error There appears to be a gap in the edit log

2013-11-14 Thread Joshua Tu
+0530 Subject: Re: NN stopped and cannot recover with error There appears to be a gap in the edit log To: user@hadoop.apache.org What is your hadoop version? Did you manually delete any files from the nn edits dir? Do you see this gap in the file listing of edits directory too? Ideally all

namenode log parsing

2013-11-12 Thread Patai Sangbutsarakum
I wonder if anyone is parsing the namenode log on a routine basis. I am shopping for a tool for this, and no $plunk because it is expensive; so far logstash seems like a good candidate. The thing is I don't want to spend time reinventing the wheel if there is no value there. I am on the 0.20 branch. P
