[jira] [Resolved] (HADOOP-8688) Hadoop in Pseudo-Distributed mode on Mac OS X 10.8

2012-08-12 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-8688.
-

Resolution: Invalid

Thanks!

> Hadoop in Pseudo-Distributed mode on Mac OS X 10.8
> --
>
> Key: HADOOP-8688
> URL: https://issues.apache.org/jira/browse/HADOOP-8688
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: Mac OS X 10.8, Java_1.6.0_33-b03-424
>Reporter: Subho Banerjee
>Priority: Minor
>
> When running Hadoop in pseudo-distributed mode, the map phase seems to work, but 
> the reduce phase never completes.
> 12/08/13 08:58:12 INFO mapred.JobClient: Running job: job_201208130857_0001
> 12/08/13 08:58:13 INFO mapred.JobClient:  map 0% reduce 0%
> 12/08/13 08:58:27 INFO mapred.JobClient:  map 20% reduce 0%
> 12/08/13 08:58:33 INFO mapred.JobClient:  map 30% reduce 0%
> 12/08/13 08:58:36 INFO mapred.JobClient:  map 40% reduce 0%
> 12/08/13 08:58:39 INFO mapred.JobClient:  map 50% reduce 0%
> 12/08/13 08:58:42 INFO mapred.JobClient:  map 60% reduce 0%
> 12/08/13 08:58:45 INFO mapred.JobClient:  map 70% reduce 0%
> 12/08/13 08:58:48 INFO mapred.JobClient:  map 80% reduce 0%
> 12/08/13 08:58:51 INFO mapred.JobClient:  map 90% reduce 0%
> 12/08/13 08:58:54 INFO mapred.JobClient:  map 100% reduce 0%
> 12/08/13 08:59:14 INFO mapred.JobClient: Task Id : 
> attempt_201208130857_0001_m_00_0, Status : FAILED
> Too many fetch-failures
> 12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer 
> returned HTTP response code: 403 for URL: 
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stdout
> 12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer 
> returned HTTP response code: 403 for URL: 
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stderr
> 12/08/13 08:59:18 INFO mapred.JobClient:  map 89% reduce 0%
> 12/08/13 08:59:21 INFO mapred.JobClient:  map 100% reduce 0%
> 12/08/13 09:00:14 INFO mapred.JobClient: Task Id : 
> attempt_201208130857_0001_m_01_0, Status : FAILED
> Too many fetch-failures
> Here is what I get when I try to see the tasklog using the links given in the 
> output
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stderr
>  --->
> 2012-08-13 08:58:39.189 java[74092:1203] Unable to load realm info from 
> SCDynamicStore
> http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stdout
>  --->
> I have changed my hadoop-env.sh according to Mathew Buckett in 
> https://issues.apache.org/jira/browse/HADOOP-7489
> Also, this 'Unable to load realm info from SCDynamicStore' error does not 
> show up when I run 'hadoop namenode -format' or 'start-all.sh'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Cannot create a new Jira issue for MapReduce

2012-08-12 Thread Steve Loughran
On 12 August 2012 01:20, Jun Ping Du  wrote:

> Thanks Ted. Those are very good suggestions as backup solutions when JIRA
> is down.
> Besides alleviating the impact of JIRA downtime as you mentioned above, do
> we think of some way to keep the JIRA system highly available? It is a little
> embarrassing that we deliver all kinds of HA systems to the rest of the world,
> but we are suffering from this. :(
>
>


Whatever happened with JIRA, it took down the entire VMware host; Tony and
some of the other volunteers have had to build up a new rack specifically
for JIRA. If you were willing to offer the ASF some extra VMware licenses
or tools, then infrastruct...@apache.org are the people to talk to.
Otherwise, people are going to have to wait for the volunteers to get the
new disks brought up at the rack at Oregon State University, rebuild the
database, and gradually bring things up.

One problem that JIRA suffers from is that all the JIRA client tools generate
load as soon as it comes back online, as do all the people - there's a heavy
peak load on service resumption. A way to help the team is for everyone not to
rush to JIRA when it comes up, but instead to let it warm up gradually.

As for the irony: the ASF Infrastructure people are not fans of JIRA. It
may be built from a lot of Apache code, but it takes handholding to keep
going, which is something we should strive to avoid in our own systems.

-steve


[jira] [Created] (HADOOP-8688) Hadoop in Pseudo-Distributed mode on Mac OS X 10.8

2012-08-12 Thread Subho Banerjee (JIRA)
Subho Banerjee created HADOOP-8688:
--

 Summary: Hadoop in Pseudo-Distributed mode on Mac OS X 10.8
 Key: HADOOP-8688
 URL: https://issues.apache.org/jira/browse/HADOOP-8688
 Project: Hadoop Common
  Issue Type: Bug
 Environment: Mac OS X 10.8, Java_1.6.0_33-b03-424
Reporter: Subho Banerjee
Priority: Minor


When running Hadoop in pseudo-distributed mode, the map phase seems to work, but 
the reduce phase never completes.

12/08/13 08:58:12 INFO mapred.JobClient: Running job: job_201208130857_0001
12/08/13 08:58:13 INFO mapred.JobClient:  map 0% reduce 0%
12/08/13 08:58:27 INFO mapred.JobClient:  map 20% reduce 0%
12/08/13 08:58:33 INFO mapred.JobClient:  map 30% reduce 0%
12/08/13 08:58:36 INFO mapred.JobClient:  map 40% reduce 0%
12/08/13 08:58:39 INFO mapred.JobClient:  map 50% reduce 0%
12/08/13 08:58:42 INFO mapred.JobClient:  map 60% reduce 0%
12/08/13 08:58:45 INFO mapred.JobClient:  map 70% reduce 0%
12/08/13 08:58:48 INFO mapred.JobClient:  map 80% reduce 0%
12/08/13 08:58:51 INFO mapred.JobClient:  map 90% reduce 0%
12/08/13 08:58:54 INFO mapred.JobClient:  map 100% reduce 0%
12/08/13 08:59:14 INFO mapred.JobClient: Task Id : 
attempt_201208130857_0001_m_00_0, Status : FAILED
Too many fetch-failures
12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer 
returned HTTP response code: 403 for URL: 
http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stdout
12/08/13 08:59:14 WARN mapred.JobClient: Error reading task outputServer 
returned HTTP response code: 403 for URL: 
http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stderr
12/08/13 08:59:18 INFO mapred.JobClient:  map 89% reduce 0%
12/08/13 08:59:21 INFO mapred.JobClient:  map 100% reduce 0%
12/08/13 09:00:14 INFO mapred.JobClient: Task Id : 
attempt_201208130857_0001_m_01_0, Status : FAILED
Too many fetch-failures

Here is what I get when I try to see the tasklog using the links given in the 
output

http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stderr
 --->
2012-08-13 08:58:39.189 java[74092:1203] Unable to load realm info from 
SCDynamicStore

http://10.1.66.17:50060/tasklog?plaintext=true&attemptid=attempt_201208130857_0001_m_00_0&filter=stdout
 --->


I have changed my hadoop-env.sh according to Mathew Buckett in 
https://issues.apache.org/jira/browse/HADOOP-7489
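
(For context, the change proposed there boils down to overriding the Kerberos
realm/KDC lookup via HADOOP_OPTS in conf/hadoop-env.sh, typically something like
export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc=" -
the exact property values vary between write-ups, so treat this as a sketch of
the workaround rather than a confirmed fix.)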

Also, this 'Unable to load realm info from SCDynamicStore' error does not show 
up when I run 'hadoop namenode -format' or 'start-all.sh'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8687) Bump log4j to version 1.2.17

2012-08-12 Thread Eli Collins (JIRA)
Eli Collins created HADOOP-8687:
---

 Summary: Bump log4j to version 1.2.17
 Key: HADOOP-8687
 URL: https://issues.apache.org/jira/browse/HADOOP-8687
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor


Let's bump log4j from 1.2.15 to version 1.2.17. 1.2.16 and 1.2.17 are maintenance 
releases with good fixes, and they also remove some jar dependencies (javamail, 
jmx, jms).

http://logging.apache.org/log4j/1.2/changes-report.html#a1.2.17

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Hadoop cluster/monitoring

2012-08-12 Thread Harsh J
Nagaraju,

On Wed, Aug 8, 2012 at 10:52 PM, Nagaraju Bingi
 wrote:
> Hi,
>
> I'm a beginner in Hadoop concepts. I have a few basic questions:
> 1) I am looking for APIs to retrieve the capacity of the cluster, so that I can 
> write a script to decide when to add a new slave node to the cluster
>
>  a) Number of task trackers, and the capacity of each task tracker (the 
> max number of mappers it can spawn)

For this, see: 
http://hadoop.apache.org/common/docs/stable/api/org/apache/hadoop/mapred/ClusterStatus.html
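
A minimal sketch of reading those numbers through that API (Hadoop 1.x mapred
API; this assumes a mapred-site.xml on the client classpath that points at your
JobTracker):

import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class ClusterCapacity {
  public static void main(String[] args) throws Exception {
    // JobClient picks up mapred.job.tracker from the configuration on the classpath
    JobClient client = new JobClient(new JobConf());
    ClusterStatus status = client.getClusterStatus();
    System.out.println("Task trackers:             " + status.getTaskTrackers());
    System.out.println("Max map task slots:        " + status.getMaxMapTasks());
    System.out.println("Max reduce task slots:     " + status.getMaxReduceTasks());
    System.out.println("Maps currently running:    " + status.getMapTasks());
    System.out.println("Reduces currently running: " + status.getReduceTasks());
  }
}

Since ClusterStatus also reports the currently running map and reduce tasks,
the same sketch covers most of your question 2) as well.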

>   b) CPU, RAM, and disk capacity of each tracker

Rely on other tools to provide this one. Tools such as Ganglia and
Nagios can report this, for instance.

>   c) how to decide when to add a new slave node to the cluster

This is highly dependent on the workload that is required out of your clusters.

>  2) what is the API to retrieve metrics like current usage of resources and 
> the currently running/spawned Mappers/Reducers?

See 1.a. for some, and 1.b for some more.

>  3) what is the purpose of Hadoop Common? Is it an API to interact with Hadoop?

Hadoop Common encapsulates the utilities shared by both of the other
sub-projects, MapReduce and HDFS. Among other things, it provides a general
interaction API for all things 'Hadoop'.
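
As a small illustration, the FileSystem class in Common is the usual
programmatic entry point for talking to HDFS (or any other supported
filesystem) - a minimal sketch, assuming a core-site.xml on the classpath
whose fs.default.name points at your cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListRoot {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // picks up core-site.xml/hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);     // HDFS when fs.default.name points there
    for (FileStatus stat : fs.listStatus(new Path("/"))) {
      System.out.println(stat.getPath() + "\t" + stat.getLen());
    }
  }
}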

-- 
Harsh J


Re: Failed reduce job in some node

2012-08-12 Thread Harsh J
Hi Owen,

That your reducer seems to make some maps re-run during its shuffle
(copy) phase is suggestive of DNS issues. Can you ensure that in your
cluster each node can fully resolve the others' hostnames to the right
IPs, and that all nodes have identical /etc/hosts files (if you're using
file-based lookups)?
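
If it helps, a quick sanity check you can run on every node is something like
the sketch below (the hostnames are placeholders - substitute your actual
master/slave names); the output should be identical across the cluster:

import java.net.InetAddress;

public class ResolveCheck {
  public static void main(String[] args) throws Exception {
    // Hostnames to resolve; pass your real node names as arguments.
    String[] hosts = args.length > 0 ? args : new String[] {"master", "slave1", "slave2"};
    System.out.println("This host resolves itself as: "
        + InetAddress.getLocalHost().getCanonicalHostName());
    for (String h : hosts) {
      System.out.println(h + " -> " + InetAddress.getByName(h).getHostAddress());
    }
  }
}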

On Thu, Aug 9, 2012 at 6:22 PM, Owen Duan  wrote:
> Summary:  Failed reduce job
> Hadoop Versions: 1.0.3
> Environment: hadoop-1.0.3 JDK_1.6.0_24  Ubuntu-11.04
> Description: when I try to run a PageRank algorithm in Python on a three-node
> cluster using Hadoop Streaming, the reduce phase hangs at 16% and the job only
> finishes after a long time. When I check the jobtracker, I find that all the
> reduce tasks are done on one single node. Here are the map and reduce files.
>
> hduser@ubuntu:hadoop jar hadoop-streaming-1.0.3.jar -mapper
> ~/mapreduce/PageMap.py -reducer ~/mapreduce/PageReduce.py -input
> /pageValue10 -output /pageOut
> Warning: $HADOOP_HOME is deprecated.
>
> packageJobJar: [/app/hadoop/tmp/hadoop-unjar595645452276364982/] []
> /tmp/streamjob2716673000685326551.jar tmpDir=null
> 12/08/09 20:25:07 INFO util.NativeCodeLoader: Loaded the native-hadoop
> library
> 12/08/09 20:25:07 WARN snappy.LoadSnappy: Snappy native library not loaded
> 12/08/09 20:25:07 INFO mapred.FileInputFormat: Total input paths to process
> : 1
> 12/08/09 20:25:07 INFO streaming.StreamJob: getLocalDirs():
> [/app/hadoop/tmp/mapred/local]
> 12/08/09 20:25:07 INFO streaming.StreamJob: Running job:
> job_201208092013_0002
> 12/08/09 20:25:07 INFO streaming.StreamJob: To kill this job, run:
> 12/08/09 20:25:07 INFO streaming.StreamJob:
> /home/hduser/hadoop/libexec/../bin/hadoop job
> -Dmapred.job.tracker=master:54311 -kill job_201208092013_0002
> 12/08/09 20:25:07 INFO streaming.StreamJob: Tracking URL:
> http://master:50030/jobdetails.jsp?jobid=job_201208092013_0002
> 12/08/09 20:25:08 INFO streaming.StreamJob:  map 0%  reduce 0%
> 12/08/09 20:25:24 INFO streaming.StreamJob:  map 18%  reduce 0%
> 12/08/09 20:25:27 INFO streaming.StreamJob:  map 47%  reduce 0%
> 12/08/09 20:25:30 INFO streaming.StreamJob:  map 62%  reduce 0%
> 12/08/09 20:25:33 INFO streaming.StreamJob:  map 67%  reduce 0%
> 12/08/09 20:25:42 INFO streaming.StreamJob:  map 67%  reduce 2%
> 12/08/09 20:25:45 INFO streaming.StreamJob:  map 78%  reduce 6%
> 12/08/09 20:25:48 INFO streaming.StreamJob:  map 85%  reduce 8%
> 12/08/09 20:25:51 INFO streaming.StreamJob:  map 90%  reduce 9%
> 12/08/09 20:25:54 INFO streaming.StreamJob:  map 95%  reduce 9%
> 12/08/09 20:25:57 INFO streaming.StreamJob:  map 99%  reduce 9%
> 12/08/09 20:26:00 INFO streaming.StreamJob:  map 100%  reduce 9%
> 12/08/09 20:26:12 INFO streaming.StreamJob:  map 100%  reduce 13%
> 12/08/09 20:32:06 INFO streaming.StreamJob:  map 83%  reduce 13%
> 12/08/09 20:32:12 INFO streaming.StreamJob:  map 98%  reduce 13%
> 12/08/09 20:32:15 INFO streaming.StreamJob:  map 100%  reduce 13%
> 12/08/09 20:32:27 INFO streaming.StreamJob:  map 100%  reduce 16%
> 12/08/09 20:38:42 INFO streaming.StreamJob:  map 83%  reduce 16%
> 12/08/09 20:38:48 INFO streaming.StreamJob:  map 98%  reduce 16%
> 12/08/09 20:38:51 INFO streaming.StreamJob:  map 100%  reduce 16%
> 12/08/09 20:39:00 INFO streaming.StreamJob:  map 100%  reduce 17%
> 12/08/09 20:39:03 INFO streaming.StreamJob:  map 100%  reduce 25%
> 12/08/09 20:39:06 INFO streaming.StreamJob:  map 100%  reduce 36%
> 12/08/09 20:39:09 INFO streaming.StreamJob:  map 100%  reduce 40%
> 12/08/09 20:39:13 INFO streaming.StreamJob:  map 100%  reduce 42%
> 12/08/09 20:39:16 INFO streaming.StreamJob:  map 100%  reduce 44%
> 12/08/09 20:39:19 INFO streaming.StreamJob:  map 100%  reduce 51%
> 12/08/09 20:39:22 INFO streaming.StreamJob:  map 100%  reduce 61%
> 12/08/09 20:39:25 INFO streaming.StreamJob:  map 100%  reduce 66%
> 12/08/09 20:39:28 INFO streaming.StreamJob:  map 100%  reduce 77%
> 12/08/09 20:39:31 INFO streaming.StreamJob:  map 100%  reduce 82%
> 12/08/09 20:39:43 INFO streaming.StreamJob:  map 100%  reduce 86%
> 12/08/09 20:39:46 INFO streaming.StreamJob:  map 100%  reduce 93%
> 12/08/09 20:39:49 INFO streaming.StreamJob:  map 100%  reduce 98%
> 12/08/09 20:39:55 INFO streaming.StreamJob:  map 100%  reduce 100%
> 12/08/09 20:40:01 INFO streaming.StreamJob: Job complete:
> job_201208092013_0002
> 12/08/09 20:40:01 INFO streaming.StreamJob: Output: /pageOut
>
>
> the map file
>
> #!/usr/bin/env python
> #encoding=utf-8
>
> import sys
>
>
> if __name__ == "__main__":
>
>     for line in sys.stdin:
>         line = line.rstrip()
>         data = line.split()
>
>         # initial pagerank value
>         pr = float(data[1])
>
>         # number of sites this page links to
>         count = len(data) - 2
>
>         # avg pr
>         avgpr = pr / count
>
>         for term in data[2:]:
>             print term + "  @" + str(avgpr)
>             print data[0] + "  &" + term
>
>
> the reduce file:
>
> #!/usr/bin/env python
> #encoding=utf-8
>
> import sys
>
>
> if __name__ == "__main__":
>
>
> 

Re: Checksum Error during Reduce Phase hadoop-1.0.2

2012-08-12 Thread Harsh J
Hi Pavan,

Do you see this happen on a specific node every time (i.e. when the
reducer runs there)?

On Fri, Aug 10, 2012 at 11:43 PM, Pavan Kulkarni
 wrote:
> Hi,
>
> I am running a Terasort with a cluster of 8 nodes. The map phase completes,
> but when the reduce phase is around 68-70% I get the following error.
>
> 12/08/10 11:02:36 INFO mapred.JobClient: Task Id :
> attempt_201208101018_0001_r_27_0, Status : FAILED
> java.lang.RuntimeException: problem advancing post rec#38320220
>         at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:1214)
>         at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.moveToNext(ReduceTask.java:249)
>         at org.apache.hadoop.mapred.ReduceTask$ReduceValuesIterator.next(ReduceTask.java:245)
>         at org.apache.hadoop.mapred.lib.IdentityReducer.reduce(IdentityReducer.java:40)
>         at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
>         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
>         at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:416)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
>         at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: org.apache.hadoop.fs.ChecksumException: Checksum Error
>         at org.apache.hadoop.mapred.IFileInputStream.doRead(IFileInputStream.java:164)
>         at org.apache.hadoop.mapred.IFileInputStream.read(IFileInputStream.java:101)
>         at org.apache.hadoop.mapred.IFile$Reader.readData(IFile.java:328)
>         at org.apache.hadoop.mapred.IFile$Reader.rejigData(IFile.java:358)
>         at org.apache.hadoop.mapred.IFile$Reader.readNextBlock(IFile.java:342)
>         at org.apache.hadoop.mapred.IFile$Reader.next(IFile.java:374)
>         at org.apache.hadoop.mapred.Merger$Segment.next(Merger.java:220)
>         at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:330)
>         at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:350)
>         at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$RawKVIteratorReader.next(ReduceTask.java:2531)
>         at org.apache.hadoop.mapred.Merger$Segment.next(Merger.java:220)
>         at org.apache.hadoop.mapred.Merger$MergeQueue.adjustPriorityQueue(Merger.java:330)
>         at org.apache.hadoop.mapred.Merger$MergeQueue.next(Merger.java:350)
>         at org.apache.hadoop.mapred.Task$ValuesIterator.readNextKey(Task.java:1253)
>         at org.apache.hadoop.mapred.Task$ValuesIterator.next(Task.java:1212)
>         ... 10 more
>
> I came across someone facing the same issue in the mail archives, and he
> seemed to resolve it by listing hostnames in the /etc/hosts file, but all my
> nodes have correct info about the hostnames in /etc/hosts, yet I still have
> these reducers throwing errors.
> Any help regarding this issue is appreciated. Thanks
>
> --
>
> --With Regards
> Pavan Kulkarni



-- 
Harsh J


Build failed in Jenkins: Hadoop-Common-trunk #501

2012-08-12 Thread Apache Jenkins Server
See 

--
[...truncated 27050 lines...]
[DEBUG]   (s) debug = false
[DEBUG]   (s) effort = Default
[DEBUG]   (s) failOnError = true
[DEBUG]   (s) findbugsXmlOutput = false
[DEBUG]   (s) findbugsXmlOutputDirectory = 

[DEBUG]   (s) fork = true
[DEBUG]   (s) includeTests = false
[DEBUG]   (s) localRepository =id: local
  url: file:///home/jenkins/.m2/repository/
   layout: none

[DEBUG]   (s) maxHeap = 512
[DEBUG]   (s) nested = false
[DEBUG]   (s) outputDirectory = 

[DEBUG]   (s) outputEncoding = UTF-8
[DEBUG]   (s) pluginArtifacts = 
[org.codehaus.mojo:findbugs-maven-plugin:maven-plugin:2.3.2:, 
com.google.code.findbugs:bcel:jar:1.3.9:compile, 
org.codehaus.gmaven:gmaven-mojo:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-api:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-api:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-1.5:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-support:jar:1.3:compile, 
org.codehaus.groovy:groovy-all-minimal:jar:1.5.8:compile, 
org.apache.ant:ant:jar:1.7.1:compile, 
org.apache.ant:ant-launcher:jar:1.7.1:compile, jline:jline:jar:0.9.94:compile, 
org.codehaus.plexus:plexus-interpolation:jar:1.1:compile, 
org.codehaus.gmaven:gmaven-plugin:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-loader:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-support:jar:1.3:compile, 
org.sonatype.gshell:gshell-io:jar:2.0:compile, 
com.thoughtworks.qdox:qdox:jar:1.10:compile, 
org.apache.maven.shared:file-management:jar:1.2.1:compile, 
org.apache.maven.shared:maven-shared-io:jar:1.1:compile, 
commons-lang:commons-lang:jar:2.4:compile, 
org.slf4j:slf4j-api:jar:1.5.10:compile, 
org.sonatype.gossip:gossip:jar:1.2:compile, 
org.apache.maven.reporting:maven-reporting-impl:jar:2.1:compile, 
commons-validator:commons-validator:jar:1.2.0:compile, 
commons-beanutils:commons-beanutils:jar:1.7.0:compile, 
commons-digester:commons-digester:jar:1.6:compile, 
commons-logging:commons-logging:jar:1.0.4:compile, oro:oro:jar:2.0.8:compile, 
xml-apis:xml-apis:jar:1.0.b2:compile, 
org.codehaus.groovy:groovy-all:jar:1.7.4:compile, 
org.apache.maven.reporting:maven-reporting-api:jar:3.0:compile, 
org.apache.maven.doxia:doxia-core:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-logging-api:jar:1.1.3:compile, 
xerces:xercesImpl:jar:2.9.1:compile, 
commons-httpclient:commons-httpclient:jar:3.1:compile, 
commons-codec:commons-codec:jar:1.2:compile, 
org.apache.maven.doxia:doxia-sink-api:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-decoration-model:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-site-renderer:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-xhtml:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-fml:jar:1.1.3:compile, 
org.codehaus.plexus:plexus-i18n:jar:1.0-beta-7:compile, 
org.codehaus.plexus:plexus-velocity:jar:1.1.7:compile, 
org.apache.velocity:velocity:jar:1.5:compile, 
commons-collections:commons-collections:jar:3.2:compile, 
org.apache.maven.shared:maven-doxia-tools:jar:1.2.1:compile, 
commons-io:commons-io:jar:1.4:compile, 
com.google.code.findbugs:findbugs-ant:jar:1.3.9:compile, 
com.google.code.findbugs:findbugs:jar:1.3.9:compile, 
com.google.code.findbugs:jsr305:jar:1.3.9:compile, 
com.google.code.findbugs:jFormatString:jar:1.3.9:compile, 
com.google.code.findbugs:annotations:jar:1.3.9:compile, 
dom4j:dom4j:jar:1.6.1:compile, jaxen:jaxen:jar:1.1.1:compile, 
jdom:jdom:jar:1.0:compile, xom:xom:jar:1.0:compile, 
xerces:xmlParserAPIs:jar:2.6.2:compile, xalan:xalan:jar:2.6.0:compile, 
com.ibm.icu:icu4j:jar:2.6.1:compile, asm:asm:jar:3.1:compile, 
asm:asm-analysis:jar:3.1:compile, asm:asm-commons:jar:3.1:compile, 
asm:asm-util:jar:3.1:compile, asm:asm-tree:jar:3.1:compile, 
asm:asm-xml:jar:3.1:compile, jgoodies:plastic:jar:1.2.0:compile, 
org.codehaus.plexus:plexus-resources:jar:1.0-alpha-4:compile, 
org.codehaus.plexus:plexus-utils:jar:1.5.1:compile]
[DEBUG]   (s) project = MavenProject: 
org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ 

[DEBUG]   (s) relaxed = false
[DEBUG]   (s) remoteArtifactRepositories = [   id: apache.snapshots.https
  url: https://repository.apache.org/content/repositories/snapshots
   layout: default
snapshots: [enabled => true, update => daily]
 releases: [enabled => true, update => daily]
,id: repository.jboss.org
  url: http://repository.jboss.org/nexus/content/groups/public/
   layout: default
snapshots: [enabled => false, update => daily]
 releases: [enabled => true, update => daily]
,id: central
  url: http://repo1.maven.org/maven2
   layout: default
snapshots:

Re: Cannot create a new Jira issue for MapReduce

2012-08-12 Thread Jun Ping Du
Thanks Ted. Those are very good suggestions as backup solutions when JIRA is 
down.
Besides alleviating the impact of JIRA downtime as you mentioned above, do we 
think of some way to keep the JIRA system highly available? It is a little 
embarrassing that we deliver all kinds of HA systems to the rest of the world, 
but we are suffering from this. :(

- Original Message -
From: "Ted Yu" 
To: mapreduce-...@hadoop.apache.org
Cc: hdfs-...@hadoop.apache.org, common-dev@hadoop.apache.org
Sent: Sunday, August 12, 2012 12:17:36 PM
Subject: Re: Cannot create a new Jira issue for MapReduce

I made some suggestions to the hbase dev mailing list a few weeks ago. The
following suggestions are about hbase development but can be extrapolated
to other Apache projects.


People can continue discussion through the dev mailing list when JIRA is down.
When JIRA comes back up, a transcript of such discussion can be posted back
on the related issues.
Use of https://reviews.apache.org is encouraged. The review board wasn't
affected by the JIRA downtime.
Running the test suite by contributors and committers is encouraged, which
alleviates the burden on Hadoop QA.

The goal of the above suggestions is to alleviate the impact of JIRA downtime.

BTW, I have kept the notifications from iss...@hbase.apache.org in my Inbox.
This proves useful when JIRA is down.

Cheers

On Sat, Aug 11, 2012 at 7:14 PM, Jun Ping Du  wrote:

> Yes. I saw JIRA is in maintenance now and the schedule is as below:
>
> Host Name:   ull.zones.apache.org
> Service:     Issues - JIRA - General
> Entry Time:  2012-08-11 19:06:08
> Author:      danielsh
> Comment:     Migrating to a different physical host
> Start Time:  2012-08-11 19:06:08
> End Time:    2012-08-13 19:06:08
> Type:        Fixed
> Duration:    2d 0h 0m 0s
> Downtime ID: 1663
> Trigger ID:  N/A
> Actions:     Delete/Cancel This Scheduled Downtime Entry
>
> Looks like it will take 2 days to migrate to a different host. As JIRA is
> a key component of the dev process in the community, should we think of some
> ways to lower the maintenance overhead?
>
>
> Thanks,
>
> Junping
>
> - Original Message -
> From: "Steve Loughran" 
> To: mapreduce-...@hadoop.apache.org
> Sent: Friday, August 10, 2012 7:33:04 AM
> Subject: Re: Cannot create a new Jira issue for MapReduce
>
> There have been disk problems with JIRA recently. GitHub's been playing up
> this morning too. Time to put away the dev tools and get PowerPoint out
> instead
>
> On 9 August 2012 13:38, Robert Evans  wrote:
> > It is a bit worse than that though.  I found that it did create the JIRA,
> > but it is in a bad state where you cannot put it in patch available or
> > close it. So we may need to do some cleanup of these JIRAs later.
> >
> > --Bobby
> >
> > On 8/9/12 3:19 PM, "Ted Yu"  wrote:
> >
> >>This has been reported by HBase developers as well.
> >>
> >>See https://issues.apache.org/jira/browse/INFRA-5131
> >>
> >>On Thu, Aug 9, 2012 at 1:10 PM, Benoy Antony  wrote:
> >>
> >>> Hi,
> >>>
> >>> I am getting the following error when I try to create a Jira issue.
> >>>
> >>> Error creating issue: com.atlassian.jira.util.RuntimeIOException:
> >>> java.io.IOException: read past EOF
> >>>
> >>> Anyone else face the same problem ?
> >>>
> >>> Thanks ,
> >>> Benoy
> >>>
> >
>