[jira] [Created] (HADOOP-15910) Javadoc for LdapAuthenticationHandler#ENABLE_START_TLS is wrong

2018-11-08 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-15910:
---

 Summary: Javadoc for LdapAuthenticationHandler#ENABLE_START_TLS is 
wrong
 Key: HADOOP-15910
 URL: https://issues.apache.org/jira/browse/HADOOP-15910
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu


In LdapAuthenticationHandler, the javadoc for ENABLE_START_TLS has the same
contents as the javadoc for BASE_DN.
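
A corrected javadoc might look like the following sketch; the wording and the
constant's value here are assumptions, not the committed fix:
{code}
/**
 * Constant for the configuration property that indicates whether
 * StartTLS should be enabled when connecting to the LDAP server.
 */
public static final String ENABLE_START_TLS = TYPE + ".enablestarttls"; // value assumed
{code}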






[jira] [Created] (HADOOP-15876) Use keySet().removeAll() to remove multiple keys from Map in AzureBlobFileSystemStore

2018-10-23 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-15876:
---

 Summary: Use keySet().removeAll() to remove multiple keys from Map 
in AzureBlobFileSystemStore
 Key: HADOOP-15876
 URL: https://issues.apache.org/jira/browse/HADOOP-15876
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu


Looking at
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java,
{{removeDefaultAcl}} in particular:
{code}
for (Map.Entry defaultAclEntry : 
defaultAclEntries.entrySet()) {
  aclEntries.remove(defaultAclEntry.getKey());
}
{code}
The above operation can be written more concisely; keySet() returns a view
backed by the map, so removing keys from it removes the corresponding entries:
{code}
aclEntries.keySet().removeAll(defaultAclEntries.keySet());
{code}






[jira] [Created] (HADOOP-15850) Allow CopyCommitter to skip concatenating source files specified by DistCpConstants.CONF_LABEL_LISTING_FILE_PATH

2018-10-13 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-15850:
---

 Summary: Allow CopyCommitter to skip concatenating source files 
specified by DistCpConstants.CONF_LABEL_LISTING_FILE_PATH
 Key: HADOOP-15850
 URL: https://issues.apache.org/jira/browse/HADOOP-15850
 Project: Hadoop Common
  Issue Type: Task
Reporter: Ted Yu


I was investigating a test failure of TestIncrementalBackupWithBulkLoad from
hbase against hadoop 3.1.1.

hbase MapReduceBackupCopyJob$BackupDistCp creates the listing file:
{code}
LOG.debug("creating input listing " + listing + " , totalRecords=" + 
totalRecords);
cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, 
totalRecords);
{code}
For the test case, two bulk loaded hfiles are in the listing:
{code}
2018-10-13 14:09:24,123 DEBUG [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : 
hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
2018-10-13 14:09:24,125 DEBUG [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 2 
files of 10242
{code}
Later on, CopyCommitter#concatFileChunks would throw the following exception:
{code}
2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): 
job_local1795473782_0004
java.io.IOException: Inconsistent sequence file: current chunk file 
org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/
   
160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
 length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-
   
2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
 length = 5142 aclEntries = null, xAttrs = null}
  at 
org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
  at 
org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
  at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
{code}
The above warning shouldn't happen - the two bulk loaded hfiles are independent.

From the contents of the two CopyListingFileStatus instances, we can see that
their isSplit() returns false. Otherwise the following from toString() would be
logged:
{code}
if (isSplit()) {
  sb.append(", chunkOffset = ").append(this.getChunkOffset());
  sb.append(", chunkLength = ").append(this.getChunkLength());
}
{code}
From the hbase side, we can specify one bulk loaded hfile per job, but that
defeats the purpose of using DistCp.

There should be a way to tell DistCp to skip the concatenation of source files.
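
A minimal sketch of such a switch in CopyCommitter#commitJob; the property name
"distcp.copy.committer.skip.concat" is hypothetical, invented here for
illustration:
{code}
Configuration conf = jobContext.getConfiguration();
// Hypothetical opt-out flag; a caller such as BackupDistCp could set it when
// the listed files are independent and must not be concatenated.
boolean skipConcat = conf.getBoolean("distcp.copy.committer.skip.concat", false);
if (!skipConcat) {
  concatFileChunks(conf);
}
{code}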






[jira] [Created] (HADOOP-15831) Include modificationTime in the toString method of CopyListingFileStatus

2018-10-08 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-15831:
---

 Summary: Include modificationTime in the toString method of 
CopyListingFileStatus
 Key: HADOOP-15831
 URL: https://issues.apache.org/jira/browse/HADOOP-15831
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu


I was looking at a DistCp error observed in hbase backup test:
{code}
2018-10-08 18:12:03,067 WARN  [Thread-933] mapred.LocalJobRunner$Job(590): 
job_local1175594345_0004
java.io.IOException: Inconsistent sequence file: current chunk file 
org.apache.hadoop.tools.CopyListingFileStatus@7ac56817{hdfs://localhost:41712/user/hbase/test-data/
   
c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_
 length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry 
org.apache.hadoop.tools.CopyListingFileStatus@7aa4deb2{hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10-
   
57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_
 length = 5142 aclEntries = null, xAttrs = null}
  at 
org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
  at 
org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
  at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
2018-10-08 18:12:03,150 INFO  [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(226): Progress: 100.0% subTask: 
1.0 mapProgress: 1.0
{code}
I noticed that modificationTime was not included in the toString of 
CopyListingFileStatus.

I propose including modificationTime so that it is easier to tell when the
respective files last changed.
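
A sketch of the proposed addition, following the style of the existing
toString() snippets above (assumed placement inside
CopyListingFileStatus#toString(), not a committed patch):
{code}
sb.append(", modificationTime = ").append(this.getModificationTime());
{code}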






[jira] [Resolved] (HADOOP-15290) Imprecise assertion in FileStatus w.r.t. symlink

2018-03-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-15290.
-
Resolution: Duplicate

Dup of HADOOP-15289

> Imprecise assertion in FileStatus w.r.t. symlink
> 
>
> Key: HADOOP-15290
> URL: https://issues.apache.org/jira/browse/HADOOP-15290
> Project: Hadoop Common
>  Issue Type: Bug
>    Reporter: Ted Yu
>Priority: Minor
>
> In HBASE-20123, I logged the following stack trace:
> {code}
> 2018-03-03 14:46:10,858 ERROR [Time-limited test] 
> mapreduce.MapReduceBackupCopyJob$BackupDistCp(237): java.io.IOException: Path 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic 
> link
> java.io.IOException: Path 
> hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic 
> link
>   at org.apache.hadoop.fs.FileStatus.getSymlink(FileStatus.java:338)
>   at org.apache.hadoop.fs.FileStatus.readFields(FileStatus.java:461)
>   at 
> org.apache.hadoop.tools.CopyListingFileStatus.readFields(CopyListingFileStatus.java:155)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2308)
>   at 
> org.apache.hadoop.tools.CopyListing.validateFinalListing(CopyListing.java:163)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:91)
>   at 
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
>   at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
>   at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
>   at 
> org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.createInputFileListing(MapReduceBackupCopyJob.java:297)
> {code}
> [~ste...@apache.org] pointed out that the assertion in FileStatus.java is not 
> accurate:
> {code}
> assert (isDirectory() && getSymlink() == null) || !isDirectory();
> {code}
> {quote}
> It's assuming that getSymlink() returns null if there is no symlink, but 
> instead it raises an exception.
> {quote}
> Steve proposed the following replacement:
> {code}
> assert !(isDirectory() && isSymlink());
> {code}






[jira] [Created] (HADOOP-15290) Imprecise assertion in FileStatus w.r.t. symlink

2018-03-05 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-15290:
---

 Summary: Imprecise assertion in FileStatus w.r.t. symlink
 Key: HADOOP-15290
 URL: https://issues.apache.org/jira/browse/HADOOP-15290
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu


In HBASE-20123, I logged the following stack trace:
{code}
2018-03-03 14:46:10,858 ERROR [Time-limited test] 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(237): java.io.IOException: Path 
hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic link
java.io.IOException: Path 
hdfs://localhost:40578/backupUT/.tmp/backup_1520088356047 is not a symbolic link
  at org.apache.hadoop.fs.FileStatus.getSymlink(FileStatus.java:338)
  at org.apache.hadoop.fs.FileStatus.readFields(FileStatus.java:461)
  at 
org.apache.hadoop.tools.CopyListingFileStatus.readFields(CopyListingFileStatus.java:155)
  at 
org.apache.hadoop.io.SequenceFile$Reader.getCurrentValue(SequenceFile.java:2308)
  at 
org.apache.hadoop.tools.CopyListing.validateFinalListing(CopyListing.java:163)
  at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:91)
  at 
org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
  at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
  at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:382)
  at 
org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.createInputFileListing(MapReduceBackupCopyJob.java:297)
{code}
[~ste...@apache.org] pointed out that the assertion in FileStatus.java is not 
accurate:
{code}
assert (isDirectory() && getSymlink() == null) || !isDirectory();
{code}
{quote}
It's assuming that getSymlink() returns null if there is no symlink, but 
instead it raises an exception.
{quote}
Steve proposed the following replacement:
{code}
assert !(isDirectory() && isSymlink());
{code}






[jira] [Created] (HADOOP-15051) FSDataOutputStream returned by LocalFileSystem#createNonRecursive doesn

2017-11-17 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-15051:
---

 Summary: FSDataOutputStream returned by 
LocalFileSystem#createNonRecursive doesn
 Key: HADOOP-15051
 URL: https://issues.apache.org/jira/browse/HADOOP-15051
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0-beta1
Reporter: Ted Yu









[jira] [Resolved] (HADOOP-10642) Provide option to limit heap memory consumed by dynamic metrics2 metrics

2017-10-11 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-10642.
-
Resolution: Later

> Provide option to limit heap memory consumed by dynamic metrics2 metrics
> 
>
> Key: HADOOP-10642
> URL: https://issues.apache.org/jira/browse/HADOOP-10642
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>    Reporter: Ted Yu
>
> User sunweiei provided the following jmap output from an HBase 0.96 deployment:
> {code}
>  num #instances #bytes  class name
> --
>1:  14917882 3396492464  [C
>2:   1996994 2118021808  [B
>3:  43341650 1733666000  java.util.LinkedHashMap$Entry
>4:  14453983 1156550896  [Ljava.util.HashMap$Entry;
>5:  14446577  924580928  
> org.apache.hadoop.metrics2.lib.Interns$CacheWith2Keys$2
> {code}
> Heap consumption by Interns$CacheWith2Keys$2 (and indirectly by [C) could be 
> due to calls to Interns.info() in DynamicMetricsRegistry, which was cloned off 
> metrics2/lib/MetricsRegistry.java.
> This scenario arises when a large number of regions is tracked through 
> metrics2 dynamically.
> The Interns class doesn't provide an API to remove entries from its internal Map.
> One solution is to provide an option that allows skipping calls to 
> Interns.info() in metrics2/lib/MetricsRegistry.java.
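
A sketch of the proposed opt-out; the skipInterning flag is hypothetical, and
the anonymous class is a stand-in for any non-interned MetricsInfo
implementation:
{code}
MetricsInfo info;
if (skipInterning) {                  // hypothetical option
  // Plain instance: never enters the Interns cache, so it can be collected.
  final String n = name, d = desc;
  info = new MetricsInfo() {
    @Override public String name() { return n; }
    @Override public String description() { return d; }
  };
} else {
  info = Interns.info(name, desc);    // existing interned path
}
{code}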






[jira] [Created] (HADOOP-14942) DistCp#cleanup() should check whether jobFS is null

2017-10-10 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-14942:
---

 Summary: DistCp#cleanup() should check whether jobFS is null
 Key: HADOOP-14942
 URL: https://issues.apache.org/jira/browse/HADOOP-14942
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


Over in HBASE-18975, we observed the following:
{code}
2017-10-10 17:22:53,211 DEBUG [main] mapreduce.MapReduceBackupCopyJob(313): 
Doing COPY_TYPE_DISTCP
2017-10-10 17:22:53,272 DEBUG [main] mapreduce.MapReduceBackupCopyJob(322): 
DistCp options: [hdfs://localhost:55247/backupUT/.tmp/backup_1507681285309, 
hdfs://localhost:55247/   backupUT]
2017-10-10 17:22:53,283 ERROR [main] tools.DistCp(167): Exception encountered
java.lang.reflect.InvocationTargetException
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob$BackupDistCp.execute(MapReduceBackupCopyJob.java:234)
  at org.apache.hadoop.tools.DistCp.run(DistCp.java:153)
  at 
org.apache.hadoop.hbase.backup.mapreduce.MapReduceBackupCopyJob.copy(MapReduceBackupCopyJob.java:331)
  at 
org.apache.hadoop.hbase.backup.impl.IncrementalTableBackupClient.incrementalCopyHFiles(IncrementalTableBackupClient.java:286)
...
Caused by: java.lang.NullPointerException
  at org.apache.hadoop.tools.DistCp.cleanup(DistCp.java:460)
  ... 45 more
{code}
The NullPointerException came from the second line below:
{code}
  if (metaFolder == null) return;

  jobFS.delete(metaFolder, true);
{code}
in which case jobFS was null.
A check against null should be added.
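
A minimal sketch of the guard; the method body is approximated from the snippet
above, not quoted from DistCp:
{code}
private synchronized void cleanup() {
  try {
    if (metaFolder == null) return;
    if (jobFS != null) {              // jobFS may never have been initialized
      jobFS.delete(metaFolder, true);
    }
    metaFolder = null;
  } catch (IOException e) {
    LOG.error("Unable to cleanup meta folder: " + metaFolder, e);
  }
}
{code}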






[jira] [Created] (HADOOP-14930) Upgrade Jetty to 9.4 version

2017-10-05 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-14930:
---

 Summary: Upgrade Jetty to 9.4 version
 Key: HADOOP-14930
 URL: https://issues.apache.org/jira/browse/HADOOP-14930
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu


Currently 9.3.19.v20170502 is used.

In hbase 2.0+, 9.4.6.v20170531 is used.

When starting a mini dfs cluster in hbase unit tests, we get the following:
{code}
java.lang.NoSuchMethodError: 
org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
  at 
org.apache.hadoop.http.HttpServer2.initializeWebServer(HttpServer2.java:548)
  at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:529)
  at org.apache.hadoop.http.HttpServer2.(HttpServer2.java:119)
  at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:415)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:157)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:887)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:949)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:928)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1637)
  at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1277)
  at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1046)
  at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:921)
{code}
This issue is to upgrade Jetty to the 9.4 line.






[jira] [Resolved] (HADOOP-10202) OK_JAVADOC_WARNINGS is out of date, leading to negative javadoc warning count

2017-09-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-10202.
-
Resolution: Cannot Reproduce

> OK_JAVADOC_WARNINGS is out of date, leading to negative javadoc warning count
> -
>
> Key: HADOOP-10202
> URL: https://issues.apache.org/jira/browse/HADOOP-10202
> Project: Hadoop Common
>  Issue Type: Task
>    Reporter: Ted Yu
>Priority: Minor
>
> From https://builds.apache.org/job/PreCommit-HDFS-Build/5813//testReport/ :
> {code}
> -1 javadoc. The javadoc tool appears to have generated -14 warning messages.
> {code}
> OK_JAVADOC_WARNINGS should be updated.






[jira] [Created] (HADOOP-14895) Consider exposing SimpleCopyListing#computeSourceRootPath() for downstream project

2017-09-21 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-14895:
---

 Summary: Consider exposing 
SimpleCopyListing#computeSourceRootPath() for downstream project
 Key: HADOOP-14895
 URL: https://issues.apache.org/jira/browse/HADOOP-14895
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu


Over in HBASE-18843, [~vrodionov] needs to override
SimpleCopyListing#computeSourceRootPath().

Since the method is private, duplicated code has appeared in hbase.

We should consider exposing SimpleCopyListing#computeSourceRootPath() so that 
its behavior can be overridden.
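
The change could be as small as relaxing the method's visibility; the parameter
list below is assumed from the 2.x code and may differ:
{code}
-  private Path computeSourceRootPath(FileStatus sourceStatus,
-      DistCpOptions options) throws IOException {
+  protected Path computeSourceRootPath(FileStatus sourceStatus,
+      DistCpOptions options) throws IOException {
{code}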






Re: Pre-commit Build is failing

2017-04-25 Thread Ted Yu
Please see:
INFRA-13985

> On Apr 25, 2017, at 5:18 AM, Brahma Reddy Battula 
>  wrote:
> 
> Hi All
> 
> 
> Pre-commit build for all the project is failing with following error, any 
> idea on this..?
> 
> 
> 
> 
> HEAD is now at 2ba21d6 YARN-6392. Add submit time to Application Summary log. 
> (Zhihai Xu via wangda)
> 
> Already on 'trunk'
> 
> Your branch is up-to-date with 'origin/trunk'.
> 
> fatal: unable to access 
> 'https://git-wip-us.apache.org/repos/asf/hadoop.git/': server certificate 
> verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
> 
> ERROR: git pull is failing
> 
> 
> 
> 
> 
> References:
> 
> https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-HADOOP-Build/12178/console
> 
> https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-HDFS-Build/19194/console
> 
> https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-YARN-Build/15733/console
> 
> 
> 
> 
> Regards
> Brahma Reddy Battula
> 




[jira] [Created] (HADOOP-14222) Create specialized IOException subclass to represent closed filesystem

2017-03-23 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-14222:
---

 Summary: Create specialized IOException subclass to represent 
closed filesystem
 Key: HADOOP-14222
 URL: https://issues.apache.org/jira/browse/HADOOP-14222
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu


I was working on HBASE-17287, where the hbase master didn't recognize that the
file system had been closed due to extended unavailability of hdfs.

Chatting with [~steve_l], he suggested creating an IOException subclass to
represent a closed filesystem so that downstream projects don't have to rely on
the specific exception message.

The string in the existing exception message can't be changed; we should add a
clear comment around that part to avoid breakage.
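
A sketch of the proposed subclass; the class name is an assumption, while the
message string is the one currently thrown by DFSClient and must stay stable:
{code}
import java.io.IOException;

/** Hedged sketch: signals an operation on an already-closed FileSystem. */
public class FileSystemClosedException extends IOException {
  public FileSystemClosedException() {
    // Keep the historical message so existing string-matching callers still work.
    super("Filesystem closed");
  }
}
{code}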








[jira] [Resolved] (HADOOP-14076) Allow Configuration to be persisted given path to file

2017-03-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-14076.
-
Resolution: Later

This can be done client side.

> Allow Configuration to be persisted given path to file
> --
>
> Key: HADOOP-14076
> URL: https://issues.apache.org/jira/browse/HADOOP-14076
> Project: Hadoop Common
>  Issue Type: Improvement
>    Reporter: Ted Yu
>
> Currently Configuration has the following methods for persistence:
> {code}
>   public void writeXml(OutputStream out) throws IOException {
>   public void writeXml(Writer out) throws IOException {
> {code}
> Adding an API that persists to a file given its path would be useful:
> {code}
>   public void writeXml(String path) throws IOException {
> {code}
> Background: I recently worked on exporting Configuration to a file using JNI.
> Without the proposed API, I resorted to a trick such as the following:
> http://www.kfu.com/~nsayer/Java/jni-filedesc.html






[jira] [Created] (HADOOP-14076) Allow Configuration to be persisted given path to file

2017-02-10 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-14076:
---

 Summary: Allow Configuration to be persisted given path to file
 Key: HADOOP-14076
 URL: https://issues.apache.org/jira/browse/HADOOP-14076
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu


Currently Configuration has the following methods for persistence:
{code}
  public void writeXml(OutputStream out) throws IOException {

  public void writeXml(Writer out) throws IOException {
{code}
Adding an API that persists to a file given its path would be useful:
{code}
  public void writeXml(String path) throws IOException {
{code}

Background: I recently worked on exporting Configuration to a file using JNI.
Without the proposed API, I resorted to a trick such as the following:
http://www.kfu.com/~nsayer/Java/jni-filedesc.html
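
A minimal sketch of the proposed overload, delegating to the existing
writeXml(OutputStream) method:
{code}
// assumes java.io.FileOutputStream and java.io.OutputStream are imported
public void writeXml(String path) throws IOException {
  // Delegate to the existing writeXml(OutputStream) overload; try-with-resources
  // closes the stream even if serialization fails midway.
  try (OutputStream out = new FileOutputStream(path)) {
    writeXml(out);
  }
}
{code}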






[jira] (HADOOP-14043) Shade netty dependency

2017-01-31 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-14043:
---

 Summary: Shade netty dependency
 Key: HADOOP-14043
 URL: https://issues.apache.org/jira/browse/HADOOP-14043
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ted Yu


During review of HADOOP-13866, [~andrew.wang] mentioned considering shading
netty before putting the fix into branch-2.

This would give users a better experience when upgrading hadoop.
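
A sketch of what the shading could look like with the maven-shade-plugin; the
relocation prefix org.apache.hadoop.shaded is an assumption for illustration:
{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <!-- move netty out of the way of user-supplied netty versions -->
        <pattern>io.netty</pattern>
        <shadedPattern>org.apache.hadoop.shaded.io.netty</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
{code}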






Re: Not be able to receive notification mail for watch JIRA (YARN)

2017-01-24 Thread Ted Yu
The ticket has been resolved.

You should be able to receive notifications now.

On Tue, Jan 24, 2017 at 11:05 AM, Chris Trezzo  wrote:

> There seems to be an issue with JIRA notifications again. I see this INFRA
> ticket filed for HBase: https://issues.apache.org/jira/browse/INFRA-13374
>
> I have left a comment for what I am seeing with my current watched jiras on
> the YARN project.
>
> On Fri, Jan 13, 2017 at 6:20 PM, Brahma Reddy Battula <
> brahmareddy.batt...@huawei.com> wrote:
>
> > Yes, Even I noticed same. Can we raise a ticket in INFRA..?
> >
> > Marking common-dev also in loop.
> >
> >
> > Thanks and Regards
> > Brahma Reddy Battula
> >
> > -Original Message-
> > From: Wangda Tan [mailto:wheele...@gmail.com]
> > Sent: 14 January 2017 02:15
> > To: yarn-...@hadoop.apache.org
> > Subject: Not be able to receive notification mail for watch JIRA (YARN)
> >
> > Hi yarn-devs,
> >
> > I'm not be able to receive any watched JIRA updates after JIRA
> maintenance
> > completed, does anybody see the same issue?
> >
> > Thanks,
> > Wangda
> >
> >
>


[jira] [Resolved] (HADOOP-13489) DistCp may incorrectly return success status when the underlying Job failed

2017-01-18 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-13489.
-
Resolution: Won't Fix

After adjusting hbase code, the problem is gone.

> DistCp may incorrectly return success status when the underlying Job failed
> ---
>
> Key: HADOOP-13489
> URL: https://issues.apache.org/jira/browse/HADOOP-13489
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: distcp
> Attachments: HADOOP-13489.v1.patch, HADOOP-13489.v2.patch, 
> HADOOP-13489.v3.patch, MapReduceBackupCopyService.java, 
> testIncrementalBackup-8-12-rethrow.txt, testIncrementalBackup-8-12.txt, 
> TestIncrementalBackup-output.txt
>
>
> I was troubleshooting HBASE-14450, where at the end of BackupdistCp#execute() the 
> distcp job was marked unsuccessful (BackupdistCp is a wrapper of DistCp).
> Yet in IncrementalTableBackupProcedure#incrementalCopy(), the return value 
> from copyService.copy() was 0.
> Here is related code from DistCp:
> {code}
> try {
>   execute();
> } catch (InvalidInputException e) {
>   LOG.error("Invalid input: ", e);
>   return DistCpConstants.INVALID_ARGUMENT;
> } catch (DuplicateFileException e) {
>   LOG.error("Duplicate files in input path: ", e);
>   return DistCpConstants.DUPLICATE_INPUT;
> } catch (AclsNotSupportedException e) {
>   LOG.error("ACLs not supported on at least one file system: ", e);
>   return DistCpConstants.ACLS_NOT_SUPPORTED;
> } catch (XAttrsNotSupportedException e) {
>   LOG.error("XAttrs not supported on at least one file system: ", e);
>   return DistCpConstants.XATTRS_NOT_SUPPORTED;
> } catch (Exception e) {
>   LOG.error("Exception encountered ", e);
>   return DistCpConstants.UNKNOWN_ERROR;
> }
> return DistCpConstants.SUCCESS;
> {code}
> We don't check whether the Job returned by execute() was successful - we rely 
> on all failure cases going through the last catch clause, but there may be 
> special cases.
> Even if the Job fails, DistCpConstants.SUCCESS is returned.






Re: YARN Jenkins Build get consistent failed.

2016-12-21 Thread Ted Yu
Precommit build #14423 has completed.

The exclusion (H5 and H6) has been done.

See the other thread started by Sangjin.

On Wed, Dec 21, 2016 at 11:59 AM, Junping Du  wrote:

> Hi hadoop folks,
>
>I noticed that our recent YARN jenkins tests are consistently failed (
> https://builds.apache.org/job/PreCommit-YARN-Build) due to test
> environment issues below.
>
>I already filed blocker issue https://issues.apache.org/
> jira/browse/INFRA-13141 to our INFRA team yesterday but haven't get any
> response yet. All commit work on YARN project are fully blocked. Anyone
> have ideas on how to move things forward?
>
> btw, Jenkins tests for hadoop/hdfs/mapreduce seems to be OK.
>
>
> FATAL: Command "git clean -fdx" returned status code 1:
> stdout:
> stderr: warning: failed to remove hadoop-common-project/hadoop-
> common/target/test/data/3
>
> hudson.plugins.git.GitException: Command "git clean -fdx"
> returned status code 1:
> stdout:
> stderr: warning: failed to remove hadoop-common-project/hadoop-
> common/target/test/data/3
>
> at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.
> launchCommandIn(CliGitAPIImpl.java:1723)
> at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.
> launchCommandIn(CliGitAPIImpl.java:1699)
> at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.
> launchCommandIn(CliGitAPIImpl.java:1695)
> at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.
> launchCommand(CliGitAPIImpl.java:1317)
>
>
>
>
> Thanks,
>
>
> Junping
>
>


Re: Maven build: YARN timeline service downloading maven-metadata from personal repository?

2016-12-13 Thread Ted Yu
bq. out of which 1.2.4 is released

Actually 1.2.4 has already been released. 1.1.8 RC is being voted upon.

FYI

On Tue, Dec 13, 2016 at 5:48 PM, Sangjin Lee  wrote:

> According to HBASE-16749, the fix went into HBase 1.2.4 and 1.1.8 (out of
> which 1.2.4 is released). To resolve this issue, we'd need to upgrade to
> 1.2.4 or later.
>
>
> Sangjin
>
> On Tue, Dec 13, 2016 at 3:41 PM, Vrushali Channapattan <
> vchannapat...@twitter.com> wrote:
>
> > Yes, I think bumping up the hbase version to 1.2 should help with this
> > build-time issue. I will start looking into this upgrade right
> > away.
> >
> > Thanks
> > Vrushali
> >
> > > On Dec 13, 2016, at 3:02 PM, Li Lu  wrote:
> > >
> > > I could not reproduce this issue locally but this may be related to
> some
> > local maven repos. This may be related to the private repo issues of
> HBase?
> > If this is the case, bumping up hbase dependency version of YARN timeline
> > module might be helpful?
> > >
> > > +Sangjin, Vrushali, and Joep: In YARN-5976 we’re proposing to bump up
> > HBase dependency version into 1.2. Shall we prioritize that JIRA? Thanks!
> > >
> > > Li Lu
> > >
> > >> On Dec 13, 2016, at 14:43, Wangda Tan  wrote:
> > >>
> > >> Hi folks,
> > >>
> > >> It looks like HBASE-16749 is fixed, and Phoenix version is updated
> (per
> > >> Li). But I'm still experiencing slow build of ATSv2 component:
> > >>
> > >> [INFO] Apache Hadoop YARN . SUCCESS [
> > >> 1.378 s]
> > >> [INFO] Apache Hadoop YARN API . SUCCESS [
> > >> 10.559 s]
> > >> [INFO] Apache Hadoop YARN Common .. SUCCESS [
> > >> 6.993 s]
> > >> [INFO] Apache Hadoop YARN Server .. SUCCESS [
> > >> 0.057 s]
> > >> [INFO] Apache Hadoop YARN Server Common ... SUCCESS [
> > >> 2.266 s]
> > >> [INFO] Apache Hadoop YARN NodeManager . SUCCESS [
> > >> 4.075 s]
> > >> [INFO] Apache Hadoop YARN Web Proxy ... SUCCESS [
> > >> 0.924 s]
> > >> [INFO] Apache Hadoop YARN ApplicationHistoryService ... SUCCESS [
> > >> 1.549 s]
> > >> [INFO] Apache Hadoop YARN Timeline Service  SUCCESS
> > [05:14
> > >> min]
> > >> [INFO] Apache Hadoop YARN ResourceManager . SUCCESS [
> > >> 8.554 s]
> > >> [INFO] Apache Hadoop YARN Server Tests  SUCCESS [
> > >> 1.561 s]
> > >> [INFO] Apache Hadoop YARN Client .. SUCCESS [
> > >> 1.321 s]
> > >> [INFO] Apache Hadoop YARN SharedCacheManager .. SUCCESS [
> > >> 0.843 s]
> > >> [INFO] Apache Hadoop YARN Timeline Plugin Storage . SUCCESS [
> > >> 0.949 s]
> > >> [INFO] Apache Hadoop YARN Timeline Service HBase tests  SUCCESS [
> > >> 3.137 s]
> > >> [INFO] Apache Hadoop YARN Applications  SUCCESS [
> > >> 0.055 s]
> > >> [INFO] Apache Hadoop YARN DistributedShell  SUCCESS [
> > >> 0.807 s]
> > >> [INFO] Apache Hadoop YARN Unmanaged Am Launcher ... SUCCESS [
> > >> 0.602 s]
> > >> [INFO] Apache Hadoop YARN Site  SUCCESS [
> > >> 0.060 s]
> > >> [INFO] Apache Hadoop YARN Registry  SUCCESS [
> > >> 0.910 s]
> > >> [INFO] Apache Hadoop YARN UI .. SUCCESS [
> > >> 0.072 s]
> > >> [INFO] Apache Hadoop YARN Project . SUCCESS [
> > >> 0.749 s]
> > >> [INFO]
> > >> 
> > 
> > >> [INFO] BUILD SUCCESS
> > >> [INFO]
> > >> 
> > 
> > >> [INFO] Total time: 06:02 min
> > >>
> > >> This doesn't happen every time when I run build on latest Hadoop
> trunk,
> > but
> > >> I can often see this happens.
> > >>
> > >> Thoughts about how to solve it?
> > >>
> > >> Thanks,
> > >> Wangda
> > >>
> > >>
> > >>
> > >>> On Tue, Oct 4, 2016 at 6:50 PM, Sangjin Lee 
> wrote:
> > >>>
> > >>> Thanks Wangda.
> > >>>
> > >>> To answer Steve's question, I don't think maven downloads anything
> from
> > >>> that location (it's a very old content). It just does a wasted effort
> > by
> > >>> hitting this repo.
> > >>>
> >  On Mon, Oct 3, 2016 at 10:25 AM, Wangda Tan 
> > wrote:
> > 
> >  Filed: https://issues.apache.org/jira/browse/HBASE-16749
> > 
> > > On Mon, Oct 3, 2016 at 10:18 AM, Wangda Tan 
> > wrote:
> > >
> > > Thanks Sangjin/Ted/Steve for your comments/suggestions, I will
> file a
> > > HBase JIRA later.
> > >
> > > Regards,
> > > Wangda
> > >
> > > On Mon, Oct 3, 2016 at 4:43 AM, Steve Loughran <
> > ste...@hortonworks.com>
> > > wrote:
> > >
> > >> HBase really ought to have a profile for D/Ling from somewhere
> like
> >  this,
> > >> and, perhaps, list the ASF snapsphot repo first
> > >>
> > >>> On 1 Oct 2016, at 17:37, Wangda Tan  wrote:
> > >>>
> > >>> Hi Y

Re: Maven build: YARN timeline service downloading maven-metadata from personal repository?

2016-12-13 Thread Ted Yu
Just performed a clean build on my MacBook:

[INFO] Apache Hadoop Mini-Cluster . SUCCESS [
 0.952 s]
[INFO] Apache Hadoop Scheduler Load Simulator . SUCCESS [
 1.838 s]
[INFO] Apache Hadoop Azure Data Lake support .. SUCCESS [
 2.308 s]
[INFO] Apache Hadoop Tools Dist ... SUCCESS [
 0.514 s]
[INFO] Apache Hadoop Kafka Library support  SUCCESS [
 0.365 s]
[INFO] Apache Hadoop Tools  SUCCESS [
 0.024 s]
[INFO] Apache Hadoop Distribution . SUCCESS [
 0.062 s]
[INFO]

[INFO] BUILD SUCCESS
[INFO]

[INFO] Total time: 02:39 min
[INFO] Finished at: 2016-12-13T14:48:46-08:00

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
MaxPermSize=512M; support was removed in 8.0
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
2015-11-10T08:41:47-08:00)
Maven home: /Users/tyu/apache-maven-3.3.9
Java version: 1.8.0_91, vendor: Oracle Corporation

I can try on other machines if I have time.

On Tue, Dec 13, 2016 at 2:43 PM, Wangda Tan  wrote:

> Hi folks,
>
> It looks like HBASE-16749 is fixed, and Phoenix version is updated (per
> Li). But I'm still experiencing slow build of ATSv2 component:
>
> [INFO] Apache Hadoop YARN . SUCCESS [
>  1.378 s]
> [INFO] Apache Hadoop YARN API . SUCCESS [
> 10.559 s]
> [INFO] Apache Hadoop YARN Common .. SUCCESS [
>  6.993 s]
> [INFO] Apache Hadoop YARN Server .. SUCCESS [
>  0.057 s]
> [INFO] Apache Hadoop YARN Server Common ... SUCCESS [
>  2.266 s]
> [INFO] Apache Hadoop YARN NodeManager . SUCCESS [
>  4.075 s]
> [INFO] Apache Hadoop YARN Web Proxy ... SUCCESS [
>  0.924 s]
> [INFO] Apache Hadoop YARN ApplicationHistoryService ... SUCCESS [
>  1.549 s]
> [INFO] Apache Hadoop YARN Timeline Service  SUCCESS [05:14
> min]
> [INFO] Apache Hadoop YARN ResourceManager . SUCCESS [
>  8.554 s]
> [INFO] Apache Hadoop YARN Server Tests  SUCCESS [
>  1.561 s]
> [INFO] Apache Hadoop YARN Client .. SUCCESS [
>  1.321 s]
> [INFO] Apache Hadoop YARN SharedCacheManager .. SUCCESS [
>  0.843 s]
> [INFO] Apache Hadoop YARN Timeline Plugin Storage . SUCCESS [
>  0.949 s]
> [INFO] Apache Hadoop YARN Timeline Service HBase tests  SUCCESS [
>  3.137 s]
> [INFO] Apache Hadoop YARN Applications  SUCCESS [
>  0.055 s]
> [INFO] Apache Hadoop YARN DistributedShell  SUCCESS [
>  0.807 s]
> [INFO] Apache Hadoop YARN Unmanaged Am Launcher ... SUCCESS [
>  0.602 s]
> [INFO] Apache Hadoop YARN Site  SUCCESS [
>  0.060 s]
> [INFO] Apache Hadoop YARN Registry  SUCCESS [
>  0.910 s]
> [INFO] Apache Hadoop YARN UI .. SUCCESS [
>  0.072 s]
> [INFO] Apache Hadoop YARN Project . SUCCESS [
>  0.749 s]
> [INFO]
> 
> [INFO] BUILD SUCCESS
> [INFO]
> 
> [INFO] Total time: 06:02 min
>
> This doesn't happen every time when I run build on latest Hadoop trunk, but
> I can often see this happens.
>
> Thoughts about how to solve it?
>
> Thanks,
> Wangda
>
>
>
> On Tue, Oct 4, 2016 at 6:50 PM, Sangjin Lee  wrote:
>
> > Thanks Wangda.
> >
> > To answer Steve's question, I don't think maven downloads anything from
> > that location (it's a very old content). It just does a wasted effort by
> > hitting this repo.
> >
> > On Mon, Oct 3, 2016 at 10:25 AM, Wangda Tan  wrote:
> >
> >> Filed: https://issues.apache.org/jira/browse/HBASE-16749
> >>
> >> On Mon, Oct 3, 2016 at 10:18 AM, Wangda Tan 
> wrote:
> >>
> >> > Thanks Sangjin/Ted/Steve for your comments/suggestions, I will file a
> >> > HBase JIRA later.
> >> >
> >> > Regards,
> >> > Wangda
> >> >
> >> > On Mon, Oct 3, 2016 at 4:43 AM, Steve Loughran <
> ste...@hortonworks.com>
> >> > wrote:
> >> >
> >> >> HBase really ought to have a profile for D/Ling from somewhere like
> >> this,
> >> >> and, perhaps, list the ASF snapsphot repo first
> >> >>
> >> >> On 1 Oct 2016, at 17:37, Wangda Tan  wrote:
> >> >> >
> >> >> > Hi YARN-dev,
> >> >> >
> >> >> > (cc common-dev),
> >> >> >
> >> >> > YARN timeline service currently sometimes downloads
> >> maven-metadata.xml
> >> >> from
> >> >> > a personal apache site, log looks like:
> >> >> >
> >> >> > [INFO] --
> --
> >> >> > 
> >> >> > [INFO] Building Apache Hadoop YARN Timeline Service
> >> >> 3.0.0-alpha2-SNAPSHOT
> >> >> > [INFO

[jira] [Resolved] (HADOOP-12854) Move to netty 4.1.x release

2016-12-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-12854.
-
Resolution: Duplicate

Dup of HADOOP-13866

> Move to netty 4.1.x release
> ---
>
> Key: HADOOP-12854
> URL: https://issues.apache.org/jira/browse/HADOOP-12854
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> Netty is getting close to having a final release of a 4.1 netty-all artifact; 
> HDFS currently pulls in 4.1.0.Beta5
> Once a 4.1 release is out, switch to it.






[jira] [Reopened] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2016-12-06 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reopened HADOOP-13866:
-

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>    Reporter: Ted Yu
>    Assignee: Ted Yu
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch
>
>
> netty-all 4.1.1.Final is a stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.






[jira] [Created] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2016-12-05 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-13866:
---

 Summary: Upgrade netty-all to 4.1.1.Final
 Key: HADOOP-13866
 URL: https://issues.apache.org/jira/browse/HADOOP-13866
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu


netty-all 4.1.1.Final is a stable release which we should upgrade to.

See bottom of HADOOP-12927 for related discussion.






Re: Maven build: YARN timeline service downloading maven-metadata from personal repository?

2016-10-01 Thread Ted Yu
Among hbase releases (1.1.x, 1.2.y), the root pom.xml contains the
"ghelmling.testing" repo.

There is no such repo in the 1.3 branch, but there hasn't been a 1.3 release yet.

We can drop the repo in the 1.1 and 1.2 branches.

Cheers

On Sat, Oct 1, 2016 at 1:37 PM, Sangjin Lee  wrote:

> We should raise a JIRA. I suspect this is more of an hbase issue. I believe
> it's coming from the hbase pom that contains a definition of that repo
> (named "ghelmling.testing"). This would entail upgrading to a later version
> of hbase that removes this repo. I don't think this is a major issue, but
> it would be good to keep track of this. Wangda, could you kindly file a
> JIRA for this?
>
> I'm also cc'ing Gary to see if he is aware of a release that does not have
> the repo definition.
>
> Sangjin
>
> On Sat, Oct 1, 2016 at 9:37 AM, Wangda Tan  wrote:
>
> > Hi YARN-dev,
> >
> > (cc common-dev),
> >
> > YARN timeline service currently sometimes downloads maven-metadata.xml
> from
> > a personal apache site, log looks like:
> >
> > [INFO] 
> > 
> > [INFO] Building Apache Hadoop YARN Timeline Service 3.0.0-alpha2-SNAPSHOT
> > [INFO] 
> > 
> > Downloading: http://conjars.org/repo/org/apache/hadoop/hadoop-client/3.
> > 0.0-alpha2-SNAPSHOT/maven-metadata.xml
> > ...
> > Downloading: http://people.apache.org/~garyh/mvn/org/apache/hadoop/
> > hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> > ...
> > Downloading: http://people.apache.org/~garyh/mvn/org/apache/hadoop/
> > hadoop-client/3.0.0-alpha2-SNAPSHOT/maven-metadata.xml
> >
> > I noticed this happens for a while, I'm not sure if it causes by my local
> > environment or not.
> >
> > I don't know if it could be a potential security issue, and this
> > significantly slows my build, typically "mvn -DskipTests clean install"
> > runs 1-2 mins on my laptop, but when it downloads files from "~garyh"
> link,
> > it takes 8-9 mins.
> >
> > Please let me know if you also see this happens.
> >
> > Thanks,
> > Wangda
> >
>


Re: Is anyone seeing this during trunk build?

2016-09-28 Thread Ted Yu
I used the same command but didn't see the error you saw.

Here is my environment:

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
MaxPermSize=512M; support was removed in 8.0
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5;
2015-11-10T08:41:47-08:00)
Maven home: /Users/tyu/apache-maven-3.3.9
Java version: 1.8.0_91, vendor: Oracle Corporation
Java home:
/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.11.3", arch: "x86_64", family: "mac"

FYI

On Wed, Sep 28, 2016 at 3:54 PM, Kihwal Lee 
wrote:

> I just noticed this during a trunk build. I was doing "mvn clean install
> -DskipTests".  The build succeeds.
> Is anyone seeing this?  I am using openjdk8u102.
>
>
>
> ===
> [WARNING] Unable to process class org/apache/hadoop/hdfs/StripeReader.class
> in JarAnalyzer File /home1/kihwal/devel/apache/hadoop/hadoop-hdfs-project/
> hadoop-hdfs-client/target/hadoop-hdfs-client-3.0.0-alpha2-SNAPSHOT.jar
> org.apache.bcel.classfile.ClassFormatException: Invalid byte tag in
> constant pool: 18
> at org.apache.bcel.classfile.Constant.readConstant(Constant.java:146)
> at org.apache.bcel.classfile.ConstantPool.(ConstantPool.java:67)
> at org.apache.bcel.classfile.ClassParser.readConstantPool(
> ClassParser.java:222)
> at org.apache.bcel.classfile.ClassParser.parse(ClassParser.java:136)
> at org.apache.maven.shared.jar.classes.JarClassesAnalysis.
> analyze(JarClassesAnalysis.java:92)
> at org.apache.maven.report.projectinfo.dependencies.Dependencies.
> getJarDependencyDetails(Dependencies.java:255)
> at org.apache.maven.report.projectinfo.dependencies.
> renderer.DependenciesRenderer.hasSealed(DependenciesRenderer.java:1454)
> at org.apache.maven.report.projectinfo.dependencies.
> renderer.DependenciesRenderer.renderSectionDependencyFileDet
> ails(DependenciesRenderer.java:536)
> at org.apache.maven.report.projectinfo.dependencies.
> renderer.DependenciesRenderer.renderBody(DependenciesRenderer.java:263)
> at org.apache.maven.reporting.AbstractMavenReportRenderer.render(
> AbstractMavenReportRenderer.java:79)
> at org.apache.maven.report.projectinfo.DependenciesReport.
> executeReport(DependenciesReport.java:186)
> at org.apache.maven.reporting.AbstractMavenReport.generate(
> AbstractMavenReport.java:190)
> at org.apache.maven.report.projectinfo.AbstractProjectInfoReport.
> execute(AbstractProjectInfoReport.java:202)
> at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(
> DefaultBuildPluginManager.java:101)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> MojoExecutor.java:209)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> MojoExecutor.java:153)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute(
> MojoExecutor.java:145)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.
> buildProject(LifecycleModuleBuilder.java:84)
> at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.
> buildProject(LifecycleModuleBuilder.java:59)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.
> singleThreadedBuild(LifecycleStarter.java:183)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.
> execute(LifecycleStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.codehaus.plexus.classworlds.launcher.Launcher.
> launchEnhanced(Launcher.java:290)
> at org.codehaus.plexus.classworlds.launcher.Launcher.
> launch(Launcher.java:230)
> at org.codehaus.plexus.classworlds.launcher.Launcher.
> mainWithExitCode(Launcher.java:414)
> at org.codehaus.plexus.classworlds.launcher.Launcher.
> main(Launcher.java:357)
> ===
>


[jira] [Created] (HADOOP-13496) Include file lengths in Mismatch in length error for distcp

2016-08-12 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-13496:
---

 Summary: Include file lengths in Mismatch in length error for 
distcp
 Key: HADOOP-13496
 URL: https://issues.apache.org/jira/browse/HADOOP-13496
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu
Priority: Minor


Currently RetriableFileCopyCommand doesn't include the two file lengths in the 
"Mismatch in length" error:
{code}
2016-08-12 10:23:14,231 ERROR [LocalJobRunner Map Task Executor #0] 
util.RetriableCommand(89): Failure in Retriable command: Copying 
hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-   
c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182
 to hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/10.22.9.  
 171%2C53952%2C1471022508087.regiongroup-1.1471022510182
java.io.IOException: Mismatch in length of 
source:hdfs://localhost:53941/user/tyu/test-data/dc7c674a-c463-4798-8260-c5d1e3440a4b/WALs/10.22.9.171,53952,1471022508087/10.22.9.171%2C53952%2C1471022508087.regiongroup-1.1471022510182
 and 
target:hdfs://localhost:53941/backupUT/backup_1471022580616/WALs/.distcp.tmp.attempt_local344329843_0006_m_00_0
  at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareFileLengths(RetriableFileCopyCommand.java:193)
  at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:126)
  at 
org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99)
  at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
  at 
org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:281)
  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:253)
  at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
{code}
It would be helpful to include both the expected length and the actual length.
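
A sketch of the improved check in RetriableFileCopyCommand#compareFileLengths;
the variable names are approximated, not the committed patch:
{code}
if (sourceLen != targetLen) {
  // Report both lengths so a mismatch is diagnosable from the log alone.
  throw new IOException("Mismatch in length of source:" + source
      + " (expected " + sourceLen + ") and target:" + target
      + " (actual " + targetLen + ")");
}
{code}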

Thanks to [~yzhangal] for offline discussion.






[jira] [Created] (HADOOP-13489) DistCp may incorrectly return success status when the underlying Job failed

2016-08-11 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-13489:
---

 Summary: DistCp may incorrectly return success status when the 
underlying Job failed
 Key: HADOOP-13489
 URL: https://issues.apache.org/jira/browse/HADOOP-13489
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu


I was troubleshooting HBASE-14450, where at the end of BackupdistCp#execute() the 
distcp job was marked unsuccessful (BackupdistCp is a wrapper of DistCp).
Yet in IncrementalTableBackupProcedure#incrementalCopy(), the return value from 
copyService.copy() was 0.

Here is related code from DistCp:
{code}
try {
  execute();
} catch (InvalidInputException e) {
  LOG.error("Invalid input: ", e);
  return DistCpConstants.INVALID_ARGUMENT;
} catch (DuplicateFileException e) {
  LOG.error("Duplicate files in input path: ", e);
  return DistCpConstants.DUPLICATE_INPUT;
} catch (AclsNotSupportedException e) {
  LOG.error("ACLs not supported on at least one file system: ", e);
  return DistCpConstants.ACLS_NOT_SUPPORTED;
} catch (XAttrsNotSupportedException e) {
  LOG.error("XAttrs not supported on at least one file system: ", e);
  return DistCpConstants.XATTRS_NOT_SUPPORTED;
} catch (Exception e) {
  LOG.error("Exception encountered ", e);
  return DistCpConstants.UNKNOWN_ERROR;
}
return DistCpConstants.SUCCESS;
{code}
We don't check whether the Job returned by execute() was successful.
Even if the Job fails, DistCpConstants.SUCCESS is returned.
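
A sketch of the missing check; Job#isSuccessful() is the standard mapreduce
API, but its placement inside DistCp#run is assumed:
{code}
Job job = execute();
if (!job.isSuccessful()) {
  // Propagate the real outcome instead of falling through to SUCCESS.
  LOG.error("DistCp job " + job.getJobID() + " did not complete successfully");
  return DistCpConstants.UNKNOWN_ERROR;
}
return DistCpConstants.SUCCESS;
{code}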






[jira] [Resolved] (HADOOP-13394) Swift should have proper HttpClient dependencies

2016-07-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-13394.
-
Resolution: Duplicate

Close as dup of HADOOP-11614

> Swift should have proper HttpClient dependencies
> 
>
> Key: HADOOP-13394
> URL: https://issues.apache.org/jira/browse/HADOOP-13394
> Project: Hadoop Common
>  Issue Type: Bug
>    Reporter: Ted Yu
>
> In hadoop-tools/hadoop-openstack/pom.xml:
> {code}
> <dependency>
>   <groupId>commons-httpclient</groupId>
>   <artifactId>commons-httpclient</artifactId>
>   <scope>compile</scope>
> </dependency>
> {code}
> The dependency should be migrated to httpclient from org.apache.httpcomponents.






[jira] [Created] (HADOOP-13394) Swift should have proper HttpClient dependencies

2016-07-20 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-13394:
---

 Summary: Swift should have proper HttpClient dependencies
 Key: HADOOP-13394
 URL: https://issues.apache.org/jira/browse/HADOOP-13394
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu


In hadoop-tools/hadoop-openstack/pom.xml:
{code}
<dependency>
  <groupId>commons-httpclient</groupId>
  <artifactId>commons-httpclient</artifactId>
  <scope>compile</scope>
</dependency>
{code}
The dependency should be migrated to httpclient from org.apache.httpcomponents.
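
The replacement dependency would look roughly like this; the version is omitted
on the assumption that it is managed in the parent pom:
{code}
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <scope>compile</scope>
</dependency>
{code}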






Re: Different JIRA permissions for HADOOP and HDFS

2016-05-14 Thread Ted Yu
Looks like you attached some images which didn't go through.

Consider using a third-party image site.

Cheers

On Sat, May 14, 2016 at 7:07 AM, Zheng, Kai  wrote:

> Hi,
>
>
>
> Noticed this difference but not sure if it’s intended. YARN is similar
> to HDFS. It’s not convenient. Can anyone clarify? Thanks. -kai
>
>
>
>
>
>
>
>


Re: Jira Lock Down Upgraded?

2016-05-12 Thread Ted Yu
Looks like the side effects of this lockdown are:

1. the person (non-admin) who logged the JIRA cannot comment on it
2. results of QA runs cannot be posted onto the JIRA (at least for hbase tests)

:-(

On Thu, May 12, 2016 at 3:10 PM, Andrew Wang 
wrote:

> Try asking on infra.chat (Apache INFRA's hipchat). I was in that room
> earlier today, and they were working on the ongoing JIRA spam.
>
> On Thu, May 12, 2016 at 3:03 PM, Xiao Chen  wrote:
>
> > Hello,
> >
> > I'm not sure if common-dev is the right contact list, please redirect me
> if
> > not.
> >
> > It seems the jira lock down is somehow being more strict?
> > I was able to comment on an HDFS jira
> > <
> >
> https://issues.apache.org/jira/browse/HDFS-4210?focusedCommentId=15282111&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15282111
> > >
> > at around 14:45 PDT today, but now I cannot.
> >
> > The banner still says:
> >
> > Jira is in Temporary Lockdown mode as a spam countermeasure. Only
> logged-in
> > users with active roles (committer, contributor, PMC, etc.) will be able
> to
> > create issues or comments during this time. Lockdown period from 11 May
> > 2300 UTC to estimated 12 May 2300 UTC.
> >
> >
> > But with a quick check with Yongjun and Yufei, contributors are locked
> down
> > as well.
> >
> > Thanks,
> > -Xiao
> >
>


[jira] [Created] (HADOOP-13135) Encounter response code 500 when accessing /metrics endpoint

2016-05-11 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-13135:
---

 Summary: Encounter response code 500 when accessing /metrics 
endpoint
 Key: HADOOP-13135
 URL: https://issues.apache.org/jira/browse/HADOOP-13135
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.1
Reporter: Ted Yu


When accessing the /metrics endpoint on an hbase master running on hadoop 2.7.1, I got:
{code}
HTTP ERROR 500

Problem accessing /metrics. Reason:

INTERNAL_SERVER_ERROR
Caused by:

java.lang.NullPointerException
at 
org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1029)
at 
org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
{code}
[~ajisakaa] suggested that code 500 should be 404 (NOT FOUND).
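
A minimal sketch of that suggestion (a hypothetical guard, not the committed
fix):
{code}
// if the instrumentation check cannot be made, answer 404 instead of
// letting an NPE surface as a 500
response.sendError(HttpServletResponse.SC_NOT_FOUND,
    "/metrics is not configured");
return;
{code}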



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-12805) Annotate CanUnbuffer with LimitedPrivate

2016-02-13 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-12805:
---

 Summary: Annotate CanUnbuffer with LimitedPrivate
 Key: HADOOP-12805
 URL: https://issues.apache.org/jira/browse/HADOOP-12805
 Project: Hadoop Common
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu


See comments toward the tail of HBASE-9393.

The change in HBASE-9393 adds a dependency on the CanUnbuffer interface, which
is currently marked @InterfaceAudience.Private.

To facilitate downstream projects such as HBase in using this interface, the
CanUnbuffer interface should be annotated LimitedPrivate({"HBase"}).
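
A sketch of the proposed annotation (the stability annotation shown is an
assumption):
{code}
@InterfaceAudience.LimitedPrivate({"HBase"})
@InterfaceStability.Evolving
public interface CanUnbuffer {
  void unbuffer();
}
{code}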



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.6.4 RC0

2016-02-03 Thread Ted Yu
I modified hbase pom.xml (0.98 branch) to point to staged maven artifacts.

All unit tests passed.

Cheers

On Tue, Feb 2, 2016 at 11:01 PM, Junping Du  wrote:

> Hi community folks,
>I've created a release candidate RC0 for Apache Hadoop 2.6.4 (the next
> maintenance release to follow up 2.6.3.) according to email thread of
> release plan 2.6.4 [1]. Below is details of this release candidate:
>
> The RC is available for validation at:
> *http://people.apache.org/~junping_du/hadoop-2.6.4-RC0/
> *
>
> The RC tag in git is: release-2.6.4-RC0
>
> The maven artifacts are staged via repository.apache.org at:
> *https://repository.apache.org/content/repositories/orgapachehadoop-1028/?
>  >*
>
> You can find my public key at:
> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
>
> Please try the release and vote. The vote will run for the usual 5 days.
>
> Thanks!
>
>
> Cheers,
>
> Junping
>
>
> [1]: 2.6.4 release plan: http://markmail.org/message/fk3ud3c665lscvx5?
>
>


[jira] [Created] (HADOOP-12724) Let BufferedFSInputStream implement CanUnbuffer

2016-01-20 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-12724:
---

 Summary: Let BufferedFSInputStream implement CanUnbuffer
 Key: HADOOP-12724
 URL: https://issues.apache.org/jira/browse/HADOOP-12724
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu


When trying to determine the reason for a test failure over in HBASE-9393, I saw
the following exception:
{code}
testSeekTo[4](org.apache.hadoop.hbase.io.hfile.TestSeekTo)  Time elapsed: 0.033 
sec  <<< ERROR!
java.lang.UnsupportedOperationException: this stream does not support 
unbuffering.
at 
org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:229)
at 
org.apache.hadoop.fs.FSDataInputStream.unbuffer(FSDataInputStream.java:227)
at 
org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:518)
at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:562)
at 
org.apache.hadoop.hbase.io.hfile.TestSeekTo.testSeekToInternals(TestSeekTo.java:307)
at 
org.apache.hadoop.hbase.io.hfile.TestSeekTo.testSeekTo(TestSeekTo.java:298)
{code}
Here is the cause:
{code}
java.lang.ClassCastException: org.apache.hadoop.fs.BufferedFSInputStream cannot 
be cast to org.apache.hadoop.fs.CanUnbuffer
{code}
See the comments starting with 
https://issues.apache.org/jira/browse/HBASE-9393?focusedCommentId=15105939&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15105939
 for background on the HBase patch.

This issue is to make BufferedFSInputStream implement CanUnbuffer.

Thanks to [~cmccabe] for discussion.
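
A method-level sketch of the delegation this could take in BufferedFSInputStream
(an illustration, not the committed patch):
{code}
@Override
public void unbuffer() {
  if (in instanceof CanUnbuffer) {
    // discard the local buffer and release buffers held downstream
    ((CanUnbuffer) in).unbuffer();
  } else {
    throw new UnsupportedOperationException(
        "this stream does not support unbuffering.");
  }
}
{code}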



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: is jenkins testing PRs?

2016-01-11 Thread Ted Yu
Once you log in, you can specify the YARN JIRA number using:

https://builds.apache.org/job/PreCommit-yarn-Build/build?delay=0sec

FYI

On Mon, Jan 11, 2016 at 9:01 AM, Steve Loughran 
wrote:

>
> I submitted some PR-based patches last week —they haven't been tested yet
>
> https://issues.apache.org/jira/browse/YARN-4567
>
> Is there a way for someone like me with Jenkins admin rights to kick this
> off?
>


Re: [VOTE] Release Apache Hadoop 2.7.2 RC1

2015-12-17 Thread Ted Yu
Hi,
I have run the test suite for the tip of the hbase 0.98 branch against this RC.

All tests passed.

+1

On Wed, Dec 16, 2015 at 6:49 PM, Vinod Kumar Vavilapalli  wrote:

> Hi all,
>
> I've created a release candidate RC1 for Apache Hadoop 2.7.2.
>
> As discussed before, this is the next maintenance release to follow up
> 2.7.1.
>
> The RC is available for validation at:
> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/ <
> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/>
>
> The RC tag in git is: release-2.7.2-RC1
>
> The maven artifacts are available via repository.apache.org <
> http://repository.apache.org/> at
> https://repository.apache.org/content/repositories/orgapachehadoop-1026/ <
> https://repository.apache.org/content/repositories/orgapachehadoop-1026/>
>
> The release-notes are inside the tar-balls at location
> hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I
> hosted this at
> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC1/releasenotes.html for
> quick perusal.
>
> As you may have noted,
>  - The RC0 related voting thread got halted due to some critical issues.
> It took a while again for getting all those blockers out of the way. See
> the previous voting thread [3] for details.
>  - Before RC0, an unusually long 2.6.3 release caused 2.7.2 to slip by
> quite a bit. This release's related discussion threads are linked below:
> [1] and [2].
>
> Please try the release and vote; the vote will run for the usual 5 days.
>
> Thanks,
> Vinod
>
> [1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes <
> http://markmail.org/message/oozq3gvd4nhzsaes>
> [2]: Planning Apache Hadoop 2.7.2
> http://markmail.org/message/iktqss2qdeykgpqk <
> http://markmail.org/message/iktqss2qdeykgpqk>
> [3]: [VOTE] Release Apache Hadoop 2.7.2 RC0:
> http://markmail.org/message/5txhvr2qdiqglrwc
>
>


Re: Disable some of the Hudson integration comments on JIRA

2015-11-26 Thread Ted Yu
Looking at a few Hadoop-trunk-Commit builds, I saw 'Some Enforcer rules
have failed.'
Below was from build #8895:

[WARNING]
Dependency convergence error for
org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT paths to dependency are:
+-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-20151126.050629-7794

[WARNING] Rule 0:
org.apache.maven.plugins.enforcer.DependencyConvergence failed with
message:
Failed while enforcing releasability the error(s) are [
Dependency convergence error for
org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT paths to dependency are:
+-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-20151126.050629-7794


FYI


On Thu, Nov 26, 2015 at 2:46 AM, Steve Loughran 
wrote:

>
> > On 26 Nov 2015, at 01:41, Andrew Wang  wrote:
> >
> > Hi all,
> >
> > Right now we get something like 7 comments from Hudson whenever a change
> is
> > committed. Would anyone object if I turned off 6 of them? We have
> > variations like:
> >
> > Hadoop-trunk-Commit
> > Hadoop-Hdfs-trunk-Java8
> > Hadoop-Yarn-trunk
> > ...etc
> >
> > I propose leaving notifications on for just Hadoop-trunk-Commit.
> +1
>
> I'd also like to understand why those changes are always tagged as FAILED
>


Re: [VOTE] Release Apache Hadoop 2.6.2

2015-10-22 Thread Ted Yu
Ran the hbase test suite (0.98 branch) by pointing to the maven repo below.

All tests passed.

Cheers

On Thu, Oct 22, 2015 at 2:14 PM, Sangjin Lee  wrote:

> Hi all,
>
> I have created a release candidate (RC0) for Hadoop 2.6.2.
>
> The RC is available at: http://people.apache.org/~sjlee/hadoop-2.6.2-RC0/
>
> The RC tag in git is: release-2.6.2-RC0
>
> The list of JIRAs committed for 2.6.2:
>
> https://issues.apache.org/jira/browse/YARN-4101?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20fixVersion%20%3D%202.6.2
>
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1022/
>
> Please try out the release candidate and vote. The vote will run for 5
> days.
>
> Thanks,
> Sangjin
>


Re: hadoop-hdfs-client splitoff is going to break code

2015-10-14 Thread Ted Yu
+1 on option 2.

On Wed, Oct 14, 2015 at 10:56 AM, larry mccay  wrote:

> Interesting...
>
> As long as #2 provides full backward compatibility and the ability to
> explicitly exclude the server dependencies that seems the best way to go.
> That would get my non-binding +1.
> :)
>
> Perhaps we could add another artifact called hadoop-thin-client that would
> not be backward compatible at some point?
>
> On Wed, Oct 14, 2015 at 1:36 PM, Steve Loughran 
> wrote:
>
> > just an FYI, the split off of hadoop hdfs into client and server is going
> > to break things.
> >
> > I know that, as my code is broken; DFSConfigKeys off the path,
> > HdfsConfiguration, the class I've been loading to force pickup of
> > hdfs-site.xml -all missing.
> >
> > This is because hadoop-client  POM now depends on hadoop-hdfs-client, not
> > hadoop-hdfs, so the things I'm referencing are gone. I'm particularly sad
> > about DfsConfigKeys, as everybody uses it as the one hard-coded resource
> of
> > HDFS constants, HDFS-6566 covering the issue of making this public,
> > something that's been sitting around for a year.
> >
> > I'm fixing my build by explicitly adding a hadoop-hdfs dependency.
> >
> > Any application which used stuff which has now been declared server-side
> > isn't going to compile any more, which does appear to break the
> > compatibility guidelines we've adopted, specifically "The hadoop-client
> > artifact (maven groupId:artifactId) stays compatible within a major
> release"
> >
> >
> >
> http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Build_artifacts
> >
> >
> > We need to do one of
> >
> > 1. agree that this change, is considered acceptable according to policy,
> > and mark it as incompatible in hdfs/CHANGES.TXT
> > 2. Change the POMs to add both hdfs-client and -hdfs server in
> > hadoop-client -with downstream users free to exclude the server code
> >
> > We unintentionally caused similar grief with the move of the s3n clients
> > to hadoop-aws , HADOOP-11074 -something we should have picked up and
> -1'd.
> > This time we know the problems going to arise, so lets explicitly make a
> > decision this time, and share it with our users.
> >
> > -steve
> >
>


Re: [VOTE] Release Apache Hadoop 2.6.1 RC1

2015-09-17 Thread Ted Yu
Ran the hbase test suite against 2.6.1 RC1, which passed.

FYI

On Wed, Sep 16, 2015 at 7:10 PM, Vinod Kumar Vavilapalli  wrote:

> Hi all,
>
> After a nearly month long [1] toil, with loads of help from Sangjin Lee and
> Akira Ajisaka, and 153 (RC0)+7(RC1) commits later, I've created a release
> candidate RC1 for hadoop-2.6.1.
>
> RC1 is RC0 [0] (for which I opened and closed a vote last week) + UI fixes
> for the issue Sangjin raised (YARN-3171 and the dependencies YARN-3779,
> YARN-3248), additional fix to avoid incompatibility (YARN-3740), other UI
> bugs (YARN-1884, YARN-3544) and the MiniYARNCluster issue (right patch for
> YARN-2890) that Jeff Zhang raised.
>
> The RC is available at:
> http://people.apache.org/~vinodkv/hadoop-2.6.1-RC1/
>
> The RC tag in git is: release-2.6.1-RC1
>
> The maven artifacts are available via repository.apache.org at
> https://repository.apache.org/content/repositories/orgapachehadoop-1021
>
> Some notes from our release process
>  -  - Sangjin and I moved out a bunch of items pending from 2.6.1 [2] -
> non-committed but desired patches. 2.6.1 is already big as is and is late
> by any standard, we can definitely include them in the next release.
>  - The 2.6.1 wiki page [3] captures some (but not all) of the context of
> the patches that we pushed in.
>  - Given the number of fixes pushed [4] in, we had to make a bunch of
> changes to our original plan - we added a few improvements that helped us
> backport patches easier (or in many cases made backports possible), and we
> dropped a few that didn't make sense (HDFS-7831, HDFS-7926, HDFS-7676,
> HDFS-7611, HDFS-7843, HDFS-8850).
>  - I ran all the unit tests which (surprisingly?) passed. (Except for one,
> which pointed out a missing fix HDFS-7552).
>
> As discussed before [5]
>  - This release is the first point release after 2.6.0
>  - I’d like to use this as a starting release for 2.6.2 in a few weeks and
> then follow up with more of these.
>
> Please try the release and vote; the vote will run for the usual 5 days.
>
> Thanks,
> Vinod
>
> [0] Hadoop 2.6.1 RC0 vote: http://markmail.org/thread/ubut2rn3lodc55iy
> [1] Hadoop 2.6.1 Release process thread:
> http://markmail.org/thread/wkbgkxkhntx5tlux
> [2] 2.6.1 Pending tickets:
> https://issues.apache.org/jira/issues/?filter=12331711
> [3] 2.6.1 Wiki page: https://wiki.apache.org/hadoop/Release-2.6.1
> -Working-Notes
> [4] List of 2.6.1 patches pushed:
> https://issues.apache.org/jira/issues/?jql=fixVersion%20%3D%202.6.1
> %20and%20labels%20%3D%20%222.6.1-candidate%22
> [5] Planning Hadoop 2.6.1 release:
> http://markmail.org/thread/sbykjn5xgnksh6wg
>
> PS:
>  - Note that branch-2.6 which will be the base for 2.6.2 doesn't have these
> fixes yet. Once 2.6.1 goes through, I plan to rebase branch-2.6 based off
> 2.6.1.
>  - The additional patches in RC1 that got into 2.6.1 all the way from 2.8
> are NOT in 2.7.2 yet, this will be done as a followup.
>


Re: [VOTE] Release Apache Hadoop 2.6.1 RC0

2015-09-10 Thread Ted Yu
I pointed the master branch of hbase to 2.6.1 RC0.
Ran the unit test suite and the results are good.

Cheers

On Thu, Sep 10, 2015 at 5:16 PM, Sangjin Lee  wrote:

> I verified the signatures for both source and the binary tarballs. I
> started up a pseudo-distributed cluster, and tested simple apps such as
> sleep and terasort.
>
> I do see one issue with the RM UI where the sorting by id is broken. The
> table is not rendered in the expected id-descending order, and when I click
> the sort control, nothing happens. Sorting by other columns works fine.
>
> Is anyone else able to reproduce the issue? I checked 2.6.0, and it works
> fine on 2.6.0.
>
> On Wed, Sep 9, 2015 at 6:00 PM, Vinod Kumar Vavilapalli <
> vino...@apache.org>
> wrote:
>
> > Hi all,
> >
> > After a nearly month long [1] toil, with loads of help from Sangjin Lee
> > and Akira Ajisaka, and 153 commits later, I've created a release
> candidate
> > RC0 for hadoop-2.6.1.
> >
> > The RC is available at:
> > http://people.apache.org/~vinodkv/hadoop-2.6.1-RC0/
> >
> > The RC tag in git is: release-2.6.1-RC0
> >
> > The maven artifacts are available via repository.apache.org at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1020
> >
> > Some notes from our release process
> >  -  - Sangjin and I moved out a bunch of items pending from 2.6.1 [2] -
> > non-committed but desired patches. 2.6.1 is already big as is and is late
> > by any standard, we can definitely include them in the next release.
> >  - The 2.6.1 wiki page [3] captures some (but not all) of the context of
> > the patches that we pushed in.
> >  - Given the number of fixes pushed [4] in, we had to make a bunch of
> > changes to our original plan - we added a few improvements that helped us
> > backport patches easier (or in many cases made backports possible), and
> we
> > dropped a few that didn't make sense (HDFS-7831, HDFS-7926, HDFS-7676,
> > HDFS-7611, HDFS-7843, HDFS-8850).
> >  - I ran all the unit tests which (surprisingly?) passed. (Except for
> one,
> > which pointed out a missing fix HDFS-7552).
> >
> > As discussed before [5]
> >  - This release is the first point release after 2.6.0
> >  - I’d like to use this as a starting release for 2.6.2 in a few weeks
> > and then follow up with more of these.
> >
> > Please try the release and vote; the vote will run for the usual 5 days.
> >
> > Thanks,
> > Vinod
> >
> > [1] Hadoop 2.6.1 Release process thread:
> > http://markmail.org/thread/wkbgkxkhntx5tlux
> > [2] 2.6.1 Pending tickets:
> > https://issues.apache.org/jira/issues/?filter=12331711
> > [3] 2.6.1 Wiki page:
> > https://wiki.apache.org/hadoop/Release-2.6.1-Working-Notes
> > [4] List of 2.6.1 patches pushed:
> >
> https://issues.apache.org/jira/issues/?jql=fixVersion%20%3D%202.6.1%20and%20labels%20%3D%20%222.6.1-candidate%22
> > [5] Planning Hadoop 2.6.1 release:
> > http://markmail.org/thread/sbykjn5xgnksh6wg
> >
> > PS:
> >  - Note that branch-2.6 which will be the base for 2.6.2 doesn't have
> > these fixes yet. Once 2.6.1 goes through, I plan to rebase branch-2.6
> based
> > off 2.6.1.
> >  - Patches that got into 2.6.1 all the way from 2.8 are NOT in 2.7.2 yet,
> > this will be done as a followup.
> >
> >
>


Re: Unsubscribe from list

2015-09-04 Thread Ted Yu
Email common-dev-unsubscr...@hadoop.apache.org



> On Sep 4, 2015, at 6:20 AM, Vinod Sashittal  wrote:
> 
> Regards
> Vinod Sashittal


Re: Jenkins : Unable to create new native thread

2015-08-07 Thread Ted Yu
I observed the same behavior in an hbase QA run as well:
https://builds.apache.org/job/PreCommit-HBASE-Build/15000/console

This was on ubuntu-2.
Looks like certain machines may have an environment issue.

FYI

On Wed, Aug 5, 2015 at 12:59 AM, Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:

> Dear All
>
> had seen following error (OOM) in HDFS-1148 and Hadoop-12302..jenkin
> machine have some problem..?
>
>
>
> Error Details
>
> unable to create new native thread
>
> Stack Trace
>
> java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
>
>
>
> Thanks & Regards
>
>  Brahma Reddy Battula
>
>
>
>


Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-07-03 Thread Ted Yu
Tsuyoshi:
I tried just now with the following:

tar --version
tar (GNU tar) 1.23

uname -a
Linux a.com 2.6.32-504.el6.x86_64 #1 SMP Wed Oct 15 04:27:16 UTC 2014
x86_64 x86_64 x86_64 GNU/Linux

I was able to expand the tarball.

Can you use another machine?

Cheers


On Fri, Jul 3, 2015 at 9:53 AM, Tsuyoshi Ozawa  wrote:

> Thank you for starting voting, Vinod.
> I tried to untar the tarball, but the command exited with an error. Is
> binary tarball broken?
>
> $ tar xzvf hadoop-2.7.1-RC0.tar.gz
> ...
>
> hadoop-2.7.1/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/hadoop-common-2.7.1.jar
>
> gzip: stdin: unexpected end of file
> tar: Unexpected EOF in archive
> tar: Unexpected EOF in archive
> tar: Error is not recoverable: exiting now
>
> $ tar --version
> tar (GNU tar) 1.27.1
>
> $ uname -a
> Linux ip-172-31-4-8 3.13.0-48-generic #80-Ubuntu SMP Thu Mar 12
> 11:16:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>
> Can anyone reproduce this problem? If this is my environment-depend
> problem, please ignore it.
>
> Thanks,
> - Tsuyoshi
>
> On Thu, Jul 2, 2015 at 11:01 PM, Masatake Iwasaki
>  wrote:
> > +1 (non-binding)
> >
> > + verified mds of source and binary tarball
> > + built from source tarball
> > + deployed binary tarball to 4 nodes cluster and run some
> > hadoop-mapreduce-examples jobs
> >
> > Thanks,
> > Masatake Iwasaki
> >
> >
> >
> > On 6/29/15 17:45, Vinod Kumar Vavilapalli wrote:
> >>
> >> Hi all,
> >>
> >> I've created a release candidate RC0 for Apache Hadoop 2.7.1.
> >>
> >> As discussed before, this is the next stable release to follow up 2.6.0,
> >> and the first stable one in the 2.7.x line.
> >>
> >> The RC is available for validation at:
> >> *http://people.apache.org/~vinodkv/hadoop-2.7.1-RC0/
> >> *
> >>
> >> The RC tag in git is: release-2.7.1-RC0
> >>
> >> The maven artifacts are available via repository.apache.org at
> >> *
> https://repository.apache.org/content/repositories/orgapachehadoop-1019/
> >>
> >> <
> https://repository.apache.org/content/repositories/orgapachehadoop-1019/>*
> >>
> >> Please try the release and vote; the vote will run for the usual 5 days.
> >>
> >> Thanks,
> >> Vinod
> >>
> >> PS: It took 2 months instead of the planned [1] 2 weeks in getting this
> >> release out: post-mortem in a separate thread.
> >>
> >> [1]: A 2.7.1 release to follow up 2.7.0
> >> http://markmail.org/thread/zwzze6cqqgwq4rmw
> >>
> >
>


Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-06-29 Thread Ted Yu
+1 (non-binding)

Compiled hbase branch-1 with Java 1.8.0_45
Ran unit test suite which passed.

On Mon, Jun 29, 2015 at 7:22 AM, Steve Loughran 
wrote:

>
> +1 binding from me.
>
> Tests:
>
> Rebuild slider with Hadoop.version=2.7.1; ran all the tests including
> against a secure cluster.
> Repeated for windows running Java 8.
>
> All tests passed
>
>
> > On 29 Jun 2015, at 09:45, Vinod Kumar Vavilapalli 
> wrote:
> >
> > Hi all,
> >
> > I've created a release candidate RC0 for Apache Hadoop 2.7.1.
> >
> > As discussed before, this is the next stable release to follow up 2.6.0,
> > and the first stable one in the 2.7.x line.
> >
> > The RC is available for validation at:
> > *http://people.apache.org/~vinodkv/hadoop-2.7.1-RC0/
> > *
> >
> > The RC tag in git is: release-2.7.1-RC0
> >
> > The maven artifacts are available via repository.apache.org at
> > *
> https://repository.apache.org/content/repositories/orgapachehadoop-1019/
> > <
> https://repository.apache.org/content/repositories/orgapachehadoop-1019/>*
> >
> > Please try the release and vote; the vote will run for the usual 5 days.
> >
> > Thanks,
> > Vinod
> >
> > PS: It took 2 months instead of the planned [1] 2 weeks in getting this
> > release out: post-mortem in a separate thread.
> >
> > [1]: A 2.7.1 release to follow up 2.7.0
> > http://markmail.org/thread/zwzze6cqqgwq4rmw
>
>


Re: JIRA admin question: Moving a bug from Hadoop to YARN

2015-05-27 Thread Ted Yu
When you click on the More button, you should see an action called Move.

You can move the existing JIRA.

FYI

On Wed, May 27, 2015 at 8:47 AM, Alan Burlison 
wrote:

> HADOOP-11952
> Native compilation on Solaris fails on Yarn due to use of FTS
>
> is actually a YARN bug, not a Hadoop one and should be moved under the
> top-level Solaris/YARN Jira:
>
> YARN-3719 Improve Solaris support in YARN
>
> Is that possible or do I have to close the current bug and open a fresh
> one against YARN, copying everything across manually?
>
> Thanks,
>
> --
> Alan Burlison
> --
>


Re: Remove me

2015-04-29 Thread Ted Yu
Have you used 'unsubscribe' as the subject of the email?

Cheers

On Wed, Apr 29, 2015 at 9:17 PM, Maity, Debashish <
debashish.ma...@softwareag.com> wrote:

> HI,
>
> I have tried so many times but never been unsubscribed.
> Please do the needful.
>
> Cheers,
> Deb
>
> -Original Message-
> From: Tsuyoshi Ozawa [mailto:oz...@apache.org]
> Sent: Thursday, April 30, 2015 8:07 AM
> To: common-dev@hadoop.apache.org
> Subject: Re: Remove me
>
> Hi Srinivas,
>
> Please send the removal request to
> common-dev-unsubscr...@hadoop.apache.org.
>
> Thanks,
> - Tsuyoshi
>
> On Thu, Apr 30, 2015 at 11:24 AM, Srinivas Reddy G 
> wrote:
> > Please remove me from this.
>


Re: subscribe the mailing list

2015-03-11 Thread Ted Yu
See http://hadoop.apache.org/mailing_lists.html

Cheers



> On Mar 11, 2015, at 10:29 PM, 张铎  wrote:
> 
> Thanks.


[jira] [Resolved] (HADOOP-10501) Server#getHandlers() accesses handlers without synchronization

2015-03-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-10501.
-
Resolution: Later

> Server#getHandlers() accesses handlers without synchronization
> --
>
> Key: HADOOP-10501
> URL: https://issues.apache.org/jira/browse/HADOOP-10501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>    Reporter: Ted Yu
>Priority: Minor
>
> {code}
>   Iterable getHandlers() {
> return Arrays.asList(handlers);
>   }
> {code}
> All the other methods accessing handlers are synchronized methods.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-10980) TestActiveStandbyElector fails occasionally in trunk

2015-03-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-10980.
-
Resolution: Cannot Reproduce

Hadoop-Common-trunk has been green for a while.

> TestActiveStandbyElector fails occasionally in trunk
> 
>
> Key: HADOOP-10980
> URL: https://issues.apache.org/jira/browse/HADOOP-10980
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0
>    Reporter: Ted Yu
>Priority: Minor
>
> From https://builds.apache.org/job/Hadoop-Common-trunk/1211/consoleFull :
> {code}
> Running org.apache.hadoop.ha.TestActiveStandbyElector
> Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.7 sec <<< 
> FAILURE! - in org.apache.hadoop.ha.TestActiveStandbyElector
> testWithoutZKServer(org.apache.hadoop.ha.TestActiveStandbyElector)  Time 
> elapsed: 0.051 sec  <<< FAILURE!
> java.lang.AssertionError: Did not throw zookeeper connection loss exceptions!
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.ha.TestActiveStandbyElector.testWithoutZKServer(TestActiveStandbyElector.java:722)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11190) Potentially stale value is used in SelfRenewingLease ctor

2015-02-28 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-11190.
-
Resolution: Later

> Potentially stale value is used in SelfRenewingLease ctor
> -
>
> Key: HADOOP-11190
> URL: https://issues.apache.org/jira/browse/HADOOP-11190
> Project: Hadoop Common
>  Issue Type: Bug
>    Reporter: Ted Yu
>Priority: Minor
>
> Here is the code w.r.t. threadNumber, around line 102:
> {code}
> renewer.setName("AzureLeaseRenewer-" + threadNumber++);
> {code}
> Since there is no synchronization involved, a potentially stale value may be
> read.
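>
> A common remedy, sketched here (assuming the field can migrate to
> java.util.concurrent.atomic), is an AtomicInteger, which makes the
> read-and-increment atomic and always visible:
> {code}
> private static final AtomicInteger threadNumber = new AtomicInteger(1);
>
> // in the constructor:
> renewer.setName("AzureLeaseRenewer-" + threadNumber.getAndIncrement());
> {code}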



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11191) NativeAzureFileSystem#close() should be synchronized

2015-02-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-11191.
-
Resolution: Later

> NativeAzureFileSystem#close() should be synchronized
> 
>
> Key: HADOOP-11191
> URL: https://issues.apache.org/jira/browse/HADOOP-11191
> Project: Hadoop Common
>  Issue Type: Bug
>    Reporter: Ted Yu
>Priority: Minor
>
> {code}
> public void close() throws IOException {
>   in.close();
>   closed = true;
> }
> {code}
> The other methods, such as seek(), are synchronized.
> close() should be as well.
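>
> A sketch of the suggested change, mirroring the synchronized seek():
> {code}
> public synchronized void close() throws IOException {
>   in.close();
>   closed = true;
> }
> {code}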



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11500) InputStream is left unclosed in ApplicationClassLoader

2015-01-21 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11500:
---

 Summary: InputStream is left unclosed in ApplicationClassLoader
 Key: HADOOP-11500
 URL: https://issues.apache.org/jira/browse/HADOOP-11500
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor


{code}
InputStream is = null;
try {
  is = ApplicationClassLoader.class.getClassLoader().
  getResourceAsStream(PROPERTIES_FILE);
{code}
The InputStream is not closed in the static block.
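
A sketch of one possible fix, using try-with-resources so the stream is closed
on every path ("props" is an assumed stand-in for whatever the static block
loads into):
{code}
try (InputStream is = ApplicationClassLoader.class.getClassLoader()
    .getResourceAsStream(PROPERTIES_FILE)) {
  if (is != null) {
    props.load(is);
  }
} catch (IOException e) {
  throw new ExceptionInInitializerError(e);
}
{code}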



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11499) Check of executorThreadsStarted in ValueQueue#submitRefillTask() evades lock acquisition

2015-01-21 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11499:
---

 Summary: Check of executorThreadsStarted in 
ValueQueue#submitRefillTask() evades lock acquisition
 Key: HADOOP-11499
 URL: https://issues.apache.org/jira/browse/HADOOP-11499
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
if (!executorThreadsStarted) {
  synchronized (this) {
// To ensure all requests are first queued, make coreThreads =
// maxThreads
// and pre-start all the Core Threads.
executor.prestartAllCoreThreads();
executorThreadsStarted = true;
  }
}
{code}
It is possible that two threads executing the above code both see
executorThreadsStarted as false, leading to
executor.prestartAllCoreThreads() being called twice.
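
A sketch of the standard double-checked-locking repair (the helper method name
is illustrative):
{code}
// declare the flag volatile and re-check it under the lock so that
// prestartAllCoreThreads() runs at most once
private volatile boolean executorThreadsStarted = false;

private void ensureExecutorThreadsStarted() {
  if (!executorThreadsStarted) {
    synchronized (this) {
      if (!executorThreadsStarted) {
        executor.prestartAllCoreThreads();
        executorThreadsStarted = true;
      }
    }
  }
}
{code}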



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11494) Lock acquisition on WrappedInputStream#unwrappedRpcBuffer may race with another thread

2015-01-20 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11494:
---

 Summary: Lock acquisition on WrappedInputStream#unwrappedRpcBuffer 
may race with another thread
 Key: HADOOP-11494
 URL: https://issues.apache.org/jira/browse/HADOOP-11494
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


In SaslRpcClient, starting at line 576:
{code}
public int read(byte[] buf, int off, int len) throws IOException {
  synchronized(unwrappedRpcBuffer) {
// fill the buffer with the next RPC message
if (unwrappedRpcBuffer.remaining() == 0) {
  readNextRpcPacket();
}
{code}
readNextRpcPacket() may assign another ByteBuffer to unwrappedRpcBuffer, making
the lock on the previous ByteBuffer ineffective.
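
One conventional remedy, sketched here (not necessarily the eventual patch), is
to synchronize on a dedicated final monitor instead of the replaceable buffer
reference:
{code}
private final Object readLock = new Object();

public int read(byte[] buf, int off, int len) throws IOException {
  synchronized (readLock) {
    if (unwrappedRpcBuffer.remaining() == 0) {
      readNextRpcPacket();  // may install a brand-new ByteBuffer
    }
    int n = Math.min(len, unwrappedRpcBuffer.remaining());
    unwrappedRpcBuffer.get(buf, off, n);
    return n;
  }
}
{code}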



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11480) Typo in hadoop-aws/index.md uses wrong scheme for test.fs.s3.name

2015-01-14 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11480:
---

 Summary: Typo in hadoop-aws/index.md uses wrong scheme for 
test.fs.s3.name
 Key: HADOOP-11480
 URL: https://issues.apache.org/jira/browse/HADOOP-11480
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor


Around line 270:
{code}
<property>
  <name>test.fs.s3.name</name>
  <value>s3a://test-aws-s3/</value>
</property>
{code}
The scheme should be s3.
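
With the typo fixed, the snippet would read:
{code}
<property>
  <name>test.fs.s3.name</name>
  <value>s3://test-aws-s3/</value>
</property>
{code}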



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: InterfaceStability and InterfaceAudience stability

2015-01-13 Thread Ted Yu
+1

On Tue, Jan 13, 2015 at 1:47 PM, Abraham Elmahrek  wrote:

> Hey guys,
>
> I've noticed the InterfaceStability (
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceStability.java
> )
> and InterfaceAudience (
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceAudience.java
> )
> classes are marked as "Evolving". These really haven't changed much in the
> last few years, so I was wondering if it is reasonable to mark them as
> stable?
>
> -Abe
>


[jira] [Created] (HADOOP-11475) Utilize try-with-resource to close StopWatch

2015-01-13 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11475:
---

 Summary: Utilize try-with-resource to close StopWatch
 Key: HADOOP-11475
 URL: https://issues.apache.org/jira/browse/HADOOP-11475
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu
Priority: Minor


Currently the stop() method of StopWatch is called without using a finally clause.
This can result in a resource leak if an IOE is thrown.
Here is one example from Journal#journal():
{code}
StopWatch sw = new StopWatch();
sw.start();
curSegment.flush(shouldFsync);
sw.stop();
{code}
If curSegment.flush() throws an IOE, sw would be left unclosed.

Propose using a try-with-resources structure to close the StopWatch.
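
A sketch of the proposed structure, assuming StopWatch implements Closeable
with close() delegating to stop():
{code}
try (StopWatch sw = new StopWatch().start()) {
  curSegment.flush(shouldFsync);  // stop() now runs even if flush() throws
}
{code}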



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11463) Replace method-local TransferManager object with S3AFileSystem#transfers

2015-01-06 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11463:
---

 Summary: Replace method-local TransferManager object with 
S3AFileSystem#transfers
 Key: HADOOP-11463
 URL: https://issues.apache.org/jira/browse/HADOOP-11463
 Project: Hadoop Common
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu


This is continuation of HADOOP-11446.
The following changes are made:

1. Replace method-local TransferManager object with S3AFileSystem#transfers
2. Do not shutdown TransferManager after purging existing multipart file
3. Shutdown TransferManager instance in the close method of S3AFileSystem
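
A sketch of the intended shape (the close() details here are assumptions, not
the committed patch):
{code}
private TransferManager transfers;  // created once during initialize()

@Override
public void close() throws IOException {
  try {
    super.close();
  } finally {
    if (transfers != null) {
      transfers.shutdownNow(false);  // false: keep the shared AmazonS3 client open
      transfers = null;
    }
  }
}
{code}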



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11454) Potential null dereference in Configuration#loadProperty()

2014-12-30 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11454:
---

 Summary: Potential null dereference in Configuration#loadProperty()
 Key: HADOOP-11454
 URL: https://issues.apache.org/jira/browse/HADOOP-11454
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


Here is the related code, around line 2581 of Configuration#loadProperty():
{code}
    properties.setProperty(attr, value);
    updatingResource.put(attr, source);
  } else if (!value.equals(properties.getProperty(attr))) {
{code}
The null check in the enclosing if statement is accompanied by
allowNullValueProperties, so dereferencing value above may result in an NPE.
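
One possible guard, sketched with java.util.Objects (Java 7+) so a null value
is never dereferenced:
{code}
} else if (!Objects.equals(value, properties.getProperty(attr))) {
{code}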



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11446) S3AOutputStream should use shared thread pool to avoid OutOfMemoryError

2014-12-23 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11446:
---

 Summary: S3AOutputStream should use shared thread pool to avoid 
OutOfMemoryError
 Key: HADOOP-11446
 URL: https://issues.apache.org/jira/browse/HADOOP-11446
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu


Here is part of the output, including the OOME, when an hbase snapshot is
exported to s3a (the nofile ulimit was increased to 102400):
{code}
2014-12-19 13:15:03,895 INFO  [main] s3a.S3AFileSystem: OutputStream for key 
'FastQueryPOC/2014-12-11/EVENT1-IDX-snapshot/.hbase-snapshot/.tmp/EVENT1_IDX_snapshot_2012_12_11/
650a5678810fbdaa91809668d11ccf09/.regioninfo' closed. Now beginning upload
2014-12-19 13:15:03,895 INFO  [main] s3a.S3AFileSystem: Minimum upload part 
size: 16777216 threshold2147483647
Exception in thread "main" java.lang.OutOfMemoryError: unable to create new 
native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1360)
at 
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
at 
com.amazonaws.services.s3.transfer.internal.UploadMonitor.(UploadMonitor.java:129)
at 
com.amazonaws.services.s3.transfer.TransferManager.upload(TransferManager.java:449)
at 
com.amazonaws.services.s3.transfer.TransferManager.upload(TransferManager.java:382)
at 
org.apache.hadoop.fs.s3a.S3AOutputStream.close(S3AOutputStream.java:127)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:54)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:356)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:356)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338)
at 
org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:791)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:882)
at 
org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:886)
{code}
In S3AOutputStream#close():
{code}
  TransferManager transfers = new TransferManager(client);
{code}
This results in each TransferManager creating its own thread pool, leading to 
the OOME.
One solution is to pass a shared thread pool to TransferManager.
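
A sketch of that direction ("maxThreads" is an assumed configuration knob;
"client" is the existing AmazonS3 client):
{code}
ExecutorService threadPool = Executors.newFixedThreadPool(maxThreads);
TransferManager transfers = new TransferManager(client, threadPool);
{code}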



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11414) Close of Reader should be enclosed in finally block in FileBasedIPList#readLines()

2014-12-16 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11414:
---

 Summary: Close of Reader should be enclosed in finally block in 
FileBasedIPList#readLines()
 Key: HADOOP-11414
 URL: https://issues.apache.org/jira/browse/HADOOP-11414
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
  Reader fileReader = new InputStreamReader(
  new FileInputStream(file), Charsets.UTF_8);
  BufferedReader bufferedReader = new BufferedReader(fileReader);
  List<String> lines = new ArrayList<String>();
  String line = null;
  while ((line = bufferedReader.readLine()) != null) {
lines.add(line);
  }
  bufferedReader.close();
{code}
Since bufferedReader.readLine() may throw an IOE, the close of bufferedReader
should be enclosed within a finally block.
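
A sketch of the fix with try-with-resources, which is equivalent to closing in
a finally block:
{code}
List<String> lines = new ArrayList<String>();
try (BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(
    new FileInputStream(file), Charsets.UTF_8))) {
  String line;
  while ((line = bufferedReader.readLine()) != null) {
    lines.add(line);
  }
}
{code}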



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Switching to Java 7

2014-12-08 Thread Ted Yu
Looks like there was still an OutOfMemoryError:

https://builds.apache.org/job/Hadoop-Hdfs-trunk/1964/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameDirAcrossSnapshottableDirs/

FYI

On Mon, Dec 8, 2014 at 2:42 AM, Steve Loughran 
wrote:

> yes, bumped them up to
>
> export MAVEN_OPTS="-Xmx3072m -XX:MaxPermSize=768m"
> export ANT_OPTS=$MAVEN_OPTS
>
> also extended test runs times.
>
>
>
> On 8 December 2014 at 00:58, Ted Yu  wrote:
>
> > Looking at the test failures of
> > https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/ which uses jdk
> 1.7:
> >
> > e.g.
> >
> >
> https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameFileAndDeleteSnapshot/
> >
> > java.lang.OutOfMemoryError: Java heap space
> > at
> sun.nio.ch.EPollArrayWrapper.(EPollArrayWrapper.java:120)
> > at sun.nio.ch.EPollSelectorImpl.(EPollSelectorImpl.java:68)
> > at
> >
> sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
> > at
> > io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)
> > at
> io.netty.channel.nio.NioEventLoop.(NioEventLoop.java:120)
> > at
> >
> io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
> > at
> >
> io.netty.util.concurrent.MultithreadEventExecutorGroup.(MultithreadEventExecutorGroup.java:64)
> >
> >
> > Should more heap be given to the tests?
> >
> >
> > Cheers
> >
> >
> > On Sun, Dec 7, 2014 at 2:09 PM, Steve Loughran 
> > wrote:
> >
> > > The latest migration status:
> > >
> > >   if the jenkins builds are happy then the patch will go in -I do that
> > > monday morning 10:00 UTC
> > >
> > > https://builds.apache.org/view/H-L/view/Hadoop/
> > >
> > > Getting jenkins to work has been "surprisingly difficult"...it turns
> out
> > > that those builds which we thought were java7 or java8 weren't, as
> > setting
> > >   export JAVA_HOME=${TOOLS_HOME}/java/latest
> > >
> > > meant that they picked up a java 6 machine
> > >
> > > Now the trunk precommit/postcommit and scheduled branches should have
> > > export JAVA_HOME=${TOOLS_HOME}/java/jdk1.7.0_55
> > >
> > > the Java 8 builds have more changes
> > >
> > > export JAVA_HOME=${TOOLS_HOME}/java/jdk1.8.0
> > > export MAVEN_OPTS="-Xmx3072m -XX:MaxPermSize=768m"
> > > and  -Dmaven.javadoc.skip=true  on the mvn builds
> > >
> > > without these javadocs fails and test runs OOM.
> > >
> > > We need to have something resembling the nightly build env setup again,
> > > git/Svn stored file with something for java8 alongside the normal env
> > vars.
> > >
> > >
>


Re: Switching to Java 7

2014-12-07 Thread Ted Yu
Looking at the test failures of
https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/ which uses jdk 1.7:

e.g.
https://builds.apache.org/job/Hadoop-Hdfs-trunk/1963/testReport/junit/org.apache.hadoop.hdfs.server.namenode.snapshot/TestRenameWithSnapshots/testRenameFileAndDeleteSnapshot/

java.lang.OutOfMemoryError: Java heap space
at sun.nio.ch.EPollArrayWrapper.(EPollArrayWrapper.java:120)
at sun.nio.ch.EPollSelectorImpl.(EPollSelectorImpl.java:68)
at 
sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
at io.netty.channel.nio.NioEventLoop.openSelector(NioEventLoop.java:126)
at io.netty.channel.nio.NioEventLoop.(NioEventLoop.java:120)
at 
io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:87)
at 
io.netty.util.concurrent.MultithreadEventExecutorGroup.(MultithreadEventExecutorGroup.java:64)


Should more heap be given to the tests?


Cheers


On Sun, Dec 7, 2014 at 2:09 PM, Steve Loughran 
wrote:

> The latest migration status:
>
>   if the jenkins builds are happy then the patch will go in -I do that
> monday morning 10:00 UTC
>
> https://builds.apache.org/view/H-L/view/Hadoop/
>
> Getting jenkins to work has been "surprisingly difficult"...it turns out
> that those builds which we thought were java7 or java8 weren't, as setting
>   export JAVA_HOME=${TOOLS_HOME}/java/latest
>
> meant that they picked up a java 6 machine
>
> Now the trunk precommit/postcommit and scheduled branches should have
> export JAVA_HOME=${TOOLS_HOME}/java/jdk1.7.0_55
>
> the Java 8 builds have more changes
>
> export JAVA_HOME=${TOOLS_HOME}/java/jdk1.8.0
> export MAVEN_OPTS="-Xmx3072m -XX:MaxPermSize=768m"
> and  -Dmaven.javadoc.skip=true  on the mvn builds
>
> without these javadocs fails and test runs OOM.
>
> We need to have something resembling the nightly build env setup again,
> git/Svn stored file with something for java8 alongside the normal env vars.
>
> --
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity to
> which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
> and delete it from your system. Thank You.
>


Re: a friendly suggestion for developers when uploading patches

2014-11-26 Thread Ted Yu
bq. in the format of 001

It is hard to anticipate how many revisions a patch would go through. So
the leading zero's in the rev number should be optional.

Cheers

On Sat, Nov 22, 2014 at 11:11 AM, Yongjun Zhang  wrote:

> Hi Steve,
>
> Thanks for the good suggestion.
>
> I like the idea to have even a more specific guideline for patch file
> naming, and I agree using 3-digit is a good choice here:
>
> <*projectName*>-<*jiraNum*>-<*revNum*>.patch
>
> where revNum is 3-digit, in the format of 001, 002, ..., 010, 011, ...
>
> Thanks.
>
> --Yongjun
>
> On Sat, Nov 22, 2014 at 10:24 AM, Steve Loughran 
> wrote:
>
> > can we do HADOOP--001.patch
> >
> > with the 001 being the revision.
> >
> > -That numbering scheme guarantees listing order in directories &c
> > -having .patch come after ensures that those people who have .patch bound
> > in their browser to a text editor (e.g. textmate) can view the patch with
> > ease
> >
> > I know having a 3 digit number is pessimistic -I've never got past 70+,
> but
> > you never know
> >
> > For anyone doing patches off their own repo, I'd recommend tagging the
> > commit with the same revision number —but that 's just a personal choice
> >
> >
> >
> > On 21 November 2014 at 19:10, Ted Yu  wrote:
> >
> > > bq. include a revision number in the patch file name
> > >
> > > +1
> > >
> > > On Fri, Nov 21, 2014 at 11:06 AM, Yongjun Zhang 
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > When I look at patches uploaded to jiras, from time to time I notice
> > that
> > > > different revisions of the patch is uploaded with the same patch file
> > > name,
> > > > some time for quite a few times. It's confusing which is which.
> > > >
> > > > I'd suggest that as a guideline, we do the following when uploading a
> > > > patch:
> > > >
> > > >- include a revision number in the patch file name.A
> > > >- include a comment, stating that a new patch is uploaded,
> including
> > > the
> > > >revision number of the patch in the comment.
> > > >
> > > > This way, it's easier to refer to a specific version of a patch, and
> to
> > > > know which patch a comment is made about.
> > > >
> > > > Hope that makes sense to you.
> > > >
> > > > Thanks.
> > > >
> > > > --Yongjun
> > > >
> > >
> >
> >
>


Re: a friendly suggestion for developers when uploading patches

2014-11-22 Thread Ted Yu
For patch viewing, David Deng has a Chrome extension for rendering patches on
JIRA.
See this thread:
http://search-hadoop.com/m/DHED4LHEYI

FYI

On Sat, Nov 22, 2014 at 10:24 AM, Steve Loughran 
wrote:

> can we do HADOOP--001.patch
>
> with the 001 being the revision.
>
> -That numbering scheme guarantees listing order in directories &c
> -having .patch come after ensures that those people who have .patch bound
> in their browser to a text editor (e.g. textmate) can view the patch with
> ease
>
> I know having a 3 digit number is pessimistic -I've never got past 70+, but
> you never know
>
> For anyone doing patches off their own repo, I'd recommend tagging the
> commit with the same revision number —but that 's just a personal choice
>
>
>
> On 21 November 2014 at 19:10, Ted Yu  wrote:
>
> > bq. include a revision number in the patch file name
> >
> > +1
> >
> > On Fri, Nov 21, 2014 at 11:06 AM, Yongjun Zhang 
> > wrote:
> >
> > > Hi,
> > >
> > > When I look at patches uploaded to jiras, from time to time I notice
> that
> > > different revisions of the patch is uploaded with the same patch file
> > name,
> > > some time for quite a few times. It's confusing which is which.
> > >
> > > I'd suggest that as a guideline, we do the following when uploading a
> > > patch:
> > >
> > >- include a revision number in the patch file name.A
> > >- include a comment, stating that a new patch is uploaded, including
> > the
> > >revision number of the patch in the comment.
> > >
> > > This way, it's easier to refer to a specific version of a patch, and to
> > > know which patch a comment is made about.
> > >
> > > Hope that makes sense to you.
> > >
> > > Thanks.
> > >
> > > --Yongjun
> > >
> >
>
>


Re: a friendly suggestion for developers when uploading patches

2014-11-21 Thread Ted Yu
bq. include a revision number in the patch file name

+1

On Fri, Nov 21, 2014 at 11:06 AM, Yongjun Zhang  wrote:

> Hi,
>
> When I look at patches uploaded to jiras, from time to time I notice that
> different revisions of the patch is uploaded with the same patch file name,
> some time for quite a few times. It's confusing which is which.
>
> I'd suggest that as a guideline, we do the following when uploading a
> patch:
>
>- include a revision number in the patch file name.A
>- include a comment, stating that a new patch is uploaded, including the
>revision number of the patch in the comment.
>
> This way, it's easier to refer to a specific version of a patch, and to
> know which patch a comment is made about.
>
> Hope that makes sense to you.
>
> Thanks.
>
> --Yongjun
>


Re: submitting a hadoop patch doesn't trigger jenkins test run

2014-11-14 Thread Ted Yu
Adding builds@apache

On Fri, Nov 14, 2014 at 1:34 PM, Yongjun Zhang  wrote:

> Hi,
>
> One issue to report here, any help would be greatly appreciated!
>
> I noticed that multiple patch submissions to
> https://issues.apache.org/jira/browse/HADOOP-11293
> did not trigger jenkins test run.
>
> Thanks Chris Nauroth for the help to trigger one manually for me:
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/5079/
>
> and it turned out the manually triggered one did not report result back to
> the jira upon finishing.
>
> I submitted the same patch to another jira (HADOOP-11045) that I was
> working on, which triggered jenkins test before, and it seems not
> triggering this time too.
>
> So it looks like something is broken. This might be just a HADOOP jira
> issue (HDFS jira seems to be fine).
>
> Thanks a lot for looking into.
>
> --Yongjun
>


Re: Hadoop maven packaging does not work on JAVA 1.8?

2014-11-10 Thread Ted Yu
Have created Jenkins jobs for the common, hdfs and mapreduce components
against Java 8.

FYI

On Mon, Nov 10, 2014 at 4:24 PM, Ted Yu  wrote:

> Created Hadoop-Yarn-trunk-Java8 and triggered a build.
>
> Can create Jenkins builds for other projects later.
>
> Cheers
>
> On Mon, Nov 10, 2014 at 1:26 PM, Andrew Wang 
> wrote:
>
>> Good idea, we should probably have such a build anyway.
>>
>> Thanks,
>> Andrew
>>
>> On Mon, Nov 10, 2014 at 1:24 PM, Ted Yu  wrote:
>>
>> > Should there be a Jenkins job building trunk branch against Java 1.8
>> after
>> > the fix goes in ?
>> >
>> > That way we can easily see any regression.
>> >
>> > Cheers
>> >
>> > On Mon, Nov 10, 2014 at 12:54 PM, Chen He  wrote:
>> >
>> > > Invite Andrew Purtell to HADOOP-11292, My fix is just disable the
>> > "doclint"
>> > > in hadoop project. Then, we can still keep current docs without
>> change.
>> > >
>> > > On Mon, Nov 10, 2014 at 12:51 PM, Andrew Wang <
>> andrew.w...@cloudera.com>
>> > > wrote:
>> > >
>> > > > I think Andrew Purtell had some patches to clean up javadoc errors
>> for
>> > > > JDK8, might be worth asking him before diving in yourself.
>> > > >
>> > > > On Mon, Nov 10, 2014 at 12:04 PM, Chen He 
>> wrote:
>> > > >
>> > > > > Thanks, Ted Yu. I will create a JIRA for it. I find a way to fix
>> it.
>> > > > >
>> > > > > On Mon, Nov 10, 2014 at 11:50 AM, Ted Yu 
>> > wrote:
>> > > > >
>> > > > > > I can reproduce this.
>> > > > > >
>> > > > > > Tried what was suggested here:
>> > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>> http://stackoverflow.com/questions/15886209/maven-is-not-working-in-java-8-when-javadoc-tags-are-incomplete
>> > > > > >
>> > > > > > Though it doesn't seem to work.
>> > > > > >
>> > > > > > On Mon, Nov 10, 2014 at 11:32 AM, Chen He 
>> > wrote:
>> > > > > >
>> > > > > > > "mvn package -Pdist -Dtar -DskipTests" reports following error
>> > > based
>> > > > on
>> > > > > > > latest trunk:
>> > > > > > >
>> > > > > > > [INFO] BUILD FAILURE
>> > > > > > >
>> > > > > > > [INFO]
>> > > > > > >
>> > > > >
>> > >
>> 
>> > > > > > >
>> > > > > > > [INFO] Total time: 11.010 s
>> > > > > > >
>> > > > > > > [INFO] Finished at: 2014-11-10T11:23:49-08:00
>> > > > > > >
>> > > > > > > [INFO] Final Memory: 51M/555M
>> > > > > > >
>> > > > > > > [INFO]
>> > > > > > >
>> > > > >
>> > >
>> 
>> > > > > > >
>> > > > > > > [ERROR] Failed to execute goal
>> > > > > > > org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar
>> > > > > (module-javadocs)
>> > > > > > > on project hadoop-maven-plugins: MavenReportException: Error
>> > while
>> > > > > > creating
>> > > > > > > archive:
>> > > > > > >
>> > > > > > > [ERROR] Exit code: 1 -
>> > > > > > >
>> > > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>> ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
>> > > > > > > error: unknown tag: String
>> > > > > > >
>> > > > > > > [ERROR] * @param command List containing command and
>> all
>> > > > > > arguments
>> > > > > > >
>> > > > > > > [ERROR] ^
>> > > > > > >
>> > > > > > > [ERROR]

Re: Hadoop maven packaging does not work on JAVA 1.8?

2014-11-10 Thread Ted Yu
Created Hadoop-Yarn-trunk-Java8 and triggered a build.

Can create Jenkins builds for other projects later.

Cheers

On Mon, Nov 10, 2014 at 1:26 PM, Andrew Wang 
wrote:

> Good idea, we should probably have such a build anyway.
>
> Thanks,
> Andrew
>
> On Mon, Nov 10, 2014 at 1:24 PM, Ted Yu  wrote:
>
> > Should there be a Jenkins job building trunk branch against Java 1.8
> after
> > the fix goes in ?
> >
> > That way we can easily see any regression.
> >
> > Cheers
> >
> > On Mon, Nov 10, 2014 at 12:54 PM, Chen He  wrote:
> >
> > > Invite Andrew Purtell to HADOOP-11292, My fix is just disable the
> > "doclint"
> > > in hadoop project. Then, we can still keep current docs without change.
> > >
> > > On Mon, Nov 10, 2014 at 12:51 PM, Andrew Wang <
> andrew.w...@cloudera.com>
> > > wrote:
> > >
> > > > I think Andrew Purtell had some patches to clean up javadoc errors
> for
> > > > JDK8, might be worth asking him before diving in yourself.
> > > >
> > > > On Mon, Nov 10, 2014 at 12:04 PM, Chen He  wrote:
> > > >
> > > > > Thanks, Ted Yu. I will create a JIRA for it. I find a way to fix
> it.
> > > > >
> > > > > On Mon, Nov 10, 2014 at 11:50 AM, Ted Yu 
> > wrote:
> > > > >
> > > > > > I can reproduce this.
> > > > > >
> > > > > > Tried what was suggested here:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> http://stackoverflow.com/questions/15886209/maven-is-not-working-in-java-8-when-javadoc-tags-are-incomplete
> > > > > >
> > > > > > Though it doesn't seem to work.
> > > > > >
> > > > > > On Mon, Nov 10, 2014 at 11:32 AM, Chen He 
> > wrote:
> > > > > >
> > > > > > > "mvn package -Pdist -Dtar -DskipTests" reports following error
> > > based
> > > > on
> > > > > > > latest trunk:
> > > > > > >
> > > > > > > [INFO] BUILD FAILURE
> > > > > > >
> > > > > > > [INFO]
> > > > > > >
> > > > >
> > >
> 
> > > > > > >
> > > > > > > [INFO] Total time: 11.010 s
> > > > > > >
> > > > > > > [INFO] Finished at: 2014-11-10T11:23:49-08:00
> > > > > > >
> > > > > > > [INFO] Final Memory: 51M/555M
> > > > > > >
> > > > > > > [INFO]
> > > > > > >
> > > > >
> > >
> 
> > > > > > >
> > > > > > > [ERROR] Failed to execute goal
> > > > > > > org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar
> > > > > (module-javadocs)
> > > > > > > on project hadoop-maven-plugins: MavenReportException: Error
> > while
> > > > > > creating
> > > > > > > archive:
> > > > > > >
> > > > > > > [ERROR] Exit code: 1 -
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
> > > > > > > error: unknown tag: String
> > > > > > >
> > > > > > > [ERROR] * @param command List<String> containing command and
> all
> > > > > > arguments
> > > > > > >
> > > > > > > [ERROR] ^
> > > > > > >
> > > > > > > [ERROR]
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> ./develop/hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:46:
> > > > > > > error: unknown tag: String
> > > > > > >
> > > > > > > [ERROR] * @param output List<String> in/out parameter to
> receive
> > > > > command
> > > > > > > output
> > > > > > >
> > > > > > > [

Re: Hadoop maven packaging does not work on JAVA 1.8?

2014-11-10 Thread Ted Yu
Should there be a Jenkins job building the trunk branch against Java 1.8 after
the fix goes in ?

That way we can easily see any regression.

Cheers

On Mon, Nov 10, 2014 at 12:54 PM, Chen He  wrote:

> Invite Andrew Purtell to HADOOP-11292, My fix is just disable the "doclint"
> in hadoop project. Then, we can still keep current docs without change.
>
> On Mon, Nov 10, 2014 at 12:51 PM, Andrew Wang 
> wrote:
>
> > I think Andrew Purtell had some patches to clean up javadoc errors for
> > JDK8, might be worth asking him before diving in yourself.
> >
> > On Mon, Nov 10, 2014 at 12:04 PM, Chen He  wrote:
> >
> > > Thanks, Ted Yu. I will create a JIRA for it. I find a way to fix it.
> > >
> > > On Mon, Nov 10, 2014 at 11:50 AM, Ted Yu  wrote:
> > >
> > > > I can reproduce this.
> > > >
> > > > Tried what was suggested here:
> > > >
> > > >
> > >
> >
> http://stackoverflow.com/questions/15886209/maven-is-not-working-in-java-8-when-javadoc-tags-are-incomplete
> > > >
> > > > Though it doesn't seem to work.
> > > >
> > > > On Mon, Nov 10, 2014 at 11:32 AM, Chen He  wrote:
> > > >
> > > > > "mvn package -Pdist -Dtar -DskipTests" reports following error
> based
> > on
> > > > > latest trunk:
> > > > >
> > > > > [INFO] BUILD FAILURE
> > > > >
> > > > > [INFO]
> > > > >
> > >
> 
> > > > >
> > > > > [INFO] Total time: 11.010 s
> > > > >
> > > > > [INFO] Finished at: 2014-11-10T11:23:49-08:00
> > > > >
> > > > > [INFO] Final Memory: 51M/555M
> > > > >
> > > > > [INFO]
> > > > >
> > >
> 
> > > > >
> > > > > [ERROR] Failed to execute goal
> > > > > org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar
> > > (module-javadocs)
> > > > > on project hadoop-maven-plugins: MavenReportException: Error while
> > > > creating
> > > > > archive:
> > > > >
> > > > > [ERROR] Exit code: 1 -
> > > > >
> > > > >
> > > >
> > >
> >
> ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
> > > > > error: unknown tag: String
> > > > >
> > > > > [ERROR] * @param command List<String> containing command and all
> > > > arguments
> > > > >
> > > > > [ERROR] ^
> > > > >
> > > > > [ERROR]
> > > > >
> > > > >
> > > >
> > >
> >
> ./develop/hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:46:
> > > > > error: unknown tag: String
> > > > >
> > > > > [ERROR] * @param output List<String> in/out parameter to receive
> > > command
> > > > > output
> > > > >
> > > > > [ERROR] ^
> > > > >
> > > > > [ERROR]
> > > > >
> > > > >
> > > >
> > >
> >
> ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/FileSetUtils.java:50:
> > > > > error: unknown tag: File
> > > > >
> > > > > [ERROR] * @return List<File> containing every element of the
> FileSet
> > > as a
> > > > > File
> > > > >
> > > > > [ERROR] ^
> > > > >
> > > > > [ERROR]
> > > > >
> > > > > [ERROR] Command line was:
> > > > >
> > > >
> > >
> >
> /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/javadoc
> > > > > -J-Dhttp.proxySet=true -J-Dhttp.proxyHost=www-proxy.us.oracle.com
> > > > > -J-Dhttp.proxyPort=80 @options @packages
> > > > >
> > > > > [ERROR]
> > > > >
> > > > > [ERROR] Refer to the generated Javadoc files in
> > > > > './hadoop/hadoop/hadoop-maven-plugins/target' dir.
> > > > >
> > > > > [ERROR] -> [Help 1]
> > > > >
> > > > > [ERROR]
> > > > >
> > > > > [ERROR] To see the full stack trace of the errors, re-run Maven
> with
> > > the
> > > > -e
> > > > > switch.
> > > > >
> > > > > [ERROR] Re-run Maven using the -X switch to enable full debug
> > logging.
> > > > >
> > > > > [ERROR]
> > > > >
> > > > > [ERROR] For more information about the errors and possible
> solutions,
> > > > > please read the following articles:
> > > > >
> > > > > [ERROR] [Help 1]
> > > > >
> > >
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> > > > >
> > > > > [ERROR]
> > > > >
> > > > > [ERROR] After correcting the problems, you can resume the build
> with
> > > the
> > > > > command
> > > > >
> > > > > [ERROR]   mvn  -rf :hadoop-maven-plugins
> > > > >
> > > >
> > >
> >
>
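
For reference, the "error: unknown tag: String" messages above come from raw generics such as List<String> in javadoc text: JDK 8's doclint parses <String> as if it were an HTML tag. A minimal sketch of the javadoc-side fix, wrapping the generic type in {@code} (illustrative only; the method signature is assumed, and as noted in this thread HADOOP-11292 instead took the route of disabling doclint):

{code}
/**
 * Runs the given command.
 *
 * @param command {@code List<String>} containing command and all arguments
 * @param output {@code List<String>} in/out parameter to receive command output
 * @return the exit code of the command
 */
public int run(List<String> command, List<String> output) {
  // doclint now sees {@code ...} instead of a stray <String> HTML tag
  return 0;  // placeholder exit code for the sketch
}
{code}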


Re: Hadoop maven packaging does not work on JAVA 1.8?

2014-11-10 Thread Ted Yu
I can reproduce this.

Tried what was suggested here:
http://stackoverflow.com/questions/15886209/maven-is-not-working-in-java-8-when-javadoc-tags-are-incomplete

Though it doesn't seem to work.

On Mon, Nov 10, 2014 at 11:32 AM, Chen He  wrote:

> "mvn package -Pdist -Dtar -DskipTests" reports following error based on
> latest trunk:
>
> [INFO] BUILD FAILURE
>
> [INFO]
> 
>
> [INFO] Total time: 11.010 s
>
> [INFO] Finished at: 2014-11-10T11:23:49-08:00
>
> [INFO] Final Memory: 51M/555M
>
> [INFO]
> 
>
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs)
> on project hadoop-maven-plugins: MavenReportException: Error while creating
> archive:
>
> [ERROR] Exit code: 1 -
>
> ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
> error: unknown tag: String
>
> [ERROR] * @param command List<String> containing command and all arguments
>
> [ERROR] ^
>
> [ERROR]
>
> ./develop/hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:46:
> error: unknown tag: String
>
> [ERROR] * @param output List<String> in/out parameter to receive command
> output
>
> [ERROR] ^
>
> [ERROR]
>
> ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/FileSetUtils.java:50:
> error: unknown tag: File
>
> [ERROR] * @return List<File> containing every element of the FileSet as a
> File
>
> [ERROR] ^
>
> [ERROR]
>
> [ERROR] Command line was:
> /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/javadoc
> -J-Dhttp.proxySet=true -J-Dhttp.proxyHost=www-proxy.us.oracle.com
> -J-Dhttp.proxyPort=80 @options @packages
>
> [ERROR]
>
> [ERROR] Refer to the generated Javadoc files in
> './hadoop/hadoop/hadoop-maven-plugins/target' dir.
>
> [ERROR] -> [Help 1]
>
> [ERROR]
>
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
> switch.
>
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>
> [ERROR]
>
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
>
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
>
> [ERROR]
>
> [ERROR] After correcting the problems, you can resume the build with the
> command
>
> [ERROR]   mvn  -rf :hadoop-maven-plugins
>


[jira] [Created] (HADOOP-11283) Potentially unclosed SequenceFile.Writer in DistCpV1#setup()

2014-11-07 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11283:
---

 Summary: Potentially unclosed SequenceFile.Writer in 
DistCpV1#setup()
 Key: HADOOP-11283
 URL: https://issues.apache.org/jira/browse/HADOOP-11283
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
SequenceFile.Writer src_writer = SequenceFile.createWriter(jobfs, jobConf,
srcfilelist, LongWritable.class, FilePair.class,
SequenceFile.CompressionType.NONE);

Path dstfilelist = new Path(jobDirectory, "_distcp_dst_files");
SequenceFile.Writer dst_writer = SequenceFile.createWriter(jobfs, jobConf,
dstfilelist, Text.class, Text.class,
SequenceFile.CompressionType.NONE);
{code}
If creation of dst_writer throws an exception, src_writer would be left
unclosed, since there is no finally clause closing it in the above code.
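
A minimal sketch of one possible fix (illustrative only, not a committed patch; LOG is assumed to be the class logger) is to create both writers inside a try block and close them in a finally clause:

{code}
SequenceFile.Writer src_writer = null;
SequenceFile.Writer dst_writer = null;
try {
  src_writer = SequenceFile.createWriter(jobfs, jobConf, srcfilelist,
      LongWritable.class, FilePair.class, SequenceFile.CompressionType.NONE);
  Path dstfilelist = new Path(jobDirectory, "_distcp_dst_files");
  dst_writer = SequenceFile.createWriter(jobfs, jobConf, dstfilelist,
      Text.class, Text.class, SequenceFile.CompressionType.NONE);
  // ... populate both file lists ...
} finally {
  // cleanup() closes each non-null Closeable and logs close() failures
  // instead of masking an in-flight exception
  IOUtils.cleanup(LOG, src_writer, dst_writer);
}
{code}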



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11275) TestSSLFactory fails on Java 8

2014-11-05 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu resolved HADOOP-11275.
-
Resolution: Not a Problem

Refreshed workspace and this test passes.

> TestSSLFactory fails on Java 8
> --
>
> Key: HADOOP-11275
> URL: https://issues.apache.org/jira/browse/HADOOP-11275
> Project: Hadoop Common
>  Issue Type: Test
>    Reporter: Ted Yu
>Priority: Minor
>
> Below are a few of the exceptions I got running this test against Java 8:
> {code}
> Running org.apache.hadoop.security.ssl.TestSSLFactory
> Tests run: 15, Failures: 0, Errors: 14, Skipped: 0, Time elapsed: 1.724 sec 
> <<< FAILURE! - in org.apache.hadoop.security.ssl.TestSSLFactory
> testNoClientCertsInitialization(org.apache.hadoop.security.ssl.TestSSLFactory)
>   Time elapsed: 0.177 sec  <<< ERROR!
> java.security.cert.CertificateException: Subject class type invalid.
> at sun.security.x509.X509CertInfo.setSubject(X509CertInfo.java:888)
> at sun.security.x509.X509CertInfo.set(X509CertInfo.java:415)
> at 
> org.apache.hadoop.security.ssl.KeyStoreTestUtil.generateCertificate(KeyStoreTestUtil.java:96)
> at 
> org.apache.hadoop.security.ssl.KeyStoreTestUtil.setupSSLConfig(KeyStoreTestUtil.java:268)
> at 
> org.apache.hadoop.security.ssl.TestSSLFactory.createConfiguration(TestSSLFactory.java:64)
> at 
> org.apache.hadoop.security.ssl.TestSSLFactory.testNoClientCertsInitialization(TestSSLFactory.java:337)
> testServerKeyPasswordDefaultsToPassword(org.apache.hadoop.security.ssl.TestSSLFactory)
>   Time elapsed: 0.189 sec  <<< ERROR!
> java.security.cert.CertificateException: Subject class type invalid.
> at sun.security.x509.X509CertInfo.setSubject(X509CertInfo.java:888)
> at sun.security.x509.X509CertInfo.set(X509CertInfo.java:415)
> at 
> org.apache.hadoop.security.ssl.KeyStoreTestUtil.generateCertificate(KeyStoreTestUtil.java:96)
> at 
> org.apache.hadoop.security.ssl.TestSSLFactory.checkSSLFactoryInitWithPasswords(TestSSLFactory.java:283)
> at 
> org.apache.hadoop.security.ssl.TestSSLFactory.checkSSLFactoryInitWithPasswords(TestSSLFactory.java:248)
> at 
> org.apache.hadoop.security.ssl.TestSSLFactory.testServerKeyPasswordDefaultsToPassword(TestSSLFactory.java:205)
> testServerCredProviderPasswords(org.apache.hadoop.security.ssl.TestSSLFactory)
>   Time elapsed: 0.462 sec  <<< ERROR!
> java.security.cert.CertificateException: Subject class type invalid.
> at sun.security.x509.X509CertInfo.setSubject(X509CertInfo.java:888)
> at sun.security.x509.X509CertInfo.set(X509CertInfo.java:415)
> at 
> org.apache.hadoop.security.ssl.KeyStoreTestUtil.generateCertificate(KeyStoreTestUtil.java:96)
> at 
> org.apache.hadoop.security.ssl.TestSSLFactory.checkSSLFactoryInitWithPasswords(TestSSLFactory.java:283)
> at 
> org.apache.hadoop.security.ssl.TestSSLFactory.testServerCredProviderPasswords(TestSSLFactory.java:224)
> testClientDifferentPasswordAndKeyPassword(org.apache.hadoop.security.ssl.TestSSLFactory)
>   Time elapsed: 0.059 sec  <<< ERROR!
> java.security.cert.CertificateException: Subject class type invalid.
> at sun.security.x509.X509CertInfo.setSubject(X509CertInfo.java:888)
> at sun.security.x509.X509CertInfo.set(X509CertInfo.java:415)
> at 
> org.apache.hadoop.security.ssl.KeyStoreTestUtil.generateCertificate(KeyStoreTestUtil.java:96)
> at 
> org.apache.hadoop.security.ssl.TestSSLFactory.checkSSLFactoryInitWithPasswords(TestSSLFactory.java:283)
> at 
> org.apache.hadoop.security.ssl.TestSSLFactory.checkSSLFactoryInitWithPasswords(TestSSLFactory.java:248)
> at 
> org.apache.hadoop.security.ssl.TestSSLFactory.testClientDifferentPasswordAndKeyPassword(TestSSLFactory.java:211)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11275) TestSSLFactory fails on Java 8

2014-11-05 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11275:
---

 Summary: TestSSLFactory fails on Java 8
 Key: HADOOP-11275
 URL: https://issues.apache.org/jira/browse/HADOOP-11275
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


Below are a few of the exceptions I got running this test against Java 8:
{code}
Running org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 15, Failures: 0, Errors: 14, Skipped: 0, Time elapsed: 1.724 sec <<< 
FAILURE! - in org.apache.hadoop.security.ssl.TestSSLFactory
testNoClientCertsInitialization(org.apache.hadoop.security.ssl.TestSSLFactory)  
Time elapsed: 0.177 sec  <<< ERROR!
java.security.cert.CertificateException: Subject class type invalid.
at sun.security.x509.X509CertInfo.setSubject(X509CertInfo.java:888)
at sun.security.x509.X509CertInfo.set(X509CertInfo.java:415)
at 
org.apache.hadoop.security.ssl.KeyStoreTestUtil.generateCertificate(KeyStoreTestUtil.java:96)
at 
org.apache.hadoop.security.ssl.KeyStoreTestUtil.setupSSLConfig(KeyStoreTestUtil.java:268)
at 
org.apache.hadoop.security.ssl.TestSSLFactory.createConfiguration(TestSSLFactory.java:64)
at 
org.apache.hadoop.security.ssl.TestSSLFactory.testNoClientCertsInitialization(TestSSLFactory.java:337)

testServerKeyPasswordDefaultsToPassword(org.apache.hadoop.security.ssl.TestSSLFactory)
  Time elapsed: 0.189 sec  <<< ERROR!
java.security.cert.CertificateException: Subject class type invalid.
at sun.security.x509.X509CertInfo.setSubject(X509CertInfo.java:888)
at sun.security.x509.X509CertInfo.set(X509CertInfo.java:415)
at 
org.apache.hadoop.security.ssl.KeyStoreTestUtil.generateCertificate(KeyStoreTestUtil.java:96)
at 
org.apache.hadoop.security.ssl.TestSSLFactory.checkSSLFactoryInitWithPasswords(TestSSLFactory.java:283)
at 
org.apache.hadoop.security.ssl.TestSSLFactory.checkSSLFactoryInitWithPasswords(TestSSLFactory.java:248)
at 
org.apache.hadoop.security.ssl.TestSSLFactory.testServerKeyPasswordDefaultsToPassword(TestSSLFactory.java:205)

testServerCredProviderPasswords(org.apache.hadoop.security.ssl.TestSSLFactory)  
Time elapsed: 0.462 sec  <<< ERROR!
java.security.cert.CertificateException: Subject class type invalid.
at sun.security.x509.X509CertInfo.setSubject(X509CertInfo.java:888)
at sun.security.x509.X509CertInfo.set(X509CertInfo.java:415)
at 
org.apache.hadoop.security.ssl.KeyStoreTestUtil.generateCertificate(KeyStoreTestUtil.java:96)
at 
org.apache.hadoop.security.ssl.TestSSLFactory.checkSSLFactoryInitWithPasswords(TestSSLFactory.java:283)
at 
org.apache.hadoop.security.ssl.TestSSLFactory.testServerCredProviderPasswords(TestSSLFactory.java:224)

testClientDifferentPasswordAndKeyPassword(org.apache.hadoop.security.ssl.TestSSLFactory)
  Time elapsed: 0.059 sec  <<< ERROR!
java.security.cert.CertificateException: Subject class type invalid.
at sun.security.x509.X509CertInfo.setSubject(X509CertInfo.java:888)
at sun.security.x509.X509CertInfo.set(X509CertInfo.java:415)
at 
org.apache.hadoop.security.ssl.KeyStoreTestUtil.generateCertificate(KeyStoreTestUtil.java:96)
at 
org.apache.hadoop.security.ssl.TestSSLFactory.checkSSLFactoryInitWithPasswords(TestSSLFactory.java:283)
at 
org.apache.hadoop.security.ssl.TestSSLFactory.checkSSLFactoryInitWithPasswords(TestSSLFactory.java:248)
at 
org.apache.hadoop.security.ssl.TestSSLFactory.testClientDifferentPasswordAndKeyPassword(TestSSLFactory.java:211)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11229) JobStoryProducer is not closed upon return from Gridmix#setupDistCacheEmulation()

2014-10-24 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11229:
---

 Summary: JobStoryProducer is not closed upon return from 
Gridmix#setupDistCacheEmulation()
 Key: HADOOP-11229
 URL: https://issues.apache.org/jira/browse/HADOOP-11229
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


Here is related code:
{code}
  JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
  exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
{code}
jsp should be closed upon return from setupDistCacheEmulation().
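
A minimal sketch of the suggested change (illustrative only):

{code}
JobStoryProducer jsp = createJobStoryProducer(traceIn, conf);
try {
  exitCode = distCacheEmulator.setupGenerateDistCacheData(jsp);
} finally {
  jsp.close();  // release the underlying trace input even if setup fails
}
{code}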



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11198) Typo in javadoc for FileSystem#listStatus()

2014-10-13 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11198:
---

 Summary: Typo in javadoc for FileSystem#listStatus()
 Key: HADOOP-11198
 URL: https://issues.apache.org/jira/browse/HADOOP-11198
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
   * @return the statuses of the files/directories in the given patch
{code}
'patch' should be path
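
The corrected line would read:

{code}
   * @return the statuses of the files/directories in the given path
{code}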



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11191) NativeAzureFileSystem#close() should be synchronized

2014-10-10 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11191:
---

 Summary: NativeAzureFileSystem#close() should be synchronized
 Key: HADOOP-11191
 URL: https://issues.apache.org/jira/browse/HADOOP-11191
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
public void close() throws IOException {
  in.close();
  closed = true;
}
{code}
The other methods, such as seek(), are synchronized.
close() should be as well.
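
The fix is a one-word change, mirroring the other methods (sketch):

{code}
public synchronized void close() throws IOException {
  in.close();
  closed = true;
}
{code}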



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11190) Potentially stale value is used in SelfRenewingLease ctor

2014-10-10 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11190:
---

 Summary: Potentially stale value is used in SelfRenewingLease ctor
 Key: HADOOP-11190
 URL: https://issues.apache.org/jira/browse/HADOOP-11190
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


Here is the code w.r.t. threadNumber, around line 102:
{code}
renewer.setName("AzureLeaseRenewer-" + threadNumber++);
{code}
Since there is no synchronization involved, a potentially stale value may be read.
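
One way to make the counter safe without adding a lock (a sketch, assuming threadNumber is a shared static field) is java.util.concurrent.atomic.AtomicInteger:

{code}
private static final AtomicInteger threadNumber = new AtomicInteger(0);
...
// getAndIncrement() is atomic, so concurrent callers never observe a
// stale value or produce duplicate thread names
renewer.setName("AzureLeaseRenewer-" + threadNumber.getAndIncrement());
{code}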



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: hadoop 2.4.0

2014-10-08 Thread Ted Yu
Why not use the latest ?
http://hadoop.apache.org/releases.html#12+September%2C+2014%3A+Relase+2.5.1+available

Cheers

On Wed, Oct 8, 2014 at 8:28 PM, Stanley Shi  wrote:

> hadoop 2.4.1 is out, this is a hotfix version. If there're no specific
> reason, you should choose this one.
>
> http://hadoop.apache.org/releases.html#30+June%2C+2014%3A+Release+2.4.1+available
>
> On Fri, Apr 18, 2014 at 1:20 AM, MrAsanjar .  wrote:
>
> > Hi all,
> > How stable is hadoop 2.4.0? what are the known issues? anyone has done an
> > extensive testing on it?
> > Thanks in advance..
> >
>
>
>
> --
> Regards,
> *Stanley Shi,*
>


[jira] [Created] (HADOOP-11165) TestUTF8 fails when run against java 8

2014-10-06 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11165:
---

 Summary: TestUTF8 fails when run against java 8
 Key: HADOOP-11165
 URL: https://issues.apache.org/jira/browse/HADOOP-11165
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


Using jdk1.8.0_20, I got:
{code}
testGetBytes(org.apache.hadoop.io.TestUTF8)  Time elapsed: 0.007 sec  <<< 
FAILURE!
junit.framework.ComparisonFailure: 
expected:<쑼ь⣄鬘㟻햫紖燺[?炀⃍⑰풸낓⨵ἲꬌホ쭷㛕曬䟊⁍䴥䳠領蟭뱻宭竕昚鍳튇ꊕ혶齲쏈㠮胨䩦隼᢯䍻᝴킿喝벁ࢼ듿饭玳Մ剌䒤?䳛슟녚沖᯳?訨
牙⍖?䎠旘薑春觀葝礫⁑ﻱ⣽゚굿뒦ݦ︀偆?]古絥萟浐> but 
was:<쑼ь⣄鬘㟻햫紖燺[�炀⃍⑰풸낓⨵ἲꬌホ쭷㛕曬䟊⁍䴥䳠領蟭뱻宭竕昚鍳튇ꊕ혶齲쏈㠮胨䩦隼᢯䍻᝴킿喝벁ࢼ듿饭玳Մ剌䒤�䳛슟녚᯳�訨牙⍖�䎠旘薑春觀葝礫⁑ﻱ⣽゚굿뒦ݦ︀偆�]古絥萟浐>
at junit.framework.Assert.assertEquals(Assert.java:100)
at junit.framework.Assert.assertEquals(Assert.java:107)
at junit.framework.TestCase.assertEquals(TestCase.java:269)
at org.apache.hadoop.io.TestUTF8.testGetBytes(TestUTF8.java:58)

testIO(org.apache.hadoop.io.TestUTF8)  Time elapsed: 0.002 sec  <<< FAILURE!
junit.framework.ComparisonFailure: expected:<...ᨍ⁖粩⧬车﹂脖朷䝄懒댵突疼資⍣眠畠忁[?]䪐ゑ鬍鍅遻ꈸ釡> 
but was:<...ᨍ⁖粩⧬车﹂脖朷䝄懒댵突疼資⍣眠畠忁[�]䪐ゑ鬍鍅遻>ꈸ釡>
at junit.framework.Assert.assertEquals(Assert.java:100)
at junit.framework.Assert.assertEquals(Assert.java:107)
at junit.framework.TestCase.assertEquals(TestCase.java:269)
at org.apache.hadoop.io.TestUTF8.testIO(TestUTF8.java:86)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Build failed in Jenkins: Hadoop-Common-trunk due to missing header file

2014-10-06 Thread Ted Yu
Logged BUILDS-26 to track Colin's suggestion.

FYI

On Mon, Oct 6, 2014 at 12:19 PM, Colin McCabe 
wrote:

> On Thu, Oct 2, 2014 at 1:15 PM, Ted Yu  wrote:
> > On my Mac and on Linux, I was able to
> > find /usr/include/openssl/opensslconf.h
> >
> > However the file is absent on Jenkins machine(s).
> >
> > Just want to make sure that the file is needed for native build before
> > filing INFRA ticket.
>
> opensslconf.h is part of the openssl-devel package (at least on my
> machine) and if it is missing, I would suspect that openssl is either
> not installed or incorrectly installed.
>
> We need it for the native build to have coverage for the
> openssl-related things (like random number generation and encryption).
>
> Colin
>
> >
> > Cheers
> >
> > On Thu, Oct 2, 2014 at 9:09 AM, Tsuyoshi OZAWA  >
> > wrote:
> >
> >> Hi Ted,
> >>
> >> On my local, the build of trunk with "mvn package -Pnative,dist" works
> >> well. I'm not certain whether this problem is related, but some build
> >> fails on YARN(e.g. YARN-2562, YARN-2615, YARN-2640).
> >>
> >> The version information of OS and libssl-dev on my local environment
> >> is as follows:
> >>
> >> $ uname -a
> >> Linux ip-172-31-4-83 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10
> >> 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
> >>
> >> $ apt-cache show libssl-dev
> >> Package: libssl-dev
> >> Priority: optional
> >> Section: libdevel
> >> Installed-Size: 6162
> >> Maintainer: Ubuntu Developers 
> >> Original-Maintainer: Debian OpenSSL Team
> >> 
> >> Architecture: amd64
> >> Source: openssl
> >> Version: 1.0.1f-1ubuntu2
> >> Depends: libssl1.0.0 (= 1.0.1f-1ubuntu2), zlib1g-dev
> >> Recommends: libssl-doc
> >> Filename: pool/main/o/openssl/libssl-dev_1.0.1f-1ubuntu2_amd64.deb
> >> Size: 1066212
> >> MD5sum: 321724885048f9a78d0e93887a7eb296
> >> SHA1: e337538bed6e5765a0a85c4ca2af1d0deefd6ce0
> >> SHA256: ed199dc9131923fa3c911202f165402b1310f50dcdfab987f6f5c2669fc698cc
> >>
> >> Cheers,
> >> - Tsuyoshi
> >>
> >> On Thu, Oct 2, 2014 at 11:43 PM, Ted Yu  wrote:
> >> > Hadoop-Common-trunk build failed due to missing opensslconf.h
> >> >
> >> > Is this environment issue or due to recent commits ?
> >> >
> >> > Cheers
> >> >
> >> > On Thu, Oct 2, 2014 at 7:31 AM, Apache Jenkins Server <
> >> > jenk...@builds.apache.org> wrote:
> >> >
> >> >> See <https://builds.apache.org/job/Hadoop-Common-trunk/1257/>
> >> >>
> >> >> --
> >> >>  [exec] /usr/bin/cmake -E cmake_progress_report <
> >> >>
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native/CMakeFiles
> >> >
> >> >> 8
> >> >>  [exec] [ 16%] Building C object
> >> >>
> >>
> CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c.o
> >> >>  [exec] /usr/bin/cc  -Dhadoop_EXPORTS -m32 -g -Wall -O2
> -D_REENTRANT
> >> >> -D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fPIC -I<
> >> >>
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native/javah
> >> >
> >> >> -I<
> >> >>
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/native/src
> >> >
> >> >> -I<
> >> >>
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src
> >> >
> >> >> -I<
> >> >>
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/src
> >> >
> >> >> -I<
> >> >>
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native
> >> >
> >> >> -I/home/jenkins/tools/java/latest/include
> >> >> -I/home/jenkins/tools/java/latest/include/linux -I<
> >> >>
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/u

Re: builds failing on H9 with "cannot access java.lang.Runnable"

2014-10-03 Thread Ted Yu
Adding builds@

On Fri, Oct 3, 2014 at 1:07 PM, Colin McCabe  wrote:

> It looks like builds are failing on the H9 host with "cannot access
> java.lang.Runnable"
>
> Example from
> https://builds.apache.org/job/PreCommit-HDFS-Build/8313/artifact/patchprocess/trunkJavacWarnings.txt
> :
>
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 03:13 min
> [INFO] Finished at: 2014-10-03T18:04:35+00:00
> [INFO] Final Memory: 57M/839M
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-compiler-plugin:2.5.1:testCompile
> (default-testCompile) on project hadoop-mapreduce-client-app:
> Compilation failure
> [ERROR]
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/commit/TestCommitterEventHandler.java:[189,-1]
> cannot access java.lang.Runnable
> [ERROR] bad class file: java/lang/Runnable.class(java/lang:Runnable.class)
>
> I don't have shell access to this, does anyone know what's going on on H9?
>
> best,
> Colin
>


[jira] [Created] (HADOOP-11162) Unclosed InputStream in ApplicationClassLoader

2014-10-03 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-11162:
---

 Summary: Unclosed InputStream in ApplicationClassLoader
 Key: HADOOP-11162
 URL: https://issues.apache.org/jira/browse/HADOOP-11162
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor


{code}
  static {
InputStream is = null;
{code}
The above InputStream is not closed upon leaving the static block.
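
A minimal sketch of the fix (illustrative only; PROPERTIES_FILE and what is read from the stream are assumptions for the example):

{code}
static {
  InputStream is = null;
  try {
    is = ApplicationClassLoader.class.getClassLoader()
        .getResourceAsStream(PROPERTIES_FILE);
    // ... load configuration from the stream ...
  } finally {
    // closeStream() is null-safe and swallows close() errors, which is
    // acceptable for best-effort cleanup in a static initializer
    IOUtils.closeStream(is);
  }
}
{code}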



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Build failed in Jenkins: Hadoop-Common-trunk due to missing header file

2014-10-02 Thread Ted Yu
On my Mac and on Linux, I was able to
find /usr/include/openssl/opensslconf.h

However the file is absent on Jenkins machine(s).

Just want to make sure that the file is needed for native build before
filing INFRA ticket.

Cheers

On Thu, Oct 2, 2014 at 9:09 AM, Tsuyoshi OZAWA 
wrote:

> Hi Ted,
>
> On my local, the build of trunk with "mvn package -Pnative,dist" works
> well. I'm not certain whether this problem is related, but some build
> fails on YARN(e.g. YARN-2562, YARN-2615, YARN-2640).
>
> The version information of OS and libssl-dev on my local environment
> is as follows:
>
> $ uname -a
> Linux ip-172-31-4-83 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10
> 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>
> $ apt-cache show libssl-dev
> Package: libssl-dev
> Priority: optional
> Section: libdevel
> Installed-Size: 6162
> Maintainer: Ubuntu Developers 
> Original-Maintainer: Debian OpenSSL Team
> 
> Architecture: amd64
> Source: openssl
> Version: 1.0.1f-1ubuntu2
> Depends: libssl1.0.0 (= 1.0.1f-1ubuntu2), zlib1g-dev
> Recommends: libssl-doc
> Filename: pool/main/o/openssl/libssl-dev_1.0.1f-1ubuntu2_amd64.deb
> Size: 1066212
> MD5sum: 321724885048f9a78d0e93887a7eb296
> SHA1: e337538bed6e5765a0a85c4ca2af1d0deefd6ce0
> SHA256: ed199dc9131923fa3c911202f165402b1310f50dcdfab987f6f5c2669fc698cc
>
> Cheers,
> - Tsuyoshi
>
> On Thu, Oct 2, 2014 at 11:43 PM, Ted Yu  wrote:
> > Hadoop-Common-trunk build failed due to missing opensslconf.h
> >
> > Is this environment issue or due to recent commits ?
> >
> > Cheers
> >
> > On Thu, Oct 2, 2014 at 7:31 AM, Apache Jenkins Server <
> > jenk...@builds.apache.org> wrote:
> >
> >> See <https://builds.apache.org/job/Hadoop-Common-trunk/1257/>
> >>
> >> --
> >>  [exec] /usr/bin/cmake -E cmake_progress_report <
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native/CMakeFiles
> >
> >> 8
> >>  [exec] [ 16%] Building C object
> >>
> CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c.o
> >>  [exec] /usr/bin/cc  -Dhadoop_EXPORTS -m32 -g -Wall -O2 -D_REENTRANT
> >> -D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fPIC -I<
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native/javah
> >
> >> -I<
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/native/src
> >
> >> -I<
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src
> >
> >> -I<
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/src
> >
> >> -I<
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native
> >
> >> -I/home/jenkins/tools/java/latest/include
> >> -I/home/jenkins/tools/java/latest/include/linux -I<
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util
> >
> >>   -o
> >>
> CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c.o
> >>  -c <
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c
> >> >
> >>  [exec] In file included from <
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/nat
> >> >
> >>  [exec]
> >> ive/src/org/apache/hadoop/crypto/org_apache_hadoop_crypto.h:33:0,
> >>  [exec]  from <
> >>
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c
> >> >:19:
> >>  [exec] /usr/incl
> >>  [exec] ude/openssl/aes.h:55:33: fatal error: openssl/opensslconf.h:
> >> No such file or directory
> >>  [exec]  #include 
> >>  [exec]  ^
> >>  [exec] compilation terminated.
> >>  [exec] make[2]: ***
> >>
> [CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c.o]
> >> Error 1
> >>  [exec] make[1]: *** [CMakeFiles/hadoop.dir/all] Err

Build failed in Jenkins: Hadoop-Common-trunk due to missing header file

2014-10-02 Thread Ted Yu
Hadoop-Common-trunk build failed due to missing opensslconf.h

Is this an environment issue or due to recent commits ?

Cheers

On Thu, Oct 2, 2014 at 7:31 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See 
>
> --
>  [exec] /usr/bin/cmake -E cmake_progress_report <
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native/CMakeFiles>
> 8
>  [exec] [ 16%] Building C object
> CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c.o
>  [exec] /usr/bin/cc  -Dhadoop_EXPORTS -m32 -g -Wall -O2 -D_REENTRANT
> -D_GNU_SOURCE -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -fPIC -I<
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native/javah>
> -I<
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/native/src>
> -I<
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src>
> -I<
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/src>
> -I<
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native>
> -I/home/jenkins/tools/java/latest/include
> -I/home/jenkins/tools/java/latest/include/linux -I<
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util>
>   -o
> CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c.o
>  -c <
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c
> >
>  [exec] In file included from <
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/nat
> >
>  [exec]
> ive/src/org/apache/hadoop/crypto/org_apache_hadoop_crypto.h:33:0,
>  [exec]  from <
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c
> >:19:
>  [exec] /usr/incl
>  [exec] ude/openssl/aes.h:55:33: fatal error: openssl/opensslconf.h:
> No such file or directory
>  [exec]  #include 
>  [exec]  ^
>  [exec] compilation terminated.
>  [exec] make[2]: ***
> [CMakeFiles/hadoop.dir/main/native/src/org/apache/hadoop/crypto/OpensslCipher.c.o]
> Error 1
>  [exec] make[1]: *** [CMakeFiles/hadoop.dir/all] Error 2
>  [exec] make: *** [all] Error 2
>  [exec] make[2]: Leaving directory `<
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native
> '>
>  [exec] make[1]: Leaving directory `<
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native
> '>
> [INFO]
> 
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Hadoop Annotations . SUCCESS [
> 6.890 s]
> [INFO] Apache Hadoop MiniKDC . SUCCESS [
> 11.827 s]
> [INFO] Apache Hadoop Auth  SUCCESS [04:57
> min]
> [INFO] Apache Hadoop Auth Examples ... SUCCESS [
> 4.448 s]
> [INFO] Apache Hadoop Common .. FAILURE [
> 21.471 s]
> [INFO] Apache Hadoop NFS . SKIPPED
> [INFO] Apache Hadoop KMS . SKIPPED
> [INFO] Apache Hadoop Common Project .. SKIPPED
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 05:43 min
> [INFO] Finished at: 2014-10-02T14:30:49+00:00
> [INFO] Final Memory: 65M/763M
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project
> hadoop-common: An Ant BuildException has occured: exec returned: 2
> [ERROR] around Ant part ...<exec dir="https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native"
> executable="make" failonerror="true">... @ 7:160 in <
> https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
> >
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project
> hadoop-common: An Ant BuildException has occured: exec returned: 2
> around Ant part ...<exec dir="https://builds.apache.org/job/Hadoop-Common-trunk/ws/hadoop-common-project/hadoop-common/target/native"
> executa

Re: Git repo ready to use

2014-09-24 Thread Ted Yu
Billie found out that Hadoop-Common-2-Commit should be the build that
publishes artifacts.

Thanks Billie.

On Wed, Sep 24, 2014 at 4:20 PM, Ted Yu  wrote:

> FYI
>
> I made some changes to:
> https://builds.apache.org/view/All/job/Hadoop-branch2
>
> because it until this morning was using svn to build.
>
> Would 2.6.0-SNAPSHOT maven artifacts be updated after the build ?
>
> Cheers
>
>
> On Mon, Sep 15, 2014 at 11:14 AM, Todd Lipcon  wrote:
>
>> Hey all,
>>
>> For those of you who like to see the entire history of a file going back
>> to
>> 2006, I found I had to add a new graft to .git/info/grafts:
>>
>> # Project un-split in new writable git repo
>> a196766ea07775f18ded69bd9e8d239f8cfd3ccc
>> 928d485e2743115fe37f9d123ce9a635c5afb91a
>> cd66945f62635f589ff93468e94c0039684a8b6d
>> 77f628ff5925c25ba2ee4ce14590789eb2e7b85b
>>
>> FWIW, my entire file now contains:
>>
>> # Project split
>> 5128a9a453d64bfe1ed978cf9ffed27985eeef36
>> 6c16dc8cf2b28818c852e95302920a278d07ad0c
>> 6a3ac690e493c7da45bbf2ae2054768c427fd0e1
>> 6c16dc8cf2b28818c852e95302920a278d07ad0c
>> 546d96754ffee3142bcbbf4563c624c053d0ed0d
>> 6c16dc8cf2b28818c852e95302920a278d07ad0c
>> 4e569e629a98a4ef5326e5d25a84c7d57b5a8f7a
>> c78078dd2283e2890018ff0e87d751c86163f99f
>>
>> # Project un-split in new writable git repo
>> a196766ea07775f18ded69bd9e8d239f8cfd3ccc
>> 928d485e2743115fe37f9d123ce9a635c5afb91a
>> cd66945f62635f589ff93468e94c0039684a8b6d
>> 77f628ff5925c25ba2ee4ce14590789eb2e7b85b
>>
>> which seems to do a good job for me (not sure if the first few lines are
>> necessary anymore in the latest world)
>>
>> -Todd
>>
>>
>>
>> On Fri, Sep 12, 2014 at 11:31 AM, Colin McCabe 
>> wrote:
>>
>> > It's an issue with test-patch.sh.  See
>> > https://issues.apache.org/jira/browse/HADOOP-11084
>> >
>> > best,
>> > Colin
>> >
>> > On Mon, Sep 8, 2014 at 3:38 PM, Andrew Wang 
>> > wrote:
>> > > We're still not seeing findbugs results show up on precommit runs. I
>> see
>> > > that we're archiving "../patchprocess/*", and Ted thinks that since
>> it's
>> > > not in $WORKSPACE it's not getting picked up. Can we get confirmation
>> of
>> > > this issue? If so, we could just add "patchprocess" to the toplevel
>> > > .gitignore.
>> > >
>> > > On Thu, Sep 4, 2014 at 8:54 AM, Sangjin Lee  wrote:
>> > >
>> > >> That's good to know. Thanks.
>> > >>
>> > >>
>> > >> On Wed, Sep 3, 2014 at 11:15 PM, Vinayakumar B <
>> vinayakum...@apache.org
>> > >
>> > >> wrote:
>> > >>
>> > >> > I think its still pointing to old svn repository which is just read
>> > only
>> > >> > now.
>> > >> >
>> > >> > You can use latest mirror:
>> > >> > https://github.com/apache/hadoop
>> > >> >
>> > >> > Regards,
>> > >> > Vinay
>> > >> > On Sep 4, 2014 11:37 AM, "Sangjin Lee"  wrote:
>> > >> >
>> > >> > > It seems like the github mirror at
>> > >> > https://github.com/apache/hadoop-common
>> > >> > > has stopped getting updates as of 8/22. Could this mirror have
>> been
>> > >> > broken
>> > >> > > by the git transition?
>> > >> > >
>> > >> > > Thanks,
>> > >> > > Sangjin
>> > >> > >
>> > >> > >
>> > >> > > On Fri, Aug 29, 2014 at 11:51 AM, Ted Yu 
>> > wrote:
>> > >> > >
>> > >> > > > From
>> https://builds.apache.org/job/Hadoop-hdfs-trunk/1854/console
>> > :
>> > >> > > >
>> > >> > > > ERROR: No artifacts found that match the file pattern
>> > >> > > > "trunk/hadoop-hdfs-project/*/target/*.tar.gz". Configuration
> > >> > > > error? ERROR <
> >> http://stacktrace.jenkins-ci.org/search?query=ERROR
> > >:
> > >> > > > ‘trunk/hadoop-hdfs-project/*/target/*.tar.gz’ doesn’t match
> > anything,
> > >> > > > but ‘hadoop-hdfs-project/*/target/*.tar.gz’ does. Perhaps

Re: Git repo ready to use

2014-09-24 Thread Ted Yu
FYI

I made some changes to:
https://builds.apache.org/view/All/job/Hadoop-branch2

because until this morning it was using svn to build.

Would 2.6.0-SNAPSHOT maven artifacts be updated after the build ?

Cheers

On Mon, Sep 15, 2014 at 11:14 AM, Todd Lipcon  wrote:

> Hey all,
>
> For those of you who like to see the entire history of a file going back to
> 2006, I found I had to add a new graft to .git/info/grafts:
>
> # Project un-split in new writable git repo
> a196766ea07775f18ded69bd9e8d239f8cfd3ccc
> 928d485e2743115fe37f9d123ce9a635c5afb91a
> cd66945f62635f589ff93468e94c0039684a8b6d
> 77f628ff5925c25ba2ee4ce14590789eb2e7b85b
>
> FWIW, my entire file now contains:
>
> # Project split
> 5128a9a453d64bfe1ed978cf9ffed27985eeef36
> 6c16dc8cf2b28818c852e95302920a278d07ad0c
> 6a3ac690e493c7da45bbf2ae2054768c427fd0e1
> 6c16dc8cf2b28818c852e95302920a278d07ad0c
> 546d96754ffee3142bcbbf4563c624c053d0ed0d
> 6c16dc8cf2b28818c852e95302920a278d07ad0c
> 4e569e629a98a4ef5326e5d25a84c7d57b5a8f7a
> c78078dd2283e2890018ff0e87d751c86163f99f
>
> # Project un-split in new writable git repo
> a196766ea07775f18ded69bd9e8d239f8cfd3ccc
> 928d485e2743115fe37f9d123ce9a635c5afb91a
> cd66945f62635f589ff93468e94c0039684a8b6d
> 77f628ff5925c25ba2ee4ce14590789eb2e7b85b
>
> which seems to do a good job for me (not sure if the first few lines are
> necessary anymore in the latest world)
>
> -Todd
>
>
>
> On Fri, Sep 12, 2014 at 11:31 AM, Colin McCabe 
> wrote:
>
> > It's an issue with test-patch.sh.  See
> > https://issues.apache.org/jira/browse/HADOOP-11084
> >
> > best,
> > Colin
> >
> > On Mon, Sep 8, 2014 at 3:38 PM, Andrew Wang 
> > wrote:
> > > We're still not seeing findbugs results show up on precommit runs. I
> see
> > > that we're archiving "../patchprocess/*", and Ted thinks that since
> it's
> > > not in $WORKSPACE it's not getting picked up. Can we get confirmation
> of
> > > this issue? If so, we could just add "patchprocess" to the toplevel
> > > .gitignore.
> > >
> > > On Thu, Sep 4, 2014 at 8:54 AM, Sangjin Lee  wrote:
> > >
> > >> That's good to know. Thanks.
> > >>
> > >>
> > >> On Wed, Sep 3, 2014 at 11:15 PM, Vinayakumar B <
> vinayakum...@apache.org
> > >
> > >> wrote:
> > >>
> > >> > I think its still pointing to old svn repository which is just read
> > only
> > >> > now.
> > >> >
> > >> > You can use latest mirror:
> > >> > https://github.com/apache/hadoop
> > >> >
> > >> > Regards,
> > >> > Vinay
> > >> > On Sep 4, 2014 11:37 AM, "Sangjin Lee"  wrote:
> > >> >
> > >> > > It seems like the github mirror at
> > >> > https://github.com/apache/hadoop-common
> > >> > > has stopped getting updates as of 8/22. Could this mirror have
> been
> > >> > broken
> > >> > > by the git transition?
> > >> > >
> > >> > > Thanks,
> > >> > > Sangjin
> > >> > >
> > >> > >
> > >> > > On Fri, Aug 29, 2014 at 11:51 AM, Ted Yu 
> > wrote:
> > >> > >
> > >> > > > From
> https://builds.apache.org/job/Hadoop-hdfs-trunk/1854/console
> > :
> > >> > > >
> > >> > > > ERROR: No artifacts found that match the file pattern
> > >> > > > "trunk/hadoop-hdfs-project/*/target/*.tar.gz". Configuration
> > >> > > > error? ERROR <
> http://stacktrace.jenkins-ci.org/search?query=ERROR
> > >:
> > >> > > > ‘trunk/hadoop-hdfs-project/*/target/*.tar.gz’ doesn’t match
> > anything,
> > >> > > > but ‘hadoop-hdfs-project/*/target/*.tar.gz’ does. Perhaps that’s
> > what
> > >> > > > you mean?
> > >> > > >
> > >> > > >
> > >> > > > I corrected the path to hdfs tar ball.
> > >> > > >
> > >> > > >
> > >> > > > FYI
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > > On Fri, Aug 29, 2014 at 8:48 AM, Alejandro Abdelnur <
> > >> t...@cloudera.com
> > >> > >
> > >> > > > wrote:
> > >> > > >

Re: Cannot access Jenkins generated test results, findbug warnings and javac warnings

2014-09-19 Thread Ted Yu
Wangda:
In the meantime, you can find the failed unit test in the console log, e.g.:

https://builds.apache.org/job/PreCommit-YARN-Build/5040/console

Then you can run the test locally to see if the failure was related to your
patch.

Cheers

On Fri, Sep 19, 2014 at 2:54 AM, Wangda Tan  wrote:

> Hi Steve,
> I guess this problem should be also caused by wrong URL, if anybody have
> admin access to Jenkins, correct URL should be easily found.
>
> Thanks,
> Wangda
>
> On Fri, Sep 19, 2014 at 4:32 PM, Steve Loughran 
> wrote:
>
> > Looks like HADOOP-11084 isn't complete —the patch to the build to get it
> > working post-git
> >
> > before that patch the builds weren't working at all ... now its just
> > getting the URLs wrong.
> >
> > If you can work out the right URLs we can fix this easily enough
> >
> > On 19 September 2014 09:24, Wangda Tan  wrote:
> >
> > > Hi Hadoop developers,
> > > I found recently, I cannot access Jenkins generated results, like:
> > >
> > >
> > >
> > > *Test
> > > results:
> > > https://builds.apache.org/job/PreCommit-YARN-Build/5039//testReport/
> > >  > > >Findbugs
> > > warnings:
> > >
> >
> https://builds.apache.org/job/PreCommit-YARN-Build/5039//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-yarn-client.html
> > > <
> > >
> >
> https://builds.apache.org/job/PreCommit-YARN-Build/5039//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-yarn-client.html
> > > >Javac
> > > warnings:
> > >
> >
> https://builds.apache.org/job/PreCommit-YARN-Build/5039//artifact/PreCommit-HADOOP-Build-patchprocess/diffJavacWarnings.txt
> > > <
> > >
> >
> https://builds.apache.org/job/PreCommit-YARN-Build/5039//artifact/PreCommit-HADOOP-Build-patchprocess/diffJavacWarnings.txt
> > > >*
> > >
> > > It will report 404 when trying to access findbugs/javac warnings and it
> > > will redirect to info page of build when trying to access test report.
> > >
> > > I'm not sure if there's any recent changes on Jenkins configuration.
> Did
> > > you hit the problem like this?
> > >
> > > Thanks,
> > > Wangda
> > >
> >
> > --
> > CONFIDENTIALITY NOTICE
> > NOTICE: This message is intended for the use of the individual or entity
> to
> > which it is addressed and may contain information that is confidential,
> > privileged and exempt from disclosure under applicable law. If the reader
> > of this message is not the intended recipient, you are hereby notified
> that
> > any printing, copying, dissemination, distribution, disclosure or
> > forwarding of this communication is strictly prohibited. If you have
> > received this communication in error, please contact the sender
> immediately
> > and delete it from your system. Thank You.
> >
>


Re: hbase use error

2014-09-16 Thread Ted Yu
Which hadoop release are you using ?
Can you pastebin more of the server logs ?

bq. load file larger than 20M

Do you store such file(s) directly on hdfs and put its path in hbase ?
See HBASE-11339 HBase MOB

On Tue, Sep 16, 2014 at 7:29 PM, QiXiangming 
wrote:

> hello ,everyone
> i use hbase to store small pic or files , and meet an exception
> raised from hdfs, as following :
>
>  slave2:50010:DataXceiver error processing WRITE_BLOCK operation  src: /
> 192.168.20.246:33162 dest: /192.168.20.247:50010
> java.io.IOException: Premature EOF from inputStream
> at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
>
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
>
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
>
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:446)
>
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:702)
>
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:711)
>
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
>
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:229)
> at java.lang.Thread.run(Thread.java:745)
>
> when hbase stores pics or file  under 200k, it works well,
> but if you load file larger than 20M , hbase definitely down!
>
> what's wrong with it ?
> can anyone help use?
>
>
> URGENT!!!
>
>
> Qi Xiangming
>


Re: a script to find out flaky tests of Hadoop jenkins job

2014-08-31 Thread Ted Yu
-08-29 04:31:30)
>   Could not open testReport
> ==>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1853/testReport
> (2014-08-28 09:37:18)
>   Could not open testReport
> ==>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1852/testReport
> (2014-08-28 09:28:48)
>   Could not open testReport
> ==>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1850/testReport
> (2014-08-27 04:31:30)
>Failed test:
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.testEnd2End
> ==>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1849/testReport
> (2014-08-26 04:31:29)
>Failed test:
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity
> 
> All failed tests <#occurrences: testName>:
>1:
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer.testBalancer0Integrity
>1:
> org.apache.hadoop.hdfs.TestDFSClientRetries.testIdempotentAllocateBlockAndClose
>1:
> org.apache.hadoop.hdfs.TestDFSClientRetries.testFailuresArePerOperation
>1:
> org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.testEnd2End
>1:
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testUnevenDistribution
>1:
> org.apache.hadoop.hdfs.TestDFSClientRetries.testRetryOnChecksumFailure
>1:
> org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancer
>1:
> org.apache.hadoop.hdfs.TestDFSClientRetries.testWriteTimeoutAtDataNode
>1:
> org.apache.hadoop.hdfs.TestDFSClientRetries.testDFSClientRetriesOnBusyBlocks
>1:
> org.apache.hadoop.hdfs.TestDFSClientRetries.testClientDNProtocolTimeout
>1: org.apache.hadoop.hdfs.TestDFSClientRetries.testGetFileChecksum
>1: org.apache.hadoop.hdfs.TestDFSClientRetries.testNamenodeRestart
> [yzhang@localhost jenkinsftf]$
> 
> 
> 
> On Thu, Aug 28, 2014 at 8:04 PM, Yongjun Zhang  wrote:
> 
>> Hi,
>> 
>> I just noticed that the recent jenkin test report doesn't include link to
>> test result, however, the email notice does show the failed tests:
>> 
>> E.g.
>> 
>> https://builds.apache.org/job/PreCommit-HDFS-Build/7846//
>> 
>> Example old job report that has the link:
>> 
>> https://builds.apache.org/job/PreCommit-HDFS-Build/7590/
>> 
>> Would any one please take a look?
>> 
>> Thanks a lot.
>> 
>> --Yongjun
>> 
>> On Thu, Aug 28, 2014 at 4:21 PM, Karthik Kambatla 
>> wrote:
>> 
>>> Thanks Giri and Ted for fixing the builds.
>>> 
>>> 
>>> On Thu, Aug 28, 2014 at 9:49 AM, Ted Yu  wrote:
>>> 
>>>> Charles:
>>>> QA build is running for your JIRA:
>>>> https://builds.apache.org/job/PreCommit-hdfs-Build/7828/parameters/
>>>> 
>>>> Cheers
>>>> 
>>>> 
>>>> On Thu, Aug 28, 2014 at 9:41 AM, Charles Lamb 
>>> wrote:
>>>> 
>>>>> On 8/28/2014 12:07 PM, Giridharan Kesavan wrote:
>>>>> 
>>>>>> Fixed all the 3 pre-commit buids. test-patch's git reset --hard is
>>>>>> removing
>>>>>> the patchprocess dir, so moved it off the workspace.
>>>>> Thanks Giri. Should I resubmit HDFS-6954's patch? I've gotten 3 or 4
>>>>> jenkins messages that indicated the problem so something is
>>> resubmitting,
>>>>> but now that you've fixed it, should I resubmit it again?
>>>>> 
>>>>> Charles
>> 
>> 


Re: Git repo ready to use

2014-08-29 Thread Ted Yu
From https://builds.apache.org/job/Hadoop-hdfs-trunk/1854/console :

ERROR: No artifacts found that match the file pattern
"trunk/hadoop-hdfs-project/*/target/*.tar.gz". Configuration
error? ERROR <http://stacktrace.jenkins-ci.org/search?query=ERROR>:
‘trunk/hadoop-hdfs-project/*/target/*.tar.gz’ doesn’t match anything,
but ‘hadoop-hdfs-project/*/target/*.tar.gz’ does. Perhaps that’s what
you mean?


I corrected the path to hdfs tar ball.


FYI



On Fri, Aug 29, 2014 at 8:48 AM, Alejandro Abdelnur 
wrote:

> it seems we missed updating the HADOOP precommit job to use Git, it was
> still using SVN. I've just updated it.
>
> thx
>
>
> On Thu, Aug 28, 2014 at 9:26 PM, Ted Yu  wrote:
>
> > Currently patchprocess/ (contents shown below) is one level higher than
> > ${WORKSPACE}
> >
> > diffJavadocWarnings.txt newPatchFindbugsWarningshadoop-hdfs.html
> >  patchFindBugsOutputhadoop-hdfs.txtpatchReleaseAuditOutput.txt
> >  trunkJavadocWarnings.txt
> > filteredPatchJavacWarnings.txt  newPatchFindbugsWarningshadoop-hdfs.xml
> > patchFindbugsWarningshadoop-hdfs.xml  patchReleaseAuditWarnings.txt
> > filteredTrunkJavacWarnings.txt  patch
> > patchJavacWarnings.txttestrun_hadoop-hdfs.txt
> > jirapatchEclipseOutput.txt
> >  patchJavadocWarnings.txt  trunkJavacWarnings.txt
> >
> > Under Files to archive input box of PreCommit-HDFS-Build/configure, I
> saw:
> >
> > ‘../patchprocess/*’ doesn’t match anything, but ‘*’ does. Perhaps that’s
> > what you mean?
> >
> > I guess once patchprocess is moved back under ${WORKSPACE}, a lot of
> things
> > would be back to normal.
> >
> > Cheers
> >
> > On Thu, Aug 28, 2014 at 9:16 PM, Alejandro Abdelnur 
> > wrote:
> >
> > > i'm also seeing broken links for javadocs warnings.
> > >
> > > Alejandro
> > > (phone typing)
> > >
> > > > On Aug 28, 2014, at 20:00, Andrew Wang 
> > wrote:
> > > >
> > > > I noticed that the JUnit test results aren't getting picked up
> > anymore. I
> > > > suspect we just need to update the path to the surefire output, but
> > based
> > > > on a quick examination I'm not sure what that is.
> > > >
> > > > Does someone mind taking another look?
> > > >
> > > >
> > > > On Thu, Aug 28, 2014 at 4:21 PM, Karthik Kambatla <
> ka...@cloudera.com>
> > > > wrote:
> > > >
> > > >> Thanks Giri and Ted for fixing the builds.
> > > >>
> > > >>
> > > >>> On Thu, Aug 28, 2014 at 9:49 AM, Ted Yu 
> wrote:
> > > >>>
> > > >>> Charles:
> > > >>> QA build is running for your JIRA:
> > > >>>
> https://builds.apache.org/job/PreCommit-hdfs-Build/7828/parameters/
> > > >>>
> > > >>> Cheers
> > > >>>
> > > >>>
> > > >>>> On Thu, Aug 28, 2014 at 9:41 AM, Charles Lamb  >
> > > >>> wrote:
> > > >>>
> > > >>>>> On 8/28/2014 12:07 PM, Giridharan Kesavan wrote:
> > > >>>>>
> > > >>>>> Fixed all the 3 pre-commit buids. test-patch's git reset --hard
> is
> > > >>>>> removing
> > > >>>>> the patchprocess dir, so moved it off the workspace.
> > > >>>> Thanks Giri. Should I resubmit HDFS-6954's patch? I've gotten 3
> or 4
> > > >>>> jenkins messages that indicated the problem so something is
> > > >> resubmitting,
> > > >>>> but now that you've fixed it, should I resubmit it again?
> > > >>>>
> > > >>>> Charles
> > > >>
> > >
> >
>
>
>
> --
> Alejandro
>


Re: Git repo ready to use

2014-08-28 Thread Ted Yu
Currently patchprocess/ (contents shown below) is one level higher than
${WORKSPACE}

diffJavadocWarnings.txt newPatchFindbugsWarningshadoop-hdfs.html
 patchFindBugsOutputhadoop-hdfs.txtpatchReleaseAuditOutput.txt
 trunkJavadocWarnings.txt
filteredPatchJavacWarnings.txt  newPatchFindbugsWarningshadoop-hdfs.xml
patchFindbugsWarningshadoop-hdfs.xml  patchReleaseAuditWarnings.txt
filteredTrunkJavacWarnings.txt  patch
patchJavacWarnings.txttestrun_hadoop-hdfs.txt
jirapatchEclipseOutput.txt
 patchJavadocWarnings.txt  trunkJavacWarnings.txt

Under Files to archive input box of PreCommit-HDFS-Build/configure, I saw:

‘../patchprocess/*’ doesn’t match anything, but ‘*’ does. Perhaps that’s
what you mean?

I guess once patchprocess is moved back under ${WORKSPACE}, a lot of things
would be back to normal.

Cheers

On Thu, Aug 28, 2014 at 9:16 PM, Alejandro Abdelnur 
wrote:

> i'm also seeing broken links for javadocs warnings.
>
> Alejandro
> (phone typing)
>
> > On Aug 28, 2014, at 20:00, Andrew Wang  wrote:
> >
> > I noticed that the JUnit test results aren't getting picked up anymore. I
> > suspect we just need to update the path to the surefire output, but based
> > on a quick examination I'm not sure what that is.
> >
> > Does someone mind taking another look?
> >
> >
> > On Thu, Aug 28, 2014 at 4:21 PM, Karthik Kambatla 
> > wrote:
> >
> >> Thanks Giri and Ted for fixing the builds.
> >>
> >>
> >>> On Thu, Aug 28, 2014 at 9:49 AM, Ted Yu  wrote:
> >>>
> >>> Charles:
> >>> QA build is running for your JIRA:
> >>> https://builds.apache.org/job/PreCommit-hdfs-Build/7828/parameters/
> >>>
> >>> Cheers
> >>>
> >>>
> >>>> On Thu, Aug 28, 2014 at 9:41 AM, Charles Lamb 
> >>> wrote:
> >>>
> >>>>> On 8/28/2014 12:07 PM, Giridharan Kesavan wrote:
> >>>>>
> >>>>> Fixed all the 3 pre-commit buids. test-patch's git reset --hard is
> >>>>> removing
> >>>>> the patchprocess dir, so moved it off the workspace.
> >>>> Thanks Giri. Should I resubmit HDFS-6954's patch? I've gotten 3 or 4
> >>>> jenkins messages that indicated the problem so something is
> >> resubmitting,
> >>>> but now that you've fixed it, should I resubmit it again?
> >>>>
> >>>> Charles
> >>
>


Re: Git repo ready to use

2014-08-28 Thread Ted Yu
Charles:
QA build is running for your JIRA:
https://builds.apache.org/job/PreCommit-hdfs-Build/7828/parameters/

Cheers


On Thu, Aug 28, 2014 at 9:41 AM, Charles Lamb  wrote:

> On 8/28/2014 12:07 PM, Giridharan Kesavan wrote:
>
>> Fixed all the 3 pre-commit buids. test-patch's git reset --hard is
>> removing
>> the patchprocess dir, so moved it off the workspace.
>>
>>
> Thanks Giri. Should I resubmit HDFS-6954's patch? I've gotten 3 or 4
> jenkins messages that indicated the problem so something is resubmitting,
> but now that you've fixed it, should I resubmit it again?
>
> Charles
>
>


Re: Git repo ready to use

2014-08-28 Thread Ted Yu
Thanks, Giri

I have switched the following builds to using git:

https://builds.apache.org/job/Hadoop-Yarn-trunk/

https://builds.apache.org/job/Hadoop-hdfs-trunk/

https://builds.apache.org/job/Hadoop-mapreduce-trunk/


Cheers



On Thu, Aug 28, 2014 at 9:07 AM, Giridharan Kesavan <
gkesa...@hortonworks.com> wrote:

> Fixed all the 3 pre-commit buids. test-patch's git reset --hard is removing
> the patchprocess dir, so moved it off the workspace.
>
>
>
> -giri
>
>
> On Thu, Aug 28, 2014 at 8:48 AM, Giridharan Kesavan <
> gkesa...@hortonworks.com> wrote:
>
> > I'm looking into it.
> >
> > -giri
> >
> >
> > On Thu, Aug 28, 2014 at 3:18 AM, Ted Yu  wrote:
> >
> >> I spent some time on PreCommit-hdfs-Build.
> >> Looks like the following command was not effective:
> >>
> >> mkdir -p ${WORKSPACE}/patchprocess
> >>
> >> In build output, I saw:
> >>
> >>
> >>
> /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/jira:
> >> No such file or directory
> >>
> >>
> >> I will work with Giri in the morning.
> >>
> >>
> >> Cheers
> >>
> >>
> >>
> >> On Thu, Aug 28, 2014 at 2:04 AM, Ted Yu  wrote:
> >>
> >> > build #7808 failed due to QA bot trying to apply the following as
> patch:
> >> >
> >> >
> >>
> http://issues.apache.org/jira/secure/attachment/12552318/dfsio-x86-trunk-vs-3529.png
> >> >
> >> >
> >> > FYI
> >> >
> >> >
> >> >
> >> > On Thu, Aug 28, 2014 at 1:52 AM, Ted Yu  wrote:
> >> >
> >> >> I modified config for the following builds:
> >> >>
> >> >> https://builds.apache.org/job/PreCommit-HDFS-Build/ : build #7808
> >> >> would be checking out trunk using git.
> >> >>
> >> >> https://builds.apache.org/job/PreCommit-yarn-Build/
> >> >> https://builds.apache.org/job/PreCommit-mapreduce-Build/
> >> >>
> >> >> Should I modify the other Jenkins jobs e.g.:
> >> >>
> >> >> https://builds.apache.org/job/Hadoop-Yarn-trunk/
> >> >>
> >> >> Cheers
> >> >>
> >> >>
> >> >> On Wed, Aug 27, 2014 at 11:25 PM, Karthik Kambatla <
> ka...@cloudera.com
> >> >
> >> >> wrote:
> >> >>
> >> >>> We just got HADOOP-11001 in. If you have access, can you please try
> >> >>> modifying the Jenkins jobs taking the patch on HADOOP-11001 into
> >> >>> consideration.
> >> >>>
> >> >>>
> >> >>>
> >> >>> On Wed, Aug 27, 2014 at 4:38 PM, Ted Yu 
> wrote:
> >> >>>
> >> >>> > I have access.
> >> >>> >
> >> >>> > I can switch the repository if you think it is time to do so.
> >> >>> >
> >> >>> >
> >> >>> > On Wed, Aug 27, 2014 at 4:35 PM, Karthik Kambatla <
> >> ka...@cloudera.com>
> >> >>> > wrote:
> >> >>> >
> >> >>> > > Thanks for reporting it, Ted. We are aware of it - second
> >> follow-up
> >> >>> item
> >> >>> > in
> >> >>> > > my earlier email.
> >> >>> > >
> >> >>> > > Unfortunately, I don't have access to the builds to fix them and
> >> >>> don't
> >> >>> > > quite know the procedure to get access either. I am waiting for
> >> >>> someone
> >> >>> > > with access to help us out.
> >> >>> > >
> >> >>> > >
> >> >>> > > On Wed, Aug 27, 2014 at 3:45 PM, Ted Yu 
> >> wrote:
> >> >>> > >
> >> >>> > > > Precommit builds are still using svn :
> >> >>> > > >
> >> >>> > > > https://builds.apache.org/job/PreCommit-HDFS-Build/configure
> >> >>> > > > https://builds.apache.org/job/PreCommit-YARN-Build/configure
> >> >>> > > >
> >> >>> > > > FYI
> >> >>> > > >
> >> >>> > > >
> >> >>> > > > On Wed, Aug 

Re: Git repo ready to use

2014-08-28 Thread Ted Yu
I spent some time on PreCommit-hdfs-Build.
Looks like the following command was not effective:

mkdir -p ${WORKSPACE}/patchprocess

In build output, I saw:

/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build/patchprocess/jira:
No such file or directory


I will work with Giri in the morning.


Cheers



On Thu, Aug 28, 2014 at 2:04 AM, Ted Yu  wrote:

> build #7808 failed due to QA bot trying to apply the following as patch:
>
> http://issues.apache.org/jira/secure/attachment/12552318/dfsio-x86-trunk-vs-3529.png
>
>
> FYI
>
>
>
> On Thu, Aug 28, 2014 at 1:52 AM, Ted Yu  wrote:
>
>> I modified config for the following builds:
>>
>> https://builds.apache.org/job/PreCommit-HDFS-Build/ : build #7808 would
>> be checking out trunk using git.
>>
>> https://builds.apache.org/job/PreCommit-yarn-Build/
>> https://builds.apache.org/job/PreCommit-mapreduce-Build/
>>
>> Should I modify the other Jenkins jobs e.g.:
>>
>> https://builds.apache.org/job/Hadoop-Yarn-trunk/
>>
>> Cheers
>>
>>
>> On Wed, Aug 27, 2014 at 11:25 PM, Karthik Kambatla 
>> wrote:
>>
>>> We just got HADOOP-11001 in. If you have access, can you please try
>>> modifying the Jenkins jobs taking the patch on HADOOP-11001 into
>>> consideration.
>>>
>>>
>>>
>>> On Wed, Aug 27, 2014 at 4:38 PM, Ted Yu  wrote:
>>>
>>> > I have access.
>>> >
>>> > I can switch the repository if you think it is time to do so.
>>> >
>>> >
>>> > On Wed, Aug 27, 2014 at 4:35 PM, Karthik Kambatla 
>>> > wrote:
>>> >
>>> > > Thanks for reporting it, Ted. We are aware of it - second follow-up
>>> item
>>> > in
>>> > > my earlier email.
>>> > >
>>> > > Unfortunately, I don't have access to the builds to fix them and
>>> don't
>>> > > quite know the procedure to get access either. I am waiting for
>>> someone
>>> > > with access to help us out.
>>> > >
>>> > >
>>> > > On Wed, Aug 27, 2014 at 3:45 PM, Ted Yu  wrote:
>>> > >
>>> > > > Precommit builds are still using svn :
>>> > > >
>>> > > > https://builds.apache.org/job/PreCommit-HDFS-Build/configure
>>> > > > https://builds.apache.org/job/PreCommit-YARN-Build/configure
>>> > > >
>>> > > > FYI
>>> > > >
>>> > > >
>>> > > > On Wed, Aug 27, 2014 at 7:00 AM, Ted Yu 
>>> wrote:
>>> > > >
>>> > > > > Currently Jenkins builds still use subversion as source.
>>> > > > >
>>> > > > > Should Jenkins point to git ?
>>> > > > >
>>> > > > > Cheers
>>> > > > >
>>> > > > >
>>> > > > > On Wed, Aug 27, 2014 at 1:40 AM, Karthik Kambatla <
>>> > ka...@cloudera.com>
>>> > > > > wrote:
>>> > > > >
>>> > > > >> Oh.. a couple more things.
>>> > > > >>
>>> > > > >> The git commit hashes have changed and are different from what
>>> we
>>> > had
>>> > > on
>>> > > > >> our github. This might interfere with any build automations that
>>> > folks
>>> > > > >> have.
>>> > > > >>
>>> > > > >> Another follow-up item: email and JIRA integration
>>> > > > >>
>>> > > > >>
>>> > > > >> On Wed, Aug 27, 2014 at 1:33 AM, Karthik Kambatla <
>>> > ka...@cloudera.com
>>> > > >
>>> > > > >> wrote:
>>> > > > >>
>>> > > > >> > Hi folks,
>>> > > > >> >
>>> > > > >> > I am very excited to let you know that the git repo is now
>>> > > writable. I
>>> > > > >> > committed a few changes (CHANGES.txt fixes and branching for
>>> > 2.5.1)
>>> > > > and
>>> > > > >> > everything looks good.
>>> > > > >> >
>>> > > > >> > Current status:
>>> > > > >> >
>>> > > > >> >1. All branches have the same names, including trunk.
>>> > > > >> >2. Force push is disabled on trunk, branch-2 and tags.
>>> > > > >> >3. Even if you are experienced with git, take a look at
>>> > > > >> >https://wiki.apache.org/hadoop/HowToCommitWithGit .
>>> > > Particularly,
>>> > > > >> let
>>> > > > >> >us avoid merge commits.
>>> > > > >> >
>>> > > > >> > Follow-up items:
>>> > > > >> >
>>> > > > >> >1. Update rest of the wiki documentation
>>> > > > >> >2. Update precommit Jenkins jobs and get HADOOP-11001
>>> committed
>>> > > > >> >(reviews appreciated). Until this is done, the precommit
>>> jobs
>>> > > will
>>> > > > >> run
>>> > > > >> >against our old svn repo.
>>> > > > >> >3. git mirrors etc. to use the new repo instead of the old
>>> svn
>>> > > > repo.
>>> > > > >> >
>>> > > > >> > Thanks again for your cooperation through the migration
>>> process.
>>> > > > Please
>>> > > > >> > reach out to me (or the list) if you find anything missing or
>>> have
>>> > > > >> > suggestions.
>>> > > > >> >
>>> > > > >> > Cheers!
>>> > > > >> > Karthik
>>> > > > >> >
>>> > > > >> >
>>> > > > >>
>>> > > > >
>>> > > > >
>>> > > >
>>> > >
>>> >
>>>
>>
>>
>
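
On the "avoid merge commits" item in the thread above: a hedged sketch of
one workflow that keeps trunk linear, rebasing local commits onto the
updated upstream so the push is a fast-forward.

{code}
git checkout trunk
git pull --rebase origin trunk   # replay local commits on top of upstream
git push origin trunk            # fast-forward; no merge commit created
{code}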


Re: Git repo ready to use

2014-08-28 Thread Ted Yu
build #7808 failed due to QA bot trying to apply the following as patch:

http://issues.apache.org/jira/secure/attachment/12552318/dfsio-x86-trunk-vs-3529.png
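
A hedged sketch (the file name is the attachment from the failed build) of
the kind of extension check a QA bot could apply before treating a JIRA
attachment as a patch:

{code}
ATTACHMENT="dfsio-x86-trunk-vs-3529.png"
case "$ATTACHMENT" in
  *.patch|*.diff|*.txt) patch -p0 --dry-run < "$ATTACHMENT" ;;
  *) echo "Skipping non-patch attachment: $ATTACHMENT" ;;
esac
{code}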


FYI



On Thu, Aug 28, 2014 at 1:52 AM, Ted Yu  wrote:

> I modified config for the following builds:
>
> https://builds.apache.org/job/PreCommit-HDFS-Build/ : build #7808 would
> be checking out trunk using git.
>
> https://builds.apache.org/job/PreCommit-yarn-Build/
> https://builds.apache.org/job/PreCommit-mapreduce-Build/
>
> Should I modify the other Jenkins jobs e.g.:
>
> https://builds.apache.org/job/Hadoop-Yarn-trunk/
>
> Cheers
>
>
> On Wed, Aug 27, 2014 at 11:25 PM, Karthik Kambatla 
> wrote:
>
>> We just got HADOOP-11001 in. If you have access, can you please try
>> modifying the Jenkins jobs taking the patch on HADOOP-11001 into
>> consideration.
>>
>>
>>
>> On Wed, Aug 27, 2014 at 4:38 PM, Ted Yu  wrote:
>>
>> > I have access.
>> >
>> > I can switch the repository if you think it is time to do so.
>> >
>> >
>> > On Wed, Aug 27, 2014 at 4:35 PM, Karthik Kambatla 
>> > wrote:
>> >
>> > > Thanks for reporting it, Ted. We are aware of it - second follow-up
>> item
>> > in
>> > > my earlier email.
>> > >
>> > > Unfortunately, I don't have access to the builds to fix them and don't
>> > > quite know the procedure to get access either. I am waiting for
>> someone
>> > > with access to help us out.
>> > >
>> > >
>> > > On Wed, Aug 27, 2014 at 3:45 PM, Ted Yu  wrote:
>> > >
>> > > > Precommit builds are still using svn :
>> > > >
>> > > > https://builds.apache.org/job/PreCommit-HDFS-Build/configure
>> > > > https://builds.apache.org/job/PreCommit-YARN-Build/configure
>> > > >
>> > > > FYI
>> > > >
>> > > >
>> > > > On Wed, Aug 27, 2014 at 7:00 AM, Ted Yu 
>> wrote:
>> > > >
>> > > > > Currently Jenkins builds still use subversion as source.
>> > > > >
>> > > > > Should Jenkins point to git ?
>> > > > >
>> > > > > Cheers
>> > > > >
>> > > > >
>> > > > > On Wed, Aug 27, 2014 at 1:40 AM, Karthik Kambatla <
>> > ka...@cloudera.com>
>> > > > > wrote:
>> > > > >
>> > > > >> Oh.. a couple more things.
>> > > > >>
>> > > > >> The git commit hashes have changed and are different from what we
>> > had
>> > > on
>> > > > >> our github. This might interfere with any build automations that
>> > folks
>> > > > >> have.
>> > > > >>
>> > > > >> Another follow-up item: email and JIRA integration
>> > > > >>
>> > > > >>
>> > > > >> On Wed, Aug 27, 2014 at 1:33 AM, Karthik Kambatla <
>> > ka...@cloudera.com
>> > > >
>> > > > >> wrote:
>> > > > >>
>> > > > >> > Hi folks,
>> > > > >> >
>> > > > >> > I am very excited to let you know that the git repo is now
>> > > writable. I
>> > > > >> > committed a few changes (CHANGES.txt fixes and branching for
>> > 2.5.1)
>> > > > and
>> > > > >> > everything looks good.
>> > > > >> >
>> > > > >> > Current status:
>> > > > >> >
>> > > > >> >1. All branches have the same names, including trunk.
>> > > > >> >2. Force push is disabled on trunk, branch-2 and tags.
>> > > > >> >3. Even if you are experienced with git, take a look at
>> > > > >> >https://wiki.apache.org/hadoop/HowToCommitWithGit .
>> > > Particularly,
>> > > > >> let
>> > > > >> >us avoid merge commits.
>> > > > >> >
>> > > > >> > Follow-up items:
>> > > > >> >
>> > > > >> >1. Update rest of the wiki documentation
>> > > > >> >2. Update precommit Jenkins jobs and get HADOOP-11001
>> committed
>> > > > >> >(reviews appreciated). Until this is done, the precommit
>> jobs
>> > > will
>> > > > >> run
>> > > > >> >against our old svn repo.
>> > > > >> >3. git mirrors etc. to use the new repo instead of the old
>> svn
>> > > > repo.
>> > > > >> >
>> > > > >> > Thanks again for your cooperation through the migration
>> process.
>> > > > Please
>> > > > >> > reach out to me (or the list) if you find anything missing or
>> have
>> > > > >> > suggestions.
>> > > > >> >
>> > > > >> > Cheers!
>> > > > >> > Karthik
>> > > > >> >
>> > > > >> >
>> > > > >>
>> > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>
>

