Add Guava common packages/classes to illegal imports

2020-07-06 Thread Ahmed Hussein
Hi folks,

I believe we need to prevent future commits from re-introducing Guava classes.
This can be achieved in one of two ways:

   1. Add rules as we go: each time a class is migrated, we add an illegal
   import for it. This won't prevent developers from committing code that
   uses Guava classes that have not yet been migrated, and it makes the task
   somewhat of a moving target. Another disadvantage of that approach is an
   increased rate of merge conflicts, because each patch has to append to the
   illegal-imports rules.
   2. Add rules that flag the Guava packages themselves. This will prevent
   introducing any new Guava imports.
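
For option 2, the rule could be expressed with checkstyle's IllegalImport
module. A sketch of what the entry in the checkstyle configuration might look
like (the exact file location and property values would need to match our
build setup):

```xml
<!-- Reject any import from Guava's packages. -->
<module name="IllegalImport">
  <!-- Treat illegalPkgs entries as regular expressions
       (regexp support requires checkstyle 7.8+). -->
  <property name="regexp" value="true"/>
  <property name="illegalPkgs" value="^com\.google\.common.*"/>
</module>
```

As noted below, this only catches import statements; fully qualified
references to Guava classes would still slip through.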


Neither option is bullet-proof against future usage of Guava throughout the
code, because the checkstyle illegal-import rule only catches imports, not
fully qualified API usage.

Does anyone have concerns adding those rules to checkstyle configurations?
And which option should we go for?
--
Best Regards,

*Ahmed Hussein, PhD*


[jira] [Reopened] (HADOOP-17102) Add checkstyle rule to prevent further usage of Guava classes

2020-07-01 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein reopened HADOOP-17102:


Let's see if we can add a checkstyle rule so that no one imports any further Guava
base classes.

> Add checkstyle rule to prevent further usage of Guava classes
> -
>
> Key: HADOOP-17102
> URL: https://issues.apache.org/jira/browse/HADOOP-17102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, precommit
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> We should have precommit rules to prevent further usage of Guava classes that
> are available in Java 8+.
> A list of Guava APIs and their Java 8 replacements:
> {code:java}
> com.google.common.io.BaseEncoding#base64()      java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
> com.google.common.base.Joiner.on()              java.lang.String#join() or java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()            java.util.Optional#of()
> com.google.common.base.Optional#absent()        java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
> com.google.common.base.Optional                 java.util.Optional
> com.google.common.base.Predicate                java.util.function.Predicate
> com.google.common.base.Function                 java.util.function.Function
> com.google.common.base.Supplier                 java.util.function.Supplier
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17111) Replace Guava Optional with Java8+ Optional

2020-07-01 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17111:
--

 Summary: Replace Guava Optional with Java8+ Optional
 Key: HADOOP-17111
 URL: https://issues.apache.org/jira/browse/HADOOP-17111
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein



{code:java}
Targets
Occurrences of 'com.google.common.base.Optional' in project with mask 
'*.java'
Found Occurrences  (3 usages found)
org.apache.hadoop.yarn.server.nodemanager  (2 usages found)
DefaultContainerExecutor.java  (1 usage found)
71 import com.google.common.base.Optional;
LinuxContainerExecutor.java  (1 usage found)
22 import com.google.common.base.Optional;
org.apache.hadoop.yarn.server.resourcemanager.recovery  (1 usage found)
TestZKRMStateStorePerf.java  (1 usage found)
21 import com.google.common.base.Optional;

{code}
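
All three usages above are import swaps. A minimal JDK-only sketch of the API
mapping (the Guava calls being removed are shown in comments):

```java
import java.util.Optional;

public class OptionalMigration {
    public static void main(String[] args) {
        // Guava: Optional.of("x")          -> JDK: Optional.of("x")
        Optional<String> present = Optional.of("x");
        // Guava: Optional.absent()         -> JDK: Optional.empty()
        Optional<String> absent = Optional.empty();
        // Guava: Optional.fromNullable(v)  -> JDK: Optional.ofNullable(v)
        Optional<String> maybe = Optional.ofNullable(null);

        System.out.println(present.get());            // x
        System.out.println(absent.isPresent());       // false
        // Guava's or(default) becomes orElse(default)
        System.out.println(maybe.orElse("fallback")); // fallback
    }
}
```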







[jira] [Created] (HADOOP-17110) Replace Guava Preconditions to avoid Guava dependency

2020-07-01 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17110:
--

 Summary: Replace Guava Preconditions to avoid Guava dependency
 Key: HADOOP-17110
 URL: https://issues.apache.org/jira/browse/HADOOP-17110
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein


By far one of the most painful replacements in Hadoop. There are two options:
# Using Apache Commons.
# Using a Java wrapper with no dependency on third-party libraries.
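
Option 2 could be a small JDK-only shim so call sites keep the same shape. A
hypothetical sketch (class name and placement are illustrative, not a
committed API; {{java.util.Objects#requireNonNull}} already covers the null
checks if we prefer the JDK directly):

```java
// Hypothetical JDK-only stand-in for com.google.common.base.Preconditions;
// keeps the same method names so the migration is an import swap.
public final class Preconditions {
    private Preconditions() {}

    public static void checkArgument(boolean expression, Object errorMessage) {
        if (!expression) {
            throw new IllegalArgumentException(String.valueOf(errorMessage));
        }
    }

    public static <T> T checkNotNull(T reference, Object errorMessage) {
        if (reference == null) {
            throw new NullPointerException(String.valueOf(errorMessage));
        }
        return reference;
    }

    public static void checkState(boolean expression, Object errorMessage) {
        if (!expression) {
            throw new IllegalStateException(String.valueOf(errorMessage));
        }
    }
}
```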

{code:java}
Targets
Occurrences of 'com.google.common.base.Preconditions' in project with mask 
'*.java'
Found Occurrences  (577 usages found)
org.apache.hadoop.conf  (2 usages found)
Configuration.java  (1 usage found)
108 import com.google.common.base.Preconditions;
ReconfigurableBase.java  (1 usage found)
22 import com.google.common.base.Preconditions;
org.apache.hadoop.crypto  (7 usages found)
AesCtrCryptoCodec.java  (1 usage found)
23 import com.google.common.base.Preconditions;
CryptoInputStream.java  (1 usage found)
33 import com.google.common.base.Preconditions;
CryptoOutputStream.java  (1 usage found)
32 import com.google.common.base.Preconditions;
CryptoStreamUtils.java  (1 usage found)
32 import com.google.common.base.Preconditions;
JceAesCtrCryptoCodec.java  (1 usage found)
32 import com.google.common.base.Preconditions;
OpensslAesCtrCryptoCodec.java  (1 usage found)
32 import com.google.common.base.Preconditions;
OpensslCipher.java  (1 usage found)
32 import com.google.common.base.Preconditions;
org.apache.hadoop.crypto.key  (2 usages found)
JavaKeyStoreProvider.java  (1 usage found)
21 import com.google.common.base.Preconditions;
KeyProviderCryptoExtension.java  (1 usage found)
32 import com.google.common.base.Preconditions;
org.apache.hadoop.crypto.key.kms  (3 usages found)
KMSClientProvider.java  (1 usage found)
83 import com.google.common.base.Preconditions;
LoadBalancingKMSClientProvider.java  (1 usage found)
54 import com.google.common.base.Preconditions;
ValueQueue.java  (1 usage found)
36 import com.google.common.base.Preconditions;
org.apache.hadoop.crypto.key.kms.server  (5 usages found)
KeyAuthorizationKeyProvider.java  (1 usage found)
35 import com.google.common.base.Preconditions;
KMS.java  (1 usage found)
20 import com.google.common.base.Preconditions;
KMSAudit.java  (1 usage found)
24 import com.google.common.base.Preconditions;
KMSWebApp.java  (1 usage found)
29 import com.google.common.base.Preconditions;
MiniKMS.java  (1 usage found)
29 import com.google.common.base.Preconditions;
org.apache.hadoop.crypto.random  (1 usage found)
OpensslSecureRandom.java  (1 usage found)
25 import com.google.common.base.Preconditions;
org.apache.hadoop.fs  (19 usages found)
ByteBufferUtil.java  (1 usage found)
29 import com.google.common.base.Preconditions;
ChecksumFileSystem.java  (1 usage found)
32 import com.google.common.base.Preconditions;
FileContext.java  (1 usage found)
68 import com.google.common.base.Preconditions;
FileEncryptionInfo.java  (2 usages found)
27 import static com.google.common.base.Preconditions.checkArgument;
28 import static com.google.common.base.Preconditions.checkNotNull;
FileSystem.java  (2 usages found)
86 import com.google.common.base.Preconditions;
91 import static com.google.common.base.Preconditions.checkArgument;
FileSystemStorageStatistics.java  (1 usage found)
23 import com.google.common.base.Preconditions;
FSDataOutputStreamBuilder.java  (1 usage found)
31 import static com.google.common.base.Preconditions.checkNotNull;
FSInputStream.java  (1 usage found)
24 import com.google.common.base.Preconditions;
FsUrlConnection.java  (1 usage found)
27 import com.google.common.base.Preconditions;
GlobalStorageStatistics.java  (1 usage found)
26 import com.google.common.base.Preconditions;
Globber.java  (1 usage found)
35 import static com.google.common.base.Preconditions.checkNotNull;
MultipartUploader.java  (1 usage found)
31 import static com.google.common.base.Preconditions.checkArgument;
PartialListing.java  (1 usage found)
20 import com.google.common.base.Preconditions;
TestEnhancedByteBufferAccess.java  (1 usage found)
74 import com.google.common.base.Preconditions;
TestLocalFileSystem.

[jira] [Created] (HADOOP-17109) Replace Guava base64Url and base64 with Java8+ base64

2020-07-01 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17109:
--

 Summary: Replace Guava base64Url and base64 with Java8+ base64
 Key: HADOOP-17109
 URL: https://issues.apache.org/jira/browse/HADOOP-17109
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein


One important thing to note here, as pointed out by [~jeagles] in [his comment on 
the parent 
task|https://issues.apache.org/jira/browse/HADOOP-17098?focusedCommentId=17147935&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel]:

{quote}One note to be careful about is that base64 translation is not a 
standard, so the two implementations could produce different results. This 
might matter in the case of serialization, persistence, or client server 
different versions.{quote}
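
The caveat is concrete here: commons-codec's {{encodeBase64URLSafeString()}}
(which is what the occurrences below use) emits no trailing padding, while the
JDK's URL-safe encoder pads by default, so a drop-in swap should use
{{withoutPadding()}}. A JDK-only sketch of the difference:

```java
import java.util.Base64;

public class UrlSafeBase64 {
    public static void main(String[] args) {
        byte[] iv = {(byte) 0xfb, (byte) 0xff};

        // The JDK URL-safe encoder pads by default:
        String padded = Base64.getUrlEncoder().encodeToString(iv);
        // commons-codec's Base64.encodeBase64URLSafeString(iv) emits no
        // padding; the JDK equivalent needs withoutPadding():
        String unpadded = Base64.getUrlEncoder().withoutPadding().encodeToString(iv);

        System.out.println(padded);   // -_8=
        System.out.println(unpadded); // -_8
    }
}
```

If any of these strings are persisted or exchanged with older clients, the
padded and unpadded forms are not byte-identical, which is exactly the
serialization concern quoted above.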


*Base64Url:*

{code:java}
Targets
Occurrences of 'base64Url' in project with mask '*.java'
Found Occurrences  (6 usages found)
org.apache.hadoop.mapreduce  (3 usages found)
CryptoUtils.java  (3 usages found)
wrapIfNecessary(Configuration, FSDataOutputStream, boolean)  (1 
usage found)
138 + Base64.encodeBase64URLSafeString(iv) + "]");
wrapIfNecessary(Configuration, InputStream, long)  (1 usage found)
183 + Base64.encodeBase64URLSafeString(iv) + "]");
wrapIfNecessary(Configuration, FSDataInputStream)  (1 usage found)
218 + Base64.encodeBase64URLSafeString(iv) + "]");
org.apache.hadoop.util  (2 usages found)
KMSUtil.java  (2 usages found)
toJSON(KeyVersion)  (1 usage found)
104 Base64.encodeBase64URLSafeString(
toJSON(EncryptedKeyVersion)  (1 usage found)
117 
.encodeBase64URLSafeString(encryptedKeyVersion.getEncryptedKeyIv()));
org.apache.hadoop.yarn.server.resourcemanager.webapp  (1 usage found)
TestRMWebServicesAppsModification.java  (1 usage found)
testAppSubmit(String, String)  (1 usage found)
837 .put("test", 
Base64.encodeBase64URLSafeString("value12".getBytes("UTF8")));

{code}

*Base64:*

{code:java}
Targets
Occurrences of 'base64;' in project with mask '*.java'
Found Occurrences  (51 usages found)
org.apache.hadoop.crypto.key.kms  (1 usage found)
KMSClientProvider.java  (1 usage found)
20 import org.apache.commons.codec.binary.Base64;
org.apache.hadoop.crypto.key.kms.server  (1 usage found)
KMS.java  (1 usage found)
22 import org.apache.commons.codec.binary.Base64;
org.apache.hadoop.fs  (2 usages found)
XAttrCodec.java  (2 usages found)
23 import org.apache.commons.codec.binary.Base64;
56 BASE64;
org.apache.hadoop.fs.azure  (3 usages found)
AzureBlobStorageTestAccount.java  (1 usage found)
23 import com.microsoft.azure.storage.core.Base64;
BlockBlobAppendStream.java  (1 usage found)
50 import org.apache.commons.codec.binary.Base64;
ITestBlobDataValidation.java  (1 usage found)
50 import com.microsoft.azure.storage.core.Base64;
org.apache.hadoop.fs.azurebfs  (2 usages found)
AzureBlobFileSystemStore.java  (1 usage found)
99 import org.apache.hadoop.fs.azurebfs.utils.Base64;
TestAbfsConfigurationFieldsValidation.java  (1 usage found)
34 import org.apache.hadoop.fs.azurebfs.utils.Base64;
org.apache.hadoop.fs.azurebfs.diagnostics  (2 usages found)
Base64StringConfigurationBasicValidator.java  (1 usage found)
26 import org.apache.hadoop.fs.azurebfs.utils.Base64;
TestConfigurationValidators.java  (1 usage found)
25 import org.apache.hadoop.fs.azurebfs.utils.Base64;
org.apache.hadoop.fs.azurebfs.extensions  (2 usages found)
MockDelegationSASTokenProvider.java  (1 usage found)
37 import org.apache.hadoop.fs.azurebfs.utils.Base64;
MockSASTokenProvider.java  (1 usage found)
27 import org.apache.hadoop.fs.azurebfs.utils.Base64;
org.apache.hadoop.fs.azurebfs.services  (1 usage found)
SharedKeyCredentials.java  (1 usage found)
47 import org.apache.hadoop.fs.azurebfs.utils.Base64;
org.apache.hadoop.fs.cosn  (1 usage found)
CosNativeFileSystemStore.java  (1 usage found)
61 import com.qcloud.cos.utils.Base64;
org.apache.hadoop.fs.s3a  (1 usage found)
EncryptionTestUtils.java  (1 usage found)
26 import org.apache.commons.net.util.Base64;
org.apache.hadoop.hdfs.protocol.datatransfer.sasl  (3 usages found)
DataTransferSaslUtil.java  (1 usage found)
39 import org.apache.commons.codec.binary.Base64;
SaslDataTransferClient.java  (1 usage found)
 

[jira] [Created] (HADOOP-17108) Create Classes to wrap Guava code replacement

2020-07-01 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17108:
--

 Summary: Create Classes to wrap Guava code replacement
 Key: HADOOP-17108
 URL: https://issues.apache.org/jira/browse/HADOOP-17108
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein


Some usages of Guava APIs in Hadoop have no one-line replacement in Java 8+. We 
need to create classes wrapping those common functionalities instead of 
reinventing the wheel everywhere.
For example, we could add a new package {{org.apache.hadoop.util.collections}}.
Then we create classes like {{MultiMap}}, implemented either from scratch or on 
top of the Apache Commons Collections 4.4 API.
The pro of using a wrapper is that it avoids adding more dependencies to the POM 
if we vote to use a third-party jar.
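
The wrapper idea can be sketched with JDK collections only. A hypothetical
{{MultiMap}} (class name and package placement are illustrative, per the
proposal above, not a committed API):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a MultiMap wrapper: a thin JDK-only layer over
// Map<K, List<V>> rather than a from-scratch implementation.
public class MultiMap<K, V> {
    private final Map<K, List<V>> map = new HashMap<>();

    public void put(K key, V value) {
        map.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
    }

    /** Returns the values for the key, or an empty list (never null). */
    public List<V> get(K key) {
        return map.getOrDefault(key, Collections.emptyList());
    }

    /** Removes one occurrence of value under key; returns true if removed. */
    public boolean remove(K key, V value) {
        List<V> values = map.get(key);
        return values != null && values.remove(value);
    }

    /** Total number of values across all keys. */
    public int size() {
        return map.values().stream().mapToInt(List::size).sum();
    }
}
```

If we later vote for a third-party jar, the call sites stay unchanged and only
this wrapper's internals switch to, e.g., Commons Collections' MultiValuedMap.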






[jira] [Created] (HADOOP-17106) Replace Guava Joiner with Java8 String Join

2020-06-30 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17106:
--

 Summary: Replace Guava Joiner with Java8 String Join
 Key: HADOOP-17106
 URL: https://issues.apache.org/jira/browse/HADOOP-17106
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


Replace {{com.google.common.base.Joiner}} with {{String.join()}}.
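
A JDK-only sketch of the swap (the Guava form is shown in comments; note that
Joiner's {{skipNulls()}}/{{useForNull()}} variants have no one-line JDK
equivalent and need a stream filter or mapping instead):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class JoinerMigration {
    public static void main(String[] args) {
        List<String> hosts = Arrays.asList("nn1", "nn2", "nn3");

        // Guava: Joiner.on(",").join(hosts)
        String joined = String.join(",", hosts);

        // When elements need filtering or mapping first,
        // Collectors.joining is the natural fit:
        String filtered = hosts.stream()
            .filter(h -> !h.equals("nn2"))
            .collect(Collectors.joining(","));

        System.out.println(joined);   // nn1,nn2,nn3
        System.out.println(filtered); // nn1,nn3
    }
}
```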

 
{code:java}
Targets
Occurrences of 'com.google.common.base.Joiner' in project with mask '*.java'
Found Occurrences  (103 usages found)
org.apache.hadoop.crypto.key.kms.server  (1 usage found)
SimpleKMSAuditLogger.java  (1 usage found)
26 import com.google.common.base.Joiner;
org.apache.hadoop.fs  (1 usage found)
TestPath.java  (1 usage found)
37 import com.google.common.base.Joiner;
org.apache.hadoop.fs.s3a  (1 usage found)
StorageStatisticsTracker.java  (1 usage found)
25 import com.google.common.base.Joiner;
org.apache.hadoop.ha  (1 usage found)
TestHAAdmin.java  (1 usage found)
34 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs  (8 usages found)
DFSClient.java  (1 usage found)
196 import com.google.common.base.Joiner;
DFSTestUtil.java  (1 usage found)
76 import com.google.common.base.Joiner;
DFSUtil.java  (1 usage found)
108 import com.google.common.base.Joiner;
DFSUtilClient.java  (1 usage found)
20 import com.google.common.base.Joiner;
HAUtil.java  (1 usage found)
59 import com.google.common.base.Joiner;
MiniDFSCluster.java  (1 usage found)
145 import com.google.common.base.Joiner;
StripedFileTestUtil.java  (1 usage found)
20 import com.google.common.base.Joiner;
TestDFSUpgrade.java  (1 usage found)
53 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.protocol  (1 usage found)
LayoutFlags.java  (1 usage found)
26 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.protocolPB  (1 usage found)
TestPBHelper.java  (1 usage found)
118 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.qjournal  (1 usage found)
MiniJournalCluster.java  (1 usage found)
43 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.qjournal.client  (5 usages found)
AsyncLoggerSet.java  (1 usage found)
38 import com.google.common.base.Joiner;
QuorumCall.java  (1 usage found)
32 import com.google.common.base.Joiner;
QuorumException.java  (1 usage found)
25 import com.google.common.base.Joiner;
QuorumJournalManager.java  (1 usage found)
62 import com.google.common.base.Joiner;
TestQuorumCall.java  (1 usage found)
29 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.blockmanagement  (4 usages found)
HostSet.java  (1 usage found)
21 import com.google.common.base.Joiner;
TestBlockManager.java  (1 usage found)
20 import com.google.common.base.Joiner;
TestBlockReportRateLimiting.java  (1 usage found)
24 import com.google.common.base.Joiner;
TestPendingDataNodeMessages.java  (1 usage found)
41 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.common  (1 usage found)
StorageInfo.java  (1 usage found)
37 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.datanode  (7 usages found)
BlockPoolManager.java  (1 usage found)
32 import com.google.common.base.Joiner;
BlockRecoveryWorker.java  (1 usage found)
21 import com.google.common.base.Joiner;
BPServiceActor.java  (1 usage found)
75 import com.google.common.base.Joiner;
DataNode.java  (1 usage found)
226 import com.google.common.base.Joiner;
ShortCircuitRegistry.java  (1 usage found)
49 import com.google.common.base.Joiner;
TestDataNodeHotSwapVolumes.java  (1 usage found)
21 import com.google.common.base.Joiner;
TestRefreshNamenodes.java  (1 usage found)
35 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl  (1 usage found)
FsVolumeImpl.java  (1 usage found)
90 import com.google.common.base.Joiner;
org.apache.hadoop.hdfs.server.namenode  (13 usages found)
FileJournalManager.java  (1 usage found)
49 import com.google.common.base.Joiner;
FSDirectory.java  (1 usage found)
24 import com.google.common.base.Joiner;
FSEditLogLoader.java  (1 usage found)
120 import com.google.common.base.Joiner;
FSEditLogOp.java  (1 usage found)
141 import com.google.common.base.Joiner;
FSImage.java  (1 usage found)
78 import com.google.common.base.Joiner;
FSImageTestUtil.java  (1 usage found)
66 import com.google.common.base.Joiner;
NameNode.java  (1 usage found)
21 import com.google.common.base.Joiner;
TestAuditLogAtDebug.java  (1 usage found)
21 import com.google.common.base.Joiner;
TestCheckpoint.java  (1 usage found)
97 import com.google.common.base.Joiner;
TestFileJournalManager.java  (1 usage found)
52 import com.google.common.base.Joiner;
TestNNStorageRetentionFunctional.java  (1 usage found)
39 import com.google.common.base.Joiner;
TestNNStorageRetentionManager.java  (1 usage found)
53 import com.google.common.base.Joiner;
TestProtectedDirectories.java  (1 usage found)
21 import com.googl

[jira] [Created] (HADOOP-17104) Replace Guava Supplier with Java8+ Supplier in hdfs

2020-06-30 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17104:
--

 Summary: Replace Guava Supplier with Java8+ Supplier in hdfs
 Key: HADOOP-17104
 URL: https://issues.apache.org/jira/browse/HADOOP-17104
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


The usages of the Guava Supplier are in unit tests calling 
{{GenericTestUtils.waitFor()}} in the hadoop-hdfs-project subdirectory.
{code:java}
Targets
Occurrences of 'com.google.common.base.Supplier' in directory 
hadoop-hdfs-project with mask '*.java'
Found Occurrences  (99 usages found)
org.apache.hadoop.fs  (1 usage found)
TestEnhancedByteBufferAccess.java  (1 usage found)
75 import com.google.common.base.Supplier;
org.apache.hadoop.fs.viewfs  (1 usage found)
TestViewFileSystemWithTruncate.java  (1 usage found)
23 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs  (20 usages found)
DFSTestUtil.java  (1 usage found)
79 import com.google.common.base.Supplier;
MiniDFSCluster.java  (1 usage found)
78 import com.google.common.base.Supplier;
TestBalancerBandwidth.java  (1 usage found)
29 import com.google.common.base.Supplier;
TestClientProtocolForPipelineRecovery.java  (1 usage found)
30 import com.google.common.base.Supplier;
TestDatanodeRegistration.java  (1 usage found)
44 import com.google.common.base.Supplier;
TestDataTransferKeepalive.java  (1 usage found)
47 import com.google.common.base.Supplier;
TestDeadNodeDetection.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestDecommission.java  (1 usage found)
41 import com.google.common.base.Supplier;
TestDFSShell.java  (1 usage found)
37 import com.google.common.base.Supplier;
TestEncryptedTransfer.java  (1 usage found)
35 import com.google.common.base.Supplier;
TestEncryptionZonesWithKMS.java  (1 usage found)
22 import com.google.common.base.Supplier;
TestFileCorruption.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestLeaseRecovery2.java  (1 usage found)
32 import com.google.common.base.Supplier;
TestLeaseRecoveryStriped.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestMaintenanceState.java  (1 usage found)
63 import com.google.common.base.Supplier;
TestPread.java  (1 usage found)
61 import com.google.common.base.Supplier;
TestQuota.java  (1 usage found)
39 import com.google.common.base.Supplier;
TestReplaceDatanodeOnFailure.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestReplication.java  (1 usage found)
27 import com.google.common.base.Supplier;
TestSafeMode.java  (1 usage found)
62 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.client.impl  (2 usages found)
TestBlockReaderLocalMetrics.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestLeaseRenewer.java  (1 usage found)
20 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.qjournal  (1 usage found)
MiniJournalCluster.java  (1 usage found)
31 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.qjournal.client  (1 usage found)
TestIPCLoggerChannel.java  (1 usage found)
43 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.qjournal.server  (1 usage found)
TestJournalNodeSync.java  (1 usage found)
20 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.server.blockmanagement  (7 usages found)
TestBlockManagerSafeMode.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestBlockReportRateLimiting.java  (1 usage found)
25 import com.google.common.base.Supplier;
TestNameNodePrunesMissingStorages.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestPendingInvalidateBlock.java  (1 usage found)
43 import com.google.common.base.Supplier;
TestPendingReconstruction.java  (1 usage found)
34 import com.google.common.base.Supplier;
TestRBWBlockInvalidation.java  (1 usage found)
49 import com.google.common.base.Supplier;
TestSlowDiskTracker.java  (1 usage found)
48 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.server.datanode  (13 usages found)
DataNodeTestUtils.java  (1 usage found)
40 import com.google.common.base.Supplier;
TestBlockRecovery.java  (1 usage found)
  

[jira] [Created] (HADOOP-17103) Replace Guava Supplier with Java8+ Supplier in MAPREDUCE

2020-06-30 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17103:
--

 Summary: Replace Guava Supplier with Java8+ Supplier in MAPREDUCE
 Key: HADOOP-17103
 URL: https://issues.apache.org/jira/browse/HADOOP-17103
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


The usages of the Guava Supplier are in unit tests calling 
{{GenericTestUtils.waitFor()}} in the hadoop-mapreduce-project subdirectory.
{code:java}
Targets
Occurrences of 'com.google.common.base.Supplier' in directory 
hadoop-mapreduce-project with mask '*.java'
Found Occurrences  (8 usages found)
org.apache.hadoop.mapred  (2 usages found)
TestTaskAttemptListenerImpl.java  (1 usage found)
20 import com.google.common.base.Supplier;
UtilsForTests.java  (1 usage found)
64 import com.google.common.base.Supplier;
org.apache.hadoop.mapreduce.v2.app  (4 usages found)
TestFetchFailure.java  (1 usage found)
29 import com.google.common.base.Supplier;
TestMRApp.java  (1 usage found)
31 import com.google.common.base.Supplier;
TestRecovery.java  (1 usage found)
31 import com.google.common.base.Supplier;
TestTaskHeartbeatHandler.java  (1 usage found)
28 import com.google.common.base.Supplier;
org.apache.hadoop.mapreduce.v2.app.rm  (1 usage found)
TestRMContainerAllocator.java  (1 usage found)
156 import com.google.common.base.Supplier;
org.apache.hadoop.mapreduce.v2.hs  (1 usage found)
TestJHSDelegationTokenSecretManager.java  (1 usage found)
30 import com.google.common.base.Supplier;

{code}
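
Since {{java.util.function.Supplier}} is also a functional interface, these
call sites mostly need only the import swapped; lambdas keep working. A sketch
with a simplified stand-in for the waitFor helper (the real
{{GenericTestUtils.waitFor}} signature and exception behavior may differ):

```java
import java.util.function.Supplier;

public class WaitFor {
    // Simplified stand-in for GenericTestUtils.waitFor(check, intervalMs,
    // timeoutMs): polls the supplier until it returns true or time runs out.
    public static void waitFor(Supplier<Boolean> check, int intervalMs, int timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.get()) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException("Timed out waiting for condition");
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // With com.google.common.base.Supplier this lambda looked identical;
        // only the import line changes.
        waitFor(() -> System.currentTimeMillis() - start > 50, 10, 1000);
        System.out.println("condition met");
    }
}
```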






[jira] [Resolved] (HADOOP-17102) Add checkstyle rule to prevent further usage of Guava classes

2020-06-30 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein resolved HADOOP-17102.

Resolution: Abandoned

This is a moving target. It is better to merge this precommit rule with its 
relevant subtask.

> Add checkstyle rule to prevent further usage of Guava classes
> -
>
> Key: HADOOP-17102
> URL: https://issues.apache.org/jira/browse/HADOOP-17102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, precommit
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> We should have precommit rules to prevent further usage of Guava classes that
> are available in Java 8+.
> A list of Guava APIs and their Java 8 replacements:
> {code:java}
> com.google.common.io.BaseEncoding#base64()      java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
> com.google.common.base.Joiner.on()              java.lang.String#join() or java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()            java.util.Optional#of()
> com.google.common.base.Optional#absent()        java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
> com.google.common.base.Optional                 java.util.Optional
> com.google.common.base.Predicate                java.util.function.Predicate
> com.google.common.base.Function                 java.util.function.Function
> com.google.common.base.Supplier                 java.util.function.Supplier
> {code}






[jira] [Created] (HADOOP-17102) Add checkstyle rule to prevent further usage of Guava classes

2020-06-30 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17102:
--

 Summary: Add checkstyle rule to prevent further usage of Guava 
classes
 Key: HADOOP-17102
 URL: https://issues.apache.org/jira/browse/HADOOP-17102
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, precommit
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


We should have precommit rules to prevent further usage of Guava classes that 
are available in Java 8+.


A list of Guava APIs and their Java 8 replacements:
{code:java}
com.google.common.io.BaseEncoding#base64()      java.util.Base64
com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
com.google.common.base.Joiner.on()              java.lang.String#join() or java.util.stream.Collectors#joining()
com.google.common.base.Optional#of()            java.util.Optional#of()
com.google.common.base.Optional#absent()        java.util.Optional#empty()
com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
com.google.common.base.Optional                 java.util.Optional
com.google.common.base.Predicate                java.util.function.Predicate
com.google.common.base.Function                 java.util.function.Function
com.google.common.base.Supplier                 java.util.function.Supplier
{code}







[jira] [Created] (HADOOP-17101) Replace Guava Function with Java8+ Function

2020-06-29 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17101:
--

 Summary: Replace Guava Function with Java8+ Function
 Key: HADOOP-17101
 URL: https://issues.apache.org/jira/browse/HADOOP-17101
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein



{code:java}
Targets
Occurrences of 'com.google.common.base.Function' in directory 
/Users/ahussein/workspace/repos/community/guava-dependency/amahadoop-17100
Found Occurrences  (7 usages found)
hadoop-hdfs-project/hadoop-hdfs/dev-support/jdiff  (1 usage found)
Apache_Hadoop_HDFS_2.6.0.xml  (1 usage found)
13603 

[jira] [Created] (HADOOP-17100) Replace Guava Supplier with Java8+ Supplier

2020-06-29 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17100:
--

 Summary: Replace Guava Supplier with Java8+ Supplier
 Key: HADOOP-17100
 URL: https://issues.apache.org/jira/browse/HADOOP-17100
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


Usages of the Guava Supplier are in unit tests.

 
{code:java}
Targets
Occurrences of 'com.google.common.base.Supplier' in project with mask 
'*.java'
Found Occurrences  (146 usages found)
org.apache.hadoop.conf  (1 usage found)
TestReconfiguration.java  (1 usage found)
21 import com.google.common.base.Supplier;
org.apache.hadoop.crypto.key.kms.server  (1 usage found)
TestKMS.java  (1 usage found)
20 import com.google.common.base.Supplier;
org.apache.hadoop.fs  (2 usages found)
FCStatisticsBaseTest.java  (1 usage found)
40 import com.google.common.base.Supplier;
TestEnhancedByteBufferAccess.java  (1 usage found)
75 import com.google.common.base.Supplier;
org.apache.hadoop.fs.viewfs  (1 usage found)
TestViewFileSystemWithTruncate.java  (1 usage found)
23 import com.google.common.base.Supplier;
org.apache.hadoop.ha  (1 usage found)
TestZKFailoverController.java  (1 usage found)
25 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs  (20 usages found)
DFSTestUtil.java  (1 usage found)
79 import com.google.common.base.Supplier;
MiniDFSCluster.java  (1 usage found)
78 import com.google.common.base.Supplier;
TestBalancerBandwidth.java  (1 usage found)
29 import com.google.common.base.Supplier;
TestClientProtocolForPipelineRecovery.java  (1 usage found)
30 import com.google.common.base.Supplier;
TestDatanodeRegistration.java  (1 usage found)
44 import com.google.common.base.Supplier;
TestDataTransferKeepalive.java  (1 usage found)
47 import com.google.common.base.Supplier;
TestDeadNodeDetection.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestDecommission.java  (1 usage found)
41 import com.google.common.base.Supplier;
TestDFSShell.java  (1 usage found)
37 import com.google.common.base.Supplier;
TestEncryptedTransfer.java  (1 usage found)
35 import com.google.common.base.Supplier;
TestEncryptionZonesWithKMS.java  (1 usage found)
22 import com.google.common.base.Supplier;
TestFileCorruption.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestLeaseRecovery2.java  (1 usage found)
32 import com.google.common.base.Supplier;
TestLeaseRecoveryStriped.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestMaintenanceState.java  (1 usage found)
63 import com.google.common.base.Supplier;
TestPread.java  (1 usage found)
61 import com.google.common.base.Supplier;
TestQuota.java  (1 usage found)
39 import com.google.common.base.Supplier;
TestReplaceDatanodeOnFailure.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestReplication.java  (1 usage found)
27 import com.google.common.base.Supplier;
TestSafeMode.java  (1 usage found)
62 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.client.impl  (2 usages found)
TestBlockReaderLocalMetrics.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestLeaseRenewer.java  (1 usage found)
20 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.qjournal  (1 usage found)
MiniJournalCluster.java  (1 usage found)
31 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.qjournal.client  (1 usage found)
TestIPCLoggerChannel.java  (1 usage found)
43 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.qjournal.server  (1 usage found)
TestJournalNodeSync.java  (1 usage found)
20 import com.google.common.base.Supplier;
org.apache.hadoop.hdfs.server.blockmanagement  (7 usages found)
TestBlockManagerSafeMode.java  (1 usage found)
20 import com.google.common.base.Supplier;
TestBlockReportRateLimiting.java  (1 usage found)
25 import com.google.common.base.Supplier;
TestNameNodePrunesMissingStorages.java  (1 usage found)
21 import com.google.common.base.Supplier;
TestPendingInvalidateBlock.java  (1 usage found)
43 import com.google.common.base.Supplier;
TestPendingReconstruction.java  (1 usage found)
 

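The Supplier imports listed above (in Hadoop tests these typically feed wait-for-condition helpers such as GenericTestUtils#waitFor) migrate with only an import change, since both interfaces declare a single get() method. A minimal sketch, with a hypothetical condition, not taken from the Hadoop source:

```java
import java.util.function.Supplier;

public class SupplierMigration {
    public static void main(String[] args) {
        // com.google.common.base.Supplier and java.util.function.Supplier both
        // declare a single get() method, so lambda call sites are unchanged and
        // only the import line moves.
        Supplier<Boolean> clusterIsUp = () -> System.currentTimeMillis() > 0;
        System.out.println(clusterIsUp.get()); // prints true
    }
}
```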
[jira] [Created] (HADOOP-17099) Replace Guava Predicate with Java8+ Predicate

2020-06-29 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17099:
--

 Summary: Replace Guava Predicate with Java8+ Predicate
 Key: HADOOP-17099
 URL: https://issues.apache.org/jira/browse/HADOOP-17099
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


{{com.google.common.base.Predicate}} can be replaced with 
{{java.util.function.Predicate}}. 
The change, involving 9 occurrences, is straightforward:


{code:java}
Targets
Occurrences of 'com.google.common.base.Predicate' in project with mask 
'*.java'
Found Occurrences  (9 usages found)
org.apache.hadoop.hdfs.server.blockmanagement  (1 usage found)
CombinedHostFileManager.java  (1 usage found)
43 import com.google.common.base.Predicate;
org.apache.hadoop.hdfs.server.namenode  (1 usage found)
NameNodeResourceChecker.java  (1 usage found)
38 import com.google.common.base.Predicate;
org.apache.hadoop.hdfs.server.namenode.snapshot  (1 usage found)
Snapshot.java  (1 usage found)
41 import com.google.common.base.Predicate;
org.apache.hadoop.metrics2.impl  (2 usages found)
MetricsRecords.java  (1 usage found)
21 import com.google.common.base.Predicate;
TestMetricsSystemImpl.java  (1 usage found)
41 import com.google.common.base.Predicate;
org.apache.hadoop.yarn.logaggregation  (1 usage found)
AggregatedLogFormat.java  (1 usage found)
77 import com.google.common.base.Predicate;
org.apache.hadoop.yarn.logaggregation.filecontroller  (1 usage found)
LogAggregationFileController.java  (1 usage found)
22 import com.google.common.base.Predicate;
org.apache.hadoop.yarn.logaggregation.filecontroller.ifile  (1 usage found)
LogAggregationIndexedFileController.java  (1 usage found)
22 import com.google.common.base.Predicate;
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation  
(1 usage found)
AppLogAggregatorImpl.java  (1 usage found)
75 import com.google.common.base.Predicate;

{code}
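The migration above can be sketched as follows. This is a minimal illustration, not code from the Hadoop tree; note that Guava's Predicate names its method apply() while java.util.function.Predicate names it test(), so call sites that invoke the method directly need that rename even though lambda definitions are unaffected.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class PredicateMigration {
    public static void main(String[] args) {
        // Guava (before):  com.google.common.base.Predicate#apply(T)
        // Java 8 (after):  java.util.function.Predicate#test(T)
        Predicate<String> nonEmpty = s -> !s.isEmpty();

        List<String> kept = Arrays.asList("a", "", "b").stream()
                .filter(nonEmpty)            // Stream#filter takes the java.util variant
                .collect(Collectors.toList());

        System.out.println(kept);            // prints [a, b]
    }
}
```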




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17098) Reduce Guava dependency in Hadoop source code

2020-06-29 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17098:
--

 Summary: Reduce Guava dependency in Hadoop source code
 Key: HADOOP-17098
 URL: https://issues.apache.org/jira/browse/HADOOP-17098
 Project: Hadoop Common
  Issue Type: Task
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


Relying on the Guava implementation in Hadoop has been painful due to 
compatibility and vulnerability issues.
 Guava updates tend to break or deprecate APIs, which has made it hard to 
maintain backward compatibility across Hadoop versions and for 
clients/downstreams.

Since 3.x requires Java 8+, Java 8 features should be preferred to Guava, 
reducing the footprint and giving stability to the source code.

This jira should serve as an umbrella for an incremental effort to reduce the 
usage of Guava in the source code, with subtasks to replace Guava classes with 
Java features.

Furthermore, it would be good to add a rule to the pre-commit build that warns 
against introducing new Guava usage in certain modules.

Anyone willing to take part in this code refactoring should:
 # Focus on one module at a time in order to reduce conflicts and the size of 
the patch. This will significantly help the reviewers.
 # Run all the unit tests related to the module affected by the change. It is 
critical to verify that the change does not break the unit tests or cause a 
stable test case to become flaky.

 

A list of sub tasks replacing Guava APIs with java8 features:
{code:java}
com.google.common.io.BaseEncoding#base64()      java.util.Base64
com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
com.google.common.base.Joiner.on()              java.lang.String#join() or
                                                java.util.stream.Collectors#joining()
com.google.common.base.Optional#of()            java.util.Optional#of()
com.google.common.base.Optional#absent()        java.util.Optional#empty()
com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
com.google.common.base.Optional                 java.util.Optional
com.google.common.base.Predicate                java.util.function.Predicate
com.google.common.base.Function                 java.util.function.Function
com.google.common.base.Supplier                 java.util.function.Supplier
{code}
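A sketch of what those replacements look like at a call site. This is illustrative only, with hypothetical variable names, not code from the Hadoop tree:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;
import java.util.Optional;

public class Java8Replacements {
    public static void main(String[] args) {
        // BaseEncoding.base64().encode(bytes) -> Base64.getEncoder().encodeToString(bytes)
        String encoded = Base64.getEncoder()
                .encodeToString("hadoop".getBytes(StandardCharsets.UTF_8));
        String decoded = new String(Base64.getDecoder().decode(encoded),
                StandardCharsets.UTF_8);

        // Joiner.on(",").join(parts) -> String.join(",", parts)
        String joined = String.join(",", Arrays.asList("a", "b", "c"));

        // Optional.absent() -> Optional.empty(); fromNullable() -> ofNullable()
        Optional<String> absent = Optional.ofNullable((String) null);

        System.out.println(decoded + " " + joined + " " + absent.isPresent());
        // prints: hadoop a,b,c false
    }
}
```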
 

I also vote for replacing {{Preconditions}} with either a wrapper or Apache 
commons-lang.
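For illustration, a minimal wrapper of that kind might look like the following. This is a sketch under the assumption that only checkNotNull/checkArgument-style checks are needed; commons-lang3's Validate offers equivalents.

```java
import java.util.Objects;

public class PreconditionsShim {
    // Guava: Preconditions.checkNotNull(ref, msg)   -> JDK: Objects.requireNonNull(ref, msg)
    // Guava: Preconditions.checkArgument(expr, msg) -> no direct JDK equivalent,
    // so a tiny wrapper (or org.apache.commons.lang3.Validate.isTrue) fills the gap.
    static void checkArgument(boolean expression, String message) {
        if (!expression) {
            throw new IllegalArgumentException(message);
        }
    }

    public static void main(String[] args) {
        String conf = Objects.requireNonNull("cfg", "conf must not be null");
        checkArgument(!conf.isEmpty(), "conf must be non-empty");
        System.out.println("checks passed");
    }
}
```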

I believe you guys have dealt with Guava compatibilities in the past and 
probably have better insights. Any thoughts? [~weichiu], [~gabor.bota], 
[~ste...@apache.org], [~ayushtkn], [~busbey], [~jeagles], [~kihwal]

 






Re: Update guava to 27.0-jre in hadoop branch-2.10

2020-06-23 Thread Wei-Chiu Chuang
Ahmed,
I see you filed HADOOP-17083
<https://issues.apache.org/jira/browse/HADOOP-17083> for the same
discussion.

I started a thread a while ago in the Hadoop dev mailing list to share my
experience adopting guava27.  Simply porting HADOOP-15960
<https://issues.apache.org/jira/browse/HADOOP-15960> to branch-2 will break
miserably because all downstream applications will not compile/run. It took
us half a year to get it harmonized across Cloudera's stack and I don't
want to see you spending time on that.

I feel the better approach is HADOOP-16924
<https://issues.apache.org/jira/browse/HADOOP-16924> where we shade and
then update guava. There is more work inside Hadoop to change references to
the shaded guava classpath, but it'll save you more time later.

On Tue, Jun 23, 2020 at 9:09 AM Ahmed Hussein  wrote:

> Hi folks,
>
> I was looking into upgrading guava to  27.0-jre on branch-2.10 in order to
> address the vulnerabilities reported as CVE-2018-10237
> <https://nvd.nist.gov/vuln/detail/CVE-2018-10237>.
> Since there are concerns using Java8, the plan is to stick to JDK7.
>
> Obviously, it is expected that the upgrade will break downstream projects.
>
> I opened this for discussion to get feedback and make sure that we have
> common ground to address the security of vulnerabilities.
>
> Let me know WDYT.
>
> --
> Best Regards,
>
> *Ahmed Hussein, PhD*
>


[jira] [Created] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10

2020-06-23 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17083:
--

 Summary: Update guava to 27.0-jre in hadoop branch-2.10
 Key: HADOOP-17083
 URL: https://issues.apache.org/jira/browse/HADOOP-17083
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, security
Affects Versions: 2.10.0
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


com.google.guava:guava should be upgraded to 27.0-jre due to a new CVE: 
[CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].

 

The upgrade should not affect the version of Java used; branch-2.10 still 
sticks to JDK7.






Update guava to 27.0-jre in hadoop branch-2.10

2020-06-23 Thread Ahmed Hussein
Hi folks,

I was looking into upgrading guava to 27.0-jre on branch-2.10 in order to
address the vulnerabilities reported as CVE-2018-10237
<https://nvd.nist.gov/vuln/detail/CVE-2018-10237>.
Since there are concerns using Java8, the plan is to stick to JDK7.

Obviously, it is expected that the upgrade will break downstream projects.

I opened this for discussion to get feedback and make sure that we have
common ground to address the security of vulnerabilities.

Let me know WDYT.

--
Best Regards,

*Ahmed Hussein, PhD*


Re: [DISCUSS] Shade guava into hadoop-thirdparty

2020-04-06 Thread Dinesh Chitlangia
+1

Thanks for initiating this Weichiu.

-Dinesh

On Sat, Apr 4, 2020 at 3:13 PM Wei-Chiu Chuang  wrote:

> Hi Hadoop devs,
>
> I spent a good part of the past 7 months working with a dozen of colleagues
> to update the guava version in Cloudera's software (that includes Hadoop,
> HBase, Spark, Hive, Cloudera Manager ... more than 20+ projects)
>
> After 7 months, I finally came to a conclusion: Update to Hadoop 3.3 /
> 3.2.1 / 3.1.3, even if you just go from Hadoop 3.0/ 3.1.0 is going to be
> really hard because of guava. Because of Guava, the amount of work to
> certify a minor release update is almost equivalent to a major release
> update.
>
> That is because:
> (1) Going from guava 11 to guava 27 is a big jump. There are several
> incompatible API changes in many places. Too bad the Google developers are
> not sympathetic about its users.
> (2) guava is used in all Hadoop jars. Not just Hadoop servers but also
> client jars and Hadoop common libs.
> (3) The Hadoop library is used in practically all software at Cloudera.
>
> Here is my proposal:
> (1) shade guava into hadoop-thirdparty, relocate the classpath to
> org.hadoop.thirdparty.com.google.common.*
> (2) make a hadoop-thirdparty 1.1.0 release.
> (3) update existing references to guava to the relocated path. There are
> more than 2k imports that need an update.
> (4) release Hadoop 3.3.1 / 3.2.2 that contains this change.
>
> In this way, we will be able to update guava in Hadoop in the future
> without disrupting Hadoop applications.
>
> Note: HBase already did this and this guava update project would have been
> much more difficult if HBase didn't do so.
>
> Thoughts? Other options include
> (1) force downstream applications to migrate to Hadoop client artifacts as
> listed here
>
> https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/DownstreamDev.html
> but
> that's nearly impossible.
> (2) Migrate Guava to Java APIs. I suppose this is a big project and I can't
> estimate how much work it's going to be.
>
> Weichiu
>


Re: [DISCUSS] Shade guava into hadoop-thirdparty

2020-04-06 Thread Mukul Kumar Singh

+1

On 07/04/20 7:05 am, Zhankun Tang wrote:

Thanks, Wei-Chiu for the proposal. +1.

On Mon, 6 Apr 2020 at 20:17, Ayush Saxena  wrote:


+1

-Ayush





Re: [DISCUSS] Shade guava into hadoop-thirdparty

2020-04-06 Thread Zhankun Tang
Thanks, Wei-Chiu for the proposal. +1.

On Mon, 6 Apr 2020 at 20:17, Ayush Saxena  wrote:

> +1
>
> -Ayush
>


Re: [DISCUSS] Shade guava into hadoop-thirdparty

2020-04-06 Thread Ayush Saxena
+1

-Ayush




Re: [DISCUSS] Shade guava into hadoop-thirdparty

2020-04-06 Thread Masatake Iwasaki

+1

Masatake Iwasaki

On 2020/04/06 10:32, Akira Ajisaka wrote:

+1

Thanks,
Akira




Re: [DISCUSS] Shade guava into hadoop-thirdparty

2020-04-05 Thread Akira Ajisaka
+1

Thanks,
Akira



Re: [DISCUSS] Shade guava into hadoop-thirdparty

2020-04-04 Thread Wei-Chiu Chuang
Great question!

I can run Java API Compliance Checker to detect any API changes. Guess
that's the only one to find out.

On Sat, Apr 4, 2020 at 1:19 PM Igor Dvorzhak  wrote:

> How this proposal will impact public APIs? I.e does Hadoop expose any
> Guava classes in the client APIs that will require recompiling all client
> applications because they need to use shaded Guava classes?


Re: [DISCUSS] Shade guava into hadoop-thirdparty

2020-04-04 Thread Igor Dvorzhak
How this proposal will impact public APIs? I.e does Hadoop expose any Guava
classes in the client APIs that will require recompiling all client
applications because they need to use shaded Guava classes?



[DISCUSS] Shade guava into hadoop-thirdparty

2020-04-04 Thread Wei-Chiu Chuang
Hi Hadoop devs,

I spent a good part of the past 7 months working with a dozen of colleagues
to update the guava version in Cloudera's software (that includes Hadoop,
HBase, Spark, Hive, Cloudera Manager ... more than 20+ projects)

After 7 months, I finally came to a conclusion: Update to Hadoop 3.3 /
3.2.1 / 3.1.3, even if you just go from Hadoop 3.0/ 3.1.0 is going to be
really hard because of guava. Because of Guava, the amount of work to
certify a minor release update is almost equivalent to a major release
update.

That is because:
(1) Going from guava 11 to guava 27 is a big jump. There are several
incompatible API changes in many places. Too bad the Google developers are
not sympathetic about its users.
(2) guava is used in all Hadoop jars. Not just Hadoop servers but also
client jars and Hadoop common libs.
(3) The Hadoop library is used in practically all software at Cloudera.

Here is my proposal:
(1) shade guava into hadoop-thirdparty, relocate the classpath to
org.hadoop.thirdparty.com.google.common.*
(2) make a hadoop-thirdparty 1.1.0 release.
(3) update existing references to guava to the relocated path. There are
more than 2k imports that need an update.
(4) release Hadoop 3.3.1 / 3.2.2 that contains this change.

In this way, we will be able to update guava in Hadoop in the future
without disrupting Hadoop applications.

Note: HBase already did this and this guava update project would have been
much more difficult if HBase didn't do so.

Thoughts? Other options include
(1) force downstream applications to migrate to Hadoop client artifacts as
listed here
https://hadoop.apache.org/docs/r3.1.1/hadoop-project-dist/hadoop-common/DownstreamDev.html
but
that's nearly impossible.
(2) Migrate Guava to Java APIs. I suppose this is a big project and I can't
estimate how much work it's going to be.

Weichiu


[jira] [Created] (HADOOP-16924) Update guava version to 28.1-jre

2020-03-13 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-16924:


 Summary: Update guava version to 28.1-jre
 Key: HADOOP-16924
 URL: https://issues.apache.org/jira/browse/HADOOP-16924
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 3.4.0
Reporter: Wei-Chiu Chuang









[jira] [Reopened] (HADOOP-15218) Make Hadoop compatible with Guava 22.0+

2019-07-11 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor reopened HADOOP-15218:
---

> Make Hadoop compatible with Guava 22.0+
> ---
>
> Key: HADOOP-15218
> URL: https://issues.apache.org/jira/browse/HADOOP-15218
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
> Attachments: HADOOP-15218-001.patch
>
>
> Deprecated HostAndPort#getHostText method was deleted in Guava 22.0 and new 
> HostAndPort#getHost method is not available before Guava 20.0.
> This patch implements getHost(HostAndPort) method that extracts host from 
> HostAndPort#toString value.
> This is a little hacky, that's why I'm not sure if it worth to merge this 
> patch, but it could be nice if Hadoop will be Guava-neutral.
> With this patch Hadoop can be built against latest Guava v24.0.






[jira] [Resolved] (HADOOP-15218) Make Hadoop compatible with Guava 22.0+

2019-07-11 Thread Andras Bokor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor resolved HADOOP-15218.
---
Resolution: Duplicate

> Make Hadoop compatible with Guava 22.0+
> ---
>
> Key: HADOOP-15218
> URL: https://issues.apache.org/jira/browse/HADOOP-15218
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Igor Dvorzhak
>Assignee: Igor Dvorzhak
>Priority: Major
> Attachments: HADOOP-15218-001.patch
>
>
> Deprecated HostAndPort#getHostText method was deleted in Guava 22.0 and new 
> HostAndPort#getHost method is not available before Guava 20.0.
> This patch implements getHost(HostAndPort) method that extracts host from 
> HostAndPort#toString value.
> This is a little hacky, that's why I'm not sure if it worth to merge this 
> patch, but it could be nice if Hadoop will be Guava-neutral.
> With this patch Hadoop can be built against latest Guava v24.0.






[jira] [Resolved] (HADOOP-15960) Update guava to 27.0-jre in hadoop-project

2019-06-14 Thread Gabor Bota (JIRA)


 [ https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gabor Bota resolved HADOOP-15960.
-
Resolution: Fixed

All subtasks are resolved, guava updated on branches 3.0, 3.1, 3.2 and trunk. 
Resolving this as fixed. 

If update is needed on branch-2 I can create another issue for that. We need to 
update javac version to 8 to be compatible with this guava version or use the 
-android flavor. There's an ongoing discussion about this in HADOOP-16219 if 
you want to learn more.

> Update guava to 27.0-jre in hadoop-project
> --
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0, 3.2.0, 3.0.3, 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Fix For: 3.3.0
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to the newly found 
> [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].






[jira] [Resolved] (HADOOP-15272) Update Guava, see what breaks

2019-04-08 Thread Gabor Bota (JIRA)


 [ https://issues.apache.org/jira/browse/HADOOP-15272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gabor Bota resolved HADOOP-15272.
-
Resolution: Fixed

Fixed in HADOOP-16210. 

> Update Guava, see what breaks
> -
>
> Key: HADOOP-15272
> URL: https://issues.apache.org/jira/browse/HADOOP-15272
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>    Priority: Major
>
> We're still on Guava 11; the last attempt at an update (HADOOP-10101) failed 
> to take
> The HBase 2 version of ATS should permit this, at least for its profile.






[jira] [Created] (HADOOP-16237) Fix new findbugs issues after update guava to 27.0-jre in hadoop-project trunk

2019-04-04 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16237:
---

 Summary: Fix new findbugs issues after update guava to 27.0-jre in 
hadoop-project trunk
 Key: HADOOP-16237
 URL: https://issues.apache.org/jira/browse/HADOOP-16237
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.3.0
Reporter: Gabor Bota
Assignee: Gabor Bota
 Attachments: 
branch-findbugs-hadoop-common-project_hadoop-kms-warnings.html, 
branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html,
 
branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-warnings.html,
 
branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-documentstore-warnings.html

There are a bunch of new findbugs issues in the build after committing the 
guava update.
Mostly in yarn, but we have to check and handle those.






Re: Update guava to 27.0-jre in hadoop-project

2019-04-03 Thread Wei-Chiu Chuang
+1

I watched Gabor working on this; it is a very comprehensive effort, which
also includes testing in downstream projects (HBase and Hive). Very good work.

Thanks!

On Wed, Apr 3, 2019 at 3:41 AM Steve Loughran 
wrote:

> I am taking silence as happiness here.
>
> +1 to the patch
>
> On Tue, Apr 2, 2019 at 9:54 AM Steve Loughran  wrote:
>
> >
> > I know that the number of guava updates we could call painless is 0, but
> > we need to do this.
> >
> > The last time we successfully updated Guava was 2012:
> > https://issues.apache.org/jira/browse/HDFS-3187
> > That was the java 6 era
> >
> > The last unsuccessful attempt, April 2017:
> > https://issues.apache.org/jira/browse/HADOOP-14386
> >
> > Let's try again and this time if there are problems say: sorry, but it's
> > time to move on.
> >
> > I think we should only worry about branch-3.2+ for now, though the other
> > branches could be lined up for those changes needed to ensure that
> > everything builds if you explicitly set the version (e.g. findbugs
> > changes).
> > Then we can worry about 3.1.x line, which is the 3.x branch most widely
> > picked up to date.
> >
> > I want to avoid branch-2 entirely, though as Gabor notes, I want to move
> > us on to java 8 builds there so that people can do a branch-2 build if
> they
> > need to.
> >
> > *Is everyone happy with the proposed patch*:
> > https://github.com/apache/hadoop/pull/674
> >
> > -Steve
> >
> >
> > On Mon, Apr 1, 2019 at 8:35 PM Gabor Bota
> > wrote:
> >
> >> Hi devs,
> >>
> >> I'm working on updating the guava version from 11.0.2 to 27.0-jre in
> >> hadoop-project.
> >> We need to do the upgrade because of CVE-2018-10237
> >> <https://nvd.nist.gov/vuln/detail/CVE-2018-10237>.
> >>
> >> I've created an issue (HADOOP-15960
> >> <https://issues.apache.org/jira/browse/HADOOP-15960>) to track progress
> >> and
> >> created subtasks for hadoop branches 3.0, 3.1, 3.2 and trunk. The first
> >> update should be done in the trunk, and then it can be backported to
> lower
> >> version branches. Backporting to 2.x is not feasible right now, because
> >> Guava 20 is the last Java 7 compatible version[1], and we have Java 7
> >> compatibility on version 2 branches - but we are planning to update (
> >> HADOOP-16219 <https://issues.apache.org/jira/browse/HADOOP-16219>).
> >>
> >> For the new deprecations after the update, I've created another issue (
> >> HADOOP-16222 <https://issues.apache.org/jira/browse/HADOOP-16222>).
> Those
> >> can be fixed after the update is committed.
> >>
> >> Unit and integration testing in hadoop trunk
> >> There were modifications in the tests in the following modules, so
> >> precommit tests were run on jenkins:
> >>
> >>- hadoop-common-project
> >>- hadoop-hdfs-project
> >>- hadoop-mapreduce-project
> >>- hadoop-yarn-project
> >>
> >> There was one failure but after re-running the test locally it was
> >> successful, so not related to the change.
> >>
> >> Because of the 5 hour test time limit for the jenkins precommit build, I had to
> >> run
> >> tests on hadoop-tools manually and the tests were successful. You can
> find
> >> test results for trunk under HADOOP-16210
> >> <https://issues.apache.org/jira/browse/HADOOP-16210>.
> >>
> >> Integration testing with other components
> >> I've done testing with HBase master on hadoop branch-3.0 with guava 27,
> >> and
> >> the tests were running fine. Thanks to Peter Somogyi for help.
> >> We are planning to do some testing with Peter Vary on Hive with
> branch-3.1
> >> this week.
> >>
> >> Thanks,
> >> Gabor
> >>
> >> [1]
> >>
> >>
> https://groups.google.com/forum/#!msg/guava-discuss/ZRmDJnAq9T0/-HExv44eCAAJ
> >>
> >
>


[jira] [Resolved] (HADOOP-16230) Correct findbug ignores for unjustified issues during update to guava to 27.0-jre in hadoop-project

2019-04-03 Thread Gabor Bota (JIRA)


 [ https://issues.apache.org/jira/browse/HADOOP-16230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gabor Bota resolved HADOOP-16230.
-
Resolution: Won't Fix

I'll resolve it as won't fix. I will fix this in HADOOP-16210.

> Correct findbug ignores for unjustified issues during update to guava to 
> 27.0-jre in hadoop-project
> ---
>
> Key: HADOOP-16230
> URL: https://issues.apache.org/jira/browse/HADOOP-16230
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-16220 I’ve added
> {code:java}
>  <Match>
>    <Class name="org.apache.hadoop.hdfs.qjournal.server.Journal" />
>    <Method name="persistPaxosData" />
>  </Match>
> {code}
> instead of
> {code:java}
>  <Match>
>    <Class name="org.apache.hadoop.hdfs.qjournal.server.Journal" />
>    <Method name="getPersistedPaxosData" />
>  </Match>
> {code}
> So it should be {{getPersistedPaxosData}} instead of {{persistPaxosData}}.
>  
> The following description is correct, but the code was not:
> *Null passed for non-null parameter of 
> com.google.common.base.Preconditions.checkState(boolean, String, Object, 
> Object, Object) in 
> org.apache.hadoop.hdfs.qjournal.server.Journal.getPersistedPaxosData(long)*
> In {{org/apache/hadoop/hdfs/qjournal/server/Journal.java:1064}} we call
> {code:java}
> Preconditions.checkState(ret != null &&
>   ret.getSegmentState().getStartTxId() == segmentTxId,
>   "Bad persisted data for segment %s: %s ; journal id: %s",
>   segmentTxId, ret, journalId);
> {code}
> for this call findbugs assumes that {{Argument 4 might be null but must not be 
> null}}, but Guava 27.0's 
> {{com.google.common.base.Preconditions#checkState(boolean, java.lang.String, 
> java.lang.Object, java.lang.Object, java.lang.Object)}} is annotated like the 
> following:
> {code:java}
>   public static void checkState(
>   boolean b,
>   @Nullable String errorMessageTemplate,
>   @Nullable Object p1,
>   @Nullable Object p2,
>   @Nullable Object p3) {
> {code}
> so we have {{@Nullable}} on each parameter for the method. I don't see this 
> warning as justified, or in need of a fix.
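
To see why the warning is unjustified, here is a minimal stand-in (not Guava or Hadoop code, just an illustrative sketch) that mirrors the @Nullable-parameter shape of Preconditions#checkState: a possibly-null template argument is formatted harmlessly as "null" rather than being dereferenced.

```java
// Minimal stand-in mirroring the @Nullable-annotated shape of Guava 27.0's
// Preconditions#checkState(boolean, String, Object, Object, Object).
class CheckStateDemo {
  static void checkState(boolean b, String errorMessageTemplate,
                         Object p1, Object p2, Object p3) {
    if (!b) {
      // %s renders a null argument as the string "null", so passing
      // a possibly-null value into the message template is safe
      throw new IllegalStateException(
          String.format(errorMessageTemplate, p1, p2, p3));
    }
  }

  public static void main(String[] args) {
    Object ret = null;            // the possibly-null value findbugs flags
    long segmentTxId = 42L;
    String journalId = "jn-1";
    try {
      checkState(ret != null,
          "Bad persisted data for segment %s: %s ; journal id: %s",
          segmentTxId, ret, journalId);
      throw new AssertionError("expected IllegalStateException");
    } catch (IllegalStateException expected) {
      // the null argument appears in the message, no NPE was raised
      if (!expected.getMessage().contains("null")) {
        throw new AssertionError(expected.getMessage());
      }
    }
  }
}
```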






Re: Update guava to 27.0-jre in hadoop-project

2019-04-03 Thread Steve Loughran
I am taking silence as happiness here.

+1 to the patch

On Tue, Apr 2, 2019 at 9:54 AM Steve Loughran  wrote:

>
> I know that the number of guava updates we could call painless is 0, but
> we need to do this.
>
> The last time we successfully updated Guava was 2012:
> https://issues.apache.org/jira/browse/HDFS-3187
> That was the java 6 era
>
> The last unsuccessful attempt, April 2017:
> https://issues.apache.org/jira/browse/HADOOP-14386
>
> Let's try again and this time if there are problems say: sorry, but it's
> time to move on.
>
> I think we should only worry about branch-3.2+ for now, though the other
> branches could be lined up for those changes needed to ensure that
> everything builds if you explicitly set the version (e.g. findbugs changes).
> Then we can worry about 3.1.x line, which is the 3.x branch most widely
> picked up to date.
>
> I want to avoid branch-2 entirely, though as Gabor notes, I want to move
> us on to java 8 builds there so that people can do a branch-2 build if they
> need to.
>
> *Is everyone happy with the proposed patch*:
> https://github.com/apache/hadoop/pull/674
>
> -Steve
>
>
> On Mon, Apr 1, 2019 at 8:35 PM Gabor Bota 
> wrote:
>
>> Hi devs,
>>
>> I'm working on updating the guava version from 11.0.2 to 27.0-jre in
>> hadoop-project.
>> We need to do the upgrade because of CVE-2018-10237
>> <https://nvd.nist.gov/vuln/detail/CVE-2018-10237>.
>>
>> I've created an issue (HADOOP-15960
>> <https://issues.apache.org/jira/browse/HADOOP-15960>) to track progress
>> and
>> created subtasks for hadoop branches 3.0, 3.1, 3.2 and trunk. The first
>> update should be done in the trunk, and then it can be backported to lower
>> version branches. Backporting to 2.x is not feasible right now, because
>> Guava 20 is the last Java 7 compatible version[1], and we have Java 7
>> compatibility on version 2 branches - but we are planning to update (
>> HADOOP-16219 <https://issues.apache.org/jira/browse/HADOOP-16219>).
>>
>> For the new deprecations after the update, I've created another issue (
>> HADOOP-16222 <https://issues.apache.org/jira/browse/HADOOP-16222>). Those
>> can be fixed after the update is committed.
>>
>> Unit and integration testing in hadoop trunk
>> There were modifications in the tests in the following modules, so
>> precommit tests were run on jenkins:
>>
>>- hadoop-common-project
>>- hadoop-hdfs-project
>>- hadoop-mapreduce-project
>>- hadoop-yarn-project
>>
>> There was one failure but after re-running the test locally it was
>> successful, so not related to the change.
>>
>> Because of the 5 hour test time limit for the jenkins precommit build, I had to
>> run
>> tests on hadoop-tools manually and the tests were successful. You can find
>> test results for trunk under HADOOP-16210
>> <https://issues.apache.org/jira/browse/HADOOP-16210>.
>>
>> Integration testing with other components
>> I've done testing with HBase master on hadoop branch-3.0 with guava 27,
>> and
>> the tests were running fine. Thanks to Peter Somogyi for help.
>> We are planning to do some testing with Peter Vary on Hive with branch-3.1
>> this week.
>>
>> Thanks,
>> Gabor
>>
>> [1]
>>
>> https://groups.google.com/forum/#!msg/guava-discuss/ZRmDJnAq9T0/-HExv44eCAAJ
>>
>


[jira] [Created] (HADOOP-16230) Correct findbug ignores for unjustified issues during update to guava to 27.0-jre in hadoop-project

2019-04-02 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16230:
---

 Summary: Correct findbug ignores for unjustified issues during 
update to guava to 27.0-jre in hadoop-project
 Key: HADOOP-16230
 URL: https://issues.apache.org/jira/browse/HADOOP-16230
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Gabor Bota


In HADOOP-16220 I’ve added
{code:java}
 <Match>
   <Class name="org.apache.hadoop.hdfs.qjournal.server.Journal" />
   <Method name="persistPaxosData" />
 </Match>
{code}
instead of
{code:java}
 <Match>
   <Class name="org.apache.hadoop.hdfs.qjournal.server.Journal" />
   <Method name="getPersistedPaxosData" />
 </Match>
{code}

So it should be {{getPersistedPaxosData}} instead of {{persistPaxosData}}.

 

The following description is correct, but the code was not:

*Null passed for non-null parameter of 
com.google.common.base.Preconditions.checkState(boolean, String, Object, 
Object, Object) in 
org.apache.hadoop.hdfs.qjournal.server.Journal.getPersistedPaxosData(long)*
In {{org/apache/hadoop/hdfs/qjournal/server/Journal.java:1064}} we call
{code:java}
Preconditions.checkState(ret != null &&
  ret.getSegmentState().getStartTxId() == segmentTxId,
  "Bad persisted data for segment %s: %s ; journal id: %s",
  segmentTxId, ret, journalId);
{code}
for this call findbugs assumes that {{Argument 4 might be null but must not be 
null}}, but Guava 27.0's 
{{com.google.common.base.Preconditions#checkState(boolean, java.lang.String, 
java.lang.Object, java.lang.Object, java.lang.Object)}} is annotated like the 
following:
{code:java}
  public static void checkState(
  boolean b,
  @Nullable String errorMessageTemplate,
  @Nullable Object p1,
  @Nullable Object p2,
  @Nullable Object p3) {
{code}
so we have {{@Nullable}} on each parameter for the method. I don't see this 
warning as justified, or in need of a fix.






Re: Update guava to 27.0-jre in hadoop-project

2019-04-02 Thread Steve Loughran
I know that the number of guava updates we could call painless is 0, but we
need to do this.

The last time we successfully updated Guava was 2012:
https://issues.apache.org/jira/browse/HDFS-3187
That was the java 6 era

The last unsuccessful attempt, April 2017:
https://issues.apache.org/jira/browse/HADOOP-14386

Let's try again and this time if there are problems say: sorry, but it's
time to move on.

I think we should only worry about branch-3.2+ for now, though the other
branches could be lined up for those changes needed to ensure that
everything builds if you explicitly set the version (e.g. findbugs changes).
Then we can worry about 3.1.x line, which is the 3.x branch most widely
picked up to date.

I want to avoid branch-2 entirely, though as Gabor notes, I want to move us
on to java 8 builds there so that people can do a branch-2 build if they
need to.

*Is everyone happy with the proposed patch*:
https://github.com/apache/hadoop/pull/674

-Steve


On Mon, Apr 1, 2019 at 8:35 PM Gabor Bota 
wrote:

> Hi devs,
>
> I'm working on updating the guava version from 11.0.2 to 27.0-jre in hadoop-project.
> We need to do the upgrade because of CVE-2018-10237
> <https://nvd.nist.gov/vuln/detail/CVE-2018-10237>.
>
> I've created an issue (HADOOP-15960
> <https://issues.apache.org/jira/browse/HADOOP-15960>) to track progress
> and
> created subtasks for hadoop branches 3.0, 3.1, 3.2 and trunk. The first
> update should be done in the trunk, and then it can be backported to lower
> version branches. Backporting to 2.x is not feasible right now, because
> Guava 20 is the last Java 7 compatible version[1], and we have Java 7
> compatibility on version 2 branches - but we are planning to update (
> HADOOP-16219 <https://issues.apache.org/jira/browse/HADOOP-16219>).
>
> For the new deprecations after the update, I've created another issue (
> HADOOP-16222 <https://issues.apache.org/jira/browse/HADOOP-16222>). Those
> can be fixed after the update is committed.
>
> Unit and integration testing in hadoop trunk
> There were modifications in the tests in the following modules, so
> precommit tests were run on jenkins:
>
>- hadoop-common-project
>- hadoop-hdfs-project
>- hadoop-mapreduce-project
>- hadoop-yarn-project
>
> There was one failure but after re-running the test locally it was
> successful, so not related to the change.
>
> Because of the 5 hour test time limit for the jenkins precommit build, I had to run
> tests on hadoop-tools manually and the tests were successful. You can find
> test results for trunk under HADOOP-16210
> <https://issues.apache.org/jira/browse/HADOOP-16210>.
>
> Integration testing with other components
> I've done testing with HBase master on hadoop branch-3.0 with guava 27, and
> the tests were running fine. Thanks to Peter Somogyi for help.
> We are planning to do some testing with Peter Vary on Hive with branch-3.1
> this week.
>
> Thanks,
> Gabor
>
> [1]
>
> https://groups.google.com/forum/#!msg/guava-discuss/ZRmDJnAq9T0/-HExv44eCAAJ
>


Update guava to 27.0-jre in hadoop-project

2019-04-01 Thread Gabor Bota
Hi devs,

I'm working on updating the guava version from 11.0.2 to 27.0-jre in hadoop-project.
We need to do the upgrade because of CVE-2018-10237
<https://nvd.nist.gov/vuln/detail/CVE-2018-10237>.

I've created an issue (HADOOP-15960
<https://issues.apache.org/jira/browse/HADOOP-15960>) to track progress and
created subtasks for hadoop branches 3.0, 3.1, 3.2 and trunk. The first
update should be done in the trunk, and then it can be backported to lower
version branches. Backporting to 2.x is not feasible right now, because
Guava 20 is the last Java 7 compatible version[1], and we have Java 7
compatibility on version 2 branches - but we are planning to update (
HADOOP-16219 <https://issues.apache.org/jira/browse/HADOOP-16219>).

For the new deprecations after the update, I've created another issue (
HADOOP-16222 <https://issues.apache.org/jira/browse/HADOOP-16222>). Those
can be fixed after the update is committed.

Unit and integration testing in hadoop trunk
There were modifications in the tests in the following modules, so
precommit tests were run on jenkins:

   - hadoop-common-project
   - hadoop-hdfs-project
   - hadoop-mapreduce-project
   - hadoop-yarn-project

There was one failure but after re-running the test locally it was
successful, so not related to the change.

Because of the 5 hour test time limit for the jenkins precommit build, I had to run
tests on hadoop-tools manually and the tests were successful. You can find
test results for trunk under HADOOP-16210
<https://issues.apache.org/jira/browse/HADOOP-16210>.

Integration testing with other components
I've done testing with HBase master on hadoop branch-3.0 with guava 27, and
the tests were running fine. Thanks to Peter Somogyi for help.
We are planning to do some testing with Peter Vary on Hive with branch-3.1
this week.

Thanks,
Gabor

[1]
https://groups.google.com/forum/#!msg/guava-discuss/ZRmDJnAq9T0/-HExv44eCAAJ


[jira] [Created] (HADOOP-16222) Fix new deprecations after guava 27.0 update in 3.+

2019-03-29 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16222:
---

 Summary: Fix new deprecations after guava 27.0 update in 3.+
 Key: HADOOP-16222
 URL: https://issues.apache.org/jira/browse/HADOOP-16222
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Gabor Bota
Assignee: Gabor Bota
 Fix For: 3.0.4, 3.3.0, 3.1.3, 3.2.0


There are a bunch of new deprecations after the guava update. We need to fix 
these, because the deprecated APIs will be removed in a future Guava release. 
I split this into a separate jira from HADOOP-16210 because the jenkins pre-commit 
test job (yetus) times out after 5 hours when running everything together. 

{noformat}
[WARNING] 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/SemaphoredDelegatingExecutor.java:[110,20]
 [deprecation] immediateFailedCheckedFuture(X) in Futures has been 
deprecated
[WARNING] 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ZKUtil.java:[175,16]
 [deprecation] toString(File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[44,9]
 [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[67,9]
 [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[131,9]
 [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[150,9]
 [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java:[169,9]
 [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestZKUtil.java:[134,9]
 [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestSecurityUtil.java:[437,9]
 [deprecation] write(CharSequence,File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java:[211,26]
 [deprecation] toString(File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java:[219,36]
 [deprecation] toString(File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java:[130,9]
 [deprecation] append(CharSequence,File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/fpga/TestFpgaResourceHandler.java:[352,9]
 [deprecation] append(CharSequence,File,Charset) in Files has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/applicationsmanager/TestAMRestart.java:[1161,18]
 [deprecation] propagate(Throwable) in Throwables has been deprecated
[WARNING] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/ServiceTestUtils.java:[413,18]
 [deprecation] propagate(Throwable) in Throwables has been deprecated
{noformat}

Maybe fix these module by module instead of in a single patch?
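
For reference, the deprecated Guava file helpers flagged in the warnings above have direct JDK equivalents in java.nio.file, which is one way to clear the warnings without tying the code to a Guava version. This is a hedged sketch only; the file name is illustrative, not one of the Hadoop test files listed above.

```java
// Sketch: replacing deprecated com.google.common.io.Files helpers with
// java.nio.file.Files (write, append via StandardOpenOption, read back).
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class GuavaFilesMigration {
  public static void main(String[] args) throws IOException {
    Path mapping = Files.createTempFile("table-mapping", ".txt");
    // was: com.google.common.io.Files.write(CharSequence, File, Charset)
    Files.write(mapping, "host1 /rack1\n".getBytes(StandardCharsets.UTF_8));
    // was: com.google.common.io.Files.append(CharSequence, File, Charset)
    Files.write(mapping, "host2 /rack2\n".getBytes(StandardCharsets.UTF_8),
        StandardOpenOption.APPEND);
    // was: com.google.common.io.Files.toString(File, Charset)
    String content = new String(Files.readAllBytes(mapping), StandardCharsets.UTF_8);
    if (!content.equals("host1 /rack1\nhost2 /rack2\n")) {
      throw new AssertionError(content);
    }
    Files.delete(mapping);
  }
}
```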






[jira] [Created] (HADOOP-16220) Add findbugs ignores for unjustified issues during update to guava to 27.0-jre in hadoop-project

2019-03-29 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16220:
---

 Summary: Add findbugs ignores for unjustified issues during update 
to guava to 27.0-jre in hadoop-project
 Key: HADOOP-16220
 URL: https://issues.apache.org/jira/browse/HADOOP-16220
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.0.4, 3.3.0, 3.2.1, 3.1.3
Reporter: Gabor Bota
Assignee: Gabor Bota


There are some findbugs issues with the guava update that seem unjustified 
and should be suppressed before the update:

 * *Null passed for non-null parameter of 
com.google.common.base.Preconditions.checkState(boolean, String, Object, 
Object, Object) in 
org.apache.hadoop.hdfs.qjournal.server.Journal.getPersistedPaxosData(long)*
In {{org/apache/hadoop/hdfs/qjournal/server/Journal.java:1064}} we call
{code:java}
Preconditions.checkState(ret != null &&
  ret.getSegmentState().getStartTxId() == segmentTxId,
  "Bad persisted data for segment %s: %s ; journal id: %s",
  segmentTxId, ret, journalId);
{code}
for this call findbugs assumes that {{Argument 4 might be null but must not be 
null}}, but Guava 27.0's 
{{com.google.common.base.Preconditions#checkState(boolean, java.lang.String, 
java.lang.Object, java.lang.Object, java.lang.Object)}} is annotated like the 
following:
{code:java}
  public static void checkState(
  boolean b,
  @Nullable String errorMessageTemplate,
  @Nullable Object p1,
  @Nullable Object p2,
  @Nullable Object p3) {
{code}
so we have {{@Nullable}} on each parameter for the method. I don't see this 
warning as justified, or in need of a fix.

 * *Null passed for non-null parameter of 
com.google.common.base.Preconditions.checkArgument(boolean, String, Object) in 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.getLogDir(String, String)*
In org/apache/hadoop/hdfs/qjournal/server/JournalNode.java:325 we call
{code:java}
Preconditions.checkArgument(jid != null &&
!jid.isEmpty(),
"bad journal identifier: %s", jid);
{code}
for this call findbugs assumes that {{Argument 3 might be null but must not be 
null}}, but Guava 27.0's 
{{com.google.common.base.Preconditions#checkArgument(boolean, java.lang.String, 
java.lang.Object)}} is annotated like the following:
{code:java}
  public static void checkArgument(
  boolean b, @Nullable String errorMessageTemplate, @Nullable Object p1) {
{code}
so we have {{@Nullable}} on argument 3, and that renders the assumption 
incorrect.

 * *Nullcheck of jid at line 346 of value previously dereferenced in 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.getLogDir(String, String)*
This is about the {{assert jid != null;}} at JournalNode.java:346. IMHO that 
check is there to inform devs that the variable can't be null at that point - so 
just for visibility. I would leave it as is; it's not a redundant check, just 
additional information. (I'm not a fan of using {{assert}} in production code, 
but if it's there we can leave it).






[jira] [Created] (HADOOP-16218) findbugs warning of null param to non-nullable method in Configuration with Guava update

2019-03-28 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16218:
---

 Summary: findbugs warning of null param to non-nullable method in 
Configuration with Guava update
 Key: HADOOP-16218
 URL: https://issues.apache.org/jira/browse/HADOOP-16218
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 3.3.0
Reporter: Steve Loughran
Assignee: Steve Loughran


Findbugs is fussing over some unchanged code in Configuration
{code}
Null passed for non-null parameter of writeXml(String, Writer) in 
org.apache.hadoop.conf.Configuration.writeXml(Writer)
Bug type NP_NULL_PARAM_DEREF_ALL_TARGETS_DANGEROUS (click for details) 
In class org.apache.hadoop.conf.Configuration
In method org.apache.hadoop.conf.Configuration.writeXml(Writer)
Called method org.apache.hadoop.conf.Configuration.writeXml(String, Writer)
At Configuration.java:[line 3490]
Argument 1 is definitely null but must not be null
Definite null passed to dangerous method call target 
org.apache.hadoop.conf.Configuration.writeXml(String, Writer)
{code}

Code looks fine; it's invoking Guava's Strings for the empty-or-null string 
check; maybe something changed there.

Proposed: add @Nullable in Configuration writeXml
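
A minimal sketch of the pattern findbugs trips over here (names illustrative, not Hadoop's actual Configuration code): the deliberately-null first argument is safe because the callee guards it with a Strings.isNullOrEmpty-style check.

```java
// Illustrative sketch of the writeXml(String, Writer) call shape: a null
// first argument is intentional and handled by an empty-or-null guard.
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;

class WriteXmlDemo {
  // same contract as Guava's Strings.isNullOrEmpty
  static boolean isNullOrEmpty(String s) {
    return s == null || s.isEmpty();
  }

  // null propertyName means "write the whole configuration", so the
  // null call site findbugs flags is by design
  static void writeXml(String propertyName, Writer out) throws IOException {
    if (isNullOrEmpty(propertyName)) {
      out.write("<configuration/>");
    } else {
      out.write("<property name=\"" + propertyName + "\"/>");
    }
  }

  public static void main(String[] args) throws IOException {
    StringWriter sw = new StringWriter();
    writeXml(null, sw);  // the "definitely null" call findbugs complains about
    if (!sw.toString().equals("<configuration/>")) {
      throw new AssertionError(sw.toString());
    }
  }
}
```

Annotating the parameter @Nullable, as proposed, documents exactly this contract for the analyzer.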






[jira] [Created] (HADOOP-16213) Update guava to 27.0-jre in hadoop-project branch-3.1

2019-03-27 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16213:
---

 Summary: Update guava to 27.0-jre in hadoop-project branch-3.1
 Key: HADOOP-16213
 URL: https://issues.apache.org/jira/browse/HADOOP-16213
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2
Reporter: Gabor Bota
Assignee: Gabor Bota


com.google.guava:guava should be upgraded to 27.0-jre due to the newly found 
CVE-2018-10237.

This is a sub-task for branch-3.2 from HADOOP-15960 to track issues on that 
particular branch. 






[jira] [Created] (HADOOP-16212) Update guava to 27.0-jre in hadoop-project branch-3.0

2019-03-27 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16212:
---

 Summary: Update guava to 27.0-jre in hadoop-project branch-3.0
 Key: HADOOP-16212
 URL: https://issues.apache.org/jira/browse/HADOOP-16212
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.0.3, 3.0.2, 3.0.1, 3.0.0
Reporter: Gabor Bota
Assignee: Gabor Bota


com.google.guava:guava should be upgraded to 27.0-jre due to the newly found 
CVE-2018-10237.

This is a sub-task for branch-3.0 from HADOOP-15960 to track issues on that 
particular branch.






[jira] [Created] (HADOOP-16211) Update guava to 27.0-jre in hadoop-project branch-3.2 and branch-3.1

2019-03-27 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16211:
---

 Summary: Update guava to 27.0-jre in hadoop-project branch-3.2 and 
branch-3.1
 Key: HADOOP-16211
 URL: https://issues.apache.org/jira/browse/HADOOP-16211
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.1.2, 3.1.1, 3.2.0, 3.1.0
Reporter: Gabor Bota
Assignee: Gabor Bota


com.google.guava:guava should be upgraded to 27.0-jre due to the newly found 
CVE-2018-10237.

This is a sub-task for branch-3.2 and branch-3.1 from HADOOP-15960 to track 
issues on those particular branches. 






[jira] [Created] (HADOOP-16210) Update guava to 27.0-jre in hadoop-project trunk

2019-03-27 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-16210:
---

 Summary: Update guava to 27.0-jre in hadoop-project trunk
 Key: HADOOP-16210
 URL: https://issues.apache.org/jira/browse/HADOOP-16210
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.3.0
Reporter: Gabor Bota
Assignee: Gabor Bota


com.google.guava:guava should be upgraded to 27.0-jre due to a newly found CVE, 
CVE-2018-10237.

This is a sub-task for trunk from HADOOP-15960 to track issues with that 
particular branch.








[jira] [Created] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-15960:
---

 Summary: Update guava to 27.0-jre in hadoop-common
 Key: HADOOP-15960
 URL: https://issues.apache.org/jira/browse/HADOOP-15960
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, security
Affects Versions: 3.1.0
Reporter: Gabor Bota
Assignee: Gabor Bota


com.google.guava:guava should be upgraded to 27.0-jre due to a newly found CVE, 
CVE-2018-10237.






[jira] [Created] (HADOOP-15272) Update Guava, see what breaks

2018-02-28 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15272:
---

 Summary: Update Guava, see what breaks
 Key: HADOOP-15272
 URL: https://issues.apache.org/jira/browse/HADOOP-15272
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 3.1.0
Reporter: Steve Loughran


We're still on Guava 11; the last attempt at an update (HADOOP-10101) failed to 
take hold.

Now we have better shading, we should try again. I suspect that YARN timeline 
service is going to be the problem because of its use of HBase. That's the 
price of a loop in the DAG. We cannot keep everything frozen just because of 
that.






[jira] [Created] (HADOOP-15218) Make Hadoop compatible with Guava 22.0+

2018-02-08 Thread Igor Dvorzhak (JIRA)
Igor Dvorzhak created HADOOP-15218:
--

 Summary: Make Hadoop compatible with Guava 22.0+
 Key: HADOOP-15218
 URL: https://issues.apache.org/jira/browse/HADOOP-15218
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Igor Dvorzhak
Assignee: Igor Dvorzhak


The deprecated HostAndPort#getHostText method was deleted in Guava 22.0, and the 
replacement HostAndPort#getHost method is not available before Guava 20.0.

This patch implements a getHost(HostAndPort) method that extracts the host from 
the HostAndPort#toString value.

This is a little hacky, which is why I'm not sure it is worth merging this 
patch, but it would be nice if Hadoop were Guava-neutral.

With this patch, Hadoop can be built against the latest Guava, v24.0.
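A rough sketch of the kind of version-neutral helper the patch describes (class 
and method names here are illustrative, not the actual patch): instead of 
calling getHostText() (removed in 22.0) or getHost() (absent before 20.0), 
parse the stable HostAndPort#toString form, which is "host", "host:port", or 
"[ipv6]:port".

```java
// Illustrative Guava-version-neutral host accessor: operates on the
// value of HostAndPort#toString rather than calling the methods that
// differ across Guava versions.
public final class HostAndPortCompat {
  private HostAndPortCompat() {}

  public static String hostOf(String hostPortString) {
    if (hostPortString.startsWith("[")) {          // bracketed IPv6 literal
      return hostPortString.substring(1, hostPortString.indexOf(']'));
    }
    int colon = hostPortString.indexOf(':');       // plain host or host:port
    return colon < 0 ? hostPortString : hostPortString.substring(0, colon);
  }
}
```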






[jira] [Created] (HADOOP-15214) Make Hadoop compatible with Guava 21.0

2018-02-07 Thread Igor Dvorzhak (JIRA)
Igor Dvorzhak created HADOOP-15214:
--

 Summary: Make Hadoop compatible with Guava 21.0
 Key: HADOOP-15214
 URL: https://issues.apache.org/jira/browse/HADOOP-15214
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.0.0
Reporter: Igor Dvorzhak
Assignee: Igor Dvorzhak
 Attachments: HADOOP-11032.001.patch








[jira] [Created] (HADOOP-14957) ReconfigurationTaskStatus is exposing guava Optional in its public api

2017-10-17 Thread Haibo Chen (JIRA)
Haibo Chen created HADOOP-14957:
---

 Summary: ReconfigurationTaskStatus is exposing guava Optional in 
its public api
 Key: HADOOP-14957
 URL: https://issues.apache.org/jira/browse/HADOOP-14957
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.0.0-beta1
Reporter: Haibo Chen
Assignee: Haibo Chen









[jira] [Created] (HADOOP-14891) Guava 21.0+ libraries not compatible with user jobs

2017-09-21 Thread Jonathan Eagles (JIRA)
Jonathan Eagles created HADOOP-14891:


 Summary: Guava 21.0+ libraries not compatible with user jobs
 Key: HADOOP-14891
 URL: https://issues.apache.org/jira/browse/HADOOP-14891
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.1
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles


A user provided a Guava 23.0 jar as part of the job submission.

{code}
2017-09-20 16:10:42,897 [INFO] [main] |service.AbstractService|: Service org.apache.tez.dag.app.DAGAppMaster failed in state STARTED; cause: org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
org.apache.hadoop.service.ServiceStateException: java.lang.NoSuchMethodError: com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
    at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
    at org.apache.tez.dag.app.DAGAppMaster.startServices(DAGAppMaster.java:1989)
    at org.apache.tez.dag.app.DAGAppMaster.serviceStart(DAGAppMaster.java:2056)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2707)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1936)
    at org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2703)
    at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2508)
Caused by: java.lang.NoSuchMethodError: com.google.common.base.Objects.toStringHelper(Ljava/lang/Object;)Lcom/google/common/base/Objects$ToStringHelper;
    at org.apache.hadoop.metrics2.lib.MetricsRegistry.toString(MetricsRegistry.java:419)
    at java.lang.String.valueOf(String.java:2994)
    at java.lang.StringBuilder.append(StringBuilder.java:131)
    at org.apache.hadoop.ipc.metrics.RpcMetrics.<init>(RpcMetrics.java:74)
    at org.apache.hadoop.ipc.metrics.RpcMetrics.create(RpcMetrics.java:80)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:2658)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:968)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
    at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
    at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
    at org.apache.tez.dag.api.client.DAGClientServer.createServer(DAGClientServer.java:134)
    at org.apache.tez.dag.api.client.DAGClientServer.serviceStart(DAGClientServer.java:82)
    at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
    at org.apache.tez.dag.app.DAGAppMaster$ServiceWithDependency.start(DAGAppMaster.java:1909)
    at org.apache.tez.dag.app.DAGAppMaster$ServiceThread.run(DAGAppMaster.java:1930)
2017-09-20 16:10:42,898 [ERROR] [main] |rm.TaskSchedulerManager|: Failed to do a clean initiateStop for Scheduler: [0:TezYarn]
{code}

Metrics2 has been relying on the deprecated toStringHelper for some time now; it 
was finally removed in Guava 21.0. Removing the dependency on this method will 
free users to supply their own Guava jar again.
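A dependency-free replacement for the removed call can look like the sketch 
below; the class and field names are hypothetical, not Hadoop's actual 
MetricsRegistry fix.

```java
// JDK-only replacement for the removed Guava call:
//   old:  Objects.toStringHelper(this).add(...)   (removed in Guava 21.0)
//   18+:  MoreObjects.toStringHelper(this)...     (the moved spelling)
//   JDK:  build the same "Class{k=v, ...}" string by hand.
public class MetricsRegistryExample {
  private final String name = "rpc";
  private final int metricCount = 12;

  @Override
  public String toString() {
    return getClass().getSimpleName()
        + "{name=" + name + ", metricCount=" + metricCount + "}";
  }
}
```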






[jira] [Resolved] (HADOOP-14284) Shade Guava everywhere

2017-09-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa resolved HADOOP-14284.
-
Resolution: Invalid

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.






[jira] [Created] (HADOOP-14847) Remove Guava Supplier and change to java Supplier in AMRMClient and AMRMClientAysnc

2017-09-07 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-14847:
---

 Summary: Remove Guava Supplier and change to java Supplier in 
AMRMClient and AMRMClientAysnc
 Key: HADOOP-14847
 URL: https://issues.apache.org/jira/browse/HADOOP-14847
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Bharat Viswanadham


Remove the Guava library Supplier usage in user facing API's in AMRMClient.java 
and AMRMClientAsync.java






[jira] [Created] (HADOOP-14386) Make trunk work with Guava 11.0.2 again

2017-05-04 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-14386:


 Summary: Make trunk work with Guava 11.0.2 again
 Key: HADOOP-14386
 URL: https://issues.apache.org/jira/browse/HADOOP-14386
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha3
Reporter: Andrew Wang


As an alternative to reverting or shading HADOOP-10101 (the upgrade of Guava 
from 11.0.2 to 21.0), HADOOP-14380 makes the Guava version configurable. 
However, it still doesn't compile with Guava 11.0.2, since HADOOP-10101 chose 
to use the moved Guava classes rather than replacing them with alternatives.

This JIRA aims to make Hadoop compatible with Guava 11.0.2 as well as 21.0 by 
replacing usage of these moved Guava classes.
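A few substitutions of this kind are sketched below; these are assumed examples 
of the pattern, not necessarily the call sites the JIRA patch touched.

```java
import java.nio.charset.StandardCharsets;

// Illustrative version-neutral rewrites that compile against any
// Guava on the classpath:
//   com.google.common.base.Charsets.UTF_8  -> StandardCharsets.UTF_8
//   MoreObjects.firstNonNull(a, b)          -> plain null checks
public final class GuavaNeutral {
  private GuavaNeutral() {}

  public static byte[] utf8(String s) {
    return s.getBytes(StandardCharsets.UTF_8);   // no Guava Charsets needed
  }

  public static <T> T firstNonNull(T first, T second) {
    if (first != null) {
      return first;
    }
    if (second == null) {
      throw new NullPointerException("both arguments were null");
    }
    return second;
  }
}
```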






[jira] [Created] (HADOOP-14380) Make Guava version Hadoop builds with configurable

2017-05-04 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14380:
---

 Summary: Make Guava version Hadoop builds with configurable
 Key: HADOOP-14380
 URL: https://issues.apache.org/jira/browse/HADOOP-14380
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0-alpha3
Reporter: Steve Loughran
Assignee: Steve Loughran


Make the choice of Guava version that Hadoop builds with configurable, so people 
building Hadoop 3 alphas can build with an older version and cause less 
unhappiness downstream.






[jira] [Created] (HADOOP-14284) Shade Guava everywhere

2017-04-05 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-14284:


 Summary: Shade Guava everywhere
 Key: HADOOP-14284
 URL: https://issues.apache.org/jira/browse/HADOOP-14284
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0-alpha3
Reporter: Andrew Wang
Priority: Blocker


HADOOP-10101 upgraded the guava version for 3.x to 21.

Guava is broadly used by Java projects that consume our artifacts. 
Unfortunately, these projects also consume our private artifacts like 
{{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced 
by HADOOP-11804, currently only available in 3.0.0-alpha2.

We should shade Guava everywhere to proactively avoid breaking downstreams. 
This isn't a requirement for all dependency upgrades, but it's necessary for 
known-bad dependencies like Guava.






[jira] [Created] (HADOOP-12592) Remove guava usage in the hdfs-client module

2015-11-23 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-12592:
---

 Summary: Remove guava usage in the hdfs-client module
 Key: HADOOP-12592
 URL: https://issues.apache.org/jira/browse/HADOOP-12592
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Haohui Mai


The following classes in hdfs-client use Google's guava library:

{noformat}
./src/main/java/org/apache/hadoop/hdfs/BlockReaderFactory.java
./src/main/java/org/apache/hadoop/hdfs/BlockReaderLocal.java
./src/main/java/org/apache/hadoop/hdfs/ClientContext.java
./src/main/java/org/apache/hadoop/hdfs/DFSClient.java
./src/main/java/org/apache/hadoop/hdfs/DFSClientFaultInjector.java
./src/main/java/org/apache/hadoop/hdfs/DFSInotifyEventInputStream.java
./src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
./src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java
./src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
./src/main/java/org/apache/hadoop/hdfs/DFSStripedOutputStream.java
./src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java
./src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
./src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java
./src/main/java/org/apache/hadoop/hdfs/KeyProviderCache.java
./src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java
./src/main/java/org/apache/hadoop/hdfs/PeerCache.java
./src/main/java/org/apache/hadoop/hdfs/RemoteBlockReader2.java
./src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java
./src/main/java/org/apache/hadoop/hdfs/XAttrHelper.java
./src/main/java/org/apache/hadoop/hdfs/client/HdfsDataInputStream.java
./src/main/java/org/apache/hadoop/hdfs/client/HdfsDataOutputStream.java
./src/main/java/org/apache/hadoop/hdfs/client/impl/DfsClientConf.java
./src/main/java/org/apache/hadoop/hdfs/client/impl/LeaseRenewer.java
./src/main/java/org/apache/hadoop/hdfs/protocol/BlockStoragePolicy.java
./src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveInfo.java
./src/main/java/org/apache/hadoop/hdfs/protocol/CacheDirectiveIterator.java
./src/main/java/org/apache/hadoop/hdfs/protocol/DatanodeID.java
./src/main/java/org/apache/hadoop/hdfs/protocol/LocatedBlock.java
./src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java
./src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PacketHeader.java
./src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PacketReceiver.java
./src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/PipelineAck.java
./src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/DataTransferSaslUtil.java
./src/main/java/org/apache/hadoop/hdfs/protocol/datatransfer/sasl/SaslDataTransferClient.java
./src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientDatanodeProtocolTranslatorPB.java
./src/main/java/org/apache/hadoop/hdfs/protocolPB/ClientNamenodeProtocolTranslatorPB.java
./src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java
./src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockMetadataHeader.java
./src/main/java/org/apache/hadoop/hdfs/shortcircuit/DfsClientShm.java
./src/main/java/org/apache/hadoop/hdfs/shortcircuit/DfsClientShmManager.java
./src/main/java/org/apache/hadoop/hdfs/shortcircuit/DomainSocketFactory.java
./src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitCache.java
./src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitReplica.java
./src/main/java/org/apache/hadoop/hdfs/shortcircuit/ShortCircuitShm.java
./src/main/java/org/apache/hadoop/hdfs/util/ByteArrayManager.java
./src/main/java/org/apache/hadoop/hdfs/util/StripedBlockUtil.java
./src/main/java/org/apache/hadoop/hdfs/web/ByteRangeInputStream.java
./src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java
./src/main/java/org/apache/hadoop/hdfs/web/TokenAspect.java
./src/main/java/org/apache/hadoop/hdfs/web/URLConnectionFactory.java
./src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
./src/main/java/org/apache/hadoop/hdfs/web/resources/UserParam.java
./src/test/java/org/apache/hadoop/hdfs/TestPeerCache.java
./src/test/java/org/apache/hadoop/hdfs/client/impl/TestLeaseRenewer.java
./src/test/java/org/apache/hadoop/hdfs/web/TestByteRangeInputStream.java
./src/test/java/org/apache/hadoop/hdfs/web/TestURLConnectionFactory.java
{noformat}

Guava has created quite a few dependency headaches for downstream projects; it 
would be nice not to use Guava code in the hdfs-client module.





[jira] [Created] (HADOOP-12475) Replace guava Cache with ConcurrentHashMap for caching Connection in ipc Client

2015-10-13 Thread Walter Su (JIRA)
Walter Su created HADOOP-12475:
--

 Summary: Replace guava Cache with ConcurrentHashMap for caching 
Connection in ipc Client
 Key: HADOOP-12475
 URL: https://issues.apache.org/jira/browse/HADOOP-12475
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su


Quoting [~daryn] from HADOOP-11772:
{quote}
CacheBuilder is obscenely expensive for a concurrent map, and it requires 
generating unnecessary garbage even just to look up a key. Replace it with 
ConcurrentHashMap.

I identified this issue because it impaired my own perf testing under load. The 
slowdown isn't just the sync. It's the expense of Connection's ctor stalling 
other connections. The expense of ConnectionId#equals causes delays. 
Synch'ing on connections causes unfair contention, unlike a sync'ed method. 
Concurrency simply hides this.
{quote}

BTW, guava Cache is heavyweight. Per a local test, ConcurrentHashMap has better 
overall performance.





[jira] [Created] (HADOOP-11616) Remove workaround for Curator's ChildReaper requiring Guava 15+

2015-02-19 Thread Robert Kanter (JIRA)
Robert Kanter created HADOOP-11616:
--

 Summary: Remove workaround for Curator's ChildReaper requiring 
Guava 15+
 Key: HADOOP-11616
 URL: https://issues.apache.org/jira/browse/HADOOP-11616
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Robert Kanter


HADOOP-11612 adds a copy of Curator 2.7.1's {{ChildReaper}} and 
{{TestChildReaper}} with minor modifications to work with Guava 11.0.2.  We 
should remove these classes and update any usages to point to Curator itself 
once we update Guava.





[jira] [Created] (HADOOP-11612) Workaround for Curator's ChildReaper requiring Guava 15+

2015-02-18 Thread Robert Kanter (JIRA)
Robert Kanter created HADOOP-11612:
--

 Summary: Workaround for Curator's ChildReaper requiring Guava 15+
 Key: HADOOP-11612
 URL: https://issues.apache.org/jira/browse/HADOOP-11612
 Project: Hadoop Common
  Issue Type: Task
Affects Versions: 2.8.0
Reporter: Robert Kanter
Assignee: Robert Kanter


HADOOP-11492 upped the Curator version to 2.7.1, which makes the 
{{ChildReaper}} class use a method that only exists in newer versions of Guava 
(we have 11.0.2, and it needs 15+).  As a workaround, we can copy the 
{{ChildReaper}} class into hadoop-common and make a minor modification to allow 
it to work with Guava 11.

The {{ChildReaper}} is used by Curator to cleanup old lock znodes.  Curator 
locks are needed by YARN-2942.





[jira] [Created] (HADOOP-11600) Fix up source codes to be compiled with Guava 18.0

2015-02-16 Thread Tsuyoshi OZAWA (JIRA)
Tsuyoshi OZAWA created HADOOP-11600:
---

 Summary: Fix up source codes to be compiled with Guava 18.0
 Key: HADOOP-11600
 URL: https://issues.apache.org/jira/browse/HADOOP-11600
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 2.6.0
Reporter: Tsuyoshi OZAWA
Assignee: Tsuyoshi OZAWA
 Fix For: 2.7.0


Removing usage of Guava methods that are deprecated or missing in the latest 
version (18.0), without updating the pom file.





[jira] [Created] (HADOOP-11470) eliminate use of incompatible guava APIs from the hadoop codebase

2015-01-08 Thread Sangjin Lee (JIRA)
Sangjin Lee created HADOOP-11470:


 Summary: eliminate use of incompatible guava APIs from the hadoop 
codebase
 Key: HADOOP-11470
 URL: https://issues.apache.org/jira/browse/HADOOP-11470
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Sangjin Lee
Assignee: Sangjin Lee


In the same vein as HADOOP-11286, there are several remaining usages of Guava 
APIs that are incompatible with more recent versions (e.g. 16).

This JIRA proposes eliminating those usages. With this, the Hadoop code base 
should compile and run cleanly even if, for example, Guava 16 is used.

This JIRA doesn't propose upgrading the guava dependency version however (just 
making the codebase compatible with guava 16+).





[jira] [Resolved] (HADOOP-11319) Update Guava to 18.0

2014-11-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-11319.
-
Resolution: Duplicate

> Update Guava to 18.0
> 
>
> Key: HADOOP-11319
> URL: https://issues.apache.org/jira/browse/HADOOP-11319
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Tim Robertson
>Priority: Critical
>
> In the Hadoop libraries you'll find both 11.0.2 (test scope, IIRC) and 14.0.1, 
> which are both very outdated. 14.0.1 removes things used in 11.0.2, and 15.0 
> removes things that Hadoop code still uses from 14.0.1.
> In our experience through CDH3, 4, and 5, Guava (along with Jackson and SLF4J 
> 1.7.5) has been the biggest cause of classpath issues.





[jira] [Created] (HADOOP-11319) Update Guava to 18.0

2014-11-19 Thread Tim Robertson (JIRA)
Tim Robertson created HADOOP-11319:
--

 Summary: Update Guava to 18.0
 Key: HADOOP-11319
 URL: https://issues.apache.org/jira/browse/HADOOP-11319
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Tim Robertson
Priority: Critical


In the Hadoop libraries you'll find both 11.0.2 (test scope, IIRC) and 14.0.1, 
which are both very outdated. 14.0.1 removes things used in 11.0.2, and 15.0 
removes things that Hadoop code still uses from 14.0.1.

In our experience through CDH3, 4, and 5, Guava (along with Jackson and SLF4J 
1.7.5) has been the biggest cause of classpath issues.






Re: Guava

2014-11-10 Thread Colin McCabe
I'm usually an advocate for getting rid of unnecessary dependencies
(cough, jetty, cough), but a lot of the things in Guava are really
useful.

Immutable collections, BiMap, Multisets, Arrays#asList, the stuff for
writing hashCode() and equals(), String#Joiner, the list goes on.  We
particularly use the Cache/CacheBuilder stuff a lot in HDFS to get
maps with LRU eviction without writing a lot of boilerplate.  The QJM
stuff uses ListenableFuture a lot, although perhaps we could come up
with our own equivalent for that.
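For contrast, the boilerplate that CacheBuilder.maximumSize(n) spares you — a 
hand-rolled JDK LRU map — looks roughly like this sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// An access-ordered LinkedHashMap that evicts its eldest (least
// recently used) entry once the map exceeds maxEntries.
public class LruMap<K, V> extends LinkedHashMap<K, V> {
  private final int maxEntries;

  public LruMap(int maxEntries) {
    super(16, 0.75f, true);          // accessOrder = true -> LRU iteration
    this.maxEntries = maxEntries;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxEntries;
  }
}
```

Note this gives size-bounded eviction only; CacheBuilder also adds concurrency, 
expiry, and loading, which is the boilerplate saving being described.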

On Mon, Nov 10, 2014 at 9:26 AM, Alejandro Abdelnur  wrote:
> IMO we should:
>
> 1* have a clean and thin client API JAR (which does not drag any 3rd party
> dependencies, or a well defined small set -i.e. slf4j & log4j-)
> 2* have a client implementation that uses a classloader to isolate client
> impl 3rd party deps from app dependencies.
>
> #2 can be done using a stock URLClassLoader (i would just subclass it to
> forbid packages in the API JAR and exposed 3rd parties to be loaded from
> the app JAR)
>
> #1 is the tricky thing as our current API modules don't have a clean
> API/impl separation.
>
> thx
> PS: If folks are interested in pursuing this, I can put together a prototype
> of how #2 would work (I don't think it will be more than 200 lines of code)

Absolutely, I agree that we should not be using Guava types in public
APIs.  Guava has not been very responsible with backwards
compatibility, that much is clear.

A client / server jar separation is an interesting idea.  But then we
still have to get rid of Guava and other library deps in the client
jars.  I think it would be more work than it seems.  For example, the
HDFS client uses Guava Cache a lot, so we'd have to write our own
version of this.

Can't we just shade this stuff?  Has anyone tried shading Hadoop's Guava?

best,
Colin


>
>
> On Mon, Nov 10, 2014 at 5:18 AM, Steve Loughran 
> wrote:
>
>> Yes, Guava is a constant pain; there's lots of open JIRAs related to it, as
>> its the one we can't seamlessly upgrade. Not unless we do our own fork and
>> reinsert the missing classes.
>>
>> The most common uses in the code are
>>
>> @VisibleForTesting (easily replicated)
>> and the Precondition.check() operations
>>
>> The latter is also easily swapped out, and we could even add the check they
>> forgot:
>> Preconditions.checkArgNotNull(argname, arg)
>>
>>
>> These are easy; its the more complex data structures that matter more.
>>
>> I think for Hadoop 2.7 & java 7 we need to look at this problem and do
>> something. Even if we continue to ship Guava 11 so that the HBase team
>> don't send any (more) death threats, we can/should rework Hadoop to build
>> and run against Guava 16+ too. That's needed to fix some of the recent java
>> 7/8+ changes.
>>
>> - Everything in v11 dropped from v16 MUST be implemented with our own
>> versions.
>> - Anything tagged as deprecated in 11+ SHOULD be replaced by newer stuff,
>> wherever possible.
>>
>> I think for 2.7+ we should add some new profiles to the POM, for Java 8 and
>> 9 alongside the new baseline java 7. For those later versions we could
>> perhaps mandate Guava 16.
>>
>>
>>
>> On 10 November 2014 00:42, Arun C Murthy  wrote:
>>
>> > … has been a constant pain w.r.t compatibility etc.
>> >
>> > Should we consider adopting a policy to not use guava in
>> Common/HDFS/YARN?
>> >
>> > MR doesn't matter too much since it's application-side issue, it does
>> hurt
>> > end-users though since they still might want a newer guava-version, but
>> at
>> > least they can modify MR.
>> >
>> > Thoughts?
>> >
>> > thanks,
>> > Arun
>> >
>> >
>> > --
>> > CONFIDENTIALITY NOTICE
>> > NOTICE: This message is intended for the use of the individual or entity
>> to
>> > which it is addressed and may contain information that is confidential,
>> > privileged and exempt from disclosure under applicable law. If the reader
>> > of this message is not the intended recipient, you are hereby notified
>> that
>> > any printing, copying, dissemination, distribution, disclosure or
>> > forwarding of this communication is strictly prohibited. If you have
>> > received this communication in error, please contact the sender
>> immediately
>> > and delete it from your system. Thank You.
>> >
>>
>>


Re: Guava

2014-11-10 Thread Sangjin Lee
FYI, we have an existing ApplicationClassLoader implementation that is used
to isolate client/task classes from the rest. If we're going down the route
of classloader isolation on this, it would be good to come up with a
coherent strategy regarding both of these.

As a more practical step, I like the idea of isolating usage of guava that
breaks with guava 16 and later. I assume (but I haven't looked into it)
that it's fairly straightforward to isolate them and fix them. That work
could be done at any time without any version upgrades or impacting users.

On Mon, Nov 10, 2014 at 9:26 AM, Alejandro Abdelnur 
wrote:

> IMO we should:
>
> 1* have a clean and thin client API JAR (which does not drag any 3rd party
> dependencies, or a well defined small set -i.e. slf4j & log4j-)
> 2* have a client implementation that uses a classloader to isolate client
> impl 3rd party deps from app dependencies.
>
> #2 can be done using a stock URLClassLoader (i would just subclass it to
> forbid packages in the API JAR and exposed 3rd parties to be loaded from
> the app JAR)
>
> #1 is the tricky thing as our current API modules don't have a clean
> API/impl separation.
>
> thx
> PS: If folks are interested in pursuing this, I can put together a prototype
> of how #2 would work (I don't think it will be more than 200 lines of
> code)
>
>
> On Mon, Nov 10, 2014 at 5:18 AM, Steve Loughran 
> wrote:
>
> > Yes, Guava is a constant pain; there's lots of open JIRAs related to it,
> as
> > its the one we can't seamlessly upgrade. Not unless we do our own fork
> and
> > reinsert the missing classes.
> >
> > The most common uses in the code are
> >
> > @VisibleForTesting (easily replicated)
> > and the Precondition.check() operations
> >
> > The latter is also easily swapped out, and we could even add the check
> they
> > forgot:
> > Preconditions.checkArgNotNull(argname, arg)
> >
> >
> > These are easy; its the more complex data structures that matter more.
> >
> > I think for Hadoop 2.7 & java 7 we need to look at this problem and do
> > something. Even if we continue to ship Guava 11 so that the HBase team
> > don't send any (more) death threats, we can/should rework Hadoop to build
> > and run against Guava 16+ too. That's needed to fix some of the recent
> java
> > 7/8+ changes.
> >
> > - Everything in v11 dropped from v16 MUST be implemented with our own
> > versions.
> > - Anything tagged as deprecated in 11+ SHOULD be replaced by newer stuff,
> > wherever possible.
> >
> > I think for 2.7+ we should add some new profiles to the POM, for Java 8
> and
> > 9 alongside the new baseline java 7. For those later versions we could
> > perhaps mandate Guava 16.
> >
> >
> >
> > On 10 November 2014 00:42, Arun C Murthy  wrote:
> >
> > > ... has been a constant pain w.r.t compatibility etc.
> > >
> > > Should we consider adopting a policy to not use guava in
> > Common/HDFS/YARN?
> > >
> > > MR doesn't matter too much since it's application-side issue, it does
> > hurt
> > > end-users though since they still might want a newer guava-version, but
> > at
> > > least they can modify MR.
> > >
> > > Thoughts?
> > >
> > > thanks,
> > > Arun
> > >
> > >
> >
> >
>


Re: Guava

2014-11-10 Thread Alejandro Abdelnur
IMO we should:

1* have a clean and thin client API JAR (which does not drag in any 3rd party
dependencies, or only a well-defined small set -i.e. slf4j & log4j-)
2* have a client implementation that uses a classloader to isolate client
impl 3rd party deps from app dependencies.

#2 can be done using a stock URLClassLoader (I would just subclass it to
forbid packages in the API JAR and exposed 3rd parties from being loaded
from the app JAR)

#1 is the tricky thing, as our current API modules don't have a clean
API/impl separation.

thx
PS: If folks are interested in pursuing this, I can put together a prototype
of how #2 would work (I don't think it will be more than 200 lines of code)
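For the curious, idea #2 can be sketched in a few lines. This is a hedged illustration, not the prototype offered above: FilteringClassLoader and its prefix list are invented names, and a real isolation loader would also load app classes child-first.

```java
import java.net.URL;
import java.net.URLClassLoader;

/**
 * Sketch of idea #2: an application classloader that refuses to serve the
 * client-API packages (and the exposed 3rd parties) from the app JARs,
 * delegating those to the parent loader instead. Illustrative names only,
 * not an existing Hadoop class.
 */
public class FilteringClassLoader extends URLClassLoader {
    private final String[] apiPrefixes;

    public FilteringClassLoader(URL[] appJars, ClassLoader parent,
                                String... apiPrefixes) {
        super(appJars, parent);
        this.apiPrefixes = apiPrefixes;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        for (String prefix : apiPrefixes) {
            if (name.startsWith(prefix)) {
                // Reserved packages always come from the parent (client) side,
                // so the app JARs cannot shadow them.
                return getParent().loadClass(name);
            }
        }
        return super.loadClass(name, resolve);
    }

    public static void main(String[] args) throws Exception {
        FilteringClassLoader cl = new FilteringClassLoader(
                new URL[0], ClassLoader.getSystemClassLoader(), "java.");
        // A reserved class resolves to the same Class object as the parent's.
        if (cl.loadClass("java.lang.String") != String.class) {
            throw new AssertionError("expected parent-loaded java.lang.String");
        }
    }
}
```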


On Mon, Nov 10, 2014 at 5:18 AM, Steve Loughran 
wrote:

> Yes, Guava is a constant pain; there's lots of open JIRAs related to it, as
> its the one we can't seamlessly upgrade. Not unless we do our own fork and
> reinsert the missing classes.
>
> The most common uses in the code are
>
> @VisibleForTesting (easily replicated)
> and the Precondition.check() operations
>
> The latter is also easily swapped out, and we could even add the check they
> forgot:
> Preconditions.checkArgNotNull(argname, arg)
>
>
> These are easy; its the more complex data structures that matter more.
>
> I think for Hadoop 2.7 & java 7 we need to look at this problem and do
> something. Even if we continue to ship Guava 11 so that the HBase team
> don't send any (more) death threats, we can/should rework Hadoop to build
> and run against Guava 16+ too. That's needed to fix some of the recent java
> 7/8+ changes.
>
> -Everything in v11 dropped from v16 MUST  to be implemented with our own
> versions.
> -anything tagged as deprecated in 11+ SHOULD be replaced by newer stuff,
> wherever possible.
>
> I think for 2.7+ we should add some new profiles to the POM, for Java 8 and
> 9 alongside the new baseline java 7. For those later versions we could
> perhaps mandate Guava 16.
>
>
>
> On 10 November 2014 00:42, Arun C Murthy  wrote:
>
> > … has been a constant pain w.r.t compatibility etc.
> >
> > Should we consider adopting a policy to not use guava in
> Common/HDFS/YARN?
> >
> > MR doesn't matter too much since it's application-side issue, it does
> hurt
> > end-users though since they still might want a newer guava-version, but
> at
> > least they can modify MR.
> >
> > Thoughts?
> >
> > thanks,
> > Arun
> >
> >
> >
>
>


Re: Guava

2014-11-10 Thread Steve Loughran
Yes, Guava is a constant pain; there's lots of open JIRAs related to it, as
it's the one we can't seamlessly upgrade. Not unless we do our own fork and
reinsert the missing classes.

The most common uses in the code are

@VisibleForTesting (easily replicated)
and the Preconditions.check() operations

The latter is also easily swapped out, and we could even add the check they
forgot:
Preconditions.checkArgNotNull(argname, arg)


These are easy; it's the more complex data structures that matter more.

I think for Hadoop 2.7 & java 7 we need to look at this problem and do
something. Even if we continue to ship Guava 11 so that the HBase team
don't send any (more) death threats, we can/should rework Hadoop to build
and run against Guava 16+ too. That's needed to fix some of the recent java
7/8+ changes.

-Everything in v11 dropped from v16 MUST be implemented with our own
versions.
-anything tagged as deprecated in 11+ SHOULD be replaced by newer stuff,
wherever possible.

I think for 2.7+ we should add some new profiles to the POM, for Java 8 and
9 alongside the new baseline java 7. For those later versions we could
perhaps mandate Guava 16.
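The homegrown replacements described above really are small. A minimal sketch, with invented names (this Preconditions class and checkArgNotNull are illustrative, not an existing Hadoop API):

```java
/**
 * Homegrown stand-in for the most common Guava usages in the codebase.
 * Illustrative only -- not an existing Hadoop or Guava API; checkArgNotNull
 * is the "check they forgot", naming the argument in the message.
 */
public final class Preconditions {
    private Preconditions() {}

    /** Equivalent of Guava's Preconditions.checkArgument. */
    public static void checkArgument(boolean expression, String message) {
        if (!expression) {
            throw new IllegalArgumentException(message);
        }
    }

    /** Null check that names the offending argument, returning it on success. */
    public static <T> T checkArgNotNull(String argName, T arg) {
        if (arg == null) {
            throw new NullPointerException(
                    "Argument '" + argName + "' must not be null");
        }
        return arg;
    }

    public static void main(String[] args) {
        checkArgument(1 + 1 == 2, "arithmetic is broken");
        checkArgNotNull("path", "/tmp");
        boolean threw = false;
        try {
            checkArgNotNull("path", null);
        } catch (NullPointerException expected) {
            threw = true;
        }
        if (!threw) {
            throw new AssertionError("null argument was not rejected");
        }
    }
}
```

(@VisibleForTesting is even simpler: an empty marker annotation suffices.)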



On 10 November 2014 00:42, Arun C Murthy  wrote:

> … has been a constant pain w.r.t compatibility etc.
>
> Should we consider adopting a policy to not use guava in Common/HDFS/YARN?
>
> MR doesn't matter too much since it's application-side issue, it does hurt
> end-users though since they still might want a newer guava-version, but at
> least they can modify MR.
>
> Thoughts?
>
> thanks,
> Arun
>
>
>



Re: Guava

2014-11-09 Thread Vinayakumar B
As Haohui Mai said, removing the dependency on the Guava may not be a good
idea.

But, instead, can we use a fixed Guava version in Hadoop which is stable as
of now, with a shaded package structure?
So that it will not break the application-level dependency on another
version of Guava. Inside Hadoop we can always use the shaded package of
Guava.
I think similar idea has been proposed in some Jira, I don't remember the
exact Jira number.

Regards,
Vinay

On Mon, Nov 10, 2014 at 7:13 AM, Haohui Mai  wrote:

> Guava did make the lives of Hadoop development easier in many cases -- What
> I've been consistently hearing is that the version of Guava used is Hadoop
> is so old that it starts to hurt the application developers.
>
> I appreciate the value of Guava -- things like CacheMap are fairly
> difficult to implement efficiently and correctly.
>
> I think that creating separate client libraries for Hadoop can largely
> alleviate the problem -- obviously these libraries cannot use Guava, but it
> allows us to use Guava's help on the server side. For example, HDFS-6200 is
> one of the initiatives.
>
> Just my two cents.
>
> Regards,
> Haohui
>
> On Sun, Nov 9, 2014 at 4:42 PM, Arun C Murthy  wrote:
>
> > … has been a constant pain w.r.t compatibility etc.
> >
> > Should we consider adopting a policy to not use guava in
> Common/HDFS/YARN?
> >
> > MR doesn't matter too much since it's application-side issue, it does
> hurt
> > end-users though since they still might want a newer guava-version, but
> at
> > least they can modify MR.
> >
> > Thoughts?
> >
> > thanks,
> > Arun
> >
> >
> >
>
>


Re: Guava

2014-11-09 Thread Haohui Mai
Guava did make the lives of Hadoop developers easier in many cases -- what
I've been consistently hearing is that the version of Guava used in Hadoop
is so old that it starts to hurt the application developers.

I appreciate the value of Guava -- things like CacheMap are fairly
difficult to implement efficiently and correctly.

I think that creating separate client libraries for Hadoop can largely
alleviate the problem -- obviously these libraries cannot use Guava, but it
allows us to use Guava's help on the server side. For example, HDFS-6200 is
one of the initiatives.

Just my two cents.

Regards,
Haohui

On Sun, Nov 9, 2014 at 4:42 PM, Arun C Murthy  wrote:

> … has been a constant pain w.r.t compatibility etc.
>
> Should we consider adopting a policy to not use guava in Common/HDFS/YARN?
>
> MR doesn't matter too much since it's application-side issue, it does hurt
> end-users though since they still might want a newer guava-version, but at
> least they can modify MR.
>
> Thoughts?
>
> thanks,
> Arun
>
>
>



Guava

2014-11-09 Thread Arun C Murthy
… has been a constant pain w.r.t compatibility etc.

Should we consider adopting a policy to not use guava in Common/HDFS/YARN? 

MR doesn't matter too much since it's application-side issue, it does hurt 
end-users though since they still might want a newer guava-version, but at 
least they can modify MR.

Thoughts?

thanks,
Arun




Re: Time to address the Guava version problem

2014-10-13 Thread Steve Loughran
I've a patch for HADOOP-11102 which rolls Curator back to v2.4.1, which
only pulls in Guava 14. Hadoop should now be weakly consistent -at least
not "strongly inconsistent"- in its Guava versions.

allowing hadoop to work on 16.x while still remaining compatible with 11.x
is still something to work on -there's some patches there already

On 24 September 2014 07:35, Billie Rinaldi  wrote:

> The use of an unnecessarily old dependency encourages problems like
> HDFS-7040.  The current Guava dependency is a big problem for downstream
> apps and I'd really like to see it addressed.
>
> On Tue, Sep 23, 2014 at 2:09 PM, Steve Loughran 
> wrote:
>
> > I'm using curator elsewhere, it does log a lot (as does the ZK client),
> but
> > it solves a lot of problem. It's being adopted more downstream too.
> >
> > I'm wondering if we can move the code to the extent we know it works with
> > Guava 16, with the hadoop core being 16-compatible, but not actually
> > migrated to 16.x only. Then hadoop ships with 16 for curator & downstream
> > apps, but we say "you can probably roll back to 11 provided you don't use
> > features x-y-z".
> >
> > On 23 September 2014 21:55, Robert Kanter  wrote:
> >
> > > At the same time, not being able to use Curator will require a lot of
> > extra
> > > code, a lot of which we probably already have from the ZKRMStateStore,
> > but
> > > it's not available to use in hadoop-auth.  We'd need to create our own
> ZK
> > > libraries that Hadoop components can use, but (a) that's going to take
> a
> > > while, and (b) it seems silly to reinvent the wheel when Curator
> already
> > > does all this.
> > >
> > > I agree that upgrading Guava will be a compatibility problem though...
> > >
> > > On Tue, Sep 23, 2014 at 9:30 AM, Sandy Ryza 
> > > wrote:
> > >
> > > > If we've broken compatibility in branch-2, that's a bug that we need
> to
> > > > fix. HADOOP-10868 has not yet made it into a release; I don't see it
> > as a
> > > > justification for solidifying the breakage.
> > > >
> > > > -1 to upgrading Guava in branch-2.
> > > >
> > > > On Tue, Sep 23, 2014 at 3:06 AM, Steve Loughran <
> > ste...@hortonworks.com>
> > > > wrote:
> > > >
> > > > > +1 to upgrading guava. Irrespective of downstream apps, the hadoop
> > > source
> > > > > tree is now internally inconsistent
> > > > >
> > > > > On 22 September 2014 17:56, Sangjin Lee  wrote:
> > > > >
> > > > > > I agree that a more robust solution is to have better
> classloading
> > > > > > isolation.
> > > > > >
> > > > > > Still, IMHO guava (and possibly protobuf as well) sticks out
> like a
> > > > sore
> > > > > > thumb. There are just too many issues in trying to support both
> > guava
> > > > 11
> > > > > > and guava 16. Independent of what we may do with the classloading
> > > > > > isolation, we should still consider upgrading guava.
> > > > > >
> > > > > > My 2 cents.
> > > > > >
> > > > > > On Sun, Sep 21, 2014 at 3:11 PM, Karthik Kambatla <
> > > ka...@cloudera.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Upgrading Guava version is tricky. While it helps in many
> cases,
> > it
> > > > can
> > > > > > > break existing applications/deployments. I understand we do not
> > > have
> > > > a
> > > > > > > policy for updating dependencies, but still we should be
> careful
> > > with
> > > > > > > Guava.
> > > > > > >
> > > > > > > I would be more inclined towards a more permanent solution to
> > this
> > > > > > problem
> > > > > > > - how about prioritizing classpath isolation so applications
> > aren't
> > > > > > > affected by Hadoop dependency updates at all? I understand that
> > > will
> > > > > also
> > > > > > > break user applications, but it might be the driving feature
> for
> > > > Hadoop
> > > > > > > 3.0?
> > > > > > >
> > > > > > > On Fri, Sep 19, 2014 a

[jira] [Resolved] (HADOOP-10961) Use of deprecated Google Guava (v17) Stopwatch constructor in Hadoop FileInputFormat causes an exception

2014-09-29 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-10961.
-
Resolution: Duplicate

> Use of deprecated Google Guava (v17) Stopwatch constructor in Hadoop 
> FileInputFormat causes an exception
> 
>
> Key: HADOOP-10961
> URL: https://issues.apache.org/jira/browse/HADOOP-10961
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Daniel Nydegger
>
> With Google Guava 17 the Stopwatch() constructor is marked as deprecated. The 
> use of the constructor in 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat (Line 369) causes an 
> exception 
> Exception in thread "main" java.lang.IllegalAccessError: tried to access 
> method com.google.common.base.Stopwatch.()V from class 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat
>   at 
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:369)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
>   at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
>   at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
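Since the breakage above is the Stopwatch constructor itself, callers who cannot pin a Guava version can drop the dependency for this use entirely. A minimal stdlib sketch (SimpleStopwatch is an invented name, not an existing Hadoop class; on Guava 15+, Stopwatch.createStarted() is the supported spelling instead of new Stopwatch()):

```java
/**
 * System.nanoTime-based timer covering the small slice of Guava's Stopwatch
 * that FileInputFormat uses, sidestepping the constructor that newer Guava
 * releases made inaccessible. Illustrative name and API only.
 */
public final class SimpleStopwatch {
    private long startNanos;
    private long elapsedNanos;
    private boolean running;

    public SimpleStopwatch start() {
        startNanos = System.nanoTime();
        running = true;
        return this;
    }

    public SimpleStopwatch stop() {
        elapsedNanos += System.nanoTime() - startNanos;
        running = false;
        return this;
    }

    /** Total measured time in milliseconds, including a running interval. */
    public long elapsedMillis() {
        long nanos = elapsedNanos
                + (running ? System.nanoTime() - startNanos : 0L);
        return nanos / 1_000_000L;
    }

    public static void main(String[] args) {
        SimpleStopwatch sw = new SimpleStopwatch().start();
        long whileRunning = sw.elapsedMillis();
        sw.stop();
        if (whileRunning < 0 || sw.elapsedMillis() < 0) {
            throw new AssertionError("elapsed time must be non-negative");
        }
    }
}
```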


Re: Time to address the Guava version problem

2014-09-24 Thread Billie Rinaldi
The use of an unnecessarily old dependency encourages problems like
HDFS-7040.  The current Guava dependency is a big problem for downstream
apps and I'd really like to see it addressed.

On Tue, Sep 23, 2014 at 2:09 PM, Steve Loughran 
wrote:

> I'm using curator elsewhere, it does log a lot (as does the ZK client), but
> it solves a lot of problem. It's being adopted more downstream too.
>
> I'm wondering if we can move the code to the extent we know it works with
> Guava 16, with the hadoop core being 16-compatible, but not actually
> migrated to 16.x only. Then hadoop ships with 16 for curator & downstream
> apps, but we say "you can probably roll back to 11 provided you don't use
> features x-y-z".
>
> On 23 September 2014 21:55, Robert Kanter  wrote:
>
> > At the same time, not being able to use Curator will require a lot of
> extra
> > code, a lot of which we probably already have from the ZKRMStateStore,
> but
> > it's not available to use in hadoop-auth.  We'd need to create our own ZK
> > libraries that Hadoop components can use, but (a) that's going to take a
> > while, and (b) it seems silly to reinvent the wheel when Curator already
> > does all this.
> >
> > I agree that upgrading Guava will be a compatibility problem though...
> >
> > On Tue, Sep 23, 2014 at 9:30 AM, Sandy Ryza 
> > wrote:
> >
> > > If we've broken compatibility in branch-2, that's a bug that we need to
> > > fix. HADOOP-10868 has not yet made it into a release; I don't see it
> as a
> > > justification for solidifying the breakage.
> > >
> > > -1 to upgrading Guava in branch-2.
> > >
> > > On Tue, Sep 23, 2014 at 3:06 AM, Steve Loughran <
> ste...@hortonworks.com>
> > > wrote:
> > >
> > > > +1 to upgrading guava. Irrespective of downstream apps, the hadoop
> > source
> > > > tree is now internally inconsistent
> > > >
> > > > On 22 September 2014 17:56, Sangjin Lee  wrote:
> > > >
> > > > > I agree that a more robust solution is to have better classloading
> > > > > isolation.
> > > > >
> > > > > Still, IMHO guava (and possibly protobuf as well) sticks out like a
> > > sore
> > > > > thumb. There are just too many issues in trying to support both
> guava
> > > 11
> > > > > and guava 16. Independent of what we may do with the classloading
> > > > > isolation, we should still consider upgrading guava.
> > > > >
> > > > > My 2 cents.
> > > > >
> > > > > On Sun, Sep 21, 2014 at 3:11 PM, Karthik Kambatla <
> > ka...@cloudera.com>
> > > > > wrote:
> > > > >
> > > > > > Upgrading Guava version is tricky. While it helps in many cases,
> it
> > > can
> > > > > > break existing applications/deployments. I understand we do not
> > have
> > > a
> > > > > > policy for updating dependencies, but still we should be careful
> > with
> > > > > > Guava.
> > > > > >
> > > > > > I would be more inclined towards a more permanent solution to
> this
> > > > > problem
> > > > > > - how about prioritizing classpath isolation so applications
> aren't
> > > > > > affected by Hadoop dependency updates at all? I understand that
> > will
> > > > also
> > > > > > break user applications, but it might be the driving feature for
> > > Hadoop
> > > > > > 3.0?
> > > > > >
> > > > > > On Fri, Sep 19, 2014 at 5:13 PM, Sangjin Lee 
> > > wrote:
> > > > > >
> > > > > > > I would also agree on upgrading guava. Yes I am aware of the
> > > > potential
> > > > > > > impact on customers who might rely on hadoop bringing in guava
> > 11.
> > > > > > However,
> > > > > > > IMHO the balance tipped over to the other side a while ago;
> i.e.
> > I
> > > > > think
> > > > > > > there are far more people using guava 16 in their code and
> > > scrambling
> > > > > to
> > > > > > > make things work than the other way around.
> > > > > > >
> > > > > > > On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran <
> > > > > ste...@hortonworks.com>
> &

Re: Time to address the Guava version problem

2014-09-23 Thread Steve Loughran
I'm using Curator elsewhere; it does log a lot (as does the ZK client), but
it solves a lot of problems. It's being adopted more downstream too.

I'm wondering if we can move the code to the extent we know it works with
Guava 16, with the hadoop core being 16-compatible, but not actually
migrated to 16.x only. Then hadoop ships with 16 for curator & downstream
apps, but we say "you can probably roll back to 11 provided you don't use
features x-y-z".

On 23 September 2014 21:55, Robert Kanter  wrote:

> At the same time, not being able to use Curator will require a lot of extra
> code, a lot of which we probably already have from the ZKRMStateStore, but
> it's not available to use in hadoop-auth.  We'd need to create our own ZK
> libraries that Hadoop components can use, but (a) that's going to take a
> while, and (b) it seems silly to reinvent the wheel when Curator already
> does all this.
>
> I agree that upgrading Guava will be a compatibility problem though...
>
> On Tue, Sep 23, 2014 at 9:30 AM, Sandy Ryza 
> wrote:
>
> > If we've broken compatibility in branch-2, that's a bug that we need to
> > fix. HADOOP-10868 has not yet made it into a release; I don't see it as a
> > justification for solidifying the breakage.
> >
> > -1 to upgrading Guava in branch-2.
> >
> > On Tue, Sep 23, 2014 at 3:06 AM, Steve Loughran 
> > wrote:
> >
> > > +1 to upgrading guava. Irrespective of downstream apps, the hadoop
> source
> > > tree is now internally inconsistent
> > >
> > > On 22 September 2014 17:56, Sangjin Lee  wrote:
> > >
> > > > I agree that a more robust solution is to have better classloading
> > > > isolation.
> > > >
> > > > Still, IMHO guava (and possibly protobuf as well) sticks out like a
> > sore
> > > > thumb. There are just too many issues in trying to support both guava
> > 11
> > > > and guava 16. Independent of what we may do with the classloading
> > > > isolation, we should still consider upgrading guava.
> > > >
> > > > My 2 cents.
> > > >
> > > > On Sun, Sep 21, 2014 at 3:11 PM, Karthik Kambatla <
> ka...@cloudera.com>
> > > > wrote:
> > > >
> > > > > Upgrading Guava version is tricky. While it helps in many cases, it
> > can
> > > > > break existing applications/deployments. I understand we do not
> have
> > a
> > > > > policy for updating dependencies, but still we should be careful
> with
> > > > > Guava.
> > > > >
> > > > > I would be more inclined towards a more permanent solution to this
> > > > problem
> > > > > - how about prioritizing classpath isolation so applications aren't
> > > > > affected by Hadoop dependency updates at all? I understand that
> will
> > > also
> > > > > break user applications, but it might be the driving feature for
> > Hadoop
> > > > > 3.0?
> > > > >
> > > > > On Fri, Sep 19, 2014 at 5:13 PM, Sangjin Lee 
> > wrote:
> > > > >
> > > > > > I would also agree on upgrading guava. Yes I am aware of the
> > > potential
> > > > > > impact on customers who might rely on hadoop bringing in guava
> 11.
> > > > > However,
> > > > > > IMHO the balance tipped over to the other side a while ago; i.e.
> I
> > > > think
> > > > > > there are far more people using guava 16 in their code and
> > scrambling
> > > > to
> > > > > > make things work than the other way around.
> > > > > >
> > > > > > On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran <
> > > > ste...@hortonworks.com>
> > > > > > wrote:
> > > > > >
> > > > > > > I know we've been ignoring the Guava version problem, but
> > > > HADOOP-10868
> > > > > > > added a transitive dependency on Guava 16 by way of Curator
> 2.6.
> > > > > > >
> > > > > > > Maven currently forces the build to use Guava 11.0.2, but this
> is
> > > > > hiding
> > > > > > at
> > > > > > > compile timeall code paths from curator which may use classes &
> > > > methods
> > > > > > > that aren't there.
> > > > > > >
> > > > > > > I need curator fo

Re: Time to address the Guava version problem

2014-09-23 Thread Robert Kanter
At the same time, not being able to use Curator will require a lot of extra
code, a lot of which we probably already have from the ZKRMStateStore, but
it's not available to use in hadoop-auth.  We'd need to create our own ZK
libraries that Hadoop components can use, but (a) that's going to take a
while, and (b) it seems silly to reinvent the wheel when Curator already
does all this.

I agree that upgrading Guava will be a compatibility problem though...

On Tue, Sep 23, 2014 at 9:30 AM, Sandy Ryza  wrote:

> If we've broken compatibility in branch-2, that's a bug that we need to
> fix. HADOOP-10868 has not yet made it into a release; I don't see it as a
> justification for solidifying the breakage.
>
> -1 to upgrading Guava in branch-2.
>
> On Tue, Sep 23, 2014 at 3:06 AM, Steve Loughran 
> wrote:
>
> > +1 to upgrading guava. Irrespective of downstream apps, the hadoop source
> > tree is now internally inconsistent
> >
> > On 22 September 2014 17:56, Sangjin Lee  wrote:
> >
> > > I agree that a more robust solution is to have better classloading
> > > isolation.
> > >
> > > Still, IMHO guava (and possibly protobuf as well) sticks out like a
> sore
> > > thumb. There are just too many issues in trying to support both guava
> 11
> > > and guava 16. Independent of what we may do with the classloading
> > > isolation, we should still consider upgrading guava.
> > >
> > > My 2 cents.
> > >
> > > On Sun, Sep 21, 2014 at 3:11 PM, Karthik Kambatla 
> > > wrote:
> > >
> > > > Upgrading Guava version is tricky. While it helps in many cases, it
> can
> > > > break existing applications/deployments. I understand we do not have
> a
> > > > policy for updating dependencies, but still we should be careful with
> > > > Guava.
> > > >
> > > > I would be more inclined towards a more permanent solution to this
> > > problem
> > > > - how about prioritizing classpath isolation so applications aren't
> > > > affected by Hadoop dependency updates at all? I understand that will
> > also
> > > > break user applications, but it might be the driving feature for
> Hadoop
> > > > 3.0?
> > > >
> > > > On Fri, Sep 19, 2014 at 5:13 PM, Sangjin Lee 
> wrote:
> > > >
> > > > > I would also agree on upgrading guava. Yes I am aware of the
> > potential
> > > > > impact on customers who might rely on hadoop bringing in guava 11.
> > > > However,
> > > > > IMHO the balance tipped over to the other side a while ago; i.e. I
> > > think
> > > > > there are far more people using guava 16 in their code and
> scrambling
> > > to
> > > > > make things work than the other way around.
> > > > >
> > > > > On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran <
> > > ste...@hortonworks.com>
> > > > > wrote:
> > > > >
> > > > > > I know we've been ignoring the Guava version problem, but
> > > HADOOP-10868
> > > > > > added a transitive dependency on Guava 16 by way of Curator 2.6.
> > > > > >
> > > > > > Maven currently forces the build to use Guava 11.0.2, but this is
> > > > hiding
> > > > > at
> > > > > > compile timeall code paths from curator which may use classes &
> > > methods
> > > > > > that aren't there.
> > > > > >
> > > > > > I need curator for my own work (2.4.1 & Guava 14.0 was what I'd
> > been
> > > > > > using), so don't think we can go back.
> > > > > >
> > > > > > HADOOP-11102 covers the problem -but doesn't propose a specific
> > > > solution.
> > > > > > But to me the one that seems most likely to work is: update Guava
> > > > > >
> > > > > > -steve
> > > > > >
> > > > >
> > > >
> > >
> >
> >
>


Re: Time to address the Guava version problem

2014-09-23 Thread Sandy Ryza
If we've broken compatibility in branch-2, that's a bug that we need to
fix. HADOOP-10868 has not yet made it into a release; I don't see it as a
justification for solidifying the breakage.

-1 to upgrading Guava in branch-2.

On Tue, Sep 23, 2014 at 3:06 AM, Steve Loughran 
wrote:

> +1 to upgrading guava. Irrespective of downstream apps, the hadoop source
> tree is now internally inconsistent
>
> On 22 September 2014 17:56, Sangjin Lee  wrote:
>
> > I agree that a more robust solution is to have better classloading
> > isolation.
> >
> > Still, IMHO guava (and possibly protobuf as well) sticks out like a sore
> > thumb. There are just too many issues in trying to support both guava 11
> > and guava 16. Independent of what we may do with the classloading
> > isolation, we should still consider upgrading guava.
> >
> > My 2 cents.
> >
> > On Sun, Sep 21, 2014 at 3:11 PM, Karthik Kambatla 
> > wrote:
> >
> > > Upgrading Guava version is tricky. While it helps in many cases, it can
> > > break existing applications/deployments. I understand we do not have a
> > > policy for updating dependencies, but still we should be careful with
> > > Guava.
> > >
> > > I would be more inclined towards a more permanent solution to this
> > problem
> > > - how about prioritizing classpath isolation so applications aren't
> > > affected by Hadoop dependency updates at all? I understand that will
> also
> > > break user applications, but it might be the driving feature for Hadoop
> > > 3.0?
> > >
> > > On Fri, Sep 19, 2014 at 5:13 PM, Sangjin Lee  wrote:
> > >
> > > > I would also agree on upgrading guava. Yes I am aware of the
> potential
> > > > impact on customers who might rely on hadoop bringing in guava 11.
> > > However,
> > > > IMHO the balance tipped over to the other side a while ago; i.e. I
> > think
> > > > there are far more people using guava 16 in their code and scrambling
> > to
> > > > make things work than the other way around.
> > > >
> > > > On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran <
> > ste...@hortonworks.com>
> > > > wrote:
> > > >
> > > > > I know we've been ignoring the Guava version problem, but
> > HADOOP-10868
> > > > > added a transitive dependency on Guava 16 by way of Curator 2.6.
> > > > >
> > > > > Maven currently forces the build to use Guava 11.0.2, but this is
> > > hiding
> > > > at
> > > > > compile time all code paths from curator which may use classes &
> > methods
> > > > > that aren't there.
> > > > >
> > > > > I need curator for my own work (2.4.1 & Guava 14.0 was what I'd
> been
> > > > > using), so don't think we can go back.
> > > > >
> > > > > HADOOP-11102 covers the problem -but doesn't propose a specific
> > > solution.
> > > > > But to me the one that seems most likely to work is: update Guava
> > > > >
> > > > > -steve
> > > > >
> > > > >
> > > >
> > >
> >
>
>


Re: Time to address the Guava version problem

2014-09-23 Thread Steve Loughran
+1 to upgrading guava. Irrespective of downstream apps, the hadoop source
tree is now internally inconsistent

On 22 September 2014 17:56, Sangjin Lee  wrote:

> I agree that a more robust solution is to have better classloading
> isolation.
>
> Still, IMHO guava (and possibly protobuf as well) sticks out like a sore
> thumb. There are just too many issues in trying to support both guava 11
> and guava 16. Independent of what we may do with the classloading
> isolation, we should still consider upgrading guava.
>
> My 2 cents.
>
> On Sun, Sep 21, 2014 at 3:11 PM, Karthik Kambatla 
> wrote:
>
> > Upgrading Guava version is tricky. While it helps in many cases, it can
> > break existing applications/deployments. I understand we do not have a
> > policy for updating dependencies, but still we should be careful with
> > Guava.
> >
> > I would be more inclined towards a more permanent solution to this
> problem
> > - how about prioritizing classpath isolation so applications aren't
> > affected by Hadoop dependency updates at all? I understand that will also
> > break user applications, but it might be the driving feature for Hadoop
> > 3.0?
> >
> > On Fri, Sep 19, 2014 at 5:13 PM, Sangjin Lee  wrote:
> >
> > > I would also agree on upgrading guava. Yes I am aware of the potential
> > > impact on customers who might rely on hadoop bringing in guava 11.
> > However,
> > > IMHO the balance tipped over to the other side a while ago; i.e. I
> think
> > > there are far more people using guava 16 in their code and scrambling
> to
> > > make things work than the other way around.
> > >
> > > On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran <
> ste...@hortonworks.com>
> > > wrote:
> > >
> > > > I know we've been ignoring the Guava version problem, but
> HADOOP-10868
> > > > added a transitive dependency on Guava 16 by way of Curator 2.6.
> > > >
> > > > Maven currently forces the build to use Guava 11.0.2, but this is
> > hiding
> > > at
> > > > compile time all code paths from curator which may use classes &
> methods
> > > > that aren't there.
> > > >
> > > > I need curator for my own work (2.4.1 & Guava 14.0 was what I'd been
> > > > using), so don't think we can go back.
> > > >
> > > > HADOOP-11102 covers the problem -but doesn't propose a specific
> > solution.
> > > > But to me the one that seems most likely to work is: update Guava
> > > >
> > > > -steve
> > > >
> > > >
> > >
> >
>



Re: Time to address the Guava version problem

2014-09-22 Thread Sangjin Lee
I agree that a more robust solution is to have better classloading
isolation.

Still, IMHO guava (and possibly protobuf as well) sticks out like a sore
thumb. There are just too many issues in trying to support both guava 11
and guava 16. Independent of what we may do with the classloading
isolation, we should still consider upgrading guava.

My 2 cents.

On Sun, Sep 21, 2014 at 3:11 PM, Karthik Kambatla 
wrote:

> Upgrading Guava version is tricky. While it helps in many cases, it can
> break existing applications/deployments. I understand we do not have a
> policy for updating dependencies, but still we should be careful with
> Guava.
>
> I would be more inclined towards a more permanent solution to this problem
> - how about prioritizing classpath isolation so applications aren't
> affected by Hadoop dependency updates at all? I understand that will also
> break user applications, but it might be the driving feature for Hadoop
> 3.0?
>
> On Fri, Sep 19, 2014 at 5:13 PM, Sangjin Lee  wrote:
>
> > I would also agree on upgrading guava. Yes I am aware of the potential
> > impact on customers who might rely on hadoop bringing in guava 11.
> However,
> > IMHO the balance tipped over to the other side a while ago; i.e. I think
> > there are far more people using guava 16 in their code and scrambling to
> > make things work than the other way around.
> >
> > On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran 
> > wrote:
> >
> > > I know we've been ignoring the Guava version problem, but HADOOP-10868
> > > added a transitive dependency on Guava 16 by way of Curator 2.6.
> > >
> > > Maven currently forces the build to use Guava 11.0.2, but this is
> hiding
> > at
> > > compile time all code paths from curator which may use classes & methods
> > > that aren't there.
> > >
> > > I need curator for my own work (2.4.1 & Guava 14.0 was what I'd been
> > > using), so don't think we can go back.
> > >
> > > HADOOP-11102 covers the problem -but doesn't propose a specific
> solution.
> > > But to me the one that seems most likely to work is: update Guava
> > >
> > > -steve
> > >
> > >
> >
>


Re: Time to address the Guava version problem

2014-09-22 Thread Steve Loughran
On 21 September 2014 23:11, Karthik Kambatla  wrote:

> Upgrading Guava version is tricky. While it helps in many cases, it can
> break existing applications/deployments. I understand we do not have a
> policy for updating dependencies, but still we should be careful with
> Guava.
>

I agree, but the classpath is currently in an inconsistent state: it
includes Guava 11 and a library built against Guava 16.


>
> I would be more inclined towards a more permanent solution to this problem
> - how about prioritizing classpath isolation so applications aren't
> affected by Hadoop dependency updates at all? I understand that will also
> break user applications, but it might be the driving feature for Hadoop
> 3.0?
>


I think this would be good;

if you look at where we're going with YARN-deployed apps, there is a trend
towards pushing up all the JARs, using the distributed cache to reduce the
cost of that upload. All you should really need at the far end are the .xml
files.

Except this has caused a problem with branch-2 to surface: the native libs
aren't binary-signature-compatible with 2.5 or earlier JARs; look at
HADOOP-11064.

This is something to be fixed, but it highlights the problem that even the
native .lib, .so, and .dll files are implicitly part of the in-Hadoop
compatibility layer.

so: the "upload all JARs" strategy has weaknesses too; some OSGi solution
could address that, though we need time playing with that before being able
to claim it solves all problems ... I worry that it may help address JAR
dependencies at the price of performance.



Returning to Guava, which can't be put off as we are already in a mess


>
> On Fri, Sep 19, 2014 at 5:13 PM, Sangjin Lee  wrote:
>
> > I would also agree on upgrading guava. Yes I am aware of the potential
> > impact on customers who might rely on hadoop bringing in guava 11.
> However,
> > IMHO the balance tipped over to the other side a while ago; i.e. I think
> > there are far more people using guava 16 in their code and scrambling to
> > make things work than the other way around.
> >
>

I concur ... too many things you build downstream need a 15+ Guava, which
includes Hadoop now ... we just haven't fully admitted it yet.

Steve






> > On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran 
> > wrote:
> >
> > > I know we've been ignoring the Guava version problem, but HADOOP-10868
> > > added a transitive dependency on Guava 16 by way of Curator 2.6.
> > >
> > > Maven currently forces the build to use Guava 11.0.2, but this is
> hiding
> > at
> > > compile time all code paths from curator which may use classes & methods
> > > that aren't there.
> > >
> > > I need curator for my own work (2.4.1 & Guava 14.0 was what I'd been
> > > using), so don't think we can go back.
> > >
> > > HADOOP-11102 covers the problem -but doesn't propose a specific
> solution.
> > > But to me the one that seems most likely to work is: update Guava
> > >
> > > -steve
> > >
> > >
> >
>



Re: Time to address the Guava version problem

2014-09-21 Thread Karthik Kambatla
Upgrading Guava version is tricky. While it helps in many cases, it can
break existing applications/deployments. I understand we do not have a
policy for updating dependencies, but still we should be careful with
Guava.

I would be more inclined towards a more permanent solution to this problem
- how about prioritizing classpath isolation so applications aren't
affected by Hadoop dependency updates at all? I understand that will also
break user applications, but it might be the driving feature for Hadoop
3.0?

On Fri, Sep 19, 2014 at 5:13 PM, Sangjin Lee  wrote:

> I would also agree on upgrading guava. Yes I am aware of the potential
> impact on customers who might rely on hadoop bringing in guava 11. However,
> IMHO the balance tipped over to the other side a while ago; i.e. I think
> there are far more people using guava 16 in their code and scrambling to
> make things work than the other way around.
>
> On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran 
> wrote:
>
> > I know we've been ignoring the Guava version problem, but HADOOP-10868
> > added a transitive dependency on Guava 16 by way of Curator 2.6.
> >
> > Maven currently forces the build to use Guava 11.0.2, but this is hiding
> at
> > compile time all code paths from curator which may use classes & methods
> > that aren't there.
> >
> > I need curator for my own work (2.4.1 & Guava 14.0 was what I'd been
> > using), so don't think we can go back.
> >
> > HADOOP-11102 covers the problem -but doesn't propose a specific solution.
> > But to me the one that seems most likely to work is: update Guava
> >
> > -steve
> >
> >
>


Re: Time to address the Guava version problem

2014-09-19 Thread Sangjin Lee
I would also agree on upgrading guava. Yes I am aware of the potential
impact on customers who might rely on hadoop bringing in guava 11. However,
IMHO the balance tipped over to the other side a while ago; i.e. I think
there are far more people using guava 16 in their code and scrambling to
make things work than the other way around.

On Thu, Sep 18, 2014 at 2:40 PM, Steve Loughran 
wrote:

> I know we've been ignoring the Guava version problem, but HADOOP-10868
> added a transitive dependency on Guava 16 by way of Curator 2.6.
>
> Maven currently forces the build to use Guava 11.0.2, but this is hiding at
> compile time all code paths from curator which may use classes & methods
> that aren't there.
>
> I need curator for my own work (2.4.1 & Guava 14.0 was what I'd been
> using), so don't think we can go back.
>
> HADOOP-11102 covers the problem -but doesn't propose a specific solution.
> But to me the one that seems most likely to work is: update Guava
>
> -steve
>
>


Time to address the Guava version problem

2014-09-18 Thread Steve Loughran
I know we've been ignoring the Guava version problem, but HADOOP-10868
added a transitive dependency on Guava 16 by way of Curator 2.6.

Maven currently forces the build to use Guava 11.0.2, but this is hiding at
compile time all code paths from curator which may use classes & methods
that aren't there.

I need curator for my own work (2.4.1 & Guava 14.0 was what I'd been
using), so don't think we can go back.

HADOOP-11102 covers the problem, but doesn't propose a specific solution.
But to me the one that seems most likely to work is: update Guava

-steve



[jira] [Created] (HADOOP-11102) Hadoop now has transient dependency on Guava 16

2014-09-17 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11102:
---

 Summary: Hadoop now has transient dependency on Guava 16
 Key: HADOOP-11102
 URL: https://issues.apache.org/jira/browse/HADOOP-11102
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.6.0
Reporter: Steve Loughran


HADOOP-10868 includes apache curator 2.6.0

This depends on Guava 16.0.1

It's not being picked up, as Hadoop is forcing in 11.0.2, but this means:
there is now a risk that curator depends on methods and classes that are not in 
the Hadoop version





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11032) Replace use of Guava Stopwatch with Apache StopWatch

2014-08-29 Thread Gary Steelman (JIRA)
Gary Steelman created HADOOP-11032:
--

 Summary: Replace use of Guava Stopwatch with Apache StopWatch
 Key: HADOOP-11032
 URL: https://issues.apache.org/jira/browse/HADOOP-11032
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Gary Steelman


This patch reduces Hadoop's dependency on an old version of guava. 
Stopwatch.elapsedMillis() isn't part of guava past v16 and the tools I'm 
working on use v17. 

To remedy this and also reduce Hadoop's reliance on old versions of guava, we 
can use the Apache StopWatch (org.apache.commons.lang.time.StopWatch) which 
provides nearly equivalent functionality. apache.commons.lang is already a 
dependency for Hadoop so this will not introduce new dependencies. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)
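The swap proposed in HADOOP-11032 is low-risk because the only functionality at stake, elapsed milliseconds, needs nothing beyond the JDK. As a rough illustration of what both libraries provide at these call sites (a hypothetical stand-in, not the actual patch, which uses org.apache.commons.lang.time.StopWatch):

```java
// Hypothetical, JDK-only stand-in for Guava's Stopwatch.elapsedMillis() /
// commons-lang's StopWatch.getTime(); shown only to illustrate the
// functionality being migrated, not the actual Hadoop change.
public class ElapsedTimer {
    private final long startNanos = System.nanoTime();

    public long elapsedMillis() {
        return (System.nanoTime() - startNanos) / 1_000_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        ElapsedTimer timer = new ElapsedTimer();
        Thread.sleep(20);
        System.out.println("elapsed ms: " + timer.elapsedMillis());
    }
}
```

Either library's stopwatch adds start/stop/reset state on top of this; the point is that Hadoop's call sites only need elapsed time, so they migrate easily.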


[jira] [Created] (HADOOP-10961) Use of deprecated Google Guava (v17) Stopwatch constructor in Hadoop FileInputFormat causes an exception

2014-08-12 Thread Daniel Nydegger (JIRA)
Daniel Nydegger created HADOOP-10961:


 Summary: Use of deprecated Google Guava (v17) Stopwatch 
constructor in Hadoop FileInputFormat causes an exception
 Key: HADOOP-10961
 URL: https://issues.apache.org/jira/browse/HADOOP-10961
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Daniel Nydegger


With Google Guava 17 the Stopwatch() constructor is marked as deprecated. The 
use of the constructor in org.apache.hadoop.mapreduce.lib.input.FileInputFormat 
(Line 369) causes an exception 

Exception in thread "main" java.lang.IllegalAccessError: tried to access method 
com.google.common.base.Stopwatch.<init>()V from class 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat
at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:369)
at 
org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
at 
org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)



--
This message was sent by Atlassian JIRA
(v6.2#6252)
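The trace above is a binary-compatibility break rather than a compile error: FileInputFormat was compiled against Guava 11, where the Stopwatch() constructor was public, but Guava 17 on the runtime classpath no longer exposes it. A sketch of the usual source-level fix, assuming a Guava version with the static factories (15+); this is illustrative only, not the committed Hadoop change:

```java
// Compiled against Guava 11; throws IllegalAccessError when Guava 17 is on
// the runtime classpath, because the constructor is no longer public:
//     Stopwatch sw = new Stopwatch().start();

// Guava 15+ replaces the public constructor with static factory methods:
Stopwatch sw = Stopwatch.createStarted();
long elapsedMs = sw.elapsed(TimeUnit.MILLISECONDS);
```

Note the fix itself only compiles against Guava 15+, which is exactly why the discussion keeps circling back to upgrading the forced 11.0.2 version.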


[jira] [Created] (HADOOP-10101) Update guava dependency to the latest version 15.0

2013-11-14 Thread Rakesh R (JIRA)
Rakesh R created HADOOP-10101:
-

 Summary: Update guava dependency to the latest version 15.0
 Key: HADOOP-10101
 URL: https://issues.apache.org/jira/browse/HADOOP-10101
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Rakesh R


The existing guava version is 11.0.2 which is quite old.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Add Guava as a dependency?

2011-02-11 Thread Mathias Herberts
+1, Guava is the magical piece that can get rid of all those
UnsupportedEncodingException ...!


Re: Add Guava as a dependency?

2011-02-11 Thread Arun C Murthy

+1

Arun

On Feb 10, 2011, at 7:45 PM, Todd Lipcon wrote:

Anyone mind if I pull in the Guava library as a dependency for Common? It
has a bunch of very useful utilities - in this particular case the one I'm
itching to use is ThreadFactoryBuilder:

http://guava-libraries.googlecode.com/svn/tags/release05/javadoc/com/google/common/util/concurrent/ThreadFactoryBuilder.html

More info here:
http://code.google.com/p/guava-libraries/

-Todd
--
Todd Lipcon
Software Engineer, Cloudera




Re: Add Guava as a dependency?

2011-02-11 Thread Owen O'Malley


On Feb 11, 2011, at 10:01 AM, Todd Lipcon wrote:

Cool, seems like enough people are on board. I'll just include this in the
patch for HADOOP-7132 (naming the IPC Reader Threads) since that's where I
wanted to use it.

Can't wait to use this stuff in more patches. I love guava :)


+1 on using Guava, although there was resistance to adding any new  
libraries to the classpath. I think that using well designed and  
supported libraries is much better than rolling custom code.


-- Owen

Re: Add Guava as a dependency?

2011-02-11 Thread Todd Lipcon
Cool, seems like enough people are on board. I'll just include this in the
patch for HADOOP-7132 (naming the IPC Reader Threads) since that's where I
wanted to use it.

Can't wait to use this stuff in more patches. I love guava :)

-Todd

On Fri, Feb 11, 2011 at 4:09 AM, Luke Lu  wrote:

> +1. guava is the new apache commons, maintained by java experts with
> comprehensive test coverage.
>
> I also propose to deprecate any existing utils in hadoop-common with
> duplicate functionality.
>
> On Thu, Feb 10, 2011 at 10:25 PM, Jakob Homan  wrote:
> > +1
> >
> > On Thu, Feb 10, 2011 at 10:04 PM, Konstantin Boudnik 
> wrote:
> >> Actually it seems that Pig uses this already, which perhaps means it
> >> is good enough for us as well ;)
> >> --
> >>   Take care,
> >> Konstantin (Cos) Boudnik
> >>
> >> On Thu, Feb 10, 2011 at 19:45, Todd Lipcon  wrote:
> >>> Anyone mind if I pull in the Guava library as a dependency for Common?
> It
> >>> has a bunch of very useful utilities - in this particular case the one
> I'm
> >>> itching to use is ThreadFactoryBuilder:
> >>>
> >>>
> http://guava-libraries.googlecode.com/svn/tags/release05/javadoc/com/google/common/util/concurrent/ThreadFactoryBuilder.html
> >>>
> >>> More info here:
> >>> http://code.google.com/p/guava-libraries/
> >>>
> >>> -Todd
> >>> --
> >>> Todd Lipcon
> >>> Software Engineer, Cloudera
> >>>
> >>
> >
>



-- 
Todd Lipcon
Software Engineer, Cloudera


Re: Add Guava as a dependency?

2011-02-11 Thread Luke Lu
+1. guava is the new apache commons, maintained by java experts with
comprehensive test coverage.

I also propose to deprecate any existing utils in hadoop-common with
duplicate functionality.

On Thu, Feb 10, 2011 at 10:25 PM, Jakob Homan  wrote:
> +1
>
> On Thu, Feb 10, 2011 at 10:04 PM, Konstantin Boudnik  wrote:
>> Actually it seems that Pig uses this already, which perhaps means it
>> is good enough for us as well ;)
>> --
>>   Take care,
>> Konstantin (Cos) Boudnik
>>
>> On Thu, Feb 10, 2011 at 19:45, Todd Lipcon  wrote:
>>> Anyone mind if I pull in the Guava library as a dependency for Common? It
>>> has a bunch of very useful utilities - in this particular case the one I'm
>>> itching to use is ThreadFactoryBuilder:
>>>
>>> http://guava-libraries.googlecode.com/svn/tags/release05/javadoc/com/google/common/util/concurrent/ThreadFactoryBuilder.html
>>>
>>> More info here:
>>> http://code.google.com/p/guava-libraries/
>>>
>>> -Todd
>>> --
>>> Todd Lipcon
>>> Software Engineer, Cloudera
>>>
>>
>


Re: Add Guava as a dependency?

2011-02-10 Thread Jakob Homan
+1

On Thu, Feb 10, 2011 at 10:04 PM, Konstantin Boudnik  wrote:
> Actually it seems that Pig uses this already, which perhaps means it
> is good enough for us as well ;)
> --
>   Take care,
> Konstantin (Cos) Boudnik
>
> On Thu, Feb 10, 2011 at 19:45, Todd Lipcon  wrote:
>> Anyone mind if I pull in the Guava library as a dependency for Common? It
>> has a bunch of very useful utilities - in this particular case the one I'm
>> itching to use is ThreadFactoryBuilder:
>>
>> http://guava-libraries.googlecode.com/svn/tags/release05/javadoc/com/google/common/util/concurrent/ThreadFactoryBuilder.html
>>
>> More info here:
>> http://code.google.com/p/guava-libraries/
>>
>> -Todd
>> --
>> Todd Lipcon
>> Software Engineer, Cloudera
>>
>


Re: Add Guava as a dependency?

2011-02-10 Thread Konstantin Boudnik
Actually it seems that Pig uses this already, which perhaps means it
is good enough for us as well ;)
--
  Take care,
Konstantin (Cos) Boudnik

On Thu, Feb 10, 2011 at 19:45, Todd Lipcon  wrote:
> Anyone mind if I pull in the Guava library as a dependency for Common? It
> has a bunch of very useful utilities - in this particular case the one I'm
> itching to use is ThreadFactoryBuilder:
>
> http://guava-libraries.googlecode.com/svn/tags/release05/javadoc/com/google/common/util/concurrent/ThreadFactoryBuilder.html
>
> More info here:
> http://code.google.com/p/guava-libraries/
>
> -Todd
> --
> Todd Lipcon
> Software Engineer, Cloudera
>


Add Guava as a dependency?

2011-02-10 Thread Todd Lipcon
Anyone mind if I pull in the Guava library as a dependency for Common? It
has a bunch of very useful utilities - in this particular case the one I'm
itching to use is ThreadFactoryBuilder:

http://guava-libraries.googlecode.com/svn/tags/release05/javadoc/com/google/common/util/concurrent/ThreadFactoryBuilder.html

More info here:
http://code.google.com/p/guava-libraries/

-Todd
-- 
Todd Lipcon
Software Engineer, Cloudera
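For a sense of what ThreadFactoryBuilder provides in this use case (named, daemon-flagged threads for executors), here is a rough JDK-only equivalent; a hypothetical sketch, not Guava's API or the eventual HADOOP-7132 patch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical JDK-only equivalent of the ThreadFactoryBuilder usage wanted
// here: a factory whose threads get a readable name ("IPC Reader #0", ...)
// and the daemon flag, so thread dumps identify the pool.
public class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger();

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + counter.getAndIncrement());
        t.setDaemon(true);
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool =
            Executors.newFixedThreadPool(2, new NamedThreadFactory("IPC Reader #"));
        pool.submit(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Guava's builder covers the same ground with setNameFormat("IPC Reader #%d") and setDaemon(true), plus extras such as thread priority and an uncaught-exception handler, which is the argument for taking the library rather than growing another copy of this class in hadoop-common.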

