[ https://issues.apache.org/jira/browse/SPARK-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Dongjoon Hyun updated SPARK-13702:
----------------------------------
    Description: 

In order to make the docs/examples (and other related code) simpler and more readable, this issue replaces existing code like the following with the `diamond` operator.

{code}
- final ArrayList<Product2<Object, Object>> dataToWrite =
-   new ArrayList<Product2<Object, Object>>();
+ final ArrayList<Product2<Object, Object>> dataToWrite = new ArrayList<>();
{code}

{code}
-List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<JavaPairDStream<String, String>>(numStreams);
+List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<>(numStreams);
{code}

{code}
-Set<Tuple2<Integer, Integer>> edges = new HashSet<Tuple2<Integer, Integer>>(numEdges);
+Set<Tuple2<Integer, Integer>> edges = new HashSet<>(numEdges);
{code}

Java 7 and later support the *diamond* operator, which replaces the type arguments required to invoke the constructor of a generic class with an empty set of type parameters (<>). Currently, Spark Java code uses it inconsistently.

*Reference*
https://docs.oracle.com/javase/8/docs/technotes/guides/language/type-inference-generic-instance-creation.html

  was:

In order to make the docs/examples (and other related code) simpler and more readable, this issue replaces existing code like the following with the `diamond` operator and adds a Checkstyle rule.

{code}
- final ArrayList<Product2<Object, Object>> dataToWrite =
-   new ArrayList<Product2<Object, Object>>();
+ final ArrayList<Product2<Object, Object>> dataToWrite = new ArrayList<>();
{code}

{code}
-List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<JavaPairDStream<String, String>>(numStreams);
+List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<>(numStreams);
{code}

{code}
-Set<Tuple2<Integer, Integer>> edges = new HashSet<Tuple2<Integer, Integer>>(numEdges);
+Set<Tuple2<Integer, Integer>> edges = new HashSet<>(numEdges);
{code}

Java 7 and later support the *diamond* operator, which replaces the type arguments required to invoke the constructor of a generic class with an empty set of type parameters (<>). Currently, Spark Java code uses it inconsistently.

*Reference*
https://docs.oracle.com/javase/8/docs/technotes/guides/language/type-inference-generic-instance-creation.html
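For reference, here is a minimal, self-contained sketch of the kind of change being proposed. The class and variable names below are hypothetical and are not taken from the Spark codebase; the point is only that the diamond form declares exactly the same parameterized type as the explicit form, with the compiler inferring the constructor's type arguments.

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only; names are hypothetical, not from Spark.
public class DiamondExample {
  public static void main(String[] args) {
    // Explicit type arguments on the constructor (pre-Java 7 style).
    List<String> explicit = new ArrayList<String>();

    // Diamond operator (Java 7+): the compiler infers <String> from the
    // declared type of the variable, so both lists have the same type.
    List<String> inferred = new ArrayList<>();

    // The saving is larger for nested generic types.
    Map<String, List<Integer>> index = new HashMap<>();

    explicit.add("a");
    inferred.add("b");

    List<Integer> values = new ArrayList<>();
    values.add(1);
    index.put("c", values);

    System.out.println(explicit); // [a]
    System.out.println(inferred); // [b]
    System.out.println(index);    // {c=[1]}
  }
}
{code}

Both declarations produce the same parameterized type; the diamond form merely omits the repeated type arguments, which is why the replacements above are purely mechanical.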
> Use diamond operator for generic instance creation in Java code
> ---------------------------------------------------------------
>
>                 Key: SPARK-13702
>                 URL: https://issues.apache.org/jira/browse/SPARK-13702
>             Project: Spark
>          Issue Type: Improvement
>          Components: Examples
>            Reporter: Dongjoon Hyun
>            Priority: Trivial
>
> In order to make the docs/examples (and other related code) simpler and
> more readable, this issue replaces existing code like the following with
> the `diamond` operator.
> {code}
> - final ArrayList<Product2<Object, Object>> dataToWrite =
> -   new ArrayList<Product2<Object, Object>>();
> + final ArrayList<Product2<Object, Object>> dataToWrite = new ArrayList<>();
> {code}
> {code}
> -List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<JavaPairDStream<String, String>>(numStreams);
> +List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<>(numStreams);
> {code}
> {code}
> -Set<Tuple2<Integer, Integer>> edges = new HashSet<Tuple2<Integer, Integer>>(numEdges);
> +Set<Tuple2<Integer, Integer>> edges = new HashSet<>(numEdges);
> {code}
> Java 7 and later support the *diamond* operator, which replaces the type
> arguments required to invoke the constructor of a generic class with an
> empty set of type parameters (<>). Currently, Spark Java code uses it
> inconsistently.
> *Reference*
> https://docs.oracle.com/javase/8/docs/technotes/guides/language/type-inference-generic-instance-creation.html

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org