[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/14014


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-19 Thread yhuai
Github user yhuai commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r71277147
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala ---
@@ -442,13 +445,23 @@ private[parquet] class ParquetRowConverter(
 private val elementConverter: Converter = {
   val repeatedType = parquetSchema.getType(0)
   val elementType = catalystSchema.elementType
-  val parentName = parquetSchema.getName

-  if (isElementType(repeatedType, elementType, parentName)) {
+  // At this stage, we're not sure whether the repeated field maps to the element type or is
+  // just the syntactic repeated group of the 3-level standard LIST layout. Here we try to
+  // convert the repeated field into a Catalyst type to see whether the converted type matches
+  // the Catalyst array element type.
+  val guessedElementType = schemaConverter.convertField(repeatedType)
+
+  if (DataType.equalsIgnoreCompatibleNullability(guessedElementType, elementType)) {
+    // If the repeated field corresponds to the element type, creates a new converter using the
+    // type of the repeated field.
     newConverter(repeatedType, elementType, new ParentContainerUpdater {
       override def set(value: Any): Unit = currentArray += value
     })
   } else {
+    // If the repeated field corresponds to the syntactic group in the standard 3-level Parquet
+    // LIST layout, creates a new converter using the only child field of the repeated field.
+    assert(!repeatedType.isPrimitive && repeatedType.asGroupType().getFieldCount == 1)
     new ElementConverter(repeatedType.asGroupType().getType(0), elementType)
--- End diff --

Can we add examples here?
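
The disambiguation performed by the new code above -- convert the repeated Parquet field to a Catalyst type and compare it, ignoring nullability, against the expected array element type -- can be sketched with a self-contained toy model. The `ToyType`/`ParquetType` classes and all helper names below are hypothetical stand-ins, not the actual Spark or Parquet classes:

```scala
// Toy Catalyst-like types (hypothetical; nullability is omitted entirely,
// so plain equality stands in for equalsIgnoreCompatibleNullability).
sealed trait ToyType
case object StringType extends ToyType
case class StructT(fields: List[(String, ToyType)]) extends ToyType

// Toy Parquet types (hypothetical stand-ins for PrimitiveType/GroupType).
sealed trait ParquetType { def name: String }
case class Primitive(name: String, t: ToyType) extends ParquetType
case class Group(name: String, fields: List[ParquetType]) extends ParquetType

// Stand-in for ParquetSchemaConverter.convertField: a group converts to a struct.
def convertField(t: ParquetType): ToyType = t match {
  case Primitive(_, tt) => tt
  case Group(_, fs)     => StructT(fs.map(f => f.name -> convertField(f)))
}

// The repeated field maps to the element iff its converted type matches.
def repeatedFieldIsElement(repeated: ParquetType, element: ToyType): Boolean =
  convertField(repeated) == element

// Legacy 2-level layout: the repeated group *is* a struct<str: string> element.
val legacy = Group("element", List(Primitive("str", StringType)))
// Standard 3-level layout: the repeated group "list" merely wraps the element.
val standard = Group("list", List(Primitive("element", StringType)))

println(repeatedFieldIsElement(legacy, StructT(List("str" -> StringType)))) // true
println(repeatedFieldIsElement(standard, StringType))                       // false
```

In the standard 3-level case the converted type is a single-field struct wrapping the element, so the comparison fails and the converter descends into the wrapper's only child instead.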





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-19 Thread yhuai
Github user yhuai commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r71276489
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRecordMaterializer.scala ---
@@ -30,10 +30,11 @@ import org.apache.spark.sql.types.StructType
  * @param catalystSchema Catalyst schema of the rows to be constructed
  */
 private[parquet] class ParquetRecordMaterializer(
-    parquetSchema: MessageType, catalystSchema: StructType)
+    parquetSchema: MessageType, catalystSchema: StructType, schemaConverter: ParquetSchemaConverter)
--- End diff --

Add `schemaConverter` to the scaladoc?





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-12 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r70415596
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaConverter.scala ---
@@ -260,7 +260,7 @@ private[parquet] class ParquetSchemaConverter(
 {
   // For legacy 2-level list types with primitive element type, e.g.:
   //
-  //    // List<Integer> (nullable list, non-null elements)
+  //    // ARRAY<INT> (nullable list, non-null elements)
--- End diff --

I'm using concrete SQL types here for better readability.





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-08 Thread yhuai
Github user yhuai commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r70030627
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala ---
@@ -482,13 +482,105 @@ private[parquet] class ParquetRowConverter(
    */
   // scalastyle:on
   private def isElementType(
-      parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+      parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
     (parquetRepeatedType, catalystElementType) match {
-      case (t: PrimitiveType, _) => true
-      case (t: GroupType, _) if t.getFieldCount > 1 => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parentName + "_tuple" => true
-      case (t: GroupType, StructType(Array(f))) if f.name == t.getFieldName(0) => true
+      case (t: PrimitiveType, _) =>
+        // For legacy 2-level list types with primitive element type, e.g.:
+        //
+        //    // List<Integer> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated int32 element;         <-- repeatedType
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount > 1 =>
+        // For legacy 2-level list types whose element type is a group type with 2 or more
+        // fields, e.g.:
+        //
+        //    // List<Tuple<String, Integer>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group element {        <-- repeatedType
+        //        required binary str (UTF8);
+        //        required int32 num;
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" =>
+        // For legacy 2-level list types generated by parquet-avro (Parquet version < 1.6.0),
+        // e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group array {          <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parent.getName + "_tuple" =>
+        // For Parquet data generated by parquet-thrift, e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group my_list_tuple {  <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _)
+          if parent.getOriginalType == LIST &&
+            t.getFieldCount == 1 &&
+            t.getName == "list" &&
+            t.getFieldName(0) == "element" =>
+        // For standard 3-level list types, e.g.:
+        //
+        //    // List<String> (list nullable, elements non-null)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group list {           <-- repeatedType
+        //        required binary element (UTF8);
+        //      }
+        //    }
+        //
+        // This case branch must appear before the next one. See comments of the next case
+        // branch for details.
+        false
+
+      case (t: GroupType, StructType(fields)) =>
+        // For legacy 2-level list types whose element type is a group type with a single
+        // field, e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group list {           <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
--- End diff --

Also provide Spark's corresponding `StructType` in this example?
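
The full decision table quoted in the diff above can be condensed into a self-contained sketch. This is a toy model under stated assumptions: `PType`, `Prim`, and `Grp` are hypothetical stand-ins for Parquet's `Type`/`PrimitiveType`/`GroupType`, and the final catch-all simplifies the real `(GroupType, StructType)` branch, which additionally compares the group's only field name against the Catalyst struct's field name:

```scala
// Toy Parquet types (hypothetical names; isList stands in for getOriginalType == LIST).
sealed trait PType { def name: String }
case class Prim(name: String) extends PType
case class Grp(name: String, fields: List[PType], isList: Boolean = false) extends PType

// Does the repeated field map to the array element itself (legacy 2-level
// layouts), or is it only the syntactic wrapper of the standard 3-level layout?
def isElementType(repeated: PType, parent: Grp): Boolean = repeated match {
  case _: Prim => true                                       // repeated int32 element;
  case g: Grp if g.fields.size > 1 => true                   // struct element with 2+ fields
  case g: Grp if g.fields.size == 1 &&
    g.name == "array" => true                                // parquet-avro (< 1.6.0)
  case g: Grp if g.fields.size == 1 &&
    g.name == parent.name + "_tuple" => true                 // parquet-thrift
  case g: Grp if parent.isList && g.fields.size == 1 &&
    g.name == "list" &&
    g.fields.head.name == "element" => false                 // standard 3-level wrapper
  case _ => true                                             // remaining single-field groups
}

val parent = Grp("my_list", Nil, isList = true)
println(isElementType(Prim("element"), parent))                               // true
println(isElementType(Grp("element", List(Prim("str"), Prim("num"))), parent)) // true
println(isElementType(Grp("array", List(Prim("str"))), parent))               // true
println(isElementType(Grp("my_list_tuple", List(Prim("str"))), parent))       // true
println(isElementType(Grp("list", List(Prim("element"))), parent))            // false
```

Note how ordering matters: the standard 3-level branch must fire before the single-field-group fallback, since a `repeated group list { ... element ... }` shape would otherwise be misread as a legacy struct element.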


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA 

[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-08 Thread yhuai
Github user yhuai commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r70030569
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala ---
@@ -482,13 +482,105 @@ private[parquet] class ParquetRowConverter(
    */
   // scalastyle:on
   private def isElementType(
-      parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+      parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
     (parquetRepeatedType, catalystElementType) match {
-      case (t: PrimitiveType, _) => true
-      case (t: GroupType, _) if t.getFieldCount > 1 => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parentName + "_tuple" => true
-      case (t: GroupType, StructType(Array(f))) if f.name == t.getFieldName(0) => true
+      case (t: PrimitiveType, _) =>
+        // For legacy 2-level list types with primitive element type, e.g.:
+        //
+        //    // List<Integer> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated int32 element;         <-- repeatedType
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount > 1 =>
+        // For legacy 2-level list types whose element type is a group type with 2 or more
+        // fields, e.g.:
+        //
+        //    // List<Tuple<String, Integer>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group element {        <-- repeatedType
+        //        required binary str (UTF8);
+        //        required int32 num;
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" =>
+        // For legacy 2-level list types generated by parquet-avro (Parquet version < 1.6.0),
+        // e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group array {          <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parent.getName + "_tuple" =>
+        // For Parquet data generated by parquet-thrift, e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group my_list_tuple {  <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _)
+          if parent.getOriginalType == LIST &&
+            t.getFieldCount == 1 &&
+            t.getName == "list" &&
+            t.getFieldName(0) == "element" =>
+        // For standard 3-level list types, e.g.:
+        //
+        //    // List<String> (list nullable, elements non-null)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group list {           <-- repeatedType
+        //        required binary element (UTF8);
+        //      }
+        //    }
+        //
+        // This case branch must appear before the next one. See comments of the next case
+        // branch for details.
+        false
+
+      case (t: GroupType, StructType(fields)) =>
--- End diff --

It is probably good to explain why we are explicitly matching `GroupType` with `StructType` (because we are trying to determine whether the `GroupType` represents a struct element or is just a middle layer?).





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-08 Thread yhuai
Github user yhuai commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r70030381
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala ---
@@ -482,13 +482,105 @@ private[parquet] class ParquetRowConverter(
    */
   // scalastyle:on
   private def isElementType(
-      parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+      parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
     (parquetRepeatedType, catalystElementType) match {
-      case (t: PrimitiveType, _) => true
-      case (t: GroupType, _) if t.getFieldCount > 1 => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parentName + "_tuple" => true
-      case (t: GroupType, StructType(Array(f))) if f.name == t.getFieldName(0) => true
+      case (t: PrimitiveType, _) =>
+        // For legacy 2-level list types with primitive element type, e.g.:
+        //
+        //    // List<Integer> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated int32 element;         <-- repeatedType
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount > 1 =>
+        // For legacy 2-level list types whose element type is a group type with 2 or more
+        // fields, e.g.:
+        //
+        //    // List<Tuple<String, Integer>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group element {        <-- repeatedType
+        //        required binary str (UTF8);
+        //        required int32 num;
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" =>
+        // For legacy 2-level list types generated by parquet-avro (Parquet version < 1.6.0),
+        // e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group array {          <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parent.getName + "_tuple" =>
+        // For Parquet data generated by parquet-thrift, e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group my_list_tuple {  <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _)
+          if parent.getOriginalType == LIST &&
+            t.getFieldCount == 1 &&
+            t.getName == "list" &&
+            t.getFieldName(0) == "element" =>
+        // For standard 3-level list types, e.g.:
+        //
+        //    // List<String> (list nullable, elements non-null)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group list {           <-- repeatedType
+        //        required binary element (UTF8);
+        //      }
+        //    }
+        //
+        // This case branch must appear before the next one. See comments of the next case
+        // branch for details.
+        false
+
+      case (t: GroupType, StructType(fields)) =>
--- End diff --

Should we have a case to explicitly handle `t.getFieldCount == 0` (to make the code easier to follow)?





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-08 Thread yhuai
Github user yhuai commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r70030343
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala ---
@@ -482,13 +482,105 @@ private[parquet] class ParquetRowConverter(
    */
   // scalastyle:on
   private def isElementType(
-      parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+      parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
     (parquetRepeatedType, catalystElementType) match {
-      case (t: PrimitiveType, _) => true
-      case (t: GroupType, _) if t.getFieldCount > 1 => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parentName + "_tuple" => true
-      case (t: GroupType, StructType(Array(f))) if f.name == t.getFieldName(0) => true
+      case (t: PrimitiveType, _) =>
+        // For legacy 2-level list types with primitive element type, e.g.:
+        //
+        //    // List<Integer> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated int32 element;         <-- repeatedType
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount > 1 =>
+        // For legacy 2-level list types whose element type is a group type with 2 or more
+        // fields, e.g.:
+        //
+        //    // List<Tuple<String, Integer>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group element {        <-- repeatedType
+        //        required binary str (UTF8);
+        //        required int32 num;
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" =>
+        // For legacy 2-level list types generated by parquet-avro (Parquet version < 1.6.0),
+        // e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group array {          <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parent.getName + "_tuple" =>
+        // For Parquet data generated by parquet-thrift, e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group my_list_tuple {  <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _)
+          if parent.getOriginalType == LIST &&
+            t.getFieldCount == 1 &&
+            t.getName == "list" &&
+            t.getFieldName(0) == "element" =>
+        // For standard 3-level list types, e.g.:
+        //
+        //    // List<String> (list nullable, elements non-null)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group list {           <-- repeatedType
+        //        required binary element (UTF8);
+        //      }
+        //    }
+        //
+        // This case branch must appear before the next one. See comments of the next case
+        // branch for details.
+        false
+
+      case (t: GroupType, StructType(fields)) =>
--- End diff --

So, when we reach here, we have `t.getFieldCount == 1`? Is it possible that `t.getFieldCount == 0`?





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-08 Thread yhuai
Github user yhuai commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r70029947
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala ---
@@ -482,13 +482,105 @@ private[parquet] class ParquetRowConverter(
    */
   // scalastyle:on
   private def isElementType(
-      parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+      parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
     (parquetRepeatedType, catalystElementType) match {
-      case (t: PrimitiveType, _) => true
-      case (t: GroupType, _) if t.getFieldCount > 1 => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parentName + "_tuple" => true
-      case (t: GroupType, StructType(Array(f))) if f.name == t.getFieldName(0) => true
+      case (t: PrimitiveType, _) =>
+        // For legacy 2-level list types with primitive element type, e.g.:
+        //
+        //    // List<Integer> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated int32 element;         <-- repeatedType
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount > 1 =>
+        // For legacy 2-level list types whose element type is a group type with 2 or more
+        // fields, e.g.:
+        //
+        //    // List<Tuple<String, Integer>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group element {        <-- repeatedType
+        //        required binary str (UTF8);
+        //        required int32 num;
--- End diff --

Also mention that `t.getFieldCount` is 2 (including `str` and `num`) for this case?





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-08 Thread yhuai
Github user yhuai commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r70029907
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala ---
@@ -482,13 +482,105 @@ private[parquet] class ParquetRowConverter(
    */
   // scalastyle:on
   private def isElementType(
-      parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+      parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
     (parquetRepeatedType, catalystElementType) match {
-      case (t: PrimitiveType, _) => true
-      case (t: GroupType, _) if t.getFieldCount > 1 => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parentName + "_tuple" => true
-      case (t: GroupType, StructType(Array(f))) if f.name == t.getFieldName(0) => true
+      case (t: PrimitiveType, _) =>
+        // For legacy 2-level list types with primitive element type, e.g.:
+        //
+        //    // List<Integer> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated int32 element;         <-- repeatedType
--- End diff --

It seems better to make the name in the comment match the variable name.





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-08 Thread yhuai
Github user yhuai commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r70029843
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala ---
@@ -482,13 +482,105 @@ private[parquet] class ParquetRowConverter(
    */
   // scalastyle:on
   private def isElementType(
-      parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+      parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
     (parquetRepeatedType, catalystElementType) match {
-      case (t: PrimitiveType, _) => true
-      case (t: GroupType, _) if t.getFieldCount > 1 => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parentName + "_tuple" => true
-      case (t: GroupType, StructType(Array(f))) if f.name == t.getFieldName(0) => true
+      case (t: PrimitiveType, _) =>
+        // For legacy 2-level list types with primitive element type, e.g.:
+        //
+        //    // List<Integer> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated int32 element;         <-- repeatedType
--- End diff --

`<-- repeatedType` => `<-- parquetRepeatedType`?





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-06 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r69707873
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala ---
@@ -482,13 +482,105 @@ private[parquet] class ParquetRowConverter(
    */
   // scalastyle:on
   private def isElementType(
-      parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+      parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
     (parquetRepeatedType, catalystElementType) match {
-      case (t: PrimitiveType, _) => true
-      case (t: GroupType, _) if t.getFieldCount > 1 => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parentName + "_tuple" => true
-      case (t: GroupType, StructType(Array(f))) if f.name == t.getFieldName(0) => true
+      case (t: PrimitiveType, _) =>
+        // For legacy 2-level list types with primitive element type, e.g.:
+        //
+        //    // List<Integer> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated int32 element;         <-- repeatedType
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount > 1 =>
+        // For legacy 2-level list types whose element type is a group type with 2 or more
+        // fields, e.g.:
+        //
+        //    // List<Tuple<String, Integer>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group element {        <-- repeatedType
+        //        required binary str (UTF8);
+        //        required int32 num;
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" =>
+        // For legacy 2-level list types generated by parquet-avro (Parquet version < 1.6.0),
+        // e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group array {          <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parent.getName + "_tuple" =>
+        // For Parquet data generated by parquet-thrift, e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group my_list_tuple {  <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _)
+          if parent.getOriginalType == LIST &&
+            t.getFieldCount == 1 &&
+            t.getName == "list" &&
+            t.getFieldName(0) == "element" =>
+        // For standard 3-level list types, e.g.:
+        //
+        //    // List<String> (list nullable, elements non-null)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group list {           <-- repeatedType
+        //        required binary element (UTF8);
+        //      }
+        //    }
+        //
+        // This case branch must appear before the next one. See comments of the next case
+        // branch for details.
+        false
+
+      case (t: GroupType, StructType(fields)) =>
+        // For legacy 2-level list types whose element type is a group type with a single
+        // field, e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group list {           <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        //
+        // NOTE: This kind of schema is ambiguous. According to parquet-format spec, this schema
--- End diff --

Yea, the current comment is kinda contrived... Let me try again.



[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-06 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r69701391
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala ---
@@ -482,13 +482,105 @@ private[parquet] class ParquetRowConverter(
    */
   // scalastyle:on
   private def isElementType(
-      parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+      parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
     (parquetRepeatedType, catalystElementType) match {
-      case (t: PrimitiveType, _) => true
-      case (t: GroupType, _) if t.getFieldCount > 1 => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" => true
-      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parentName + "_tuple" => true
-      case (t: GroupType, StructType(Array(f))) if f.name == t.getFieldName(0) => true
+      case (t: PrimitiveType, _) =>
+        // For legacy 2-level list types with primitive element type, e.g.:
+        //
+        //    // List<Integer> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated int32 element;         <-- repeatedType
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount > 1 =>
+        // For legacy 2-level list types whose element type is a group type with 2 or more
+        // fields, e.g.:
+        //
+        //    // List<Tuple<String, Integer>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group element {        <-- repeatedType
+        //        required binary str (UTF8);
+        //        required int32 num;
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" =>
+        // For legacy 2-level list types generated by parquet-avro (Parquet version < 1.6.0),
+        // e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group array {          <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parent.getName + "_tuple" =>
+        // For Parquet data generated by parquet-thrift, e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group my_list_tuple {  <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        true
+
+      case (t: GroupType, _)
+          if parent.getOriginalType == LIST &&
+            t.getFieldCount == 1 &&
+            t.getName == "list" &&
+            t.getFieldName(0) == "element" =>
+        // For standard 3-level list types, e.g.:
+        //
+        //    // List<String> (list nullable, elements non-null)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group list {           <-- repeatedType
+        //        required binary element (UTF8);
+        //      }
+        //    }
+        //
+        // This case branch must appear before the next one. See comments of the next case
+        // branch for details.
+        false
+
+      case (t: GroupType, StructType(fields)) =>
+        // For legacy 2-level list types whose element type is a group type with a single
+        // field, e.g.:
+        //
+        //    // List<OneTuple<String>> (nullable list, non-null elements)
+        //    optional group my_list (LIST) {   <-- parent
+        //      repeated group list {           <-- repeatedType
+        //        required binary str (UTF8);
+        //      };
+        //    }
+        //
+        // NOTE: This kind of schema is ambiguous. According to parquet-format spec, this schema
--- End diff --

These sentences could be made clearer, e.g.: according to the parquet-format spec, this schema can also be interpreted as a standard 3-level list type, as shown in the case branch above. In order to avoid such
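The branch ordering discussed above can be illustrated with a self-contained toy model. `PType`, `Primitive`, and `Group` below are hypothetical stand-ins for parquet-mr's `Type`/`GroupType` (not the real API), and `isAnnotatedList` stands in for `getOriginalType == LIST`; this is a sketch of the decision procedure, not the actual Spark code:

```scala
object ListLayoutSketch {
  // Hypothetical stand-ins for parquet-mr's Type / GroupType, for illustration only.
  sealed trait PType { def name: String }
  case class Primitive(name: String) extends PType
  case class Group(name: String, fields: List[PType], isAnnotatedList: Boolean = false) extends PType

  // true  => the repeated field IS the element (legacy 2-level layout)
  // false => the repeated field is the syntactic "list" wrapper (standard 3-level layout)
  // The standard-layout branch must be checked before the ambiguous catch-all branch.
  def isElementType(repeated: PType, parent: Group): Boolean = repeated match {
    case _: Primitive => true                                    // repeated int32 element;
    case Group(_, fs, _) if fs.size > 1 => true                  // multi-field struct element
    case Group("array", fs, _) if fs.size == 1 => true           // parquet-avro < 1.6.0
    case Group(n, fs, _) if fs.size == 1 && n == parent.name + "_tuple" => true  // parquet-thrift
    case Group("list", fs, _)
        if parent.isAnnotatedList && fs.size == 1 && fs.head.name == "element" =>
      false                                                      // standard 3-level layout
    case _ => true                                               // ambiguous single-field struct
  }

  def main(args: Array[String]): Unit = {
    val standard   = Group("my_list", List(Group("list", List(Primitive("element")))), isAnnotatedList = true)
    val avroLegacy = Group("my_list", List(Group("array", List(Primitive("str")))), isAnnotatedList = true)
    assert(!isElementType(standard.fields.head, standard))   // wrapper, not element
    assert(isElementType(avroLegacy.fields.head, avroLegacy)) // legacy element
    println("layout disambiguation ok")
  }
}
```

Note how swapping the last two branches would misclassify the standard layout, which is exactly why the ordering comment exists in the patch.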

[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-06 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r69700045
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
 ---
@@ -482,13 +482,104 @@ private[parquet] class ParquetRowConverter(
  */
 // scalastyle:on
 private def isElementType(
-    parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+    parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
   (parquetRepeatedType, catalystElementType) match {
-    case (t: PrimitiveType, _) => true
-    case (t: GroupType, _) if t.getFieldCount > 1 => true
-    case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" => true
-    case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parentName + "_tuple" => true
-    case (t: GroupType, StructType(Array(f))) if f.name == t.getFieldName(0) => true
+    case (t: PrimitiveType, _) =>
+      // For legacy 2-level list types with primitive element type, e.g.:
+      //
+      //    // List<Integer> (nullable list, non-null elements)
+      //    optional group my_list (LIST) {   <-- parent
+      //      repeated int32 element;         <-- repeatedType
+      //    }
+      true
+
+    case (t: GroupType, _) if t.getFieldCount > 1 =>
+      // For legacy 2-level list types whose element type is a group type with 2 or more fields,
+      // e.g.:
+      //
+      //    // List<Tuple<String, Integer>> (nullable list, non-null elements)
+      //    optional group my_list (LIST) {   <-- parent
+      //      repeated group element {        <-- repeatedType
+      //        required binary str (UTF8);
+      //        required int32 num;
+      //      };
+      //    }
+      true
+
+    case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" =>
--- End diff --

Nice catch!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-06 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r69696837
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
 ---
@@ -482,13 +482,104 @@ private[parquet] class ParquetRowConverter(
  */
 // scalastyle:on
 private def isElementType(
-    parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+    parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
   (parquetRepeatedType, catalystElementType) match {
-    case (t: PrimitiveType, _) => true
-    case (t: GroupType, _) if t.getFieldCount > 1 => true
-    case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" => true
-    case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == parentName + "_tuple" => true
-    case (t: GroupType, StructType(Array(f))) if f.name == t.getFieldName(0) => true
+    case (t: PrimitiveType, _) =>
+      // For legacy 2-level list types with primitive element type, e.g.:
+      //
+      //    // List<Integer> (nullable list, non-null elements)
+      //    optional group my_list (LIST) {   <-- parent
+      //      repeated int32 element;         <-- repeatedType
+      //    }
+      true
+
+    case (t: GroupType, _) if t.getFieldCount > 1 =>
+      // For legacy 2-level list types whose element type is a group type with 2 or more fields,
+      // e.g.:
+      //
+      //    // List<Tuple<String, Integer>> (nullable list, non-null elements)
+      //    optional group my_list (LIST) {   <-- parent
+      //      repeated group element {        <-- repeatedType
+      //        required binary str (UTF8);
+      //        required int32 num;
+      //      };
+      //    }
+      true
+
+    case (t: GroupType, _) if t.getFieldCount == 1 && t.getName == "array" =>
--- End diff --

The comment below doesn't match this condition (`t.getName == "array"`)?





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-05 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r69675124
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
 ---
@@ -482,13 +482,106 @@ private[parquet] class ParquetRowConverter(
  */
 // scalastyle:on
 private def isElementType(
-    parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+    parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
+
+  def isStandardListLayout(t: GroupType): Boolean =
--- End diff --

Done.





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-05 Thread rxin
Github user rxin commented on a diff in the pull request:

https://github.com/apache/spark/pull/14014#discussion_r69657499
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
 ---
@@ -482,13 +482,106 @@ private[parquet] class ParquetRowConverter(
  */
 // scalastyle:on
 private def isElementType(
-    parquetRepeatedType: Type, catalystElementType: DataType, parentName: String): Boolean = {
+    parquetRepeatedType: Type, catalystElementType: DataType, parent: GroupType): Boolean = {
+
+  def isStandardListLayout(t: GroupType): Boolean =
--- End diff --

can we inline this in the pattern match
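The refactor suggested here — folding a named local predicate into a pattern guard — can be sketched with a hypothetical toy example (`Node`, `Leaf`, `Branch` below are illustrative types, not the Spark or parquet-mr classes):

```scala
object InlineGuardSketch {
  // Hypothetical tree type for illustration only.
  sealed trait Node { def name: String }
  case class Leaf(name: String) extends Node
  case class Branch(name: String, children: List[Node]) extends Node

  // Style A: a named local predicate, as in the original revision of the patch.
  def isWrapperWithHelper(n: Node): Boolean = {
    def isSingletonList(b: Branch): Boolean =
      b.name == "list" && b.children.size == 1
    n match {
      case b: Branch if isSingletonList(b) => true
      case _ => false
    }
  }

  // Style B: the predicate inlined into the pattern and its guard, as suggested.
  def isWrapperInlined(n: Node): Boolean = n match {
    case Branch("list", children) if children.size == 1 => true
    case _ => false
  }
}
```

Both styles are equivalent; inlining keeps the condition next to the case it governs, at the cost of losing the predicate's descriptive name.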





[GitHub] spark pull request #14014: [SPARK-16344][SQL] Decoding Parquet array of stru...

2016-07-01 Thread liancheng
GitHub user liancheng opened a pull request:

https://github.com/apache/spark/pull/14014

[SPARK-16344][SQL] Decoding Parquet array of struct with a single field 
named "element"

## What changes were proposed in this pull request?

This PR ports #14013 to master and branch-2.0.

## How was this patch tested?

See #14013.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/liancheng/spark spark-16344-for-master-and-2.0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/14014.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14014


commit 3bfe45fe8b81f44141b737df6b292f12cd37d06a
Author: Cheng Lian 
Date:   2016-07-01T11:32:52Z

Fixes SPARK-16344



