[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-03 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r761831780



##########
File path: flink-formats/flink-parquet/src/test/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriterTest.java
##########
@@ -158,6 +190,65 @@ private void innerTest(Configuration conf, boolean utcTimestamp) throws IOExcept
         Assert.assertEquals(number, cnt);
     }
 
+    public void complexTypeTest(Configuration conf, boolean utcTimestamp) throws Exception {
+        Path path = new Path(TEMPORARY_FOLDER.newFolder().getPath(), UUID.randomUUID().toString());
+        int number = 1000;
+        List<Row> rows = new ArrayList<>(number);
+        Map<String, String> mapData = new HashMap<>();
+        mapData.put("k1", "v1");
+        mapData.put(null, "v2");
+        mapData.put("k2", null);
+
+        for (int i = 0; i < number; i++) {
+            Integer v = i;
+            rows.add(Row.of(new Integer[] {v}, mapData, Row.of(String.valueOf(v), v)));
+        }
+
+        ParquetWriterFactory<RowData> factory =
+                ParquetRowDataBuilder.createWriterFactory(ROW_TYPE_COMPLEX, conf, utcTimestamp);
+        BulkWriter<RowData> writer =
+                factory.create(path.getFileSystem().create(path, FileSystem.WriteMode.OVERWRITE));
+        for (int i = 0; i < number; i++) {
+            writer.addElement(CONVERTER_COMPLEX.toInternal(rows.get(i)));
+        }
+        writer.flush();
+        writer.finish();
+
+        File file = new File(path.getPath());
+        final List<Row> fileContent = readParquetFile(file);
+        assertEquals(rows, fileContent);
+    }
+
+    private static List<Row> readParquetFile(File file) throws IOException {
+        InputFile inFile =
+                HadoopInputFile.fromPath(
+                        new org.apache.hadoop.fs.Path(file.toURI()), new Configuration());
+
+        ArrayList<Row> results = new ArrayList<>();
+        try (ParquetReader<GenericRecord> reader =
+                AvroParquetReader.<GenericRecord>builder(inFile).build()) {
+            GenericRecord next;
+            while ((next = reader.read()) != null) {
+                Integer c0 = (Integer) ((ArrayList<GenericData.Record>) next.get(0)).get(0).get(0);
+                HashMap<Utf8, Utf8> map = ((HashMap<Utf8, Utf8>) next.get(1));
+                String c21 = ((GenericData.Record) next.get(2)).get(0).toString();
+                Integer c22 = (Integer) ((GenericData.Record) next.get(2)).get(1);
+
+                Map<String, String> c1 = new HashMap<>();
+                for (Utf8 key : map.keySet()) {
+                    String k = Strings.isEmpty(key) ? null : key.toString();

Review comment:
   done




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-03 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r761829487



##########
File path: flink-formats/flink-parquet/src/test/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriterTest.java
##########
@@ -158,6 +190,65 @@ private void innerTest(Configuration conf, boolean utcTimestamp) throws IOExcept
         Assert.assertEquals(number, cnt);
     }
 
+    public void complexTypeTest(Configuration conf, boolean utcTimestamp) throws Exception {
+        Path path = new Path(TEMPORARY_FOLDER.newFolder().getPath(), UUID.randomUUID().toString());
+        int number = 1000;
+        List<Row> rows = new ArrayList<>(number);
+        Map<String, String> mapData = new HashMap<>();
+        mapData.put("k1", "v1");
+        mapData.put(null, "v2");
+        mapData.put("k2", null);
+
+        for (int i = 0; i < number; i++) {
+            Integer v = i;
+            rows.add(Row.of(new Integer[] {v}, mapData, Row.of(String.valueOf(v), v)));
+        }
+
+        ParquetWriterFactory<RowData> factory =
+                ParquetRowDataBuilder.createWriterFactory(ROW_TYPE_COMPLEX, conf, utcTimestamp);
+        BulkWriter<RowData> writer =
+                factory.create(path.getFileSystem().create(path, FileSystem.WriteMode.OVERWRITE));
+        for (int i = 0; i < number; i++) {
+            writer.addElement(CONVERTER_COMPLEX.toInternal(rows.get(i)));
+        }
+        writer.flush();
+        writer.finish();
+
+        File file = new File(path.getPath());
+        final List<Row> fileContent = readParquetFile(file);
+        assertEquals(rows, fileContent);
+    }
+
+    private static List<Row> readParquetFile(File file) throws IOException {
+        InputFile inFile =
+                HadoopInputFile.fromPath(
+                        new org.apache.hadoop.fs.Path(file.toURI()), new Configuration());
+
+        ArrayList<Row> results = new ArrayList<>();
+        try (ParquetReader<GenericRecord> reader =
+                AvroParquetReader.<GenericRecord>builder(inFile).build()) {
+            GenericRecord next;
+            while ((next = reader.read()) != null) {
+                Integer c0 = (Integer) ((ArrayList<GenericData.Record>) next.get(0)).get(0).get(0);
+                HashMap<Utf8, Utf8> map = ((HashMap<Utf8, Utf8>) next.get(1));
+                String c21 = ((GenericData.Record) next.get(2)).get(0).toString();
+                Integer c22 = (Integer) ((GenericData.Record) next.get(2)).get(1);
+
+                Map<String, String> c1 = new HashMap<>();
+                for (Utf8 key : map.keySet()) {
+                    String k = Strings.isEmpty(key) ? null : key.toString();

Review comment:
   Oh? Then I may have been wrong yesterday.




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r761667459



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -224,6 +277,153 @@ private TimestampWriter(int precision) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBinary(timestampToInt96(row.getTimestamp(ordinal, precision)));
         }
+
+        @Override
+        public void write(ArrayData arrayData, int ordinal) {
+            recordConsumer.addBinary(timestampToInt96(arrayData.getTimestamp(ordinal, precision)));
+        }
+    }
+
+    /** It writes a map field to parquet, both key and value are nullable. */
+    private class MapWriter implements FieldWriter {
+
+        private String repeatedGroupName;
+        private String keyName, valueName;
+        private FieldWriter keyWriter, valueWriter;
+
+        private MapWriter(LogicalType keyType, LogicalType valueType, GroupType groupType) {
+            // Get the internal map structure (MAP_KEY_VALUE)
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            // Get key element information
+            Type type = repeatedType.getType(0);
+            this.keyName = type.getName();
+            this.keyWriter = createWriter(keyType, type);
+
+            // Get value element information
+            Type valuetype = repeatedType.getType(1);
+            this.valueName = valuetype.getName();
+            this.valueWriter = createWriter(valueType, valuetype);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+
+            MapData mapData = row.getMap(ordinal);
+
+            if (mapData != null && mapData.size() > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                ArrayData keyArray = mapData.keyArray();
+                ArrayData valueArray = mapData.valueArray();
+                for (int i = 0; i < keyArray.size(); i++) {
+                    recordConsumer.startGroup();
+                    // write key element
+                    recordConsumer.startField(keyName, 0);
+                    keyWriter.write(keyArray, i);
+                    recordConsumer.endField(keyName, 0);
+                    // write value element
+                    recordConsumer.startField(valueName, 1);

Review comment:
   done




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r761667191



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -165,6 +187,11 @@ public void write(RowData row, int ordinal) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addLong(row.getLong(ordinal));
         }
+
+        @Override
+        public void write(ArrayData arrayData, int ordinal) {

Review comment:
   done




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r761629258



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -224,6 +277,153 @@ private TimestampWriter(int precision) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBinary(timestampToInt96(row.getTimestamp(ordinal, precision)));
         }
+
+        @Override
+        public void write(ArrayData arrayData, int ordinal) {
+            recordConsumer.addBinary(timestampToInt96(arrayData.getTimestamp(ordinal, precision)));
+        }
+    }
+
+    /** It writes a map field to parquet, both key and value are nullable. */
+    private class MapWriter implements FieldWriter {
+
+        private String repeatedGroupName;
+        private String keyName, valueName;
+        private FieldWriter keyWriter, valueWriter;
+
+        private MapWriter(LogicalType keyType, LogicalType valueType, GroupType groupType) {
+            // Get the internal map structure (MAP_KEY_VALUE)
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            // Get key element information
+            Type type = repeatedType.getType(0);
+            this.keyName = type.getName();
+            this.keyWriter = createWriter(keyType, type);
+
+            // Get value element information
+            Type valuetype = repeatedType.getType(1);
+            this.valueName = valuetype.getName();
+            this.valueWriter = createWriter(valueType, valuetype);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+
+            MapData mapData = row.getMap(ordinal);
+
+            if (mapData != null && mapData.size() > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                ArrayData keyArray = mapData.keyArray();
+                ArrayData valueArray = mapData.valueArray();
+                for (int i = 0; i < keyArray.size(); i++) {
+                    recordConsumer.startGroup();
+                    // write key element
+                    recordConsumer.startField(keyName, 0);

Review comment:
   Updated as you suggested, though the current version does not affect the results.




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r761628631



##########
File path: flink-formats/flink-parquet/src/test/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriterTest.java
##########
@@ -158,6 +190,65 @@ private void innerTest(Configuration conf, boolean utcTimestamp) throws IOExcept
         Assert.assertEquals(number, cnt);
     }
 
+    public void complexTypeTest(Configuration conf, boolean utcTimestamp) throws Exception {
+        Path path = new Path(TEMPORARY_FOLDER.newFolder().getPath(), UUID.randomUUID().toString());
+        int number = 1000;
+        List<Row> rows = new ArrayList<>(number);
+        Map<String, String> mapData = new HashMap<>();
+        mapData.put("k1", "v1");
+        mapData.put(null, "v2");
+        mapData.put("k2", null);
+
+        for (int i = 0; i < number; i++) {
+            Integer v = i;
+            rows.add(Row.of(new Integer[] {v}, mapData, Row.of(String.valueOf(v), v)));
+        }
+
+        ParquetWriterFactory<RowData> factory =
+                ParquetRowDataBuilder.createWriterFactory(ROW_TYPE_COMPLEX, conf, utcTimestamp);
+        BulkWriter<RowData> writer =
+                factory.create(path.getFileSystem().create(path, FileSystem.WriteMode.OVERWRITE));
+        for (int i = 0; i < number; i++) {
+            writer.addElement(CONVERTER_COMPLEX.toInternal(rows.get(i)));
+        }
+        writer.flush();
+        writer.finish();
+
+        File file = new File(path.getPath());
+        final List<Row> fileContent = readParquetFile(file);
+        assertEquals(rows, fileContent);
+    }
+
+    private static List<Row> readParquetFile(File file) throws IOException {
+        InputFile inFile =
+                HadoopInputFile.fromPath(
+                        new org.apache.hadoop.fs.Path(file.toURI()), new Configuration());
+
+        ArrayList<Row> results = new ArrayList<>();
+        try (ParquetReader<GenericRecord> reader =
+                AvroParquetReader.<GenericRecord>builder(inFile).build()) {
+            GenericRecord next;
+            while ((next = reader.read()) != null) {
+                Integer c0 = (Integer) ((ArrayList<GenericData.Record>) next.get(0)).get(0).get(0);
+                HashMap<Utf8, Utf8> map = ((HashMap<Utf8, Utf8>) next.get(1));
+                String c21 = ((GenericData.Record) next.get(2)).get(0).toString();
+                Integer c22 = (Integer) ((GenericData.Record) next.get(2)).get(1);
+
+                Map<String, String> c1 = new HashMap<>();
+                for (Utf8 key : map.keySet()) {
+                    String k = Strings.isEmpty(key) ? null : key.toString();

Review comment:
   The key is nullable; Avro's GenericRecord automatically converts a null key to an empty string.
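
   A minimal sketch of the round trip this implies, assuming Avro's `Utf8`
   string type; the class name and literal values below are illustrative,
   not part of the PR:

       import org.apache.avro.util.Utf8;

       import java.util.HashMap;
       import java.util.Map;

       public class NullKeyRoundTrip {
           public static void main(String[] args) {
               // What AvroParquetReader hands back for a map written with a
               // null key: the key reappears as an empty Utf8, not as null.
               Map<Utf8, Utf8> readBack = new HashMap<>();
               readBack.put(new Utf8("k1"), new Utf8("v1"));
               readBack.put(new Utf8(""), new Utf8("v2")); // was put(null, "v2")

               // The normalization the quoted test performs before asserting
               // equality: map the empty key back to null.
               Map<String, String> normalized = new HashMap<>();
               for (Map.Entry<Utf8, Utf8> e : readBack.entrySet()) {
                   String k = e.getKey().length() == 0 ? null : e.getKey().toString();
                   String v = e.getValue() == null ? null : e.getValue().toString();
                   normalized.put(k, v);
               }
               System.out.println(normalized); // {null=v2, k1=v1}
           }
       }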




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r761007129



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -224,6 +293,176 @@ private TimestampWriter(int precision) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBinary(timestampToInt96(row.getTimestamp(ordinal, precision)));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBinary(timestampToInt96((TimestampData) value));
+        }
+    }
+
+    private class MapWriter implements FieldWriter {
+
+        private String repeatedGroupName;
+        private String keyName, valueName;
+        private FieldWriter keyWriter, valueWriter;
+        private ArrayData.ElementGetter keyElementGetter, valueElementGetter;
+
+        private MapWriter(LogicalType keyType, LogicalType valueType, GroupType groupType) {
+
+            // Get the internal map structure (MAP_KEY_VALUE)
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            // Get key element information
+            Type type = repeatedType.getType(0);
+            this.keyName = type.getName();
+            this.keyWriter = createWriter(keyType, type);
+
+            // Get value element information
+            Type valuetype = repeatedType.getType(1);
+            this.valueName = valuetype.getName();
+            this.valueWriter = createWriter(valueType, valuetype);
+
+            this.keyElementGetter = ArrayData.createElementGetter(keyType);
+            this.valueElementGetter = ArrayData.createElementGetter(valueType);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+
+            MapData mapData = row.getMap(ordinal);
+
+            if (mapData != null && mapData.size() > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                ArrayData keyArray = mapData.keyArray();
+                ArrayData valueArray = mapData.valueArray();
+                for (int i = 0; i < keyArray.size(); i++) {
+                    Object key = keyElementGetter.getElementOrNull(keyArray, i);
+                    Object value = valueElementGetter.getElementOrNull(valueArray, i);
+
+                    recordConsumer.startGroup();
+                    if (key != null) {
+                        // write key element
+                        recordConsumer.startField(keyName, 0);
+                        keyWriter.write(key);
+                        recordConsumer.endField(keyName, 0);
+
+                        // write value element
+                        if (value != null) {
+                            recordConsumer.startField(valueName, 1);
+                            valueWriter.write(value);
+                            recordConsumer.endField(valueName, 1);
+                        }
+                    }
+                    recordConsumer.endGroup();
+                }
+
+                recordConsumer.endField(repeatedGroupName, 0);
+            }
+            recordConsumer.endGroup();
+        }
+
+        @Override
+        public void write(Object value) {}
+    }
+
+    private class ArrayWriter implements FieldWriter {

Review comment:
   done




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r761004060



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -189,6 +243,11 @@ public void write(RowData row, int ordinal) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBinary(Binary.fromReusedByteArray(row.getString(ordinal).toBytes()));

Review comment:
   We don't have a `write(Object value)` method anymore.
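
   For context, a sketch of the revised `FieldWriter` contract implied by
   this reply and by the diff excerpts earlier in the thread, where the
   positional `write(ArrayData, int)` overload replaced `write(Object)`
   (reconstructed from the excerpts, not the exact PR source):

       private interface FieldWriter {

           // write the field at the given position of a row
           void write(RowData row, int ordinal);

           // write the element at the given position of an array; used by
           // ArrayWriter and MapWriter in place of the removed write(Object)
           void write(ArrayData arrayData, int ordinal);
       }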




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r761001017



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -224,6 +293,176 @@ private TimestampWriter(int precision) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBinary(timestampToInt96(row.getTimestamp(ordinal, precision)));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBinary(timestampToInt96((TimestampData) value));
+        }
+    }
+
+    private class MapWriter implements FieldWriter {
+
+        private String repeatedGroupName;
+        private String keyName, valueName;
+        private FieldWriter keyWriter, valueWriter;
+        private ArrayData.ElementGetter keyElementGetter, valueElementGetter;
+
+        private MapWriter(LogicalType keyType, LogicalType valueType, GroupType groupType) {
+
+            // Get the internal map structure (MAP_KEY_VALUE)
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            // Get key element information
+            Type type = repeatedType.getType(0);
+            this.keyName = type.getName();
+            this.keyWriter = createWriter(keyType, type);
+
+            // Get value element information
+            Type valuetype = repeatedType.getType(1);
+            this.valueName = valuetype.getName();
+            this.valueWriter = createWriter(valueType, valuetype);
+
+            this.keyElementGetter = ArrayData.createElementGetter(keyType);
+            this.valueElementGetter = ArrayData.createElementGetter(valueType);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+
+            MapData mapData = row.getMap(ordinal);
+
+            if (mapData != null && mapData.size() > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                ArrayData keyArray = mapData.keyArray();
+                ArrayData valueArray = mapData.valueArray();
+                for (int i = 0; i < keyArray.size(); i++) {
+                    Object key = keyElementGetter.getElementOrNull(keyArray, i);
+                    Object value = valueElementGetter.getElementOrNull(valueArray, i);
+
+                    recordConsumer.startGroup();
+                    if (key != null) {

Review comment:
   Null is supported in the latest commit.




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r761000688



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/utils/ParquetSchemaConverter.java
##########
@@ -101,11 +110,32 @@ private static Type convertToParquetType(
             case TIMESTAMP_WITH_LOCAL_TIME_ZONE:
                 return Types.primitive(PrimitiveType.PrimitiveTypeName.INT96, repetition)
                         .named(name);
+            case ARRAY:
+                ArrayType arrayType = (ArrayType) type;
+                return ConversionPatterns.listOfElements(
+                        repetition,
+                        name,
+                        convertToParquetType(LIST_ELEMENT_NAME, arrayType.getElementType()));
+            case MAP:
+                MapType mapType = (MapType) type;
+                return ConversionPatterns.stringKeyMapType(

Review comment:
   Multiple key types are supported in the latest commit.
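
   A minimal sketch of what that change could look like, assuming the
   `ConversionPatterns.mapType` overload in parquet-mr that accepts an
   arbitrary key type; the repeated-group name "key_value" and the field
   names are assumptions, not necessarily what the PR uses:

       case MAP:
           MapType mapType = (MapType) type;
           // stringKeyMapType hard-codes a string key; mapType lets the key
           // schema be derived from the Flink key type instead.
           return ConversionPatterns.mapType(
                   repetition,
                   name,
                   "key_value", // assumed repeated group name
                   convertToParquetType("key", mapType.getKeyType()),
                   convertToParquetType("value", mapType.getValueType()));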




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r761000484



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -224,6 +293,176 @@ private TimestampWriter(int precision) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBinary(timestampToInt96(row.getTimestamp(ordinal, precision)));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBinary(timestampToInt96((TimestampData) value));
+        }
+    }
+
+    private class MapWriter implements FieldWriter {
+
+        private String repeatedGroupName;
+        private String keyName, valueName;
+        private FieldWriter keyWriter, valueWriter;
+        private ArrayData.ElementGetter keyElementGetter, valueElementGetter;
+
+        private MapWriter(LogicalType keyType, LogicalType valueType, GroupType groupType) {
+
+            // Get the internal map structure (MAP_KEY_VALUE)
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            // Get key element information
+            Type type = repeatedType.getType(0);
+            this.keyName = type.getName();
+            this.keyWriter = createWriter(keyType, type);
+
+            // Get value element information
+            Type valuetype = repeatedType.getType(1);
+            this.valueName = valuetype.getName();
+            this.valueWriter = createWriter(valueType, valuetype);
+
+            this.keyElementGetter = ArrayData.createElementGetter(keyType);
+            this.valueElementGetter = ArrayData.createElementGetter(valueType);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+
+            MapData mapData = row.getMap(ordinal);
+
+            if (mapData != null && mapData.size() > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                ArrayData keyArray = mapData.keyArray();
+                ArrayData valueArray = mapData.valueArray();
+                for (int i = 0; i < keyArray.size(); i++) {
+                    Object key = keyElementGetter.getElementOrNull(keyArray, i);
+                    Object value = valueElementGetter.getElementOrNull(valueArray, i);
+
+                    recordConsumer.startGroup();
+                    if (key != null) {

Review comment:
   Multiple key types are supported in the latest commit.




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r76086



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -224,6 +293,176 @@ private TimestampWriter(int precision) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBinary(timestampToInt96(row.getTimestamp(ordinal, precision)));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBinary(timestampToInt96((TimestampData) value));
+        }
+    }
+
+    private class MapWriter implements FieldWriter {
+
+        private String repeatedGroupName;
+        private String keyName, valueName;
+        private FieldWriter keyWriter, valueWriter;
+        private ArrayData.ElementGetter keyElementGetter, valueElementGetter;
+
+        private MapWriter(LogicalType keyType, LogicalType valueType, GroupType groupType) {
+
+            // Get the internal map structure (MAP_KEY_VALUE)
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            // Get key element information
+            Type type = repeatedType.getType(0);
+            this.keyName = type.getName();
+            this.keyWriter = createWriter(keyType, type);
+
+            // Get value element information
+            Type valuetype = repeatedType.getType(1);
+            this.valueName = valuetype.getName();
+            this.valueWriter = createWriter(valueType, valuetype);
+
+            this.keyElementGetter = ArrayData.createElementGetter(keyType);
+            this.valueElementGetter = ArrayData.createElementGetter(valueType);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+
+            MapData mapData = row.getMap(ordinal);
+
+            if (mapData != null && mapData.size() > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                ArrayData keyArray = mapData.keyArray();
+                ArrayData valueArray = mapData.valueArray();
+                for (int i = 0; i < keyArray.size(); i++) {
+                    Object key = keyElementGetter.getElementOrNull(keyArray, i);
+                    Object value = valueElementGetter.getElementOrNull(valueArray, i);
+
+                    recordConsumer.startGroup();
+                    if (key != null) {
+                        // write key element
+                        recordConsumer.startField(keyName, 0);
+                        keyWriter.write(key);
+                        recordConsumer.endField(keyName, 0);
+
+                        // write value element
+                        if (value != null) {
+                            recordConsumer.startField(valueName, 1);
+                            valueWriter.write(value);
+                            recordConsumer.endField(valueName, 1);
+                        }
+                    }
+                    recordConsumer.endGroup();
+                }
+
+                recordConsumer.endField(repeatedGroupName, 0);
+            }
+            recordConsumer.endGroup();
+        }
+
+        @Override
+        public void write(Object value) {}
+    }
+
+    private class ArrayWriter implements FieldWriter {
+
+        private String elementName;
+        private FieldWriter elementWriter;
+        private String repeatedGroupName;
+        private ArrayData.ElementGetter elementGetter;
+
+        private ArrayWriter(LogicalType t, GroupType groupType) {
+
+            // Get the internal array structure
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            Type elementType = repeatedType.getType(0);
+            this.elementName = elementType.getName();
+
+            this.elementWriter = createWriter(t, elementType);
+            this.elementGetter = ArrayData.createElementGetter(t);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+            ArrayData arrayData = row.getArray(ordinal);
+            int listLength = arrayData.size();
+
+            if (listLength > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                for (int i = 0; i < listLength; i++) {
+                    Object object = elementGetter.getElementOrNull(arrayData, i);
+                    recordConsumer.startGroup();
+                    if (object != null) {
+                        recordConsumer.startField(elementName, 0);
+                        elementWriter.write(object);
+                        recordConsumer.endField(elementName, 0);
+                    }
+                    recordConsumer.endGroup();
+                }
+

[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r760999864



##########
File path: flink-formats/flink-parquet/src/test/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriterTest.java
##########
@@ -120,15 +144,18 @@ private void innerTest(Configuration conf, boolean utcTimestamp) throws IOExcept
                             toDateTime(v),
                             BigDecimal.valueOf(v),
                             BigDecimal.valueOf(v),
-                            BigDecimal.valueOf(v)));
+                            BigDecimal.valueOf(v),
+                            new Integer[] {v},
+                            mapData,
+                            Row.of(String.valueOf(v), v)));
         }
 
         ParquetWriterFactory<RowData> factory =
-                ParquetRowDataBuilder.createWriterFactory(ROW_TYPE, conf, utcTimestamp);
+                ParquetRowDataBuilder.createWriterFactory(ROW_TYPE_COMPLEX, conf, utcTimestamp);
         BulkWriter<RowData> writer =
                 factory.create(path.getFileSystem().create(path, FileSystem.WriteMode.OVERWRITE));
         for (int i = 0; i < number; i++) {
-            writer.addElement(CONVERTER.toInternal(rows.get(i)));
+            writer.addElement(CONVERTER_COMPLEX.toInternal(rows.get(i)));
         }
         writer.flush();
         writer.finish();

Review comment:
   Added a test function `complexTypeTest`.
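
   A hypothetical JUnit entry point showing how `complexTypeTest` could be
   driven for both timestamp modes, mirroring how `innerTest` is exercised
   elsewhere in ParquetRowDataWriterTest (the method name `testComplexTypes`
   is assumed):

       @Test
       public void testComplexTypes() throws Exception {
           Configuration conf = new Configuration();
           // exercise the writer with and without UTC-normalized timestamps
           complexTypeTest(conf, true);
           complexTypeTest(conf, false);
       }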




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r760999104



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -224,6 +293,176 @@ private TimestampWriter(int precision) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBinary(timestampToInt96(row.getTimestamp(ordinal, precision)));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBinary(timestampToInt96((TimestampData) value));
+        }
+    }
+
+    private class MapWriter implements FieldWriter {
+
+        private String repeatedGroupName;
+        private String keyName, valueName;
+        private FieldWriter keyWriter, valueWriter;
+        private ArrayData.ElementGetter keyElementGetter, valueElementGetter;
+
+        private MapWriter(LogicalType keyType, LogicalType valueType, GroupType groupType) {
+
+            // Get the internal map structure (MAP_KEY_VALUE)
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            // Get key element information
+            Type type = repeatedType.getType(0);
+            this.keyName = type.getName();
+            this.keyWriter = createWriter(keyType, type);
+
+            // Get value element information
+            Type valuetype = repeatedType.getType(1);
+            this.valueName = valuetype.getName();
+            this.valueWriter = createWriter(valueType, valuetype);
+
+            this.keyElementGetter = ArrayData.createElementGetter(keyType);
+            this.valueElementGetter = ArrayData.createElementGetter(valueType);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+
+            MapData mapData = row.getMap(ordinal);
+
+            if (mapData != null && mapData.size() > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                ArrayData keyArray = mapData.keyArray();
+                ArrayData valueArray = mapData.valueArray();
+                for (int i = 0; i < keyArray.size(); i++) {
+                    Object key = keyElementGetter.getElementOrNull(keyArray, i);
+                    Object value = valueElementGetter.getElementOrNull(valueArray, i);
+
+                    recordConsumer.startGroup();
+                    if (key != null) {
+                        // write key element
+                        recordConsumer.startField(keyName, 0);
+                        keyWriter.write(key);
+                        recordConsumer.endField(keyName, 0);
+
+                        // write value element
+                        if (value != null) {
+                            recordConsumer.startField(valueName, 1);
+                            valueWriter.write(value);
+                            recordConsumer.endField(valueName, 1);
+                        }
+                    }
+                    recordConsumer.endGroup();
+                }
+
+                recordConsumer.endField(repeatedGroupName, 0);
+            }
+            recordConsumer.endGroup();
+        }
+
+        @Override
+        public void write(Object value) {}
+    }
+
+    private class ArrayWriter implements FieldWriter {
+
+        private String elementName;
+        private FieldWriter elementWriter;
+        private String repeatedGroupName;
+        private ArrayData.ElementGetter elementGetter;
+
+        private ArrayWriter(LogicalType t, GroupType groupType) {
+
+            // Get the internal array structure
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            Type elementType = repeatedType.getType(0);
+            this.elementName = elementType.getName();
+
+            this.elementWriter = createWriter(t, elementType);
+            this.elementGetter = ArrayData.createElementGetter(t);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+            ArrayData arrayData = row.getArray(ordinal);
+            int listLength = arrayData.size();
+
+            if (listLength > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                for (int i = 0; i < listLength; i++) {
+                    Object object = elementGetter.getElementOrNull(arrayData, i);
+                    recordConsumer.startGroup();
+                    if (object != null) {
+                        recordConsumer.startField(elementName, 0);
+                        elementWriter.write(object);
+                        recordConsumer.endField(elementName, 0);
+                    }
+                    recordConsumer.endGroup();
+                }
+

[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-12-02 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r760997452



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -126,13 +134,29 @@ private FieldWriter createWriter(LogicalType t, Type type) {
                 throw new UnsupportedOperationException("Unsupported type: " + type);
             }
         } else {
-            throw new IllegalArgumentException("Unsupported  data type: " + t);
+            GroupType groupType = type.asGroupType();
+            LogicalTypeAnnotation logicalType = type.getLogicalTypeAnnotation();
+
+            if (t instanceof ArrayType
+                    && logicalType instanceof LogicalTypeAnnotation.ListLogicalTypeAnnotation) {
+                return new ArrayWriter(((ArrayType) t).getElementType(), groupType);
+            } else if (t instanceof MapType
+                    && logicalType instanceof LogicalTypeAnnotation.MapLogicalTypeAnnotation) {
+                return new MapWriter(
+                        ((MapType) t).getKeyType(), ((MapType) t).getValueType(), groupType);
+            } else if (t instanceof RowType && type instanceof GroupType) {
+                return new RowWriter(t, groupType);
+            } else {
+                throw new UnsupportedOperationException("Unsupported type: " + type);
+            }
         }
     }
 
     private interface FieldWriter {
 
         void write(RowData row, int ordinal);
+
+        void write(Object value);

Review comment:
   done




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-10-27 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r737956485



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -141,6 +163,11 @@ private FieldWriter createWriter(LogicalType t, Type type) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBoolean(row.getBoolean(ordinal));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBoolean((boolean) value);

Review comment:
   This issue has been logged. 
https://issues.apache.org/jira/browse/FLINK-24614




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-10-27 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r737293619



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -141,6 +163,11 @@ private FieldWriter createWriter(LogicalType t, Type type) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBoolean(row.getBoolean(ordinal));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBoolean((boolean) value);

Review comment:
   Must test cases be added before merging this PR?




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-10-27 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r737241851



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -141,6 +163,11 @@ private FieldWriter createWriter(LogicalType t, Type type) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBoolean(row.getBoolean(ordinal));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBoolean((boolean) value);

Review comment:
   Flink does not support reading composite types yet; could I add test cases later?




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-10-27 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r737239171



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -224,6 +291,176 @@ private TimestampWriter(int precision) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBinary(timestampToInt96(row.getTimestamp(ordinal, precision)));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBinary(timestampToInt96((TimestampData) value));
+        }
+    }
+
+    private class MapWriter implements FieldWriter {
+
+        private String repeatedGroupName;
+        private String keyName, valueName;
+        private FieldWriter keyWriter, valueWriter;
+        private ArrayData.ElementGetter keyElementGetter, valueElementGetter;
+
+        private MapWriter(LogicalType keyType, LogicalType valueType, GroupType groupType) {
+
+            // Get the internal map structure (MAP_KEY_VALUE)
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            // Get key element information
+            Type type = repeatedType.getType(0);
+            this.keyName = type.getName();
+            this.keyWriter = createWriter(keyType, type);
+
+            // Get value element information
+            Type valuetype = repeatedType.getType(1);
+            this.valueName = valuetype.getName();
+            this.valueWriter = createWriter(valueType, valuetype);
+
+            this.keyElementGetter = ArrayData.createElementGetter(keyType);
+            this.valueElementGetter = ArrayData.createElementGetter(valueType);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+
+            MapData mapData = row.getMap(ordinal);
+
+            if (mapData != null && mapData.size() > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                ArrayData keyArray = mapData.keyArray();
+                ArrayData valueArray = mapData.valueArray();
+                for (int i = 0; i < keyArray.size(); i++) {
+                    Object key = keyElementGetter.getElementOrNull(keyArray, i);
+                    Object value = valueElementGetter.getElementOrNull(valueArray, i);
+
+                    recordConsumer.startGroup();
+                    if (key != null) {
+                        // write key element
+                        recordConsumer.startField(keyName, 0);
+                        keyWriter.write(key);
+                        recordConsumer.endField(keyName, 0);
+
+                        // write value element
+                        if (value != null) {
+                            recordConsumer.startField(valueName, 1);
+                            valueWriter.write(value);
+                            recordConsumer.endField(valueName, 1);
+                        }
+                    }
+                    recordConsumer.endGroup();
+                }
+
+                recordConsumer.endField(repeatedGroupName, 0);
+            }
+            recordConsumer.endGroup();
+        }
+
+        @Override
+        public void write(Object value) {}
+    }
+
+    private class ArrayWriter implements FieldWriter {
+
+        private String elementName;
+        private FieldWriter elementWriter;
+        private String repeatedGroupName;
+        private ArrayData.ElementGetter elementGetter;
+
+        private ArrayWriter(LogicalType t, GroupType groupType) {
+
+            // Get the internal array structure
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            Type elementType = repeatedType.getType(0);
+            this.elementName = elementType.getName();
+
+            this.elementWriter = createWriter(t, elementType);
+            this.elementGetter = ArrayData.createElementGetter(t);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+            ArrayData arrayData = row.getArray(ordinal);
+            int listLength = arrayData.size();
+
+            if (listLength > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                for (int i = 0; i < listLength; i++) {
+                    Object object = elementGetter.getElementOrNull(arrayData, i);
+                    recordConsumer.startGroup();
+                    if (object != null) {
+                        recordConsumer.startField(elementName, 0);
+                        elementWriter.write(object);
+                        recordConsumer.endField(elementName, 0);
+                    }
+                    recordConsumer.endGroup();

Review comment:
   

[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-10-27 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r737235601



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -141,6 +163,11 @@ private FieldWriter createWriter(LogicalType t, Type type) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBoolean(row.getBoolean(ordinal));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBoolean((boolean) value);

Review comment:
   If the value is `null`, although the `write()` method is skipped here, the RecordConsumer will automatically write a null value in the `endGroup()` method.
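
   A short sketch of the contract this reply describes, with assumed field
   names in the style of ParquetRowDataWriter (illustrative, not the exact
   PR source): a field that is never started before the enclosing group is
   closed is materialized as null by Parquet.

       // Row-level write loop: a null field is simply not emitted, and the
       // RecordConsumer records it as null when endMessage()/endGroup() runs.
       void writeRow(RowData row) {
           recordConsumer.startMessage();
           for (int i = 0; i < fieldWriters.length; i++) {
               if (!row.isNullAt(i)) {
                   recordConsumer.startField(fieldNames[i], i);
                   fieldWriters[i].write(row, i);
                   recordConsumer.endField(fieldNames[i], i);
               }
               // no else branch: a skipped field becomes null at endMessage()
           }
           recordConsumer.endMessage();
       }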




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-10-27 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r737202866



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -141,6 +163,11 @@ private FieldWriter createWriter(LogicalType t, Type type) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBoolean(row.getBoolean(ordinal));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBoolean((boolean) value);

Review comment:
   If the value is `null`, the `isNullAt` method has already been called before this point, so there is no need to write a null value here. Flink does not support reading composite types yet; I will add test cases later.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-10-27 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r737206308



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -224,6 +291,176 @@ private TimestampWriter(int precision) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBinary(timestampToInt96(row.getTimestamp(ordinal, precision)));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBinary(timestampToInt96((TimestampData) value));
+        }
+    }
+
+    private class MapWriter implements FieldWriter {
+
+        private String repeatedGroupName;
+        private String keyName, valueName;
+        private FieldWriter keyWriter, valueWriter;
+        private ArrayData.ElementGetter keyElementGetter, valueElementGetter;
+
+        private MapWriter(LogicalType keyType, LogicalType valueType, GroupType groupType) {
+
+            // Get the internal map structure (MAP_KEY_VALUE)
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            // Get key element information
+            Type type = repeatedType.getType(0);
+            this.keyName = type.getName();
+            this.keyWriter = createWriter(keyType, type);
+
+            // Get value element information
+            Type valuetype = repeatedType.getType(1);
+            this.valueName = valuetype.getName();
+            this.valueWriter = createWriter(valueType, valuetype);
+
+            this.keyElementGetter = ArrayData.createElementGetter(keyType);
+            this.valueElementGetter = ArrayData.createElementGetter(valueType);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+
+            MapData mapData = row.getMap(ordinal);
+
+            if (mapData != null && mapData.size() > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                ArrayData keyArray = mapData.keyArray();
+                ArrayData valueArray = mapData.valueArray();
+                for (int i = 0; i < keyArray.size(); i++) {
+                    Object key = keyElementGetter.getElementOrNull(keyArray, i);
+                    Object value = valueElementGetter.getElementOrNull(valueArray, i);

Review comment:
   The `isNullAt` method has already been called here, inside `getElementOrNull`.
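
   A sketch of why that holds, assuming the behavior of Flink's
   `ArrayData.createElementGetter` for an INT element type: the getter it
   returns folds the `isNullAt` check in, so the call site only ever sees
   `null` or a value.

       // Illustrative equivalent of what ArrayData.createElementGetter
       // returns for an INT element type: the isNullAt check happens
       // inside the getter itself.
       ArrayData.ElementGetter intGetter =
               (array, pos) -> array.isNullAt(pos) ? null : (Object) array.getInt(pos);

       // Call site, as in MapWriter: no separate null check is needed.
       Object key = intGetter.getElementOrNull(keyArray, i);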




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-10-27 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r737205891



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -224,6 +291,176 @@ private TimestampWriter(int precision) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBinary(timestampToInt96(row.getTimestamp(ordinal, precision)));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBinary(timestampToInt96((TimestampData) value));
+        }
+    }
+
+    private class MapWriter implements FieldWriter {
+
+        private String repeatedGroupName;
+        private String keyName, valueName;
+        private FieldWriter keyWriter, valueWriter;
+        private ArrayData.ElementGetter keyElementGetter, valueElementGetter;
+
+        private MapWriter(LogicalType keyType, LogicalType valueType, GroupType groupType) {
+
+            // Get the internal map structure (MAP_KEY_VALUE)
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            // Get key element information
+            Type type = repeatedType.getType(0);
+            this.keyName = type.getName();
+            this.keyWriter = createWriter(keyType, type);
+
+            // Get value element information
+            Type valuetype = repeatedType.getType(1);
+            this.valueName = valuetype.getName();
+            this.valueWriter = createWriter(valueType, valuetype);
+
+            this.keyElementGetter = ArrayData.createElementGetter(keyType);
+            this.valueElementGetter = ArrayData.createElementGetter(valueType);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+
+            MapData mapData = row.getMap(ordinal);
+
+            if (mapData != null && mapData.size() > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                ArrayData keyArray = mapData.keyArray();
+                ArrayData valueArray = mapData.valueArray();
+                for (int i = 0; i < keyArray.size(); i++) {
+                    Object key = keyElementGetter.getElementOrNull(keyArray, i);
+                    Object value = valueElementGetter.getElementOrNull(valueArray, i);
+
+                    recordConsumer.startGroup();
+                    if (key != null) {
+                        // write key element
+                        recordConsumer.startField(keyName, 0);
+                        keyWriter.write(key);
+                        recordConsumer.endField(keyName, 0);
+
+                        // write value element
+                        if (value != null) {
+                            recordConsumer.startField(valueName, 1);
+                            valueWriter.write(value);
+                            recordConsumer.endField(valueName, 1);
+                        }
+                    }
+                    recordConsumer.endGroup();
+                }
+
+                recordConsumer.endField(repeatedGroupName, 0);
+            }
+            recordConsumer.endGroup();
+        }
+
+        @Override
+        public void write(Object value) {}
+    }
+
+    private class ArrayWriter implements FieldWriter {
+
+        private String elementName;
+        private FieldWriter elementWriter;
+        private String repeatedGroupName;
+        private ArrayData.ElementGetter elementGetter;
+
+        private ArrayWriter(LogicalType t, GroupType groupType) {
+
+            // Get the internal array structure
+            GroupType repeatedType = groupType.getType(0).asGroupType();
+            this.repeatedGroupName = repeatedType.getName();
+
+            Type elementType = repeatedType.getType(0);
+            this.elementName = elementType.getName();
+
+            this.elementWriter = createWriter(t, elementType);
+            this.elementGetter = ArrayData.createElementGetter(t);
+        }
+
+        @Override
+        public void write(RowData row, int ordinal) {
+            recordConsumer.startGroup();
+            ArrayData arrayData = row.getArray(ordinal);
+            int listLength = arrayData.size();
+
+            if (listLength > 0) {
+                recordConsumer.startField(repeatedGroupName, 0);
+
+                for (int i = 0; i < listLength; i++) {
+                    Object object = elementGetter.getElementOrNull(arrayData, i);

Review comment:
   The `isNullAt` method has already been called here, inside `getElementOrNull`.




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-10-25 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r735603570



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -126,13 +134,27 @@ private FieldWriter createWriter(LogicalType t, Type type) {
                 throw new UnsupportedOperationException("Unsupported type: " + type);
             }
         } else {
-            throw new IllegalArgumentException("Unsupported  data type: " + t);
+            GroupType groupType = type.asGroupType();
+            LogicalTypeAnnotation logicalType = type.getLogicalTypeAnnotation();
+
+            if (t instanceof ArrayType
+                    && logicalType instanceof LogicalTypeAnnotation.ListLogicalTypeAnnotation) {
+                return new ArrayWriter(((ArrayType) t).getElementType(), groupType);
+            } else if (t instanceof MapType
+                    && logicalType instanceof LogicalTypeAnnotation.MapLogicalTypeAnnotation) {
+                return new MapWriter(
+                        ((MapType) t).getKeyType(), ((MapType) t).getValueType(), groupType);
+            } else {
+                return new RowWriter(t, groupType);

Review comment:
   Sorry, I will throw an UnsupportedOperationException here.




[GitHub] [flink] meetjunsu commented on a change in pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-10-25 Thread GitBox


meetjunsu commented on a change in pull request #17542:
URL: https://github.com/apache/flink/pull/17542#discussion_r735603151



##########
File path: flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/row/ParquetRowDataWriter.java
##########
@@ -141,6 +163,11 @@ private FieldWriter createWriter(LogicalType t, Type type) {
         public void write(RowData row, int ordinal) {
             recordConsumer.addBoolean(row.getBoolean(ordinal));
         }
+
+        @Override
+        public void write(Object value) {
+            recordConsumer.addBoolean((boolean) value);

Review comment:
   `null` will be filtered out in MapWriter and ArrayWriter.



