echauchot commented on a change in pull request #17640: URL: https://github.com/apache/flink/pull/17640#discussion_r743628436
########## File path: docs/content/docs/connectors/datastream/formats/azure_table_storage.md ##########
@@ -0,0 +1,130 @@
+---
+title: "Microsoft Azure table"
+weight: 4
+type: docs
+aliases:
+- /dev/connectors/formats/azure_table_storage.html
+- /apis/streaming/connectors/formats/azure_table_storage.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Microsoft Azure Table Storage format
+
+_Note: This example works starting from Flink 0.6-incubating_

Review comment:
   Agree, it was on the DataSet connector page (cf. my general comment); I did not clean anything up. Will do, thanks for pointing it out.

########## File path: docs/content/docs/connectors/datastream/formats/mongodb.md ##########
@@ -0,0 +1,33 @@
+---
+title: "MongoDb"
+weight: 4
+type: docs
+aliases:
+- /dev/connectors/formats/mongodb.html
+- /apis/streaming/connectors/formats/mongodb.html
+
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# MongoDB format

Review comment:
   It is a connector indeed; I was too quick, sorry. Thanks!

########## File path: docs/content/docs/connectors/datastream/formats/parquet.md ##########
@@ -0,0 +1,67 @@
+---
+title: "Parquet"
+weight: 4
+type: docs
+aliases:
+- /dev/connectors/formats/parquet.html
+- /apis/streaming/connectors/formats/parquet.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+# Parquet formats

Review comment:
   Well, actually there are several: `ParquetRowInputFormat`, `ParquetPojoInputFormat`, `ParquetAvroInputFormat`, hence the plural.
########## File path: docs/content/docs/connectors/datastream/formats/hadoop.md ##########
@@ -0,0 +1,38 @@
+---
+title: "Hadoop"
+weight: 4
+type: docs
+aliases:
+ - /dev/connectors/formats/hadoop.html
+ - /apis/streaming/connectors/formats/hadoop.html
+
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements. See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership. The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied. See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# Hadoop formats
+
+Apache Flink allows users to access many different systems as data sources.
+The system is designed for very easy extensibility. Similar to Apache Hadoop, Flink has the concept
+of so called `InputFormat`s
+
+One implementation of these `InputFormat`s is the `HadoopInputFormat`. This is a wrapper that allows
+users to use all existing Hadoop input formats with Flink.
+
+{{< top >}}

Review comment:
   It is true that the current page links to the whole Hadoop compatibility page and this indirection is not needed. What I propose is to move all Hadoop-format-related content to this page and leave the Hadoop MapReduce content on a new page. I would keep the dependency configuration and the complete example on both pages because those parts deal with both formats and MapReduce. WDYT?
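For reference, a minimal sketch of how the `HadoopInputFormat` wrapper described in the quoted page is typically wired up, using the `HadoopInputs` helper from `flink-hadoop-compatibility` (the HDFS path and job name here are hypothetical, for illustration only):

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.HadoopInputs;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class HadoopFormatSketch {

    // Hypothetical input location, for illustration only
    static final String INPUT_PATH = "hdfs://namenode:9000/path/to/input";

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Wrap Hadoop's TextInputFormat so Flink can consume it as a source:
        // keys are byte offsets (LongWritable), values are lines (Text)
        DataStream<Tuple2<LongWritable, Text>> lines = env.createInput(
                HadoopInputs.readHadoopFile(
                        new TextInputFormat(), LongWritable.class, Text.class, INPUT_PATH));

        lines.print();

        env.execute("Hadoop InputFormat sketch");
    }
}
```

Running this requires the `flink-hadoop-compatibility` dependency (plus Hadoop client jars) on the classpath; any `mapreduce` or `mapred` `FileInputFormat` can be substituted for `TextInputFormat` the same way.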
########## File path: docs/content/docs/connectors/datastream/formats/parquet.md ##########

Review comment:
   Yes, they read the Parquet files the same way (the `ParquetReader` implementation is shared); they just output different object types (Avro `GenericRecord`s, Flink `Row`s, or Java POJOs). So they are considered the same format, then?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
