[ https://issues.apache.org/jira/browse/FLINK-2186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15859179#comment-15859179 ]
ASF GitHub Bot commented on FLINK-2186:
---------------------------------------

Github user ex00 commented on a diff in the pull request:

    https://github.com/apache/flink/pull/3012#discussion_r100247870

    --- Diff: flink-scala/src/main/scala/org/apache/flink/api/scala/ExecutionEnvironment.scala ---
    @@ -348,6 +349,47 @@ class ExecutionEnvironment(javaEnv: JavaEnv) {
         wrap(new DataSource[T](javaEnv, inputFormat, typeInfo, getCallLocationName()))
       }

    +  def readCsvFileAsRow[T : ClassTag : TypeInformation](
    --- End diff --

    Could you add Scaladoc for the method? Otherwise one may not understand what ```additionalTypes``` is.

> Rework CSV import to support very wide files
> --------------------------------------------
>
>                 Key: FLINK-2186
>                 URL: https://issues.apache.org/jira/browse/FLINK-2186
>             Project: Flink
>          Issue Type: Improvement
>          Components: Machine Learning Library, Scala API
>            Reporter: Theodore Vasiloudis
>            Assignee: Anton Solovev
>
> In the current readCsvFile implementation, importing CSV files with many
> columns can range from cumbersome to impossible.
> For example, to import an 11-column file we need to write:
> {code}
> val cancer = env.readCsvFile[(String, String, String, String, String, String,
>   String, String, String, String,
>   String)]("/path/to/breast-cancer-wisconsin.data")
> {code}
> For many use cases in machine learning we might have CSV files with thousands
> or millions of columns that we want to import as vectors.
> In that case, using the current readCsvFile method becomes impossible.
> We therefore need to rework the current function, or create a new one, that
> will allow us to import CSV files with an arbitrary number of columns.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
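As context for the discussion: until the proposed `readCsvFileAsRow` lands, a common workaround for very wide files is to read each line as raw text and split it manually, so no tuple type with hundreds of `String` components has to be written out. A minimal sketch (the file path is the one from the issue; everything else is illustrative, not the API under review):

```scala
import org.apache.flink.api.scala._

object WideCsvExample {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Read each CSV line as plain text and split it ourselves, so the
    // number of columns never has to be enumerated in a tuple type.
    val rows: DataSet[Array[Double]] = env
      .readTextFile("/path/to/breast-cancer-wisconsin.data")
      .map(_.split(',').map(_.toDouble))

    rows.print()
  }
}
```

This sidesteps the tuple-arity limit entirely, at the cost of losing the per-field type handling (quoting, field delimiters, type coercion) that `readCsvFile` provides; the proposed `readCsvFileAsRow` aims to keep that handling while supporting arbitrary column counts.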