[ 
https://issues.apache.org/jira/browse/FLINK-1303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14322116#comment-14322116
 ] 

Yiannis Gkoufas edited comment on FLINK-1303 at 2/15/15 7:23 PM:
-----------------------------------------------------------------

Actually, my intention was to benchmark Flink on some LZO-compressed CSV files that I have.
Is anything more needed than putting the lzo jar in Flink's lib folder? Do I need to set the path to the native libraries as well?

Also, I am getting this error with the latest 0.9 version for the above line:

could not find implicit value for parameter tpe:
org.apache.flink.api.common.typeinfo.TypeInformation[(org.apache.hadoop.io.LongWritable, org.apache.hadoop.io.Text)]
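
For context, this is roughly the shape of the job (a minimal sketch, not my exact code; the plain TextInputFormat and the input path are placeholders for the hadoop-lzo input format and my real files):

{code:scala}
// Minimal sketch of the Scala-API job in question.
import org.apache.flink.api.scala._                       // implicit TypeInformation machinery
import org.apache.flink.api.scala.hadoop.mapred.HadoopInputFormat

import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapred.{FileInputFormat, JobConf, TextInputFormat}

object LzoCsvBenchmark {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Placeholder: for the actual benchmark the TextInputFormat would be replaced by the
    // LZO input format from hadoop-lzo, with the lzo jar in Flink's lib folder and the
    // native libraries on the library path.
    val jobConf = new JobConf()
    FileInputFormat.addInputPath(jobConf, new Path("hdfs:///path/to/data.csv.lzo"))

    val hadoopInput = new HadoopInputFormat[LongWritable, Text](
      new TextInputFormat, classOf[LongWritable], classOf[Text], jobConf)

    // This is the call that needs an implicit TypeInformation[(LongWritable, Text)]
    // and where the compiler reports "could not find implicit value for parameter tpe".
    val lines: DataSet[(LongWritable, Text)] = env.createInput(hadoopInput)

    lines.map(_._2.toString).print()
  }
}
{code}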



> HadoopInputFormat does not work with Scala API
> ----------------------------------------------
>
>                 Key: FLINK-1303
>                 URL: https://issues.apache.org/jira/browse/FLINK-1303
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Scala API
>            Reporter: Aljoscha Krettek
>            Assignee: Aljoscha Krettek
>             Fix For: 0.9, 0.8.1
>
>
> It fails because the HadoopInputFormat uses the Flink Tuple2 type, for which type extraction fails at runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
