[ https://issues.apache.org/jira/browse/SPARK-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Patrick Wendell updated SPARK-1916:
-----------------------------------

    Assignee: David Lemieux

> SparkFlumeEvent with body bigger than 1020 bytes are not read properly
> ----------------------------------------------------------------------
>
>                 Key: SPARK-1916
>                 URL: https://issues.apache.org/jira/browse/SPARK-1916
>             Project: Spark
>          Issue Type: Bug
>          Components: Streaming
>    Affects Versions: 0.9.0
>            Reporter: David Lemieux
>            Assignee: David Lemieux
>         Attachments: SPARK-1916.diff
>
>
> The readExternal implementation on SparkFlumeEvent reads only the first
> 1020 bytes of the actual body when streaming data from Flume.
> This means that an event sent to Spark via Flume is processed properly
> if the body is small, but fails if the body is larger than 1020 bytes.
> Considering that the default maximum size of a Flume Avro event is 32K, the
> implementation should be updated to read the full body (see the sketch below).
> The following thread is related:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-using-Flume-body-size-limitation-tt6127.html
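
For context, a minimal sketch of the likely cause and fix, assuming the
deserialization relies on ObjectInput.read, which is not guaranteed to fill
the buffer in a single call; readFully blocks until the requested number of
bytes has been read. The class below is illustrative, not the exact Spark
source:

    import java.io.{Externalizable, ObjectInput, ObjectOutput}

    // Illustrative sketch of the body (de)serialization only.
    class SparkFlumeEventSketch extends Externalizable {
      var body: Array[Byte] = Array.empty[Byte]

      override def writeExternal(out: ObjectOutput): Unit = {
        out.writeInt(body.length)
        out.write(body)
      }

      override def readExternal(in: ObjectInput): Unit = {
        val bodyLength = in.readInt()
        val bodyBuff = new Array[Byte](bodyLength)
        // Problematic pattern: in.read(bodyBuff) may return after filling
        // only part of the buffer (around 1020 bytes here), silently
        // truncating larger bodies.
        // readFully loops until bodyLength bytes have been read.
        in.readFully(bodyBuff)
        body = bodyBuff
      }
    }

With a plain read-based version, any body larger than what a single
underlying read returns would be silently truncated, which matches the
behaviour described above.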



--
This message was sent by Atlassian JIRA
(v6.2#6252)
