Hi Vaibhav,

  As you said, from the second link I can figure out that it is failing to
cast the class when it tries to read from the checkpoint. Can you try an
explicit cast, e.g. asInstanceOf[T], on the broadcasted value?
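
Something like this is what I mean. A minimal sketch, assuming sc is your
SparkContext and lines is a DStream[String]; the Set[String] value is just a
placeholder for whatever you actually broadcast:

    import org.apache.spark.broadcast.Broadcast

    // Hypothetical broadcast value; substitute your own type for Set[String].
    val stopWords: Broadcast[Set[String]] = sc.broadcast(Set("a", "an", "the"))

    // Read the value back with an explicit cast instead of relying on the
    // inferred type once the context has been recovered from the checkpoint.
    val filtered = lines.filter { word =>
      !stopWords.value.asInstanceOf[Set[String]].contains(word)
    }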

From the bug report, it looks like it affects version 1.5. Try a sample
wordcount program with the same Spark 1.3 version and see if you can
reproduce the error. If you can, change the Spark version to 1.5 or 1.4 and
check whether the issue is still seen; if not, it has probably been fixed in
a later version.

Since the direct API was only introduced in Spark 1.3, it is likely to still
have bugs.
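
For the repro, something along these lines should do. A minimal sketch of a
direct-API wordcount that recovers from a checkpoint; the broker address,
topic name, and checkpoint path are placeholders for your setup:

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.kafka.KafkaUtils
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object DirectKafkaWordCount {
      // Placeholder path; point this at a reliable filesystem such as HDFS.
      val checkpointDir = "hdfs:///tmp/wordcount-checkpoint"

      def createContext(): StreamingContext = {
        val conf = new SparkConf().setAppName("DirectKafkaWordCount")
        val ssc = new StreamingContext(conf, Seconds(10))
        ssc.checkpoint(checkpointDir)

        // Placeholder broker address and topic name.
        val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
        val messages = KafkaUtils.createDirectStream[String, String,
          StringDecoder, StringDecoder](ssc, kafkaParams, Set("test"))

        messages.map(_._2)
          .flatMap(_.split(" "))
          .map((_, 1L))
          .reduceByKey(_ + _)
          .print()
        ssc
      }

      def main(args: Array[String]): Unit = {
        // Recovers from the checkpoint if one exists, otherwise builds a
        // fresh context via createContext.
        val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
        ssc.start()
        ssc.awaitTermination()
      }
    }

Run it once, kill it, and restart it; on restart, getOrCreate should pick up
the stored offsets from the checkpoint. One more note on Gideon's point
below: with the direct API, the rate limit setting is
spark.streaming.kafka.maxRatePerPartition rather than
spark.streaming.receiver.maxRate, if I remember correctly.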

On Tue, Feb 23, 2016 at 5:09 PM, vaibhavrtk1 [via Apache Spark User List] <
ml-node+s1001560n26304...@n3.nabble.com> wrote:

> Hello
>
> I have tried the Direct API, but I am getting an error, which is
> being tracked here: https://issues.apache.org/jira/browse/SPARK-5594
>
> I also tried the Receiver approach with Write Ahead Logs; then this
> issue comes up:
> https://issues.apache.org/jira/browse/SPARK-12407
>
> In both cases it seems it is not able to get the broadcast variable from
> the checkpoint directory.
> Attached is the screenshot of errors I faced with both approaches.
>
> What do you guys suggest for solving this issue?
>
>
> *Vaibhav Nagpal*
> 9535433788
> <https://in.linkedin.com/in/vaibhav-nagpal-48237870>
>
> On Tue, Feb 23, 2016 at 1:50 PM, Gideon [via Apache Spark User List]
> <[hidden email]> wrote:
>
>> Regarding the Spark Streaming receiver - can't you just use Kafka direct
>> receivers with checkpoints? So when you restart your application it will
>> read from where it last stopped and continue from there.
>> Regarding limiting the number of messages - you can do that by setting
>> spark.streaming.receiver.maxRate. Read more about it here:
>> <http://spark.apache.org/docs/latest/configuration.html>
>>
>
>
> *Capture.JPG* (222K)
> <http://apache-spark-user-list.1001560.n3.nabble.com/attachment/26304/0/Capture.JPG>
> *Capture1.JPG* (169K)
> <http://apache-spark-user-list.1001560.n3.nabble.com/attachment/26304/1/Capture1.JPG>



