Re: Magic number 16: Why doesn't Spark Streaming process more than 16 files?

2015-02-23 Thread Tathagata Das
In case this mystery has not been solved, DStream.print() essentially does
an RDD.take(10) on each RDD, which computes only a subset of the partitions
in the RDD. But collect() forces the evaluation of all the RDDs. Since you
are writing to json in the mapI() function, this could be the reason.
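
For concreteness, here is a minimal Java sketch of that difference; the
stream name "json" and the side-effecting map() step stand in for the ones
in your code rather than quoting the gist exactly:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;

// print() needs only the first 10 elements of each RDD, so Spark may compute
// just the partitions required to produce them; side effects buried in an
// earlier map() run only for those partitions.
json.print();

// collect() inside foreachRDD materializes every partition of every RDD, so
// the side effects in the earlier map() run for all input files.
json.foreachRDD(new Function<JavaRDD<String>, Void>() {
  @Override
  public Void call(JavaRDD<String> rdd) throws Exception {
    rdd.collect();  // forces full evaluation; the returned list is discarded
    return null;
  }
});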

TD

On Wed, Feb 18, 2015 at 7:15 AM, Imran Rashid  wrote:

> so if you only change this line:
>
>
> https://gist.github.com/emres/0fb6de128baea099e741#file-mymoduledriver-java-L137
>
> to
>
> json.print()
>
> it processes 16 files instead?  I am totally perplexed.  My only
> suggestions to help debug are
> (1) see what happens when you get rid of MyModuleWorker completely --
> change MyModuleDriver#process to just
> inStream.print()
> and see what happens
>
> (2) stick a bunch of printlns into MyModuleWorker#call
>
> (3) turn on DEBUG logging
> for org.apache.spark.streaming.dstream.FileInputDStream
>
> my gut instinct is that something else is flaky about the file input
> stream (e.g., it makes some assumptions about the file system which maybe
> aren't valid in your case; it has a bunch of caveats), and that it has just
> happened to work sometimes with your foreachRDD and failed sometimes with
> print.
>
> Sorry I am not a lot of help in this case, hope this leads you down the
> right track or somebody else can help out.
>
> Imran
>
>
> On Wed, Feb 18, 2015 at 2:28 AM, Emre Sevinc 
> wrote:
>
>> Hello Imran,
>>
>> (a) I know that all 20 files are processed when I use foreachRDD, because
>> I can see the processed files in the output directory. (My application
>> logic writes them to an output directory after they are processed, *but*
>> that writing operation does not happen in foreachRDD, below you can see the
>> URL that includes my code and clarifies this).
>>
>> (b) I know only 16 files are processed because in the output directory I
>> see only 16 processed files. I wait for minutes and minutes and no more
>> files appear in the output directory. Once only 16 files have been
>> processed and Spark Streaming has gone into the mode of idly watching the
>> input directory, if I then copy a few more files, they are also processed.
>>
>> (c) Sure, you can see part of my code in the following gist:
>> https://gist.github.com/emres/0fb6de128baea099e741
>>  It might seem a little convoluted at first, because my application
>> is divided into two classes, a Driver class (setting up things and
>> initializing them), and a Worker class (that implements the core
>> functionality). I've also put the relevant methods from my utility
>> classes for completeness.
>>
>> I am as perplexed as you are as to why forcing the output via foreachRDD
>> ended up in different behaviour compared to simply using the print() method.
>>
>> Kind regards,
>> Emre
>>
>>
>>
>> On Tue, Feb 17, 2015 at 4:23 PM, Imran Rashid 
>> wrote:
>>
>>> Hi Emre,
>>>
>>> there shouldn't be any difference in which files get processed w/
>>> print() vs. foreachRDD().  In fact, if you look at the definition of
>>> print(), it is just calling foreachRDD() underneath.  So there is something
>>> else going on here.
>>>
>>> We need a little more information to figure out exactly what is going
>>> on.  (I think Sean was getting at the same thing ...)
>>>
>>> (a) how do you know that when you use foreachRDD, all 20 files get
>>> processed?
>>>
>>> (b) How do you know that only 16 files get processed when you print()?
>>> Do you know the other files are being skipped, or maybe they are just
>>> "stuck" somewhere?  eg., suppose you start w/ 20 files, and you see 16 get
>>> processed ... what happens after you add a few more files to the
>>> directory?  Are they processed immediately, or are they never processed
>>> either?
>>>
>>> (c) Can you share any more code of what you are doing to the dstreams
>>> *before* the print() / foreachRDD()?  That might give us more details about
>>> what the difference is.
>>>
>>> I can't see how .count.println() would be different than just println(),
>>> but maybe I am missing something also.
>>>
>>> Imran
>>>
>>> On Mon, Feb 16, 2015 at 7:49 AM, Emre Sevinc 
>>> wrote:
>>>
 Sean,

 In this case, I've been testing the code on my local machine and using
 Spark locally, so all the log output was available on my terminal. And
 I've used the .print() method to have an output operation, just to force
 Spark to execute.

 And I was not using foreachRDD, I was only using the print() method on a
 JavaDStream object, and it was working fine for a few files, up to 16 (and
 without print() it did not do anything because there were no output
 operations).

 To sum it up, in my case:

  - Initially, use .print() and no foreachRDD: processes up to 16 files
 and does not do anything for the remaining 4.
  - Remove .print() and use foreachRDD: processes all of the 20 files.

 Maybe, as in Akhil Das's suggestion, using .count().print() might also
 have fixed my problem, but I'm s

Re: Magic number 16: Why doesn't Spark Streaming process more than 16 files?

2015-02-18 Thread Imran Rashid
so if you only change this line:

https://gist.github.com/emres/0fb6de128baea099e741#file-mymoduledriver-java-L137

to

json.print()

it processes 16 files instead?  I am totally perplexed.  My only
suggestions to help debug are
(1) see what happens when you get rid of MyModuleWorker completely --
change MyModuleDriver#process to just
inStream.print()
and see what happens

(2) stick a bunch of printlns into MyModuleWorker#call

(3) turn on DEBUG logging
for org.apache.spark.streaming.dstream.FileInputDStream
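
for (3), if you don't want to edit log4j.properties, one way to do it from
the driver (a sketch that assumes the default log4j 1.x backend Spark 1.2
ships with) is:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

// raise the log level for the file-watching DStream to DEBUG before the
// streaming context is started
Logger.getLogger("org.apache.spark.streaming.dstream.FileInputDStream")
    .setLevel(Level.DEBUG);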

my gut instinct is that something else is flaky about the file input stream
(e.g., it makes some assumptions about the file system which maybe aren't
valid in your case; it has a bunch of caveats), and that it has just
happened to work sometimes with your foreachRDD and failed sometimes with
print.

Sorry I am not a lot of help in this case, hope this leads you down the
right track or somebody else can help out.

Imran


On Wed, Feb 18, 2015 at 2:28 AM, Emre Sevinc  wrote:

> Hello Imran,
>
> (a) I know that all 20 files are processed when I use foreachRDD, because
> I can see the processed files in the output directory. (My application
> logic writes them to an output directory after they are processed, *but*
> that writing operation does not happen in foreachRDD, below you can see the
> URL that includes my code and clarifies this).
>
> (b) I know only 16 files are processed because in the output directory I
> see only 16 processed files. I wait for minutes and minutes and no more
> files appear in the output directory. Once only 16 files have been
> processed and Spark Streaming has gone into the mode of idly watching the
> input directory, if I then copy a few more files, they are also processed.
>
> (c) Sure, you can see part of my code in the following gist:
> https://gist.github.com/emres/0fb6de128baea099e741
>  It might seem a little convoluted at first, because my application is
> divided into two classes, a Driver class (setting up things and
> initializing them), and a Worker class (that implements the core
> functionality). I've also put the relevant methods from my utility
> classes for completeness.
>
> I am as perplexed as you are as to why forcing the output via foreachRDD
> ended up in different behaviour compared to simply using the print() method.
>
> Kind regards,
> Emre
>
>
>
> On Tue, Feb 17, 2015 at 4:23 PM, Imran Rashid 
> wrote:
>
>> Hi Emre,
>>
>> there shouldn't be any difference in which files get processed w/ print()
>> vs. foreachRDD().  In fact, if you look at the definition of print(), it is
>> just calling foreachRDD() underneath.  So there is something else going on
>> here.
>>
>> We need a little more information to figure out exactly what is going on.
>>  (I think Sean was getting at the same thing ...)
>>
>> (a) how do you know that when you use foreachRDD, all 20 files get
>> processed?
>>
>> (b) How do you know that only 16 files get processed when you print()? Do
>> you know the other files are being skipped, or maybe they are just "stuck"
>> somewhere?  eg., suppose you start w/ 20 files, and you see 16 get
>> processed ... what happens after you add a few more files to the
>> directory?  Are they processed immediately, or are they never processed
>> either?
>>
>> (c) Can you share any more code of what you are doing to the dstreams
>> *before* the print() / foreachRDD()?  That might give us more details about
>> what the difference is.
>>
>> I can't see how .count.println() would be different than just println(),
>> but maybe I am missing something also.
>>
>> Imran
>>
>> On Mon, Feb 16, 2015 at 7:49 AM, Emre Sevinc 
>> wrote:
>>
>>> Sean,
>>>
>>> In this case, I've been testing the code on my local machine and using
>>> Spark locally, so all the log output was available on my terminal. And
>>> I've used the .print() method to have an output operation, just to force
>>> Spark to execute.
>>>
>>> And I was not using foreachRDD, I was only using the print() method on a
>>> JavaDStream object, and it was working fine for a few files, up to 16 (and
>>> without print() it did not do anything because there were no output
>>> operations).
>>>
>>> To sum it up, in my case:
>>>
>>>  - Initially, use .print() and no foreachRDD: processes up to 16 files
>>> and does not do anything for the remaining 4.
>>>  - Remove .print() and use foreachRDD: processes all of the 20 files.
>>>
>>> Maybe, as in Akhil Das's suggestion, using .count().print() might also
>>> have fixed my problem, but I'm satisfied with the foreachRDD approach for now.
>>> (Though it is still a mystery to me why using .print() made a difference;
>>> maybe my mental model of Spark is wrong. I thought that no matter what output
>>> operation I used, the number of files processed by Spark would be
>>> independent of it, because the processing is done in a different method and
>>> .print() is only used to force Spark to execute that processing. Am I wrong?)
>>>
>>> --
>>> Emre
>>>
>>>
>>> On Mon, Feb 16, 2015 at 2:26 PM, Sean Owen

Re: Magic number 16: Why doesn't Spark Streaming process more than 16 files?

2015-02-18 Thread Emre Sevinc
Hello Imran,

(a) I know that all 20 files are processed when I use foreachRDD, because I
can see the processed files in the output directory. (My application logic
writes them to an output directory after they are processed, *but* that
writing operation does not happen in foreachRDD, below you can see the URL
that includes my code and clarifies this).

(b) I know only 16 files are processed because in the output directory I
see only 16 processed files. I wait for minutes and minutes and no more
files appear in the output directory. Once only 16 files have been
processed and Spark Streaming has gone into the mode of idly watching the
input directory, if I then copy a few more files, they are also processed.

(c) Sure, you can see part of my code in the following gist:
https://gist.github.com/emres/0fb6de128baea099e741
 It might seem a little convoluted at first, because my application is
divided into two classes, a Driver class (setting up things and
initializing them), and a Worker class (that implements the core
functionality). I've also put the relevant methods from my utility
classes for completeness.

I am as perplexed as you are as to why forcing the output via foreachRDD
ended up in different behaviour compared to simply using the print() method.

Kind regards,
Emre



On Tue, Feb 17, 2015 at 4:23 PM, Imran Rashid  wrote:

> Hi Emre,
>
> there shouldn't be any difference in which files get processed w/ print()
> vs. foreachRDD().  In fact, if you look at the definition of print(), it is
> just calling foreachRDD() underneath.  So there is something else going on
> here.
>
> We need a little more information to figure out exactly what is going on.
>  (I think Sean was getting at the same thing ...)
>
> (a) how do you know that when you use foreachRDD, all 20 files get
> processed?
>
> (b) How do you know that only 16 files get processed when you print()? Do
> you know the other files are being skipped, or maybe they are just "stuck"
> somewhere?  eg., suppose you start w/ 20 files, and you see 16 get
> processed ... what happens after you add a few more files to the
> directory?  Are they processed immediately, or are they never processed
> either?
>
> (c) Can you share any more code of what you are doing to the dstreams
> *before* the print() / foreachRDD()?  That might give us more details about
> what the difference is.
>
> I can't see how .count.println() would be different than just println(),
> but maybe I am missing something also.
>
> Imran
>
> On Mon, Feb 16, 2015 at 7:49 AM, Emre Sevinc 
> wrote:
>
>> Sean,
>>
>> In this case, I've been testing the code on my local machine and using
>> Spark locally, so all the log output was available on my terminal. And
>> I've used the .print() method to have an output operation, just to force
>> Spark to execute.
>>
>> And I was not using foreachRDD, I was only using the print() method on a
>> JavaDStream object, and it was working fine for a few files, up to 16 (and
>> without print() it did not do anything because there were no output
>> operations).
>>
>> To sum it up, in my case:
>>
>>  - Initially, use .print() and no foreachRDD: processes up to 16 files
>> and does not do anything for the remaining 4.
>>  - Remove .print() and use foreachRDD: processes all of the 20 files.
>>
>> Maybe, as in Akhil Das's suggestion, using .count().print() might also have
>> fixed my problem, but I'm satisfied with the foreachRDD approach for now.
>> (Though it is still a mystery to me why using .print() made a difference;
>> maybe my mental model of Spark is wrong. I thought that no matter what output
>> operation I used, the number of files processed by Spark would be
>> independent of it, because the processing is done in a different method and
>> .print() is only used to force Spark to execute that processing. Am I wrong?)
>>
>> --
>> Emre
>>
>>
>> On Mon, Feb 16, 2015 at 2:26 PM, Sean Owen  wrote:
>>
>>> Materialization shouldn't be relevant. The collect by itself doesn't let
>>> you detect whether it happened. Print should print some results to the
>>> console, but on different machines, so it may not be a reliable way to see
>>> what happened.
>>>
>>> Yes I understand your real process uses foreachRDD and that's what you
>>> should use. It sounds like that works. But you must always have been using
>>> that right? What do you mean that you changed to use it?
>>>
>>> Basically I'm not clear on what the real code does and what about the
>>> output of that code tells you only 16 files were processed.
>>> On Feb 16, 2015 1:18 PM, "Emre Sevinc"  wrote:
>>>
 Hello Sean,

 I did not understand your question very well, but what I do is checking
 the output directory (and I have various logger outputs at various stages
 showing the contents of an input file being processed, the response from
 the web service, etc.).

 By the way, I've already solved my problem by using foreachRDD instead
 of print (see my second message in this thread). Apparently forcing

Re: Magic number 16: Why doesn't Spark Streaming process more than 16 files?

2015-02-17 Thread Imran Rashid
Hi Emre,

there shouldn't be any difference in which files get processed w/ print()
vs. foreachRDD().  In fact, if you look at the definition of print(), it is
just calling foreachRDD() underneath.  So there is something else going on
here.
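
(roughly, in Java terms, print() amounts to registering something like the
sketch below, an approximation rather than the actual source, with jsonIn
standing in for your stream:)

import java.util.List;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;

jsonIn.foreachRDD(new Function<JavaRDD<String>, Void>() {
  @Override
  public Void call(JavaRDD<String> rdd) throws Exception {
    // take(10) computes only as many partitions as it needs to find 10 elements
    List<String> first = rdd.take(10);
    for (String element : first) {
      System.out.println(element);
    }
    return null;
  }
});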

We need a little more information to figure out exactly what is going on.
 (I think Sean was getting at the same thing ...)

(a) how do you know that when you use foreachRDD, all 20 files get
processed?

(b) How do you know that only 16 files get processed when you print()? Do
you know the other files are being skipped, or maybe they are just "stuck"
somewhere?  eg., suppose you start w/ 20 files, and you see 16 get
processed ... what happens after you add a few more files to the
directory?  Are they processed immediately, or are they never processed
either?

(c) Can you share any more code of what you are doing to the dstreams
*before* the print() / foreachRDD()?  That might give us more details about
what the difference is.

I can't see how .count.println() would be different than just println(),
but maybe I am missing something also.

Imran

On Mon, Feb 16, 2015 at 7:49 AM, Emre Sevinc  wrote:

> Sean,
>
> In this case, I've been testing the code on my local machine and using
> Spark locally, so all the log output was available on my terminal. And
> I've used the .print() method to have an output operation, just to force
> Spark to execute.
>
> And I was not using foreachRDD, I was only using the print() method on a
> JavaDStream object, and it was working fine for a few files, up to 16 (and
> without print() it did not do anything because there were no output
> operations).
>
> To sum it up, in my case:
>
>  - Initially, use .print() and no foreachRDD: processes up to 16 files and
> does not do anything for the remaining 4.
>  - Remove .print() and use foreachRDD: processes all of the 20 files.
>
> Maybe, as in Akhil Das's suggestion, using .count().print() might also have
> fixed my problem, but I'm satisfied with the foreachRDD approach for now.
> (Though it is still a mystery to me why using .print() made a difference;
> maybe my mental model of Spark is wrong. I thought that no matter what output
> operation I used, the number of files processed by Spark would be
> independent of it, because the processing is done in a different method and
> .print() is only used to force Spark to execute that processing. Am I wrong?)
>
> --
> Emre
>
>
> On Mon, Feb 16, 2015 at 2:26 PM, Sean Owen  wrote:
>
>> Materialization shouldn't be relevant. The collect by itself doesn't let
>> you detect whether it happened. Print should print some results to the
>> console, but on different machines, so it may not be a reliable way to see
>> what happened.
>>
>> Yes I understand your real process uses foreachRDD and that's what you
>> should use. It sounds like that works. But you must always have been using
>> that right? What do you mean that you changed to use it?
>>
>> Basically I'm not clear on what the real code does and what about the
>> output of that code tells you only 16 files were processed.
>> On Feb 16, 2015 1:18 PM, "Emre Sevinc"  wrote:
>>
>>> Hello Sean,
>>>
>>> I did not understand your question very well, but what I do is checking
>>> the output directory (and I have various logger outputs at various stages
>>> showing the contents of an input file being processed, the response from
>>> the web service, etc.).
>>>
>>> By the way, I've already solved my problem by using foreachRDD instead
>>> of print (see my second message in this thread). Apparently forcing Spark
>>> to materialize DAG via print() is not the way to go. (My interpretation
>>> might be wrong, but this is what I've just seen in my case).
>>>
>>> --
>>> Emre
>>>
>>>
>>>
>>>
>>> On Mon, Feb 16, 2015 at 2:11 PM, Sean Owen  wrote:
>>>
 How are you deciding whether files are processed or not? It doesn't
 seem possible from this code. Maybe it just seems so.
 On Feb 16, 2015 12:51 PM, "Emre Sevinc"  wrote:

> I've managed to solve this, but I still don't know exactly why my
> solution works:
>
> In my code I was trying to force Spark to produce output via:
>
>   jsonIn.print();
>
> jsonIn being a JavaDStream<String>.
>
> When I removed the code above and added the code below to force the
> output operation, hence the execution:
>
> jsonIn.foreachRDD(new Function<JavaRDD<String>, Void>() {
>   @Override
>   public Void call(JavaRDD<String> stringJavaRDD) throws Exception {
>     stringJavaRDD.collect();
>     return null;
>   }
> });
>
> It works as I expect, processing all of the 20 files I give to it,
> instead of stopping at 16.
>
> --
> Emre
>
>
> On Mon, Feb 16, 2015 at 12:56 PM, Emre Sevinc 
> wrote:
>
>> Hello,
>>
>> I have an application in Java that uses Spark Streaming 1.2.1 in the
>> following manner:
>>
>>  - Listen to the input directory.
>>  - If a new file is co

Re: Magic number 16: Why doesn't Spark Streaming process more than 16 files?

2015-02-16 Thread Emre Sevinc
Sean,

In this case, I've been testing the code on my local machine and using
Spark locally, so all the log output was available on my terminal. And
I've used the .print() method to have an output operation, just to force
Spark to execute.

And I was not using foreachRDD, I was only using the print() method on a
JavaDStream object, and it was working fine for a few files, up to 16 (and
without print() it did not do anything because there were no output
operations).

To sum it up, in my case:

 - Initially, use .print() and no foreachRDD: processes up to 16 files and
does not do anything for the remaining 4.
 - Remove .print() and use foreachRDD: processes all of the 20 files.

Maybe, as in Akhil Das's suggestion, using .count().print() might also have
fixed my problem, but I'm satisfied with the foreachRDD approach for now.
(Though it is still a mystery to me why using .print() made a difference;
maybe my mental model of Spark is wrong. I thought that no matter what output
operation I used, the number of files processed by Spark would be
independent of it, because the processing is done in a different method and
.print() is only used to force Spark to execute that processing. Am I wrong?)

--
Emre


On Mon, Feb 16, 2015 at 2:26 PM, Sean Owen  wrote:

> Materialization shouldn't be relevant. The collect by itself doesn't let
> you detect whether it happened. Print should print some results to the
> console, but on different machines, so it may not be a reliable way to see
> what happened.
>
> Yes I understand your real process uses foreachRDD and that's what you
> should use. It sounds like that works. But you must always have been using
> that right? What do you mean that you changed to use it?
>
> Basically I'm not clear on what the real code does and what about the
> output of that code tells you only 16 files were processed.
> On Feb 16, 2015 1:18 PM, "Emre Sevinc"  wrote:
>
>> Hello Sean,
>>
>> I did not understand your question very well, but what I do is checking
>> the output directory (and I have various logger outputs at various stages
>> showing the contents of an input file being processed, the response from
>> the web service, etc.).
>>
>> By the way, I've already solved my problem by using foreachRDD instead of
>> print (see my second message in this thread). Apparently forcing Spark to
>> materialize DAG via print() is not the way to go. (My interpretation might
>> be wrong, but this is what I've just seen in my case).
>>
>> --
>> Emre
>>
>>
>>
>>
>> On Mon, Feb 16, 2015 at 2:11 PM, Sean Owen  wrote:
>>
>>> How are you deciding whether files are processed or not? It doesn't seem
>>> possible from this code. Maybe it just seems so.
>>> On Feb 16, 2015 12:51 PM, "Emre Sevinc"  wrote:
>>>
 I've managed to solve this, but I still don't know exactly why my
 solution works:

 In my code I was trying to force Spark to produce output via:

   jsonIn.print();

 jsonIn being a JavaDStream<String>.

 When I removed the code above and added the code below to force the
 output operation, hence the execution:

 jsonIn.foreachRDD(new Function<JavaRDD<String>, Void>() {
   @Override
   public Void call(JavaRDD<String> stringJavaRDD) throws Exception {
     stringJavaRDD.collect();
     return null;
   }
 });

 It works as I expect, processing all of the 20 files I give to it,
 instead of stopping at 16.

 --
 Emre


 On Mon, Feb 16, 2015 at 12:56 PM, Emre Sevinc 
 wrote:

> Hello,
>
> I have an application in Java that uses Spark Streaming 1.2.1 in the
> following manner:
>
>  - Listen to the input directory.
>  - If a new file is copied to that input directory process it.
>  - Process: contact a RESTful web service (running also locally and
> responsive), send the contents of the file, receive the response from the
> web service, write the results as a new file into the output directory
>  - batch interval : 30 seconds
>  - checkpoint interval: 150 seconds
>
> When I test the application locally with 1 or 2 files, it works
> perfectly fine as expected. I run it like:
>
> spark-submit --class myClass --verbose --master local[4]
> --deploy-mode client myApp.jar /in file:///out
>
> But then I've realized something strange when I copied 20 files to the
> INPUT directory: Spark Streaming detects all of the files, but it ends up
> processing *only 16 files*. And the remaining 4 are not processed at all.
>
> I've tried it with 19, 18, and then 17 files. Same result, only 16
> files end up in the output directory.
>
> Then I've tried it by copying 16 files at once to the input directory,
> and it can process all of the 16 files. That's why I call it magic number
> 16.
>
 When I say it detects all of the files, I mean that in the logs I see
> the following lines when I copy 17 files:
>
>

Re: Magic number 16: Why doesn't Spark Streaming process more than 16 files?

2015-02-16 Thread Akhil Das
Instead of print() you should do jsonIn.count().print(). A straightforward
approach is to use foreachRDD :)
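
A minimal sketch of the first suggestion, assuming jsonIn is the
JavaDStream<String> from your code:

// count() has to touch every partition of every RDD to produce the total,
// so count().print() forces full evaluation even though only the count
// itself is printed.
jsonIn.count().print();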

Thanks
Best Regards

On Mon, Feb 16, 2015 at 6:48 PM, Emre Sevinc  wrote:

> Hello Sean,
>
> I did not understand your question very well, but what I do is checking
> the output directory (and I have various logger outputs at various stages
> showing the contents of an input file being processed, the response from
> the web service, etc.).
>
> By the way, I've already solved my problem by using foreachRDD instead of
> print (see my second message in this thread). Apparently forcing Spark to
> materialize DAG via print() is not the way to go. (My interpretation might
> be wrong, but this is what I've just seen in my case).
>
> --
> Emre
>
>
>
>
> On Mon, Feb 16, 2015 at 2:11 PM, Sean Owen  wrote:
>
>> How are you deciding whether files are processed or not? It doesn't seem
>> possible from this code. Maybe it just seems so.
>> On Feb 16, 2015 12:51 PM, "Emre Sevinc"  wrote:
>>
>>> I've managed to solve this, but I still don't know exactly why my
>>> solution works:
>>>
>>> In my code I was trying to force Spark to produce output via:
>>>
>>>   jsonIn.print();
>>>
>>> jsonIn being a JavaDStream<String>.
>>>
>>> When I removed the code above and added the code below to force the
>>> output operation, hence the execution:
>>>
>>> jsonIn.foreachRDD(new Function<JavaRDD<String>, Void>() {
>>>   @Override
>>>   public Void call(JavaRDD<String> stringJavaRDD) throws Exception {
>>>     stringJavaRDD.collect();
>>>     return null;
>>>   }
>>> });
>>>
>>> It works as I expect, processing all of the 20 files I give to it,
>>> instead of stopping at 16.
>>>
>>> --
>>> Emre
>>>
>>>
>>> On Mon, Feb 16, 2015 at 12:56 PM, Emre Sevinc 
>>> wrote:
>>>
 Hello,

 I have an application in Java that uses Spark Streaming 1.2.1 in the
 following manner:

  - Listen to the input directory.
  - If a new file is copied to that input directory process it.
  - Process: contact a RESTful web service (running also locally and
 responsive), send the contents of the file, receive the response from the
 web service, write the results as a new file into the output directory
  - batch interval : 30 seconds
  - checkpoint interval: 150 seconds

 When I test the application locally with 1 or 2 files, it works
 perfectly fine as expected. I run it like:

 spark-submit --class myClass --verbose --master local[4]
 --deploy-mode client myApp.jar /in file:///out

 But then I've realized something strange when I copied 20 files to the
 INPUT directory: Spark Streaming detects all of the files, but it ends up
 processing *only 16 files*. And the remaining 4 are not processed at all.

 I've tried it with 19, 18, and then 17 files. Same result, only 16
 files end up in the output directory.

 Then I've tried it by copying 16 files at once to the input directory,
 and it can process all of the 16 files. That's why I call it magic number
 16.

 When I say it detects all of the files, I mean that in the logs I see
 the following lines when I copy 17 files:


 ===
 2015-02-16 12:30:51 INFO  SpotlightDriver:70 - spark.executor.memory:
 "1G"
 2015-02-16 12:30:51 WARN  Utils:71 - Your hostname, emre-ubuntu
 resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on
 interface eth0)
 2015-02-16 12:30:51 WARN  Utils:71 - Set SPARK_LOCAL_IP if you need to
 bind to another address
 2015-02-16 12:30:52 INFO  Slf4jLogger:80 - Slf4jLogger started
 2015-02-16 12:30:52 WARN  NativeCodeLoader:62 - Unable to load
 native-hadoop library for your platform... using builtin-java classes where
 applicable
 2015-02-16 12:30:53 INFO  WriteAheadLogManager  for
 ReceivedBlockHandlerMaster:59 - Recovered 2 write ahead log files from
 file:/tmp/receivedBlockMetadata
 2015-02-16 12:30:53 INFO  WriteAheadLogManager  for
 ReceivedBlockHandlerMaster:59 - Reading from the logs:
 file:/tmp/receivedBlockMetadata/log-1424086110599-1424086170599
 file:/tmp/receivedBlockMetadata/log-1424086200861-1424086260861
 ---
 Time: 142408626 ms
 ---

 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
 ReceivedBlockHandlerMaster:59 - Attempting to clear 0 old log files in
 file:/tmp/receivedBlockMetadata older than 142408596:
 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
 ReceivedBlockHandlerMaster:59 - Cleared log files in
 file:/tmp/receivedBlockMetadata older than 142408596
 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
 ReceivedBlockHandlerMaster:59 - Attempting to clear 0 old log files in
 file

Re: Magic number 16: Why doesn't Spark Streaming process more than 16 files?

2015-02-16 Thread Sean Owen
Materialization shouldn't be relevant. The collect by itself doesn't let
you detect whether it happened. Print should print some results to the
console, but on different machines, so it may not be a reliable way to see
what happened.

Yes I understand your real process uses foreachRDD and that's what you
should use. It sounds like that works. But you must always have been using
that right? What do you mean that you changed to use it?

Basically I'm not clear on what the real code does and what about the
output of that code tells you only 16 files were processed.
On Feb 16, 2015 1:18 PM, "Emre Sevinc"  wrote:

> Hello Sean,
>
> I did not understand your question very well, but what I do is checking
> the output directory (and I have various logger outputs at various stages
> showing the contents of an input file being processed, the response from
> the web service, etc.).
>
> By the way, I've already solved my problem by using foreachRDD instead of
> print (see my second message in this thread). Apparently forcing Spark to
> materialize DAG via print() is not the way to go. (My interpretation might
> be wrong, but this is what I've just seen in my case).
>
> --
> Emre
>
>
>
>
> On Mon, Feb 16, 2015 at 2:11 PM, Sean Owen  wrote:
>
>> How are you deciding whether files are processed or not? It doesn't seem
>> possible from this code. Maybe it just seems so.
>> On Feb 16, 2015 12:51 PM, "Emre Sevinc"  wrote:
>>
>>> I've managed to solve this, but I still don't know exactly why my
>>> solution works:
>>>
>>> In my code I was trying to force Spark to produce output via:
>>>
>>>   jsonIn.print();
>>>
>>> jsonIn being a JavaDStream<String>.
>>>
>>> When I removed the code above and added the code below to force the
>>> output operation, hence the execution:
>>>
>>> jsonIn.foreachRDD(new Function<JavaRDD<String>, Void>() {
>>>   @Override
>>>   public Void call(JavaRDD<String> stringJavaRDD) throws Exception {
>>>     stringJavaRDD.collect();
>>>     return null;
>>>   }
>>> });
>>>
>>> It works as I expect, processing all of the 20 files I give to it,
>>> instead of stopping at 16.
>>>
>>> --
>>> Emre
>>>
>>>
>>> On Mon, Feb 16, 2015 at 12:56 PM, Emre Sevinc 
>>> wrote:
>>>
 Hello,

 I have an application in Java that uses Spark Streaming 1.2.1 in the
 following manner:

  - Listen to the input directory.
  - If a new file is copied to that input directory process it.
  - Process: contact a RESTful web service (running also locally and
 responsive), send the contents of the file, receive the response from the
 web service, write the results as a new file into the output directory
  - batch interval : 30 seconds
  - checkpoint interval: 150 seconds

 When I test the application locally with 1 or 2 files, it works
 perfectly fine as expected. I run it like:

 spark-submit --class myClass --verbose --master local[4]
 --deploy-mode client myApp.jar /in file:///out

 But then I've realized something strange when I copied 20 files to the
 INPUT directory: Spark Streaming detects all of the files, but it ends up
 processing *only 16 files*. And the remaining 4 are not processed at all.

 I've tried it with 19, 18, and then 17 files. Same result, only 16
 files end up in the output directory.

 Then I've tried it by copying 16 files at once to the input directory,
 and it can process all of the 16 files. That's why I call it magic number
 16.

 When I say it detects all of the files, I mean that in the logs I see
 the following lines when I copy 17 files:


 ===
 2015-02-16 12:30:51 INFO  SpotlightDriver:70 - spark.executor.memory:
 "1G"
 2015-02-16 12:30:51 WARN  Utils:71 - Your hostname, emre-ubuntu
 resolves to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on
 interface eth0)
 2015-02-16 12:30:51 WARN  Utils:71 - Set SPARK_LOCAL_IP if you need to
 bind to another address
 2015-02-16 12:30:52 INFO  Slf4jLogger:80 - Slf4jLogger started
 2015-02-16 12:30:52 WARN  NativeCodeLoader:62 - Unable to load
 native-hadoop library for your platform... using builtin-java classes where
 applicable
 2015-02-16 12:30:53 INFO  WriteAheadLogManager  for
 ReceivedBlockHandlerMaster:59 - Recovered 2 write ahead log files from
 file:/tmp/receivedBlockMetadata
 2015-02-16 12:30:53 INFO  WriteAheadLogManager  for
 ReceivedBlockHandlerMaster:59 - Reading from the logs:
 file:/tmp/receivedBlockMetadata/log-1424086110599-1424086170599
 file:/tmp/receivedBlockMetadata/log-1424086200861-1424086260861
 ---
 Time: 142408626 ms
 ---

 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
 ReceivedBlockHandlerM

Re: Magic number 16: Why doesn't Spark Streaming process more than 16 files?

2015-02-16 Thread Emre Sevinc
Hello Sean,

I did not understand your question very well, but what I do is checking the
output directory (and I have various logger outputs at various stages
showing the contents of an input file being processed, the response from
the web service, etc.).

By the way, I've already solved my problem by using foreachRDD instead of
print (see my second message in this thread). Apparently forcing Spark to
materialize DAG via print() is not the way to go. (My interpretation might
be wrong, but this is what I've just seen in my case).

--
Emre




On Mon, Feb 16, 2015 at 2:11 PM, Sean Owen  wrote:

> How are you deciding whether files are processed or not? It doesn't seem
> possible from this code. Maybe it just seems so.
> On Feb 16, 2015 12:51 PM, "Emre Sevinc"  wrote:
>
>> I've managed to solve this, but I still don't know exactly why my
>> solution works:
>>
>> In my code I was trying to force Spark to produce output via:
>>
>>   jsonIn.print();
>>
>> jsonIn being a JavaDStream<String>.
>>
>> When I removed the code above and added the code below to force the
>> output operation, hence the execution:
>>
>> jsonIn.foreachRDD(new Function<JavaRDD<String>, Void>() {
>>   @Override
>>   public Void call(JavaRDD<String> stringJavaRDD) throws Exception {
>>     stringJavaRDD.collect();
>>     return null;
>>   }
>> });
>>
>> It works as I expect, processing all of the 20 files I give to it,
>> instead of stopping at 16.
>>
>> --
>> Emre
>>
>>
>> On Mon, Feb 16, 2015 at 12:56 PM, Emre Sevinc 
>> wrote:
>>
>>> Hello,
>>>
>>> I have an application in Java that uses Spark Streaming 1.2.1 in the
>>> following manner:
>>>
>>>  - Listen to the input directory.
>>>  - If a new file is copied to that input directory process it.
>>>  - Process: contact a RESTful web service (running also locally and
>>> responsive), send the contents of the file, receive the response from the
>>> web service, write the results as a new file into the output directory
>>>  - batch interval : 30 seconds
>>>  - checkpoint interval: 150 seconds
>>>
>>> When I test the application locally with 1 or 2 files, it works
>>> perfectly fine as expected. I run it like:
>>>
>>> spark-submit --class myClass --verbose --master local[4]
>>> --deploy-mode client myApp.jar /in file:///out
>>>
>>> But then I've realized something strange when I copied 20 files to the
>>> INPUT directory: Spark Streaming detects all of the files, but it ends up
>>> processing *only 16 files*. And the remaining 4 are not processed at all.
>>>
>>> I've tried it with 19, 18, and then 17 files. Same result, only 16 files
>>> end up in the output directory.
>>>
>>> Then I've tried it by copying 16 files at once to the input directory,
>>> and it can process all of the 16 files. That's why I call it magic number
>>> 16.
>>>
>>> When I say it detects all of the files, I mean that in the logs I see
>>> the following lines when I copy 17 files:
>>>
>>>
>>> ===
>>> 2015-02-16 12:30:51 INFO  SpotlightDriver:70 - spark.executor.memory:
>>> "1G"
>>> 2015-02-16 12:30:51 WARN  Utils:71 - Your hostname, emre-ubuntu resolves
>>> to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface
>>> eth0)
>>> 2015-02-16 12:30:51 WARN  Utils:71 - Set SPARK_LOCAL_IP if you need to
>>> bind to another address
>>> 2015-02-16 12:30:52 INFO  Slf4jLogger:80 - Slf4jLogger started
>>> 2015-02-16 12:30:52 WARN  NativeCodeLoader:62 - Unable to load
>>> native-hadoop library for your platform... using builtin-java classes where
>>> applicable
>>> 2015-02-16 12:30:53 INFO  WriteAheadLogManager  for
>>> ReceivedBlockHandlerMaster:59 - Recovered 2 write ahead log files from
>>> file:/tmp/receivedBlockMetadata
>>> 2015-02-16 12:30:53 INFO  WriteAheadLogManager  for
>>> ReceivedBlockHandlerMaster:59 - Reading from the logs:
>>> file:/tmp/receivedBlockMetadata/log-1424086110599-1424086170599
>>> file:/tmp/receivedBlockMetadata/log-1424086200861-1424086260861
>>> ---
>>> Time: 142408626 ms
>>> ---
>>>
>>> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
>>> ReceivedBlockHandlerMaster:59 - Attempting to clear 0 old log files in
>>> file:/tmp/receivedBlockMetadata older than 142408596:
>>> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
>>> ReceivedBlockHandlerMaster:59 - Cleared log files in
>>> file:/tmp/receivedBlockMetadata older than 142408596
>>> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
>>> ReceivedBlockHandlerMaster:59 - Attempting to clear 0 old log files in
>>> file:/tmp/receivedBlockMetadata older than 142408596:
>>> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
>>> ReceivedBlockHandlerMaster:59 - Cleared log files in
>>> file:/tmp/receivedBlockMetadata older than 142408596
>>> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
>>> process 

Re: Magic number 16: Why doesn't Spark Streaming process more than 16 files?

2015-02-16 Thread Sean Owen
How are you deciding whether files are processed or not? It doesn't seem
possible from this code. Maybe it just seems so.
On Feb 16, 2015 12:51 PM, "Emre Sevinc"  wrote:

> I've managed to solve this, but I still don't know exactly why my solution
> works:
>
> In my code I was trying to force Spark to produce output via:
>
>   jsonIn.print();
>
> jsonIn being a JavaDStream<String>.
>
> When I removed the code above and added the code below to force the
> output operation, hence the execution:
>
> jsonIn.foreachRDD(new Function<JavaRDD<String>, Void>() {
>   @Override
>   public Void call(JavaRDD<String> stringJavaRDD) throws Exception {
>     stringJavaRDD.collect();
>     return null;
>   }
> });
>
> It works as I expect, processing all of the 20 files I give to it, instead
> of stopping at 16.
>
> --
> Emre
>
>
> On Mon, Feb 16, 2015 at 12:56 PM, Emre Sevinc 
> wrote:
>
>> Hello,
>>
>> I have an application in Java that uses Spark Streaming 1.2.1 in the
>> following manner:
>>
>>  - Listen to the input directory.
>>  - If a new file is copied to that input directory process it.
>>  - Process: contact a RESTful web service (running also locally and
>> responsive), send the contents of the file, receive the response from the
>> web service, write the results as a new file into the output directory
>>  - batch interval : 30 seconds
>>  - checkpoint interval: 150 seconds
>>
>> When I test the application locally with 1 or 2 files, it works perfectly
>> fine as expected. I run it like:
>>
>> spark-submit --class myClass --verbose --master local[4]
>> --deploy-mode client myApp.jar /in file:///out
>>
>> But then I've realized something strange when I copied 20 files to the
>> INPUT directory: Spark Streaming detects all of the files, but it ends up
>> processing *only 16 files*. And the remaining 4 are not processed at all.
>>
>> I've tried it with 19, 18, and then 17 files. Same result, only 16 files
>> end up in the output directory.
>>
>> Then I've tried it by copying 16 files at once to the input directory,
>> and it can process all of the 16 files. That's why I call it magic number
>> 16.
>>
>> When I say it detects all of the files, I mean that in the logs I see
>> the following lines when I copy 17 files:
>>
>>
>> ===
>> 2015-02-16 12:30:51 INFO  SpotlightDriver:70 - spark.executor.memory: "1G"
>> 2015-02-16 12:30:51 WARN  Utils:71 - Your hostname, emre-ubuntu resolves
>> to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface
>> eth0)
>> 2015-02-16 12:30:51 WARN  Utils:71 - Set SPARK_LOCAL_IP if you need to
>> bind to another address
>> 2015-02-16 12:30:52 INFO  Slf4jLogger:80 - Slf4jLogger started
>> 2015-02-16 12:30:52 WARN  NativeCodeLoader:62 - Unable to load
>> native-hadoop library for your platform... using builtin-java classes where
>> applicable
>> 2015-02-16 12:30:53 INFO  WriteAheadLogManager  for
>> ReceivedBlockHandlerMaster:59 - Recovered 2 write ahead log files from
>> file:/tmp/receivedBlockMetadata
>> 2015-02-16 12:30:53 INFO  WriteAheadLogManager  for
>> ReceivedBlockHandlerMaster:59 - Reading from the logs:
>> file:/tmp/receivedBlockMetadata/log-1424086110599-1424086170599
>> file:/tmp/receivedBlockMetadata/log-1424086200861-1424086260861
>> ---
>> Time: 142408626 ms
>> ---
>>
>> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
>> ReceivedBlockHandlerMaster:59 - Attempting to clear 0 old log files in
>> file:/tmp/receivedBlockMetadata older than 142408596:
>> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
>> ReceivedBlockHandlerMaster:59 - Cleared log files in
>> file:/tmp/receivedBlockMetadata older than 142408596
>> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
>> ReceivedBlockHandlerMaster:59 - Attempting to clear 0 old log files in
>> file:/tmp/receivedBlockMetadata older than 142408596:
>> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
>> ReceivedBlockHandlerMaster:59 - Cleared log files in
>> file:/tmp/receivedBlockMetadata older than 142408596
>> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
>> process : 1
>> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
>> process : 1
>> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
>> process : 1
>> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
>> process : 1
>> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
>> process : 1
>> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
>> process : 1
>> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
>> process : 1
>> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
>> process : 1
>> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
>> process : 1
>> 2015-02-

Re: Magic number 16: Why doesn't Spark Streaming process more than 16 files?

2015-02-16 Thread Emre Sevinc
I've managed to solve this, but I still don't know exactly why my solution
works:

In my code I was trying to force Spark to produce output via:

  jsonIn.print();

jsonIn being a JavaDStream<String>.

When I removed the code above and added the code below to force the
output operation, hence the execution:

jsonIn.foreachRDD(new Function<JavaRDD<String>, Void>() {
  @Override
  public Void call(JavaRDD<String> stringJavaRDD) throws Exception {
    stringJavaRDD.collect();
    return null;
  }
});

It works as I expect, processing all of the 20 files I give to it, instead
of stopping at 16.

--
Emre


On Mon, Feb 16, 2015 at 12:56 PM, Emre Sevinc  wrote:

> Hello,
>
> I have an application in Java that uses Spark Streaming 1.2.1 in the
> following manner:
>
>  - Listen to the input directory.
>  - If a new file is copied to that input directory process it.
>  - Process: contact a RESTful web service (running also locally and
> responsive), send the contents of the file, receive the response from the
> web service, write the results as a new file into the output directory
>  - batch interval : 30 seconds
>  - checkpoint interval: 150 seconds
>
> When I test the application locally with 1 or 2 files, it works perfectly
> fine as expected. I run it like:
>
> spark-submit --class myClass --verbose --master local[4]
> --deploy-mode client myApp.jar /in file:///out
>
> But then I've realized something strange when I copied 20 files to the
> INPUT directory: Spark Streaming detects all of the files, but it ends up
> processing *only 16 files*. And the remaining 4 are not processed at all.
>
> I've tried it with 19, 18, and then 17 files. Same result, only 16 files
> end up in the output directory.
>
> Then I've tried it by copying 16 files at once to the input directory, and
> it can process all of the 16 files. That's why I call it magic number 16.
>
> When I say it detects all of the files, I mean that in the logs I see the
> following lines when I copy 17 files:
>
>
> ===
> 2015-02-16 12:30:51 INFO  SpotlightDriver:70 - spark.executor.memory: "1G"
> 2015-02-16 12:30:51 WARN  Utils:71 - Your hostname, emre-ubuntu resolves
> to a loopback address: 127.0.1.1; using 10.0.2.15 instead (on interface
> eth0)
> 2015-02-16 12:30:51 WARN  Utils:71 - Set SPARK_LOCAL_IP if you need to
> bind to another address
> 2015-02-16 12:30:52 INFO  Slf4jLogger:80 - Slf4jLogger started
> 2015-02-16 12:30:52 WARN  NativeCodeLoader:62 - Unable to load
> native-hadoop library for your platform... using builtin-java classes where
> applicable
> 2015-02-16 12:30:53 INFO  WriteAheadLogManager  for
> ReceivedBlockHandlerMaster:59 - Recovered 2 write ahead log files from
> file:/tmp/receivedBlockMetadata
> 2015-02-16 12:30:53 INFO  WriteAheadLogManager  for
> ReceivedBlockHandlerMaster:59 - Reading from the logs:
> file:/tmp/receivedBlockMetadata/log-1424086110599-1424086170599
> file:/tmp/receivedBlockMetadata/log-1424086200861-1424086260861
> ---
> Time: 142408626 ms
> ---
>
> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
> ReceivedBlockHandlerMaster:59 - Attempting to clear 0 old log files in
> file:/tmp/receivedBlockMetadata older than 142408596:
> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
> ReceivedBlockHandlerMaster:59 - Cleared log files in
> file:/tmp/receivedBlockMetadata older than 142408596
> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
> ReceivedBlockHandlerMaster:59 - Attempting to clear 0 old log files in
> file:/tmp/receivedBlockMetadata older than 142408596:
> 2015-02-16 12:31:00 INFO  WriteAheadLogManager  for
> ReceivedBlockHandlerMaster:59 - Cleared log files in
> file:/tmp/receivedBlockMetadata older than 142408596
> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:30 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:31 INFO  FileInputFormat:280 - Total input paths to
> process : 1
> 2015-02-16 12:31:31 INFO  FileInputFormat:280 - Total input pat