Yes. The number of tuples processed in a batch = the sum of the tuples
received by all the receivers.

In the screenshot, there was a batch with 69.9K records, and there was a batch
that took 1 s 473 ms. These may or may not be the same batch.
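
If you want both numbers together per batch instead of reading them off the
UI, you can register a StreamingListener. A minimal sketch in Scala (the
listener name is mine, and it assumes a Spark version where BatchInfo exposes
numRecords directly):

import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

// Hypothetical listener: logs record count and processing time per batch.
class ThroughputListener extends StreamingListener {
  override def onBatchCompleted(batch: StreamingListenerBatchCompleted): Unit = {
    val info = batch.batchInfo
    val records = info.numRecords          // total across all receivers
    info.processingDelay.foreach { ms =>   // processing time in ms, if the batch finished
      val rate = if (ms > 0) records * 1000.0 / ms else 0.0
      println(f"Batch ${info.batchTime}: $records%d records in $ms%d ms ($rate%.1f records/s)")
    }
  }
}

// Register it before starting the context:
// ssc.addStreamingListener(new ThroughputListener)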

TD

On Wed, Feb 25, 2015 at 10:11 AM, Josh J <joshjd...@gmail.com> wrote:

> If I'm using the Kafka receiver, can I assume the number of records
> processed in the batch is the sum of the number of records processed by the
> Kafka receivers?
>
> So in the screenshot attached, the max number of tuples processed in a batch
> is 42.7K + 27.2K = 69.9K, with a max processing time of 1 second 473 ms?
>
> On Wed, Feb 25, 2015 at 8:48 AM, Akhil Das <ak...@sigmoidanalytics.com>
> wrote:
>
>> By throughput, do you mean the number of events processed, etc.?
>>
>> [image: Inline image 1]
>>
>> The Streaming tab already has these statistics.
>>
>>
>>
>> Thanks
>> Best Regards
>>
>> On Wed, Feb 25, 2015 at 9:59 PM, Josh J <joshjd...@gmail.com> wrote:
>>
>>>
>>> On Wed, Feb 25, 2015 at 7:54 AM, Akhil Das <ak...@sigmoidanalytics.com>
>>> wrote:
>>>
>>>> For Spark Streaming applications, there is already a tab called
>>>> "Streaming" which displays the basic statistics.
>>>
>>>
>>> Would I just need to extend this tab to add the throughput?
>>>
>>
>>
>
>
