Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-27 Thread Rohit Karlupia
Let me be more specific:

With GC/CPU aware task scheduling, the user doesn't have to worry about
specifying cores carefully. Even if the user always specifies cores = 100 or
1024 for every executor, they will still not get OOMs (in the vast majority of
cases). Internally, the scheduler varies the number of tasks assigned to
each executor, ensuring that the executor doesn't run into long GC cycles or
cause useless context switches. In short, as long as users configure cores per
executor on the higher side, it is harmless in general and can
actually increase the throughput of the system by utilising
unused memory or CPU capacity.

*For example:* let's say we are using a 64GB machine with 8 cores, and
one big 54GB executor with all 8 cores. This leaves, on average, roughly
7GB of memory per task. Some tasks will take more than 7GB and some will
take less. Consider a case where one task takes 34GB of memory. Such a
stage will very likely fail, depending on whether the 7 other tasks
scheduled at the same time need more than the remaining 20GB of memory (54
- 34). The usual approach to solving this problem without changing the
application is to sacrifice cores and increase memory per core. The
stable configuration in this case could be 2 cores for the 54GB executor,
which wastes 6 cores "throughout" the application.
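In spark-submit terms, that workaround amounts to something like this
(illustrative flags only; the class and jar names are placeholders):

spark-submit --executor-memory 54g --executor-cores 2 \
  --class com.example.YourApp your-app.jar

which keeps the stage stable but leaves 6 of the 8 cores idle for the
entire run.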

With GC/CPU aware task scheduling, one can configure the same executor with,
say, 64 cores and the application is very likely to succeed. Being aware of
GC, the scheduler will stop scheduling tasks on the executor, making it
possible for the running task to consume all 54GB of memory. This ensures
that we only "sacrifice" cores when necessary, not in general and not
for the whole duration of the application. On the other hand, if the
scheduler finds that in spite of running 8 concurrent tasks we still
have memory and CPU to spare, it will schedule more tasks, up to the 64
configured. So we not only get stability against skew, we also get
higher throughput when possible.
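If it helps to see the idea in code, below is a toy, self-contained Scala
sketch of deriving an executor's task-slot count from observed GC and CPU
pressure. This is not Spark or Sparklens code; the names and thresholds are
illustrative assumptions only.

object GcAwareConcurrency {
  // configuredSlots: cores the user configured (can be high, e.g. 64)
  // gcFraction:      fraction of recent wall-clock time spent in GC, in [0, 1]
  // cpuUtilization:  fraction of CPU busy time, in [0, 1]
  def targetConcurrency(configuredSlots: Int,
                        gcFraction: Double,
                        cpuUtilization: Double): Int = {
    if (gcFraction > 0.3) {
      1                                  // heavy GC: let one task take the whole heap
    } else if (gcFraction > 0.1 || cpuUtilization > 0.9) {
      math.max(1, configuredSlots / 8)   // mild pressure: stay near physical cores
    } else {
      configuredSlots                    // no contention: harvest spare capacity
    }
  }

  def main(args: Array[String]): Unit = {
    println(targetConcurrency(64, gcFraction = 0.45, cpuUtilization = 0.5)) // 1
    println(targetConcurrency(64, gcFraction = 0.15, cpuUtilization = 0.6)) // 8
    println(targetConcurrency(64, gcFraction = 0.02, cpuUtilization = 0.4)) // 64
  }
}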

Hope that helps.

thanks,
rohitk

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-26 Thread Fawze Abujaber
Thanks for the update.

What about cores per executor?

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-26 Thread Rohit Karlupia
Thanks Fawze!

On the memory front, I am currently working on GC and CPU aware task
scheduling. I have seen wonderful results in my tests so far. Once the
feature is complete and available, Spark will work with whatever memory is
provided (at least enough for the largest possible task). It will also
allow you to run, say, 64 concurrent tasks on an 8-core machine, if the
nature of the tasks doesn't lead to memory or CPU contention. Essentially,
why worry about tuning memory when you can let Spark take care of it
automatically based on memory pressure? I will post details when we are
ready. So yes, we are working on memory, but it will not be a tool; it
will be a transparent feature.

thanks,
rohitk

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-26 Thread Fawze Abujaber
Hi Rohit,

I would like to thank you for the unlimited patience and support that
you are providing here and behind the scenes for all of us.

The tool is amazing and easy to use, and most of the metrics are easy to
understand...

As for whether we need to run it in cluster mode and all the time: I think
we can skip that, since one or a few runs give you the big picture of how
the job runs with a given configuration, and it's not too complicated to
run it using spark-submit.

I think it would be very helpful if Sparklens could also show how the job
would run with different configurations of cores and memory. A Spark job
with 1 executor and 1 core runs differently from a Spark job with 1
executor and 3 cores, and the same goes for comparing different executor
memory sizes.

Overall, it is a very good starting point, but getting these metrics into
the tool would be a GAME CHANGER.

@Rohit, Huge THANK YOU

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-26 Thread Rohit Karlupia
Hi Shmuel,

In general it is hard to pinpoint the exact code responsible for a
specific stage. For example, when using Spark SQL, depending upon the kinds
of joins and aggregations used in a single line of query, we may end up with
multiple stages in the Spark application. I usually try to split the
code into smaller chunks and also use the Spark UI, which has a special
section for SQL. It can also show specific backtraces, but as I explained
earlier, they might not be very helpful. Sparklens does help you ask the
right questions, but is not mature enough to answer all of them.
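For readers following along, a hedged sketch of how such a report is
typically produced (the jar path and application class are placeholders;
the listener class name is taken from the Sparklens README):

spark-submit \
  --jars /path/to/sparklens_2.11-0.1.0.jar \
  --conf spark.extraListeners=com.qubole.sparklens.QuboleJobListener \
  --class com.example.YourApp your-app.jar

The report sections below are then printed when the application finishes.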

Understanding the report:

*1) The first part shows the total aggregate metrics for the application.*

Printing application metrics.

 AggregateMetrics (Application Metrics) total measurements 1869
 NAME                              SUM            MIN        MAX         MEAN
 diskBytesSpilled                  0.0 KB         0.0 KB     0.0 KB      0.0 KB
 executorRuntime                   15.1 hh        3.0 ms     4.0 mm      29.1 ss
 inputBytesRead                    26.1 GB        0.0 KB     43.8 MB     14.3 MB
 jvmGCTime                         11.0 mm        0.0 ms     2.1 ss      354.0 ms
 memoryBytesSpilled                314.2 GB       0.0 KB     1.1 GB      172.1 MB
 outputBytesWritten                0.0 KB         0.0 KB     0.0 KB      0.0 KB
 peakExecutionMemory               0.0 KB         0.0 KB     0.0 KB      0.0 KB
 resultSize                        12.9 MB        2.0 KB     40.9 KB     7.1 KB
 shuffleReadBytesRead              107.7 GB       0.0 KB     276.0 MB    59.0 MB
 shuffleReadFetchWaitTime          2.0 ms         0.0 ms     0.0 ms      0.0 ms
 shuffleReadLocalBlocks            2,318          0          68          1
 shuffleReadRecordsRead            3,413,511,099  0          8,251,926   1,826,383
 shuffleReadRemoteBlocks           291,126        0          824         155
 shuffleWriteBytesWritten          107.6 GB       0.0 KB     257.6 MB    58.9 MB
 shuffleWriteRecordsWritten        3,408,133,175  0          7,959,055   1,823,506
 shuffleWriteTime                  8.7 mm         0.0 ms     1.8 ss      278.2 ms
 taskDuration                      15.4 hh        12.0 ms    4.1 mm      29.7 ss


*2) Here we show the number of hosts used and the executors per host. I have
seen users set executor memory to 33GB on a 64GB machine: a second 33GB
executor cannot fit, so 31GB of memory is simply wasted.*

Total Hosts 135


Host server86.cluster.com startTime 02:26:21:081 executors count 3
Host server164.cluster.com startTime 02:30:12:204 executors count 1
Host server28.cluster.com startTime 02:31:09:023 executors count 1
Host server78.cluster.com startTime 02:26:08:844 executors count 5
Host server124.cluster.com startTime 02:26:10:523 executors count 3
Host server100.cluster.com startTime 02:30:24:073 executors count 1
Done printing host timeline
*3) The times at which executors were added. Not all executors are
available at the start of the application.*

Printing executors timeline
Total Hosts 135
Total Executors 250
At 02:26 executors added 52 & removed  0 currently available 52
At 02:27 executors added 10 & removed  0 currently available 62
At 02:28 executors added 13 & removed  0 currently available 75
At 02:29 executors added 81 & removed  0 currently available 156
At 02:30 executors added 48 & removed  0 currently available 204
At 02:31 executors added 45 & removed  0 currently available 249
At 02:32 executors added 1 & removed  0 currently available 250


*4) How the stages within the jobs were scheduled. Helps you understand
which stages ran in parallel and which are dependent on others.*

Printing Application timeline
02:26:47:654  Stage 3 ended : maxTaskTime 3117 taskCount 1
02:26:47:708  Stage 4 started : duration 00m 02s
02:26:49:898  Stage 4 ended : maxTaskTime 226 taskCount 200
02:26:49:901 JOB 3 ended
02:26:56:234 JOB 4 started : duration 08m 28s
[  5 |||                                                          ]
[  6  |||                                                         ]
[  9                                                              ]
[ 10 ||                                                           ]
[ 11                                                              ]
[ 12 ||                                                           ]
[ 13                                                              ]
[ 14   |||                                                        ]
[ 15                                                       ||     ]
02:26:58:095  Stage 5 started : duration 00m 44s
02:27:42:816  Stage 5 ended : maxTaskTime 37214 taskCount 23
02:27:03:478  Stage 6 started : duration 02m 04s
02:29:07:517  Stage 6 ended : maxTaskTime 35578 taskCount 601
02:28:56:449  Stage 9 started : duration 00m 46s
02:29:42:625  Stage 9 ended : maxTaskTime 7196 

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-25 Thread Shmuel Blitz
Hi Rohit,

Thanks for the analysis.

I can use repartition on the slow task. But how can I tell which part of
the code is responsible for the slow tasks?

It would be great if you could further explain the rest of the output.

Thanks in advance,
Shmuel

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-25 Thread Rohit Karlupia
Thanks Shmuel for trying out Sparklens!

A couple of things that I noticed:
1) 250 executors is probably overkill for this job. It would run in about
the same time with around 100.
2) Many of the stages that take a long time have only 200 tasks, whereas we
have 750 cores available for the job. 200 is the default value of
spark.sql.shuffle.partitions. You could try increasing the value of
spark.sql.shuffle.partitions to at least 750.
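For example (a hedged sketch; the class and jar names are placeholders):

spark-submit \
  --class com.example.YourApp \
  --conf spark.sql.shuffle.partitions=750 \
  your-app.jar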

thanks,
rohitk

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-25 Thread Shmuel Blitz
I ran it on a single job.
SparkLens adds overhead to the job duration, so I'm not ready to enable it
by default on all our jobs.

Attached is the output.

Still trying to understand what exactly it means.

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-25 Thread Fawze Abujaber
Nice!

Shmuel, were you able to run it at the cluster level or for a specific job?

Did you configure it in spark-defaults.conf?

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-25 Thread Shmuel Blitz
Just to let you know, I have managed to run SparkLens on our cluster.

I switched to the spark_1.6 branch, and also compiled against the specific
image of Spark we are using (cdh5.7.6).

Now I need to figure out what the output means... :P

Shmuel

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-23 Thread Fawze Abujaber
Quick question:

How do I add --jars /path/to/sparklens_2.11-0.1.0.jar to the
spark-defaults conf? Should it be

spark.driver.extraClassPath /path/to/sparklens_2.11-0.1.0.jar

or should I use the spark.jars option? Could anyone give an example of how
it should look, and should the path to the jar be an HDFS path, since I'm
using it in cluster mode?
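For illustration, the two options in question would look like this in
spark-defaults.conf (a sketch, not a confirmed recipe; paths are
placeholders):

# Option A: a local path that must already exist on the driver node
spark.driver.extraClassPath /path/to/sparklens_2.11-0.1.0.jar

# Option B: let Spark distribute the jar; an hdfs:// path works in cluster mode
spark.jars hdfs:///path/to/sparklens_2.11-0.1.0.jar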

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-22 Thread Fawze Abujaber
Hi Shmuel,

Did you compile the code against the right branch for Spark 1.6?

I tested it and it looks to be working; I'm now testing the branch more
widely. Please use the branch for Spark 1.6.

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-22 Thread Shmuel Blitz
Hi Rohit,

Thanks for sharing this great tool.
I tried running a spark job with the tool, but it failed with an
*IncompatibleClassChangeError* exception.

I have opened an issue on GitHub
(https://github.com/qubole/sparklens/issues/1).

Shmuel

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-22 Thread Shmuel Blitz
Thanks.

We will give this a try and report back.

Shmuel

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-22 Thread Rohit Karlupia
Thanks everyone!
Please share how it works and how it doesn't. Both help.

Fawze, I just made a few changes to make this work with Spark 1.6. Can you
please try building from the *spark_1.6* branch?
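For reference, building from that branch would look roughly like this (a
sketch assuming the project's standard sbt workflow):

git clone https://github.com/qubole/sparklens
cd sparklens
git checkout spark_1.6
sbt clean package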

thanks,
rohitk

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-21 Thread Fawze Abujaber
It's super amazing! I see it was tested on Spark 2.0.0 and above; what
about Spark 1.6, which is still part of Cloudera's main versions?

We have a vast number of Spark applications on version 1.6.0.

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-21 Thread Holden Karau
Super exciting! I look forward to digging through it this weekend.

Re: Open sourcing Sparklens: Qubole's Spark Tuning Tool

2018-03-21 Thread रविशंकर नायर
Excellent. You filled a missing link.

Best,
Passion

On Wed, Mar 21, 2018 at 11:36 PM, Rohit Karlupia  wrote:

> Hi,
>
> Happy to announce the availability of Sparklens as an open source project.
> It helps in understanding the scalability limits of spark applications and
> can be a useful guide on the path towards tuning applications for lower
> runtime or cost.
>
> Please clone from here: https://github.com/qubole/sparklens
> Old blogpost: https://www.qubole.com/blog/introducing-quboles-spark-tuning-tool/
>
> thanks,
> rohitk
>
> PS: Thanks for the patience. It took couple of months to get back on this.
>