Spark Summit East - Full Schedule Available

2016-01-18 Thread Scott Walent
Join the Apache Spark community at the 2nd annual Spark Summit East from
February 16-18, 2016 in New York City.

We will kick things off with a Spark update from Matei Zaharia followed by
over 60 talks that were selected by the program committee. The agenda this
year includes enterprise talks from Microsoft, Bloomberg and Comcast as
well as the popular developer, data science, research and application
tracks.  See the full agenda at https://spark-summit.org/east-2016/schedule.


If you are new to Spark or looking to improve on your knowledge of the
technology, we are offering three levels of Spark Training: Spark
Essentials, Advanced Exploring Wikipedia with Spark, and Data Science with
Spark. Visit https://spark-summit.org/east-2016/schedule/spark-training for
details.

Space is limited and we anticipate selling out, so register now! Use promo
code "ApacheListEast" to save 20% when registering before January 29, 2016.
Register at https://spark-summit.org/register.

We look forward to seeing you there.

Scott and the Summit Organizers


Fwd: Elasticsearch sink for metrics

2016-01-18 Thread Pete Robbins
The issue I had was with the ElasticsearchReporter and how it maps, e.g., a
Gauge into JSON. The "value" field was typed according to whatever the first
Gauge reported, e.g. int, which caused issues with some of my other gauges,
which were doubles.

As I said, I've just started looking at this and wanted to see whether it
had already been implemented before continuing.
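[Editor's note: one way to avoid this "first value wins" behaviour of
Elasticsearch dynamic mapping is to install an index template that forces
any field named "value" to double before the reporter writes its first
document. This is only a sketch; the template name, index pattern
("spark-metrics-*") and field name are assumptions, and the syntax below
is for the ES 1.x/2.x era this thread dates from.]

```shell
# Hypothetical index template (names are assumptions) that pins any
# field called "value" to double, so a gauge that first reports an int
# does not lock the mapping to an integer type for later double gauges.
curl -XPUT 'http://localhost:9200/_template/spark-metrics' -d '{
  "template": "spark-metrics-*",
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "gauge_values": {
            "match": "value",
            "mapping": { "type": "double" }
          }
        }
      ]
    }
  }
}'
```

With such a template in place, the type of the first reported gauge no
longer matters; all "value" fields are indexed as doubles.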

On 15 January 2016 at 09:18, Nick Pentreath 
wrote:

> I haven't come across anything, but could you provide more detail on what
> issues you're encountering?
>
>
>
> On Fri, Jan 15, 2016 at 11:09 AM, Pete Robbins 
> wrote:
>
>> Has anyone tried pushing Spark metrics into elasticsearch? We have other
>> metrics, eg some runtime information, going into ES and would like to be
>> able to combine this with the Spark metrics for visualization with Kibana.
>>
>> I experimented with a new sink using ES's ElasticsearchReporter for the
>> Coda Hale metrics but have a few issues with default mappings.
>>
>> Has anyone already implemented this before I start to dig deeper?
>>
>> Cheers,
>>
>>
>>
>


Unable to compile and test Spark in IntelliJ

2016-01-18 Thread Hyukjin Kwon
Hi all,

I usually work on Spark in IntelliJ.

Before this commit,
https://github.com/apache/spark/commit/7cd7f2202547224593517b392f56e49e4c94cabc
(`[SPARK-12575][SQL] Grammar parity with existing SQL parser`), I was able
to simply open the project and then run some tests with IntelliJ's Run
button.

However, it looks like that PR added some ANTLR files for parsing, and I
can no longer run the tests as I did. So I ended up running mvn compile
first and then running the tests in IntelliJ.

I can still run the tests with sbt or Maven on the command line, but this
is a bit inconvenient. I just want to run the tests from IntelliJ as I did
before.

I followed
https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools
several times, but IntelliJ still emits exceptions such as

Error:(779, 34) not found: value SparkSqlParser
case ast if ast.tokenType == SparkSqlParser.TinyintLiteral =>
 ^

and I still have to run mvn compile or mvn test first to work around them.
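[Editor's note: the underlying problem is that the parser classes are now
generated from ANTLR grammars at build time, so they do not exist until a
command-line build has run. A minimal sketch of the workaround, assuming a
checkout of apache/spark at its root; the generated-sources path is an
assumption based on the standard Maven layout:]

```shell
# Generate the ANTLR-derived sources once on the command line; after
# this, IntelliJ can resolve classes like SparkSqlParser.
build/mvn -DskipTests compile

# If IntelliJ still cannot resolve the generated classes, the
# target/generated-sources directories of the affected modules may need
# to be marked as generated-sources roots in the module settings.
```

This only has to be repeated when the grammar files change, since the
generated sources otherwise stay valid between IntelliJ runs.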

Is there any good way to run some Spark tests within IntelliJ as I did
before?

Thanks!