Starting a new Spark codebase, Python or Scala / Java?

2016-11-21 Thread Brandon White
Hello all, I will be starting a new Spark codebase and I would like to get opinions on using Python over Scala. Historically, the Scala API has always been the strongest interface to Spark. Is this still true? Are there still many benefits and additional features in the Scala API that are not

Re: Using spark package XGBoost

2016-08-14 Thread Brandon White
The XGBoost integration with Spark is currently only supported for RDDs; there is a ticket for DataFrame support and folks claim to be working on it. On Aug 14, 2016 8:15 PM, "Jacek Laskowski" wrote: > Hi, > > I've never worked with the library and speaking about sbt setup only. > > It

Setting spark.sql.shuffle.partitions Dynamically

2016-07-27 Thread Brandon White
Hello, My platform runs hundreds of Spark jobs every day, each with its own data size from 20 MB to 20 TB. This means that we need to set resources dynamically. One major pain point for doing this is spark.sql.shuffle.partitions, the number of partitions to use when shuffling data for joins or
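
A minimal sketch of one way to do this, assuming a shell-style sc / sqlContext and an input path whose size can be read from the filesystem; the ~200 MB per-partition target and the path are illustrative assumptions, not values from this thread:

    import org.apache.hadoop.fs.{FileSystem, Path}

    // Estimate the input size, then size the shuffle to a target bytes-per-partition.
    val inputPath = new Path("/path/to/input")                      // hypothetical path
    val fs = FileSystem.get(sc.hadoopConfiguration)
    val inputBytes = fs.getContentSummary(inputPath).getLength

    val targetPartitionBytes = 200L * 1024 * 1024                   // assumed ~200 MB per partition
    val shufflePartitions = math.max(1L, inputBytes / targetPartitionBytes).toInt

    // Must be set before running the query that shuffles.
    sqlContext.setConf("spark.sql.shuffle.partitions", shufflePartitions.toString)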

Optimal Amount of Tasks Per size of data in memory

2016-07-20 Thread Brandon White
What is the best heuristic for setting the number of partitions/tasks on an RDD based on the size of the RDD in memory? The Spark docs say that the number of partitions/tasks should be 2-3x the number of CPU cores, but this does not make sense for all data sizes. Sometimes, this number is way too
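
One hedged sketch of a size-based heuristic, assuming a small sample is representative of the whole RDD; the 1000-record sample and 128 MB target are illustrative assumptions:

    import org.apache.spark.util.SizeEstimator

    // Estimate bytes per record from a sample, then pick a partition count that keeps
    // each partition near a target size, but never below the 2-3x-cores guidance.
    val sample = rdd.take(1000)
    val bytesPerRecord = SizeEstimator.estimate(sample) / math.max(1, sample.length)
    val estimatedTotalBytes = bytesPerRecord * rdd.count()

    val targetPartitionBytes = 128L * 1024 * 1024
    val numPartitions = math.max(
      (estimatedTotalBytes / targetPartitionBytes).toInt,
      sc.defaultParallelism)

    val repartitioned = rdd.repartition(numPartitions)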

Size of cached dataframe

2016-07-15 Thread Brandon White
Is there any public API to get the size of a dataframe in cache? It's visible in the Spark UI, but I don't see an API to access this information. Do I need to build it myself using a forked version of Spark?
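
One possible workaround, assuming the developer API is acceptable: sc.getRDDStorageInfo exposes the same per-RDD memory numbers the UI shows, though the name a cached DataFrame appears under varies by version, so the filter below is an assumption:

    // Materialize the cache first, then read the storage info the UI is built from.
    df.cache().count()

    sc.getRDDStorageInfo
      .filter(_.name.toLowerCase.contains("in-memory"))             // assumed name match; check the UI
      .foreach(info => println(s"${info.name}: ${info.memSize} bytes in memory"))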

Difference between Dataframe and RDD Persisting

2016-06-26 Thread Brandon White
What is the difference between persisting a dataframe and an RDD? When I persist my RDD, the UI says it takes 50G or more of memory. When I persist my dataframe, the UI says it takes 9G or less of memory. Does the dataframe not persist the actual content? Is it better / faster to persist an RDD

Re: What does it mean when a executor has negative active tasks?

2016-06-18 Thread Brandon White
<https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw> http://talebzadehmich.wordpress.com On 18 June 2016 at 17:50, Brandon White <bwwintheho.

Spark ML - Is it safe to schedule two trainings job at the same time or will worker state be corrupted?

2016-06-09 Thread Brandon White
For example, say I want to train two Linear Regressions and two Gradient Boosted Tree Regressions. Using different threads, Spark allows you to submit jobs at the same time (see: http://spark.apache.org/docs/latest/job-scheduling.html). If I schedule two or more training jobs and they are running at the same
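
A minimal sketch of the threaded-submission pattern from the linked docs, assuming spark.ml estimators and a hypothetical trainingDF with label/features columns; whether concurrent fits are safe is exactly the question the thread asks:

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration.Duration
    import org.apache.spark.ml.regression.{GBTRegressor, LinearRegression}

    // Submit both fits from separate threads; each fit runs its own Spark jobs.
    val lrModelF  = Future { new LinearRegression().fit(trainingDF) }
    val gbtModelF = Future { new GBTRegressor().fit(trainingDF) }

    val lrModel  = Await.result(lrModelF, Duration.Inf)
    val gbtModel = Await.result(gbtModelF, Duration.Inf)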

Re: BlockManager crashing applications

2016-05-08 Thread Brandon White
, 2016 5:55 PM, "Ashish Dubey" <ashish@gmail.com> wrote: Brandon, how much memory are you giving to your executors - did you check if there were dead executors in your application logs.. Most likely you require higher memory for executors.. Ashish On Sun, May 8, 2016 at 1:01 P

BlockManager crashing applications

2016-05-08 Thread Brandon White
Hello all, I am running a Spark application which schedules multiple Spark jobs. Something like: val df = sqlContext.read.parquet("/path/to/file") filterExpressions.par.foreach { expression => df.filter(expression).count() } When the block manager fails to fetch a block, it throws an

QueryExecution to String breaks with OOM

2016-05-02 Thread Brandon White
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2367) at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130) at

Is DataFrame randomSplit Deterministic?

2016-05-01 Thread Brandon White
If I have the same data, the same ratios, and same sample seed, will I get the same splits every time?
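
A small sketch of how to check this empirically, assuming the source DataFrame's partitioning is stable between calls (if the input is re-read or re-shuffled, the splits can differ):

    // Same data, same ratios, same seed: the two splits are expected to match.
    val Array(train1, test1) = df.randomSplit(Array(0.7, 0.3), seed = 42L)
    val Array(train2, test2) = df.randomSplit(Array(0.7, 0.3), seed = 42L)

    assert(train1.except(train2).count() == 0)
    assert(test1.except(test2).count() == 0)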

Re: Dataframe saves for a large set but throws OOM for a small dataset

2016-04-30 Thread Brandon White
randomSplit instead of randomSample On Apr 30, 2016 1:51 PM, "Brandon White" <bwwintheho...@gmail.com> wrote: > val df = globalDf > val filteredDfs= filterExpressions.map { expr => > val filteredDf = df.filter(expr) > val samples = filteredDf.randomSample([.7, .

Dataframe saves for a large set but throws OOM for a small dataset

2016-04-30 Thread Brandon White
Hello, I am writing two datasets. One dataset is 2x larger than the other. Both datasets are written to parquet the exact same way using df.write.mode("Overwrite").parquet(outputFolder) The smaller dataset OOMs while the larger dataset writes perfectly fine. Here is the stack trace: Any ideas

DataFrame to DataSet without Predefined Class

2016-04-27 Thread Brandon White
I am reading parquet files into a dataframe. The schema varies depending on the data, so I have no way to write a predefined class. Is there any way to go from DataFrame to Dataset without predefining a case class? Can I build a class from my dataframe schema?
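
One hedged option in later versions (Spark 2.x assumption): stay with Row and build the encoder from the DataFrame's runtime schema rather than from a case class:

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.catalyst.encoders.RowEncoder

    // Encoder derived from the schema discovered at read time, not from a class.
    implicit val rowEncoder = RowEncoder(df.schema)
    val ds = df.map(row => row)                                     // Dataset[Row] with the same schema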

How can I bucketize / group a DataFrame from parquet files?

2016-04-26 Thread Brandon White
I am creating a dataFrame from parquet files. The schema is based on the parquet files, so I do not know it beforehand. What I want to do is group the entire DF into buckets based on a column. val df = sqlContext.read.parquet("/path/to/files") val groupedBuckets: DataFrame[String, Array[Rows]] =
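
A rough sketch of one way to get row buckets without knowing the schema ahead of time, dropping to the RDD API; "bucketColumn" is a hypothetical column name, and groupBy pulls each whole bucket onto a single partition, so it only works if each bucket fits in memory:

    val df = sqlContext.read.parquet("/path/to/files")

    // RDD[(String, Iterable[Row])]: all rows for a given key end up in one bucket.
    val groupedBuckets = df.rdd.groupBy(row => row.getAs[String]("bucketColumn"))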

Re: subscribe

2015-08-22 Thread Brandon White
https://www.youtube.com/watch?v=umDr0mPuyQc On Sat, Aug 22, 2015 at 8:01 AM, Ted Yu yuzhih...@gmail.com wrote: See http://spark.apache.org/community.html Cheers On Sat, Aug 22, 2015 at 2:51 AM, Lars Hermes li...@hermes-it-consulting.de wrote: subscribe

Re: How to save a string to a text file ?

2015-08-14 Thread Brandon White
Convert it to an RDD, then save the RDD to a file: val str = "dank memes"; sc.parallelize(List(str)).saveAsTextFile("str.txt") On Fri, Aug 14, 2015 at 7:50 PM, go canal goca...@yahoo.com.invalid wrote: Hello again, online resources have sample code for writing RDD to a file, but I have a simple

Re: subscribe

2015-08-10 Thread Brandon White
https://www.youtube.com/watch?v=H07zYvkNYL8 On Mon, Aug 10, 2015 at 10:55 AM, Ted Yu yuzhih...@gmail.com wrote: Please take a look at the first section of https://spark.apache.org/community Cheers On Mon, Aug 10, 2015 at 10:54 AM, Phil Kallos phil.kal...@gmail.com wrote: please

Spark SQL Hive - merge small files

2015-08-05 Thread Brandon White
Hello, I would love to have hive merge the small files in my managed hive context after every query. Right now, I am setting the hive configuration in my Spark Job configuration, but hive is not managing the files. Do I need to set the hive fields in another place? How do you set Hive

Re: Spark SQL Hive - merge small files

2015-08-05 Thread Brandon White
So there is no good way to merge spark files in a managed hive table right now? On Wed, Aug 5, 2015 at 10:02 AM, Michael Armbrust mich...@databricks.com wrote: This feature isn't currently supported. On Wed, Aug 5, 2015 at 8:43 AM, Brandon White bwwintheho...@gmail.com wrote: Hello, I
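
Given the reply above, a common workaround sketch is to control the file count explicitly at write time; the coalesce(50) target and the table name are assumptions for illustration:

    // Fewer output files by shrinking the partition count just before the write.
    val merged = df.coalesce(50)
    merged.write.mode("overwrite").saveAsTable("my_managed_table")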

Re: Schema evolution in tables

2015-08-04 Thread Brandon White
Sim did you find anything? :) On Sun, Jul 26, 2015 at 9:31 AM, sim s...@swoop.com wrote: The schema merging http://spark.apache.org/docs/latest/sql-programming-guide.html#schema-merging section of the Spark SQL documentation shows an example of schema evolution in a partitioned table. Is
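
For reference, a sketch of the mechanism from the linked docs section, with the caveat that later releases expose schema merging as an explicit read option rather than the default; the path is hypothetical:

    // Reconcile Parquet files whose schemas evolved across partitions of one table.
    val merged = sqlContext.read
      .option("mergeSchema", "true")
      .parquet("/data/partitioned_table")
    merged.printSchema()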

Turn Off Compression for Textfiles

2015-08-04 Thread Brandon White
How do you turn off gz compression for saving as text files? Right now, I am reading .gz files and it is saving them as .gz. I would love to not compress them when I save. 1) DStream.saveAsTextFiles() //no compression 2) RDD.saveAsTextFile() //no compression Any ideas?
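
One hedged approach, assuming the compression is coming from the Hadoop output settings rather than from Spark itself: clear those settings on the SparkContext's Hadoop configuration before saving:

    // saveAsTextFile only compresses when the Hadoop output-compression flags say so.
    sc.hadoopConfiguration.set("mapred.output.compress", "false")
    sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress", "false")

    rdd.saveAsTextFile("/output/path/uncompressed")                 // no codec argument => plain text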

Combining Spark Files with saveAsTextFile

2015-08-04 Thread Brandon White
What is the best way to make saveAsTextFile save as only a single file?
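
The usual sketch, with the caveat that it pushes all data through a single task and is only reasonable for small outputs; the output path is hypothetical:

    // One partition in, one part-file out.
    rdd.coalesce(1, shuffle = true).saveAsTextFile("/output/single-file")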

What happens when you create more DStreams than nodes in the cluster?

2015-07-31 Thread Brandon White
Since one input DStream creates one receiver, and one receiver uses one executor / node, what happens if you create more DStreams than nodes in the cluster? Say I have 30 DStreams on a 15-node cluster. Will ~15 streams get assigned to ~15 executors / nodes while the other ~15 streams will be

Re: Has anybody ever tried running Spark Streaming on 500 text streams?

2015-07-31 Thread Brandon White
as completed. On Tue, Jul 28, 2015 at 4:05 PM, Brandon White bwwintheho...@gmail.com wrote: Thank you Tathagata. My main use case for the 500 streams is to append new elements into their corresponding Spark SQL tables. Every stream is mapped to a table so I'd like to use the streams

Re: unsubscribe

2015-07-30 Thread Brandon White
https://www.youtube.com/watch?v=JncgoPKklVE On Thu, Jul 30, 2015 at 1:30 PM, ziqiu...@accenture.com wrote: -- This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received

Does Spark Streaming need to list all the files in a directory?

2015-07-30 Thread Brandon White
Is this a known bottleneck for Spark Streaming textFileStream? Does it need to list all the current files in a directory before it gets the new files? Say I have 500k files in a directory, does it list them all in order to get the new files?

Re: unsubscribe

2015-07-28 Thread Brandon White
NO! On Tue, Jul 28, 2015 at 5:03 PM, Harshvardhan Chauhan ha...@gumgum.com wrote: -- *Harshvardhan Chauhan* | Software Engineer *GumGum* http://www.gumgum.com/ | *Ads that stick* 310-260-9666 | ha...@gumgum.com

Re: Has anybody ever tried running Spark Streaming on 500 text streams?

2015-07-28 Thread Brandon White
} Then there is only one foreachRDD executed in every batch that will process in parallel all the new files in each batch interval. TD On Tue, Jul 28, 2015 at 3:06 PM, Brandon White bwwintheho...@gmail.com wrote: val ssc = new StreamingContext(sc, Minutes(10)) //500 textFile streams watching

Has anybody ever tried running Spark Streaming on 500 text streams?

2015-07-28 Thread Brandon White
val ssc = new StreamingContext(sc, Minutes(10)) //500 textFile streams watching S3 directories val streams = streamPaths.par.map { path => ssc.textFileStream(path) } streams.par.foreach { stream => stream.foreachRDD { rdd => /* do something */ } } ssc.start() Would something like this

Re: Programmatically launch several hundred Spark Streams in parallel

2015-07-24 Thread Brandon White
to write output? Dean Wampler, Ph.D. Author: Programming Scala, 2nd Edition http://shop.oreilly.com/product/0636920033073.do (O'Reilly) Typesafe http://typesafe.com @deanwampler http://twitter.com/deanwampler http://polyglotprogramming.com On Fri, Jul 24, 2015 at 11:23 AM, Brandon White

Spark SQL Table Caching

2015-07-21 Thread Brandon White
A few questions about caching a table in Spark SQL. 1) Is there any difference between caching the dataframe and the table? df.cache() vs sqlContext.cacheTable(tableName) 2) Do you need to warm up the cache before seeing the performance benefits? Is the cache LRU? Do you need to run some
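
A small sketch answering the warm-up part of the question, under the assumption that a registered table named "events" exists: cacheTable is lazy, so nothing is materialized until an action runs against the table:

    sqlContext.cacheTable("events")                                 // marks the table for caching (lazy)
    sqlContext.sql("SELECT COUNT(*) FROM events").collect()         // first action populates the cache
    // subsequent queries read the in-memory columnar cache
    sqlContext.uncacheTable("events")                               // releases the cached data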

DataFrame Union not passing optimizer assertion

2015-07-19 Thread Brandon White
Hello! So I am doing a union of two dataframes with the same schema but a different number of rows. However, I am unable to pass an assertion. I think it is this one here https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala

Nullpointer when saving as table with a timestamp column type

2015-07-17 Thread Brandon White
So I have a very simple dataframe that looks like df: [name: String, Place: String, time: Timestamp] I build this java.sql.Timestamp from a string and it works really well except when I call saveAsTable(tableName) on this df. Without the timestamp, it saves fine but with the timestamp, it

Running foreach on a list of rdds in parallel

2015-07-15 Thread Brandon White
Hello, I have a list of rdds List(rdd1, rdd2, rdd3, rdd4) I would like to save these rdds in parallel. Right now, it is running each operation sequentially. I tried using an RDD of RDDs but that does not work. list.foreach { rdd => rdd.saveAsTextFile("/tmp/cache/") } Any ideas?
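
A minimal sketch of one approach: drive the saves from a Scala parallel collection so the jobs are submitted concurrently, giving each RDD its own output path (a shared path would collide):

    val rdds = List(rdd1, rdd2, rdd3, rdd4)

    // Each iteration runs on its own thread and submits an independent Spark job.
    rdds.zipWithIndex.par.foreach { case (rdd, i) =>
      rdd.saveAsTextFile(s"/tmp/cache/output-$i")
    }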

How do you access a cached Spark SQL Table from a JDBC connection?

2015-07-14 Thread Brandon White
Hello there, I have a JDBC connection set up to my Spark cluster but I cannot see the tables that I cache in memory. The only tables I can see are those that are in my Hive instance. I use a HiveContext to register a table and cache it in memory. How can I enable my JDBC connection to query this
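
One hedged sketch, assuming the spark-hive-thriftserver module is on the classpath: start the Thrift server inside the same application so JDBC clients share this HiveContext and therefore see its registered, cached temp tables; the data path and table name are hypothetical:

    import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

    val df = hiveContext.read.parquet("/path/to/data")
    df.registerTempTable("cached_view")
    hiveContext.cacheTable("cached_view")

    // JDBC clients connecting to this server now query the same (cached) session state.
    HiveThriftServer2.startWithContext(hiveContext)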

Re: Spark Streaming - Inserting into Tables

2015-07-12 Thread Brandon White
? Thanks, Yin On Fri, Jul 10, 2015 at 11:55 AM, Brandon White bwwintheho...@gmail.com wrote: Why does this not work? Is insert into broken in 1.3.1? It does not throw any errors, fail, or throw exceptions. It simply does not work. val ssc = new StreamingContext(sc, Minutes(10)) val

Spark Streaming - Inserting into Tables

2015-07-10 Thread Brandon White
Why does this not work? Is insert into broken in 1.3.1? It does not throw any errors, fail, or throw exceptions. It simply does not work. val ssc = new StreamingContext(sc, Minutes(10)) val currentStream = ssc.textFileStream("ss3://textFileDirectory/") val dayBefore =

What is faster for SQL table storage, On-Heap or off-heap?

2015-07-09 Thread Brandon White
Is the read / aggregate performance better when caching Spark SQL tables on-heap with sqlContext.cacheTable() or off heap by saving it to Tachyon? Has anybody tested this? Any theories?

S3 vs HDFS

2015-07-09 Thread Brandon White
Are there any significant performance differences between reading text files from S3 and HDFS?

Re: Parallelizing multiple RDD / DataFrame creation in Spark

2015-07-08 Thread Brandon White
it run it in parallel though. Thanks Best Regards On Wed, Jul 8, 2015 at 5:34 AM, Brandon White bwwintheho...@gmail.com wrote: Say I have a spark job that looks like following: def loadTable1() { val table1 = sqlContext.jsonFile(ss3://textfiledirectory/) table1.cache

Re: Real-time data visualization with Zeppelin

2015-07-08 Thread Brandon White
Can you use a cron job to update it every X minutes? On Wed, Jul 8, 2015 at 2:23 PM, Ganelin, Ilya ilya.gane...@capitalone.com wrote: Hi all – I’m just wondering if anyone has had success integrating Spark Streaming with Zeppelin and actually dynamically updating the data in near real-time.

Re: Spark query

2015-07-08 Thread Brandon White
Convert the column to a column of java Timestamps. Then you can do the following: import java.sql.Timestamp import java.util.Calendar def date_trunc(timestamp: Timestamp, timeField: String) = { timeField match { case "hour" => val cal = Calendar.getInstance()

Parallelizing multiple RDD / DataFrame creation in Spark

2015-07-07 Thread Brandon White
Say I have a spark job that looks like following: def loadTable1() { val table1 = sqlContext.jsonFile("ss3://textfiledirectory/") table1.cache().registerTempTable("table1") } def loadTable2() { val table2 = sqlContext.jsonFile("ss3://testfiledirectory2/")
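
A minimal sketch of one way to run the loads concurrently, wrapping each in a Future; the paths are kept as written in the thread, and the whole pattern is an assumption about what "in parallel" should mean here:

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration.Duration

    // Each Future submits its own Spark job, so the two loads overlap.
    val load1 = Future {
      val t = sqlContext.jsonFile("ss3://textfiledirectory/")
      t.cache().registerTempTable("table1")
    }
    val load2 = Future {
      val t = sqlContext.jsonFile("ss3://testfiledirectory2/")
      t.cache().registerTempTable("table2")
    }
    Await.result(Future.sequence(Seq(load1, load2)), Duration.Inf)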

Why can I not insert into TempTables in Spark SQL?

2015-07-07 Thread Brandon White
Why does this not work? Is insert into broken in 1.3.1? val ssc = new StreamingContext(sc, Minutes(10)) val currentStream = ssc.textFileStream("ss3://textFileDirectory/") val dayBefore = sqlContext.jsonFile("ss3://textFileDirectory/") dayBefore.saveAsParquetFile("/tmp/cache/dayBefore.parquet") val

Grouping elements in a RDD

2015-06-20 Thread Brandon White
How would you do a .grouped(10) on an RDD, is it possible? Here is an example for a Scala list: scala> List(1,2,3,4).grouped(2).toList res1: List[List[Int]] = List(List(1, 2), List(3, 4)) Would like to group n elements.
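
A rough sketch of one way to emulate grouped(n), assuming element order only needs to follow whatever zipWithIndex produces (there is no built-in equivalent on RDDs):

    val rdd = sc.parallelize(1 to 100)                              // example input
    val n = 10

    // Bucket elements by a global index, then collect each bucket into a Seq.
    val grouped = rdd
      .zipWithIndex()
      .map { case (value, idx) => (idx / n, value) }
      .groupByKey()
      .map { case (_, values) => values.toSeq }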