Re: Replacing Jetty with Tomcat

Mostly UI. However, I believe we are also using Jetty as a file server (for distributing files from the driver to the workers).

On Sun, Feb 15, 2015 at 9:24 PM, Niranda Perera wrote:

> Hi Reynold,
>
> Thank you for the response. Could you please clarify the need for the
> Jetty server inside Spark? Is it used for Spark core functionality, or is
> it there for the Spark jobs UI?
>
> cheers
>
> On Mon, Feb 16, 2015 at 10:47 AM, Reynold Xin wrote:
>
>> Most likely no. We are using the embedded mode of Jetty, rather than
>> using servlets.
>>
>> Even if it were possible, you probably wouldn't want to embed Spark in
>> your application server ...
>>
>> On Sun, Feb 15, 2015 at 9:08 PM, Niranda Perera wrote:
>>
>>> Hi,
>>>
>>> We are thinking of integrating the Spark server inside a product. Our
>>> current product uses Tomcat as its webserver.
>>>
>>> Is it possible to switch the Jetty webserver in Spark to Tomcat
>>> off-the-shelf?
>>>
>>> Cheers
>>>
>>> --
>>> Niranda
>
> --
> Niranda
Re: Replacing Jetty with Tomcat

Hi Reynold,

Thank you for the response. Could you please clarify the need for the Jetty server inside Spark? Is it used for Spark core functionality, or is it there for the Spark jobs UI?

cheers

On Mon, Feb 16, 2015 at 10:47 AM, Reynold Xin wrote:

> Most likely no. We are using the embedded mode of Jetty, rather than
> using servlets.
>
> Even if it were possible, you probably wouldn't want to embed Spark in
> your application server ...
>
> On Sun, Feb 15, 2015 at 9:08 PM, Niranda Perera wrote:
>
>> Hi,
>>
>> We are thinking of integrating the Spark server inside a product. Our
>> current product uses Tomcat as its webserver.
>>
>> Is it possible to switch the Jetty webserver in Spark to Tomcat
>> off-the-shelf?
>>
>> Cheers
>>
>> --
>> Niranda

--
Niranda
Re: Replacing Jetty with Tomcat

Most likely no. We are using the embedded mode of Jetty, rather than using servlets.

Even if it were possible, you probably wouldn't want to embed Spark in your application server ...

On Sun, Feb 15, 2015 at 9:08 PM, Niranda Perera wrote:

> Hi,
>
> We are thinking of integrating the Spark server inside a product. Our
> current product uses Tomcat as its webserver.
>
> Is it possible to switch the Jetty webserver in Spark to Tomcat
> off-the-shelf?
>
> Cheers
>
> --
> Niranda
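[Editor's note: "embedded mode" here means Spark constructs and starts a Jetty Server object inside its own JVM, rather than being deployed as a WAR into a servlet container such as Tomcat. A minimal sketch of that pattern using the plain Jetty API is below — this is an illustration of embedded Jetty in general, not Spark's actual internal code, and it assumes the jetty-server dependency is on the classpath. Port 4040 is borrowed from the default Spark UI port.]

```scala
import org.eclipse.jetty.server.{Request, Server}
import org.eclipse.jetty.server.handler.AbstractHandler
import javax.servlet.http.{HttpServletRequest, HttpServletResponse}

object EmbeddedJettySketch {
  def main(args: Array[String]): Unit = {
    // The application owns the server lifecycle -- no external container.
    val server = new Server(4040)
    server.setHandler(new AbstractHandler {
      override def handle(target: String, baseRequest: Request,
                          request: HttpServletRequest,
                          response: HttpServletResponse): Unit = {
        response.setContentType("text/plain")
        response.getWriter.println("hello from embedded Jetty")
        baseRequest.setHandled(true)
      }
    })
    server.start()
    server.join()
  }
}
```

[Because the server is created programmatically like this, "switching to Tomcat" would mean rewriting this wiring against a different embedding API, not just swapping a container.]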
Replacing Jetty with Tomcat

Hi,

We are thinking of integrating the Spark server inside a product. Our current product uses Tomcat as its webserver.

Is it possible to switch the Jetty webserver in Spark to Tomcat off-the-shelf?

Cheers

--
Niranda
Re: Spark & Hive
Spark SQL is not the same as Hive on Spark.

Spark SQL is a query engine designed from the ground up for Spark, without the historical baggage of Hive. It also does more than SQL now: it is meant for structured data processing in general (e.g. the new DataFrame API) as well as SQL. Spark SQL is mostly compatible with Hive, but 100% compatibility is not a goal (nor is it desirable, since Hive has accumulated a lot of odd SQL semantics over the course of its evolution).

Hive on Spark is meant to replace Hive's MapReduce runtime with Spark's.

For more information, see this blog post:
https://databricks.com/blog/2014/07/01/shark-spark-sql-hive-on-spark-and-the-future-of-sql-on-spark.html

On Sun, Feb 15, 2015 at 3:03 AM, The Watcher wrote:

> I'm a little confused about Hive & Spark; can someone shed some light?
>
> Using Spark, I can access the Hive metastore and run Hive queries. Since I
> am able to do this in standalone mode, it can't be using MapReduce to run
> the Hive queries, so I suppose it builds a query plan and executes it all
> in Spark.
>
> So, is this the same as
> https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started ?
> If not, why not, and aren't they likely to merge at some point?
>
> If Spark really builds its own query plans, joins, etc. without Hive's,
> then is everything that requires special SQL syntax in Hive supported:
> window functions, cubes, rollups, skewed tables, etc.?
>
> Thanks
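[Editor's note: concretely, the "Spark SQL reads the Hive metastore" path the question describes looks roughly like this in a Spark application of that era (Spark 1.2-style API; the table name and query are hypothetical). HiveContext resolves table definitions from the Hive metastore, but the query is planned and executed entirely by Spark SQL's own engine — no MapReduce and no Hive execution engine is involved.]

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object SparkSqlHiveSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hive-metastore-demo"))

    // Table metadata comes from the Hive metastore; planning and
    // execution happen in Spark SQL, not in Hive's runtime.
    val hiveContext = new HiveContext(sc)
    val results = hiveContext.sql(
      "SELECT page, count(*) FROM logs GROUP BY page")
    results.collect().foreach(println)

    sc.stop()
  }
}
```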
Re: A Spark Compilation Question
In IntelliJ:

- Open View -> Tool Windows -> Maven Projects
- Right-click on "Spark Project External Flume Sink"
- Click "Generate Sources and Update Folders"

This should generate source code from sparkflume.avdl.

Vu
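[Editor's note: if you prefer the command line to IntelliJ, Maven can run the same source-generation phase directly from the Spark source root. This is a sketch — the module path is assumed from the Spark directory layout of the time.]

```shell
# Generate sources (including the Avro classes from sparkflume.avdl)
# for the flume-sink module and the modules it depends on.
mvn -pl external/flume-sink -am generate-sources
```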
Spark & Hive
I'm a little confused about Hive & Spark; can someone shed some light?

Using Spark, I can access the Hive metastore and run Hive queries. Since I am able to do this in standalone mode, it can't be using MapReduce to run the Hive queries, so I suppose it builds a query plan and executes it all in Spark.

So, is this the same as
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started ?
If not, why not, and aren't they likely to merge at some point?

If Spark really builds its own query plans, joins, etc. without Hive's, then is everything that requires special SQL syntax in Hive supported: window functions, cubes, rollups, skewed tables, etc.?

Thanks