Congratulations!
On 2015/2/4 20:25, Nick Pentreath wrote:
Congrats and welcome Sean, Joseph and Cheng!
On Wed, Feb 4, 2015 at 2:10 PM, Sean Owen so...@cloudera.com wrote:
Thanks all, I appreciate the vote of trust. I'll do my best to help
keep JIRA and commits moving along, and am ramping up.
I notice that Spark serializes each task together with its dependencies (the files and JARs
added to the SparkContext):
def serializeWithDependencies(
    task: Task[_],
    currentFiles: HashMap[String, Long],
    currentJars: HashMap[String, Long],
    serializer: SerializerInstance)
  : ByteBuffer
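The shape of that payload can be sketched in plain Java. This is a hedged illustration, not Spark's actual wire format: the class and method names (TaskSerDemo, writeDeps) and the example paths are made up, but it shows the idea of shipping the file/JAR name-to-timestamp maps alongside the serialized task bytes.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

public class TaskSerDemo {
    // Writes each dependency's path and timestamp; a timestamp lets an
    // executor notice that a file/JAR changed and must be re-fetched.
    static void writeDeps(DataOutputStream out, HashMap<String, Long> deps)
            throws IOException {
        out.writeInt(deps.size());
        for (Map.Entry<String, Long> e : deps.entrySet()) {
            out.writeUTF(e.getKey());
            out.writeLong(e.getValue());
        }
    }

    public static void main(String[] args) throws IOException {
        HashMap<String, Long> currentFiles = new HashMap<>();
        currentFiles.put("hdfs:///tmp/lookup.txt", 1407000000000L);
        HashMap<String, Long> currentJars = new HashMap<>();
        currentJars.put("hdfs:///tmp/udfs.jar", 1407000000001L);
        byte[] taskBytes = "serialized task body".getBytes("UTF-8");

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        writeDeps(out, currentFiles); // file deps travel with every task
        writeDeps(out, currentJars);  // so do JAR deps
        out.writeInt(taskBytes.length);
        out.write(taskBytes);
        out.flush();

        ByteBuffer buf = ByteBuffer.wrap(bos.toByteArray());
        System.out.println("serialized " + buf.remaining() + " bytes");
    }
}
```

Because the maps are written per task, every serialized task carries the full dependency listing, which is part of what the thread is questioning.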
Did the SQL run successfully? And what SQL are you running?
hi, all
I suggest Spark not use the assembly jar as the default run-time
dependency (spark-submit/spark-class depend on the assembly jar); using a library of
all third-party dependency jars, as hadoop/hive/hbase do, is more reasonable.
1. The assembly jar packages all third-party jars into one big one, so we need to rebuild this
jar if
That doesn't remove version
conflicts; it just pushes them to run-time, which isn't good. The
assembly is also necessary because that's where shading happens. In
development, you want to run against exactly what will be used in a
real Spark distro.
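For context, shipping third-party jars alongside a thin application jar, rather than baking everything into a rebuilt assembly, is roughly what spark-submit's --jars option already allows on the application side. A hedged sketch (the class name and jar paths below are illustrative, not from this thread):

```shell
# Sketch only: paths and the main class are made-up examples.
# Extra jars are listed explicitly instead of being rebuilt into
# the assembly jar.
spark-submit \
  --class com.example.MyApp \
  --jars /opt/libs/hbase-client.jar,/opt/libs/hive-exec.jar \
  myapp.jar
```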
On Tue, Sep 2, 2014 at 9:39 AM, scwf wangf...@huawei.com wrote:
On Tue, Sep 2, 2014 at 2:12 AM, scwf wangf...@huawei.com wrote:
Hi Sean Owen,
here are some problems I hit when I used the assembly jar:
1. I put spark-assembly-*.jar into the lib directory of my application, and
it throws a compile error
I have been working on a branch that updates the Hive version to hive-0.13 (using
org.apache.hive): https://github.com/scwf/spark/tree/hive-0.13
I am wondering whether it's OK to make a PR now, because the hive-0.13 version is
not compatible with hive-0.12, and here I used org.apache.hive.
On 2014/7/29 8:22
Capturing and printing the
stdout/stderr of the forked process can be helpful for diagnosis. Currently the
only information we have at hand is the process exit code; it's hard to
determine the reason why the forked process fails.
On Tue, Aug 19, 2014 at 1:27 PM, scwf wangf...@huawei.com wrote:
env: ubuntu 14.04 + spark master branch
mvn -Pyarn -Phive -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
mvn -Pyarn -Phadoop-2.4 -Phive test
test error:
DriverSuite:
Spark assembly has been built with Hive, including Datanucleus jars on classpath
- driver should exit after
Hi Cody,
I met this issue days before and posted a PR for it (
https://github.com/apache/spark/pull/1385).
It's very strange that if I synchronize on conf it deadlocks, but it is OK when
synchronizing on initLocalJobConfFuncOpt.
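The general pattern behind that kind of fix can be illustrated in plain Java. This is a sketch of the idea only, not the code from the PR: synchronizing on a small, privately owned lock object, rather than on a widely shared object like conf, means no unrelated code path can grab the same monitor in a different order and deadlock. The class and key names below are made up for illustration.

```java
import java.util.Properties;

public class ConfAccess {
    // A dedicated private lock: nothing outside this class can ever
    // synchronize on it, unlike a shared conf object that other code
    // (or a framework) might also lock.
    private static final Object CONF_LOCK = new Object();
    private static final Properties conf = new Properties();

    static void set(String key, String value) {
        synchronized (CONF_LOCK) { conf.setProperty(key, value); }
    }

    static String get(String key) {
        synchronized (CONF_LOCK) { return conf.getProperty(key); }
    }

    public static void main(String[] args) {
        set("spark.master", "local");
        System.out.println(get("spark.master"));
    }
}
```

Narrowing the monitor this way is one common reason that switching which object is synchronized on makes a deadlock disappear.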
Here's the entire jstack output.
On Mon, Jul 14, 2014 at 4:44 PM,