Just choose the master branch and 1.5.0, a correct hadoop version
>> (defaults to 2.2.0 though) and there you go :-)
>>
>>
>> On Wed, Sep 9, 2015 at 6:39 PM Ted Yu <yuzhih...@gmail.com> wrote:
>>
Jerry:
I just tried building hbase-spark module with 1.5.0 and I see:
ls -l ~/.m2/repository/org/apache/spark/spark-core_2.10/1.5.0
total 21712
-rw-r--r-- 1 tyu staff 196 Sep 9 09:37 _maven.repositories
-rw-r--r-- 1 tyu staff 11081542 Sep 9 09:37 spark-core_2.10-1.5.0.jar
-rw-r--r--
Here is the example from Reynold (
http://search-hadoop.com/m/q3RTtfvs1P1YDK8d) :
scala> val data = sc.parallelize(1 to size, 5).map(x =>
(util.Random.nextInt(size /
repetitions),util.Random.nextDouble)).toDF("key", "value")
data: org.apache.spark.sql.DataFrame = [key: int, value: double]
scala>
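Reynold's snippet above can be sketched in plain Scala without Spark; `size` and `repetitions` are assumed values here, since the thread does not give them.

```scala
// Plain-Scala analogue of the DataFrame generation above (no Spark needed).
// `size` and `repetitions` are assumed values for illustration.
object KeyValueGen {
  val size = 100
  val repetitions = 10
  // One (key, value) pair per element, keys drawn from [0, size / repetitions)
  def generate(): Seq[(Int, Double)] =
    (1 to size).map { _ =>
      (util.Random.nextInt(size / repetitions), util.Random.nextDouble())
    }
}
```

In the original, `toDF("key", "value")` then turns such pairs into a two-column DataFrame.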
patch into my
GSoC Jira issue you mentioned and then we can continue from there.
Before I do that, I wanted to get the Spark dev community's ideas on
solving my problem, since you may have faced such problems before.
On Aug 26, 2015 at 17:13, Ted Yu yuzhih...@gmail.com wrote:
I found GORA
The connection failure was to zookeeper.
Have you verified that localhost:2181 can serve requests ?
What version of hbase was Gora built against ?
Cheers
On Aug 26, 2015, at 1:50 AM, Furkan KAMACI furkankam...@gmail.com wrote:
Hi,
I start an HBase cluster for my test class. I use that
works without any error. HBase version is 0.98.8-hadoop2 and I
use Spark 1.3.1
Kind Regards,
Furkan KAMACI
On Aug 26, 2015 at 12:08, Ted Yu yuzhih...@gmail.com wrote:
The connection failure was to zookeeper.
Have you verified that localhost:2181 can serve requests ?
What version
/HBaseContextSuite.scala
--If you want to look at the old stuff before it went into HBase
https://github.com/cloudera-labs/SparkOnHBase
Let me know if that helps
On Wed, Aug 26, 2015 at 5:40 AM, Ted Yu yuzhih...@gmail.com wrote:
Can you log the contents of the Configuration you pass from Spark
I pointed hbase-spark module (in HBase project) to 1.5.0-rc1 and was able
to build the module (with proper maven repo).
FYI
On Fri, Aug 21, 2015 at 2:17 PM, mkhaitman mark.khait...@chango.com wrote:
Just a heads up that this RC1 release is still appearing as
1.5.0-SNAPSHOT
(Not just me
See this thread:
http://search-hadoop.com/m/q3RTtdZv0d1btRHl/Spark+build+modulesubj=Building+Spark+Building+just+one+module+
On Aug 19, 2015, at 1:44 AM, canan chen ccn...@gmail.com wrote:
I want to work on one jira, but it is not easy to do unit test, because it
involves different
See first section on https://spark.apache.org/community
On Thu, Aug 13, 2015 at 9:44 AM, Naga Vij nvbuc...@gmail.com wrote:
subscribe
Thanks Josh for the initiative.
I think reducing the redundancy in QA bot posts would make discussion on GitHub
UI more focused.
Cheers
On Thu, Aug 13, 2015 at 7:21 PM, Josh Rosen rosenvi...@gmail.com wrote:
Prototype is at https://github.com/databricks/spark-pr-dashboard/pull/59
On Wed,
I tried accessing just now.
It took several seconds before the page showed up.
FYI
On Thu, Aug 13, 2015 at 7:56 PM, Cheng, Hao hao.ch...@intel.com wrote:
I found the https://spark-prs.appspot.com/ is super slow while open it in
a new window recently, not sure just myself or everybody
,
*From:* Ted Yu [mailto:yuzhih...@gmail.com]
*Sent:* Tuesday, August 11, 2015 3:28 PM
*To:* Yan Zhou.sc
*Cc:* Bing Xiao (Bing); dev@spark.apache.org; u...@spark.apache.org
*Subject:* Re: Reply: Package Release Announcement: Spark SQL on HBase Astro
HBase will not have query engine
, …, etc., which allows for loosely-coupled query
engines
built on top of it.
Thanks,
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: August 11, 2015 8:54
To: Bing Xiao (Bing)
Cc: dev@spark.apache.org; u...@spark.apache.org; Yan Zhou.sc
Subject: Re: Package Release Announcement: Spark SQL
Yan / Bing:
Mind taking a look at HBASE-14181
https://issues.apache.org/jira/browse/HBASE-14181 'Add Spark DataFrame
DataSource to HBase-Spark Module' ?
Thanks
On Wed, Jul 22, 2015 at 4:53 PM, Bing Xiao (Bing) bing.x...@huawei.com
wrote:
We are happy to announce the availability of the Spark
What is the JIRA number if a JIRA has been logged for this ?
Thanks
On Jan 20, 2015, at 11:30 AM, Cheng Lian lian.cs@gmail.com wrote:
Hey Yi,
I'm quite unfamiliar with Hadoop/HDFS auth mechanisms for now, but would like
to investigate this issue later. Would you please open an
When I tried to compile against hbase 1.1.1, I got:
[ERROR]
/home/hbase/ssoh/src/main/scala/org/apache/spark/sql/hbase/SparkSqlRegionObserver.scala:124:
overloaded method next needs result type
[ERROR] override def next(result: java.util.List[Cell], limit: Int) =
next(result)
Is there plan to
Please take a look at the first section of:
https://spark.apache.org/community
On Thu, Jul 30, 2015 at 9:23 PM, Sachin Aggarwal different.sac...@gmail.com
wrote:
--
Thanks Regards
Sachin Aggarwal
7760502772
zookeeper is not a direct dependency of Spark.
Can you give a bit more detail on how the election / discovery of master
works ?
Cheers
On Thu, Jul 30, 2015 at 7:41 PM, Christophe Schmitz cofcof...@gmail.com
wrote:
Hi there,
I am trying to run a 3 node spark cluster where each node contains
Hi,
I noticed that ReceiverTrackerSuite is failing in master Jenkins build for
both hadoop profiles.
The failure seems to start with:
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/3104/
FYI
I got a compilation error:
[INFO] /home/hbase/s-on-hbase/src/main/scala:-1: info: compiling
[INFO] Compiling 18 source files to /home/hbase/s-on-hbase/target/classes
at 1438099569598
[ERROR]
/home/hbase/s-on-hbase/src/main/scala/org/apache/spark/hbase/examples/simple/HBaseTableSimple.scala:36:
SparkDeploySchedulerBackend: Asked to remove
non-existent executor 2...
15/07/23 13:26:41 ERROR SparkDeploySchedulerBackend: Asked to remove
non-existent executor 2...
-- Original Message --
*From:* Ted Yu; yuzhih...@gmail.com;
*Sent:* Sunday, July 26, 2015, 10:51 PM
*To:* Pa
...@gmail.com
wrote:
Yep, I emailed TD about it; I think that we may need to make a change
to the
pull request builder to fix this. Pending that, we could just revert
the
commit that added this.
On Sun, Jul 19, 2015 at 5:32 PM, Ted Yu yuzhih...@gmail.com wrote:
Hi,
I noticed
Hi,
I noticed that KinesisStreamSuite fails for both hadoop profiles in master
Jenkins builds.
From
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4,label=centos/3011/console
:
KinesisStreamSuite:*** RUN ABORTED *** java.lang.AssertionError:
+1 to removing commit messages.
On Jul 18, 2015, at 1:35 AM, Sean Owen so...@cloudera.com wrote:
+1 to removing them. Sometimes there are 50+ commits because people
have been merging from master into their branch rather than rebasing.
On Sat, Jul 18, 2015 at 8:48 AM, Reynold Xin
What if you move your addition to before line 64 (in the master branch there
is a case for e.checkInputDataTypes().isFailure):
case c: Cast if !c.resolved =>
Cheers
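The typed pattern with a guard in that suggestion can be sketched with stand-in types (these are not Spark's actual catalyst Cast/Expression classes):

```scala
// Stand-in types; Spark's real Cast/Expression live in catalyst.
sealed trait Expr { def resolved: Boolean }
case class Cast(child: String, resolved: Boolean) extends Expr
case class Literal(value: Int) extends Expr { val resolved = true }

// A typed pattern with a guard fires only when both the type test and the
// condition hold, mirroring `case c: Cast if !c.resolved =>` above.
def classify(e: Expr): String = e match {
  case c: Cast if !c.resolved => "unresolved cast"
  case _                      => "other"
}
```

Because match cases are tried in order, placing such a case before a more general one is exactly why the position relative to line 64 matters.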
On Wed, Jul 15, 2015 at 12:47 AM, Takeshi Yamamuro linguin@gmail.com
wrote:
Hi, devs
I found that the case of
Interesting read.
I did find a lot of Spark mails in Spam folder.
Thanks Mridul
On Jul 18, 2015, at 10:25 AM, Mridul Muralidharan mri...@gmail.com wrote:
https://plus.google.com/+LinusTorvalds/posts/DiG9qANf5PA
I have noticed a bunch of mails from dev@ and github going to spam -
Can you provide a bit more information such as:
release of Spark you use
snippet of your SparkSQL query
Thanks
On Thu, Jul 16, 2015 at 5:31 AM, nipun ibnipu...@gmail.com wrote:
I have a dataframe. I register it as a temp table and run a spark sql query
on it to get another dataframe. Now
I attached a patch for HADOOP-12235
BTW openstack was not mentioned in the first email from Gil.
My email and Gil's second email were sent around the same moment.
Cheers
On Wed, Jul 15, 2015 at 2:06 AM, Steve Loughran ste...@hortonworks.com
wrote:
On 14 Jul 2015, at 12:22, Ted Yu yuzhih
Looking at Jenkins, master branch compiles.
Can you try the following command ?
mvn -Phive -Phadoop-2.6 -DskipTests clean package
What version of Java are you using ?
Cheers
On Tue, Jul 14, 2015 at 2:23 AM, Gil Vernik g...@il.ibm.com wrote:
I just did checkout of the master and tried to
When I ran dev/run-tests , I got :
File "./dev/run-tests.py", line 68, in
__main__.identify_changed_files_from_git_commits
Failed example:
    'root' in [x.name for x in determine_modules_for_files(
        identify_changed_files_from_git_commits("50a0496a43",
        target_ref="6765ef9"))]
Exception raised:
Jenkins shows green builds.
What Java version did you use ?
Cheers
On Sun, Jul 12, 2015 at 3:49 AM, René Treffer rtref...@gmail.com wrote:
Hi *,
I'm currently trying to build master but it fails with
[error] Picked up JAVA_TOOL_OPTIONS:
-javaagent:/usr/share/java/jayatanaag.jar
[error]
/Spark-QA-Compile/ that the
Maven compilation is now broken in master.
On Thu, Jul 9, 2015 at 8:48 AM, Ted Yu yuzhih...@gmail.com wrote:
I guess the compilation issue didn't surface in QA run because sbt was
used:
[info] Building Spark (w/Hive 0.13.1) using SBT with these arguments
Looking at
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4,label=centos/2875/consoleFull
:
[error]
[error] while compiling:
Owen so...@cloudera.com wrote:
This is an error from scalac and not Spark. I find it happens
frequently for me but goes away on a clean build. *shrug*
On Thu, Jul 9, 2015 at 3:45 PM, Ted Yu yuzhih...@gmail.com wrote:
Looking at
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven
streaming-flume-assembly/assembly
Cheers
On Thu, Jul 9, 2015 at 7:58 AM, Ted Yu yuzhih...@gmail.com wrote:
From
https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4,label=centos/2875/consoleFull
:
+ build/mvn -DzincPort=3439 -DskipTests -Phadoop-2.4
are passing on Jenkins so I wonder if it's a maven version issue:
https://amplab.cs.berkeley.edu/jenkins/view/Spark-QA-Compile/
- Patrick
On Fri, Jul 3, 2015 at 3:14 PM, Ted Yu yuzhih...@gmail.com wrote:
Please take a look at SPARK-8781
(https://github.com/apache/spark/pull/7193
This is what I got (the last line was repeated non-stop):
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing
/home/hbase/spark/bagel/target/spark-bagel_2.10-1.5.0-SNAPSHOT.jar with
/home/hbase/spark/bagel/target/spark-bagel_2.10-1.5.0-SNAPSHOT-shaded.jar
[INFO]
Patrick:
I used the following command:
~/apache-maven-3.3.1/bin/mvn -DskipTests -Phadoop-2.4 -Pyarn -Phive clean
package
The build doesn't seem to stop.
Here is tail of build output:
[INFO] Dependency-reduced POM written at:
/home/hbase/spark-1.4.1/bagel/dependency-reduced-pom.xml
[INFO]
Here is the command I used:
mvn -Phadoop-2.4 -Dhadoop.version=2.7.0 -Pyarn -Phive package
Java: 1.8.0_45
OS:
Linux x.com 2.6.32-504.el6.x86_64 #1 SMP Wed Oct 15 04:27:16 UTC 2014
x86_64 x86_64 x86_64 GNU/Linux
Cheers
On Mon, Jun 29, 2015 at 12:04 AM, Tathagata Das tathagata.das1...@gmail.com
The test passes when run alone on my machine as well.
Please run test suite.
Thanks
On Mon, Jun 29, 2015 at 2:01 PM, Tathagata Das tathagata.das1...@gmail.com
wrote:
@Ted, I ran the following two commands.
mvn -Phadoop-2.4 -Dhadoop.version=2.7.0 -Pyarn -Phive -DskipTests clean
package
mvn
that this uncovers a real bug. Even if it does I would not
block the release on it because many in the community are waiting for a few
important fixes. In general, there will always be outstanding issues in
Spark that we cannot address in every release.
-Andrew
2015-06-29 14:29 GMT-07:00 Ted Yu yuzhih
Spark-Master-Scala211-Compile build is green.
However it is not clear what the actual command is:
[EnvInject] - Variables injected successfully.
[Spark-Master-Scala211-Compile] $ /bin/bash /tmp/hudson8945334776362889961.sh
FYI
On Sun, Jun 28, 2015 at 6:02 PM, Alessandro Baretta
I got the following when running test suite:
[INFO] compiler plugin:
BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
[info] Compiling 2 Scala sources and 1 Java source to
/home/hbase/spark-1.4.1/streaming/target/scala-2.10/test-classes...
[error] (OutcomeOf.scala:85)
The error from previous email was due to absence
of StreamingContextSuite.scala
On Fri, Jun 26, 2015 at 1:27 PM, Ted Yu yuzhih...@gmail.com wrote:
I got the following when running test suite:
[INFO] compiler plugin:
BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1
Andrew Or put in this workaround :
diff --git a/pom.xml b/pom.xml
index 0b1aaad..d03d33b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1438,6 +1438,8 @@
         <version>2.3</version>
         <configuration>
           <shadedArtifactAttached>false</shadedArtifactAttached>
+          <!-- Work around MSHADE-148
bq. val result = fDB.mappartitions(testMP).collect
Not sure if you pasted the above code - there was a typo: method name
should be mapPartitions
Cheers
On Sat, May 30, 2015 at 9:44 AM, unioah uni...@gmail.com wrote:
Hi,
I try to aggregate the value in each partition internally.
For
I downloaded source tar ball and ran command similar to following with:
clean package -DskipTests
Then I ran the following command.
Fyi
On May 30, 2015, at 12:42 AM, Tathagata Das t...@databricks.com wrote:
Was it a clean compilation?
TD
On Fri, May 29, 2015 at 10:48 PM, Ted
Hi,
I ran the following command on 1.4.0 RC3:
mvn -Phadoop-2.4 -Dhadoop.version=2.7.0 -Pyarn -Phive package
I saw the following failure:
StreamingContextSuite:
- from no conf constructor
- from no conf + spark home
- from no conf + spark home + env
, 2015 at 6:37 PM, Ted Yu yuzhih...@gmail.com wrote:
Pardon me.
Please use '8192k'
Cheers
On Sat, May 23, 2015 at 6:24 PM, Debasish Das debasish.da...@gmail.com
wrote:
Tried 8mb...still I am failing on the same error...
On Sat, May 23, 2015 at 6:10 PM, Ted Yu yuzhih...@gmail.com wrote
bq. it shuld be 8mb
Please use the above syntax.
Cheers
On Sat, May 23, 2015 at 6:04 PM, Debasish Das debasish.da...@gmail.com
wrote:
Hi,
I am on last week's master but all the examples that set up the following
.set("spark.kryoserializer.buffer", "8m")
are failing with the following error:
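The '8m' / '8192k' vs '8mb' distinction above can be illustrated with a toy parser; this is illustrative only, not Spark's actual size-string code.

```scala
// Toy size-string parser: single-letter k/m/g suffixes only, so strings
// like "8m" and "8192k" parse while "8mb" is rejected.
object SizeString {
  private val Pattern = """(\d+)([kmg])""".r
  def toKb(s: String): Option[Long] = s.toLowerCase match {
    case Pattern(n, "k") => Some(n.toLong)
    case Pattern(n, "m") => Some(n.toLong * 1024L)
    case Pattern(n, "g") => Some(n.toLong * 1024L * 1024L)
    case _               => None
  }
}
```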
INFRA-9646 has been resolved.
FYI
On Wed, May 13, 2015 at 6:00 PM, Patrick Wendell pwend...@gmail.com wrote:
Hi All - unfortunately the fix introduced another bug, which is that
fixVersion was not updated properly. I've updated the script and had
one other person test it.
So committers
What version of Java do you use ?
Can you run this command first ?
build/sbt clean
BTW please see [SPARK-7498] [MLLIB] add varargs back to setDefault
Cheers
On Fri, May 22, 2015 at 7:34 AM, Manoj Kumar manojkumarsivaraj...@gmail.com
wrote:
Hello,
I updated my master from upstream
lately:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/
Maybe PR builder doesn't build against hadoop 2.4 ?
Cheers
On Mon, May 11, 2015 at 1:11 PM, Ted Yu yuzhih...@gmail.com wrote:
Makes sense.
Having high determinism in these tests would make Jenkins build stable
, 2015 at 9:23 AM, Ted Yu yuzhih...@gmail.com wrote:
Jenkins build against hadoop 2.4 has been unstable recently:
https://amplab.cs.berkeley.edu/jenkins/view/Spark/job/Spark-Master-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4,label=centos/
I haven't found the test which hung / failed in recent Jenkins
] Running Spark tests with these arguments: -Pyarn -Phadoop-2.3
-Dhadoop.version=2.3.0 -Pkinesis-asl test
Is anyone testing individual pull requests against Hadoop 2.4 or 2.6
before the code is declared clean?
Fred
Subproject tag should follow SPARK JIRA number.
e.g.
[SPARK-5277][SQL] ...
Cheers
On Wed, May 13, 2015 at 11:50 AM, Stephen Boesch java...@gmail.com wrote:
following up from Nicholas, it is
[SPARK-12345] Your PR description
where 12345 is the jira number.
One thing I tend to forget is
actually the worst if tests
fail sometimes but not others, because we can't reproduce them
deterministically. Using -M and -A actually tolerates flaky tests to a
certain extent, and I would prefer to instead increase the determinism in
these tests.
-Andrew
2015-05-08 17:56 GMT-07:00 Ted Yu yuzhih
In Row#equals():
while (i < len) {
if (apply(i) != that.apply(i)) {
'!=' should be !apply(i).equals(that.apply(i)) ?
Cheers
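On that question, a plain-Scala check: `!=` on `Any` is defined as `!(a == b)`, and `==` delegates to `equals`, so unlike Java's `!=` it already compares by value rather than by reference.

```scala
object EqCheck {
  // Two distinct String objects with equal contents.
  val a: Any = new String("spark")
  val b: Any = new String("spark")
  val valueEqual    = !(a != b)  // Scala's != is !(a == b); == uses equals
  val sameReference = a.asInstanceOf[AnyRef] eq b.asInstanceOf[AnyRef]
}
```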
On Mon, May 11, 2015 at 1:49 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
This is really strange.
# Spark 1.3.1
print type(results)
class
Looks like you're right:
https://amplab.cs.berkeley.edu/jenkins/view/Spark/job/Spark-1.3-Maven-with-YARN/HADOOP_PROFILE=hadoop-2.4,label=centos/427/console
[error]
Andrew:
Do you think the -M and -A options described here can be used in test runs ?
http://scalatest.org/user_guide/using_the_runner
Cheers
On Wed, May 6, 2015 at 5:41 PM, Andrew Or and...@databricks.com wrote:
Dear all,
I'm sure you have all noticed that the Spark tests have been fairly
From which site did you download the tar ball ?
Which package type did you choose (pre-built for which distro) ?
Thanks
On Wed, May 6, 2015 at 7:16 PM, Praveen Kumar Muthuswamy
muthusamy...@gmail.com wrote:
Hi
I have been trying to install latest spark verison and downloaded the .tgz
Looks like mismatch of jackson version.
Spark uses:
<fasterxml.jackson.version>2.4.4</fasterxml.jackson.version>
FYI
On Wed, May 6, 2015 at 8:00 AM, A.M.Chan kaka_1...@163.com wrote:
Hey guys, I met this exception while testing SQL/Columns.
I didn't change the pom or the core project.
In
+1
On Sat, May 2, 2015 at 1:09 PM, Mridul Muralidharan mri...@gmail.com
wrote:
We could build on minimum jdk we support for testing pr's - which will
automatically cause build failures in case code uses newer api ?
Regards,
Mridul
On Fri, May 1, 2015 at 2:46 PM, Reynold Xin
Pramod:
Please remember to run Zinc so that the build is faster.
Cheers
On Fri, May 1, 2015 at 9:36 AM, Ulanov, Alexander alexander.ula...@hp.com
wrote:
Hi Pramod,
For cluster-like tests you might want to use the same code as in mllib's
LocalClusterSparkContext. You can rebuild only the
IMHO I would go with choice #1
Cheers
On Wed, Apr 29, 2015 at 10:03 PM, Reynold Xin r...@databricks.com wrote:
We definitely still have the name collision problem in SQL.
On Wed, Apr 29, 2015 at 10:01 PM, Punyashloka Biswal
punya.bis...@gmail.com
wrote:
Do we still have to keep the
+1 on ending support for Java 6.
BTW from https://www.java.com/en/download/faq/java_7.xml :
After April 2015, Oracle will no longer post updates of Java SE 7 to its
public download sites.
On Thu, Apr 30, 2015 at 1:34 PM, Punyashloka Biswal punya.bis...@gmail.com
wrote:
I'm in favor of ending
Looks like this has been taken care of:
commit beeafcfd6ee1e460c4d564cd1515d8781989b422
Author: Patrick Wendell patr...@databricks.com
Date: Thu Apr 30 20:33:36 2015 -0700
Revert [SPARK-5213] [SQL] Pluggable SQL Parser Support
On Thu, Apr 30, 2015 at 7:58 PM, zhazhan
But it is hard to know how long customers stay with their most recent
download.
Cheers
On Thu, Apr 30, 2015 at 2:26 PM, Sree V sree_at_ch...@yahoo.com.invalid
wrote:
If there is any possibility of getting the download counts, then we can use
it as EOS criteria as well. Say, if download counts
I found:
https://issues.apache.org/jira/browse/SPARK-6573
On Apr 20, 2015, at 4:29 AM, Peter Rudenko petro.rude...@gmail.com wrote:
Sounds very good. Is there a jira for this? Would be cool to have in 1.4,
because currently cannot use dataframe.describe function with NaN values,
need to
The image didn't go through.
I think you were referring to:
override def map[R: ClassTag](f: Row => R): RDD[R] = rdd.map(f)
Cheers
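A minimal sketch of why the `R: ClassTag` bound appears in a signature like that one: materializing results into a JVM array at runtime needs the element class. Names below are illustrative, not Spark's API.

```scala
import scala.reflect.ClassTag

// toArray needs a ClassTag[R] to allocate the right Array[R] at runtime;
// without the context bound this would not compile.
def mapToArray[A, R: ClassTag](xs: Seq[A])(f: A => R): Array[R] =
  xs.map(f).toArray
```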
On Fri, Apr 17, 2015 at 6:07 AM, Olivier Girardot
o.girar...@lateral-thoughts.com wrote:
Hi everyone,
I had an issue trying to use Spark SQL from Java (8 or
with spilling, bypass merge-sort
Any pointers ?
Thanking you.
With Regards
Sree
On Thursday, April 16, 2015 12:01 PM, Ted Yu yuzhih...@gmail.com
wrote:
You can get some idea by looking at the builds here:
https://amplab.cs.berkeley.edu/jenkins/view/Spark/job/Spark-1.2-Maven
From SparkUI.scala :
def getUIPort(conf: SparkConf): Int = {
  conf.getInt("spark.ui.port", SparkUI.DEFAULT_PORT)
}
Better retrieve effective UI port before probing.
Cheers
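A plain-Scala analogue of that advice, with a Map standing in for SparkConf; 4040 mirrors Spark's documented UI default.

```scala
object UiPort {
  val DefaultPort = 4040  // Spark's default web UI port
  // Effective port: the configured value, if any, wins over the default.
  def getUIPort(conf: Map[String, String]): Int =
    conf.get("spark.ui.port").map(_.toInt).getOrElse(DefaultPort)
}
```

Probing the port returned by such a lookup, rather than hard-coding 4040, is what "retrieve effective UI port before probing" amounts to.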
On Sat, Apr 11, 2015 at 2:38 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
So basically, to tell if the
Take a look at the maven-shade-plugin in pom.xml.
Here is the snippet for org.spark-project.jetty :
<relocation>
  <pattern>org.eclipse.jetty</pattern>
  <shadedPattern>org.spark-project.jetty</shadedPattern>
  <includes>
bq. writing the output (to Amazon S3) failed
What's the value of fs.s3.maxRetries ?
Increasing the value should help.
Cheers
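A sketch of where the fs.s3.maxRetries setting Ted names would live; check the key against your Hadoop version's core-default.xml before relying on it.

```xml
<!-- core-site.xml fragment; stock Hadoop defaults this to 4 -->
<property>
  <name>fs.s3.maxRetries</name>
  <value>10</value>
</property>
```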
On Wed, Apr 1, 2015 at 8:34 AM, Romi Kuntsman r...@totango.com wrote:
What about communication errors and not corrupted files?
Both when reading input and when writing
Sounds good to me.
On Tue, Mar 31, 2015 at 6:12 PM, sequoiadb mailing-list-r...@sequoiadb.com
wrote:
Hey,
start-slaves.sh script is able to read from slaves file and start slaves
node in multiple boxes.
However in standalone mode if I want to use multiple masters, I’ll have to
start
Issues are tracked on Apache JIRA:
https://issues.apache.org/jira/browse/SPARK/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel
Cheers
On Wed, Mar 25, 2015 at 1:51 PM, Igor Costa igorco...@apache.org wrote:
Hi there Guys.
I want to be more collaborative to Spark, but I have
Please take a look
at core/src/main/scala/org/apache/spark/SparkStatusTracker.scala, around
line 58:
def getActiveStageIds(): Array[Int] = {
Cheers
On Fri, Mar 20, 2015 at 3:59 PM, xing ehomec...@gmail.com wrote:
getStageInfo in self._jtracker.getStageInfo below seems not
When I enter http://spark.apache.org/docs/latest/ into Chrome address bar,
I saw 1.3.0
Cheers
On Sun, Mar 15, 2015 at 11:12 AM, Patrick Wendell pwend...@gmail.com
wrote:
Cheng - what if you hold shift+refresh? For me the /latest link
correctly points to 1.3.0
On Sun, Mar 15, 2015 at 10:40
Looks like github is functioning again (I no longer encounter this problem
when pushing to hbase repo).
Do you want to give it a try ?
Cheers
On Tue, Mar 10, 2015 at 6:54 PM, Michael Armbrust mich...@databricks.com
wrote:
FYI: https://issues.apache.org/jira/browse/INFRA-9259
bq. to be able to run my tests in sbt, though, it makes the development
iterations much faster.
Was the preference for sbt due to long maven build time ?
Have you started Zinc on your machine ?
Cheers
On Fri, Feb 27, 2015 at 11:10 AM, Imran Rashid iras...@cloudera.com wrote:
Has anyone else
a full rebuild of those
projects even when I haven't touched them.
On Fri, Feb 27, 2015 at 1:14 PM, Ted Yu yuzhih...@gmail.com wrote:
bq. to be able to run my tests in sbt, though, it makes the development
iterations much faster.
Was the preference for sbt due to long maven build time
nicholas.cham...@gmail.com wrote:
lol yeah, I changed the path for the email... turned out to be the issue
itself.
On Wed Feb 11 2015 at 2:43:09 PM Ted Yu yuzhih...@gmail.com wrote:
I see.
'/path/to/spark-1.2.1-bin-hadoop2.4' didn't contain space :-)
On Wed, Feb 11, 2015 at 2:41 PM, Nicholas
I downloaded 1.2.1 tar ball for hadoop 2.4
I got:
ls lib/
datanucleus-api-jdo-3.2.6.jar datanucleus-rdbms-3.2.9.jar
spark-assembly-1.2.1-hadoop2.4.0.jar
datanucleus-core-3.2.10.jar  spark-1.2.1-yarn-shuffle.jar
spark-examples-1.2.1-hadoop2.4.0.jar
FYI
On Wed, Feb 11, 2015 at 2:27 PM,
spark-assembly-1.2.1-hadoop2.4.0.jar
spark-examples-1.2.1-hadoop2.4.0.jar
So that looks correct… Hmm.
Nick
On Wed Feb 11 2015 at 2:34:51 PM Ted Yu yuzhih...@gmail.com wrote:
I downloaded 1.2.1 tar ball for hadoop 2.4
I got:
ls lib/
datanucleus-api-jdo-3.2.6.jar datanucleus-rdbms
Congratulations, Cheng, Joseph and Sean.
On Tue, Feb 3, 2015 at 2:53 PM, Nicholas Chammas nicholas.cham...@gmail.com
wrote:
Congratulations guys!
On Tue Feb 03 2015 at 2:36:12 PM Matei Zaharia matei.zaha...@gmail.com
wrote:
Hi all,
The PMC recently voted to add three new committers:
Have you read / followed this ?
https://cwiki.apache.org/confluence/display/SPARK
/Useful+Developer+Tools#UsefulDeveloperTools-BuildingSparkinIntelliJIDEA
Cheers
On Sat, Jan 31, 2015 at 8:01 PM, Yafeng Guo daniel.yafeng@gmail.com
wrote:
Hi,
I'm setting up a dev environment with Intellij
How many profiles (hadoop / hive /scala) would this development environment
support ?
Cheers
On Tue, Jan 20, 2015 at 4:13 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
What do y'all think of creating a standardized Spark development
environment, perhaps encoded as a Vagrantfile, and
Please take a look at SPARK-4048 and SPARK-5108
Cheers
On Sat, Jan 17, 2015 at 10:26 PM, Gil Vernik g...@il.ibm.com wrote:
Hi,
I took a source code of Spark 1.2.0 and tried to build it together with
hadoop-openstack.jar ( To allow Spark an access to OpenStack Swift )
I used Hadoop 2.6.0.
/
From: Ted Yu [yuzhih...@gmail.com]
Sent: Thursday, January 8, 2015 17:43
To: Tony Reix
Cc: dev@spark.apache.org
Subject: Re: Results of tests
Here it is:
[centos] $
/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.0.5/bin/mvn
-DHADOOP_PROFILE=hadoop-2.4 -Dlabel=centos
might be able to
integrate the PySpark tests here, too (I think it's just a matter of
getting the Python test runner to generate the correct test result XML
output).
On Fri, Jan 9, 2015 at 10:47 AM, Ted Yu yuzhih...@gmail.com wrote:
For a build which uses JUnit, we would see a summary
=centos/testReport/
? (I'm not authorized to look at the configuration part)
Thx !
Tony
--
*From:* Ted Yu [yuzhih...@gmail.com]
*Sent:* Thursday, January 8, 2015 16:11
*To:* Tony Reix
*Cc:* dev@spark.apache.org
*Subject:* Re: Results of tests
Please take a look
converters would be part
of external projects that can be listed with http://spark-packages.org/ I
see your project is already listed there.
—
Sent from Mailbox https://www.dropbox.com/mailbox
On Mon, Jan 5, 2015 at 5:37 PM, Ted Yu yuzhih...@gmail.com wrote:
In my opinion this would be useful - there was another thread where returning
only the value of the first column in the result was mentioned.
Please create a SPARK JIRA and a pull request.
Cheers
On Mon, Jan 5, 2015 at 6:42 AM, tgbaggio gen.tan...@gmail.com wrote:
Hi,
In HBaseConverter.scala
I extracted org/apache/hadoop/hive/common/CompressionUtils.class from the
jar and used hexdump to view the class file.
Bytes 6 and 7 are 00 and 33, respectively.
According to http://en.wikipedia.org/wiki/Java_class_file, the jar was
produced using Java 7.
FYI
On Tue, Dec 30, 2014 at 8:09 PM,
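The byte check described above can be sketched as follows; the major-version-to-JDK mapping is the standard class-file numbering.

```scala
object ClassVersion {
  // Class-file bytes 6 (high) and 7 (low) form the big-endian major version.
  private val majorToJava =
    Map(49 -> "Java 5", 50 -> "Java 6", 51 -> "Java 7", 52 -> "Java 8")
  def javaVersion(b6: Int, b7: Int): Option[String] =
    majorToJava.get((b6 << 8) | b7)
}
```

Bytes 00 and 33 give major version 0x33 = 51, hence the "produced using Java 7" conclusion.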
Can you try this command ?
sbt/sbt -Pyarn -Phadoop-2.4 -Dhadoop.version=2.6.0 -Phive assembly
On Fri, Dec 26, 2014 at 6:15 PM, Alessandro Baretta alexbare...@gmail.com
wrote:
I am building spark with sbt off of branch 1.2. I'm using the following
command:
sbt/sbt -Pyarn -Phadoop-2.3
Andy:
I saw two emails from you from yesterday.
See this thread: http://search-hadoop.com/m/JW1q5opRsY1
Cheers
On Fri, Dec 19, 2014 at 12:51 PM, Andy Konwinski andykonwin...@gmail.com
wrote:
Yesterday, I changed the domain name in the mailing list archive settings
to remove .incubator so
bq. I may move on to trying Maven.
Maven is my favorite :-)
On Sat, Dec 6, 2014 at 10:54 AM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
Ted,
I posted some updates
Have you seen this thread http://search-hadoop.com/m/JW1q5xxSAa2 ?
Test categorization in HBase is done through maven-surefire-plugin
Cheers
On Thu, Dec 4, 2014 at 4:05 PM, Nicholas Chammas nicholas.cham...@gmail.com
wrote:
fwiw, when we did this work in HBase, we categorized the tests. Then