I have worked with various ASF projects for 4+ years now. Sure, ASF
projects can delete code as they see fit. But this is the first time I
have really seen code being "moved out" of a project without discussion. I
am sure you can do this without violating ASF policy, but the explanation
for that
The credentials file approach (using a keytab for Spark apps) will only
update HDFS tokens. YARN's AMRM tokens should be taken care of by YARN
internally.
Steve - correct me if I am wrong here: If the AMRM tokens are disappearing
it might be a YARN bug (does the AMRM token have a 7 day limit as
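The keytab approach described above is exposed through spark-submit flags in later Spark releases; a minimal sketch, where the principal, keytab path, class name, and jar are all placeholders:

```shell
# Sketch: give Spark a keytab so it can re-obtain HDFS delegation tokens
# for a long-running YARN app (all names below are hypothetical)
spark-submit \
  --master yarn \
  --principal user@EXAMPLE.COM \
  --keytab /etc/security/keytabs/user.keytab \
  --class com.example.MyApp \
  myapp.jar
```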
+1, much better than having a new PR to fix something for scala-2.11
every time a patch breaks it.
Thanks,
Hari Shreedharan
> On Oct 9, 2015, at 11:47 AM, Michael Armbrust <mich...@databricks.com> wrote:
>
> How about just fixing the warning? I get it; it doesn't
+1. Build looks good, ran a couple apps on YARN
Thanks,
Hari
On Fri, Jun 5, 2015 at 10:52 AM, Yin Huai yh...@databricks.com wrote:
Sean,
Can you add -Phive -Phive-thriftserver and try those Hive tests?
Thanks,
Yin
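Spelled out as a command, that might look like the following (only the two profile flags come from the message above; the rest of the invocation is an assumption about the usual build command of the time):

```shell
# Enable the Hive profiles when building, then run the Hive suites
mvn -Phive -Phive-thriftserver -DskipTests clean package
mvn -Phive -Phive-thriftserver -pl sql/hive test
```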
On Fri, Jun 5, 2015 at 5:19 AM, Sean Owen so...@cloudera.com wrote:
/03e263f5b527cf574f4ffcd5cd886f7723e3756e
- Patrick
On Mon, Apr 6, 2015 at 2:31 PM, Mark Hamstra m...@clearstorydata.com wrote:
Is that correct, or is the JIRA just out of sync, since TD's PR was merged?
https://github.com/apache/spark/pull/5008
On Mon, Apr 6, 2015 at 11:10 AM, Hari Shreedharan
hshreedha
Here you are:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28571/consoleFull
On Fri, Mar 13, 2015 at 11:58 AM, shane knapp skn...@berkeley.edu wrote:
link to a build, please?
On Fri, Mar 13, 2015 at 11:53 AM, Hari Shreedharan
hshreedha...@cloudera.com wrote:
Looks
+1. Jira cleanup would be good. Please let me know if I can help in some way!
Thanks, Hari
On Fri, Feb 6, 2015 at 11:56 AM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
Do we need some new components to be added to the JIRA project?
Like:
- scheduler
- YARN
-
Congrats Cheng, Joseph and Owen! Well done!
Thanks, Hari
On Tue, Feb 3, 2015 at 2:55 PM, Ted Yu yuzhih...@gmail.com wrote:
Congratulations, Cheng, Joseph and Sean.
On Tue, Feb 3, 2015 at 2:53 PM, Nicholas Chammas nicholas.cham...@gmail.com
wrote:
Congratulations guys!
On Tue Feb 03
the above. I'm just suggesting providing an API that doesn't get in
the way of exactly-once.
On Fri, Dec 19, 2014 at 3:57 PM, Hari Shreedharan
hshreedha...@cloudera.com
wrote:
Can you explain your basic algorithm for the once-only-delivery? It is
quite a bit of very Kafka-specific code
failure, restart from
the last committed offsets.
Yes, this approach is biased towards the etl-like use cases rather
than near-realtime-analytics use cases.
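The restart-from-last-committed-offsets idea can be sketched without any Spark or Kafka at all; everything below (the in-memory log and store) is a made-up stand-in for transactional storage:

```python
# Toy sketch of restart-from-committed-offsets: the batch results and the new
# offset are committed together, so a crash between processing and committing
# simply replays the same batch on restart, with no duplicated output.

class Store:
    """Stand-in for transactional storage; commit() is atomic in a real system."""
    def __init__(self):
        self.offset = 0        # last committed read position
        self.results = []      # committed outputs

    def commit(self, batch_results, new_offset):
        self.results.extend(batch_results)
        self.offset = new_offset


def run_batch(log, store, crash_before_commit=False):
    batch = log[store.offset:]            # resume from the last committed offset
    processed = [x * 2 for x in batch]    # arbitrary per-record work
    if crash_before_commit:
        return                            # simulated failure: nothing committed
    store.commit(processed, len(log))


log = [1, 2, 3]
store = Store()
run_batch(log, store, crash_before_commit=True)   # first attempt fails
run_batch(log, store)                             # retry replays the same batch
assert store.results == [2, 4, 6]                 # no duplicates in the output
```

The trade-off the thread mentions is visible here: the whole batch is reprocessed on failure, which suits ETL-like jobs better than low-latency analytics.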
On Thu, Dec 18, 2014 at 3:27 PM, Hari Shreedharan
hshreedha...@cloudera.com
wrote:
I get what you are saying. But getting
, at 12:06 PM, Hari Shreedharan
hshreedha...@cloudera.com wrote:
Hi Dibyendu,
Thanks for the details on the implementation. But I still do not believe
that it guarantees no duplicates - what they achieve is that the same batch is
processed exactly the same way every time (but see it may
of the idea, chances are it's something people need.
On Thu, Dec 18, 2014 at 1:44 PM, Hari Shreedharan hshreedha...@cloudera.com
wrote:
Hi Cody,
I am an absolute +1 on SPARK-3146. I think we can implement something
pretty simple and lightweight for that one.
For the Kafka DStream skipping the WAL
Seems like a comment on that page mentions a fix, which would add yet another
profile though — specifically telling mvn that if it is an Apple JDK, use the
classes.jar as the tools.jar as well, since Apple-packaged JDK 6 bundled them
together.
Link:
Yep, you’d need to shade jars to ensure all your dependencies are in the
classpath.
Thanks,
Hari
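One common way to produce such a "bigger" (uber) jar is the sbt-assembly plugin; a sketch, assuming the plugin is already wired into the project's project/plugins.sbt:

```shell
# With sbt-assembly configured, this produces an uber jar containing the app
# plus its (non-provided) dependencies
sbt assembly
# the jar lands under target/scala-<version>/ and is what you pass to spark-submit
```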
On Wed, Nov 12, 2014 at 3:23 AM, Ted Malaska ted.mala...@cloudera.com
wrote:
Hey this is Ted
Are you using Shade when you build your jar and are you using the bigger
jar? Looks like classes
Looks like that port is not available because another app is using it.
Can you take a look at netstat -a and use a port that is free?
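The check suggested here can be done along these lines (4141 is just an example port; the exact netstat flags vary by platform):

```shell
# See whether anything is already bound to the port (4141 is an example);
# a LISTEN entry means another process holds it, TIME_WAIT means a recently
# closed socket is still draining
netstat -an | grep 4141
```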
Thanks,
Hari
On Fri, Nov 7, 2014 at 2:05 PM, Jeniba Johnson
jeniba.john...@lntinfotech.com wrote:
Hi,
I have installed spark-1.1.0 and apache flume
First, can you try a different port?
TIME_WAIT is basically a timeout before a socket is completely decommissioned
and the port becomes available for binding again. If you wait a few minutes and
still see a startup issue, can you also send the error logs? From what I
can see, the port
Did you start a Flume agent to push data to the relevant port?
Thanks,
Hari
On Fri, Nov 7, 2014 at 2:05 PM, Jeniba Johnson
jeniba.john...@lntinfotech.com wrote:
Hi,
I have installed spark-1.1.0 and apache flume 1.4 for running streaming
example FlumeEventCount. Previously the code was
and lightweight as
proposed here. Really the proposal on the table is just to codify the
current de-facto process to make sure we stick by it as we scale. If
we want to add more formality to it or strictness, we can do it later.
- Patrick
On Thu, Nov 6, 2014 at 3:29 PM, Hari Shreedharan
hshreedha
I have seen this on sbt sometimes. I usually do an sbt clean and that fixes it.
Thanks,
Hari
On Tue, Nov 4, 2014 at 3:13 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
FWIW, the official build instructions are here:
https://github.com/apache/spark#building-spark
On Tue, Nov 4, 2014
requests
with either one.
- Patrick
On Fri, Oct 24, 2014 at 1:39 PM, Hari Shreedharan
hshreedha...@cloudera.com wrote:
Over the last few months, it seems like we have selected Maven to be the
official build system for Spark.
I realize that removing the sbt build may not be easy, but it might
.
- Patrick
On Fri, Oct 24, 2014 at 1:55 PM, Hari Shreedharan
hshreedha...@cloudera.com wrote:
I have a zinc server running on my Mac, and Maven compilation is much faster
than it was before I had it running. Is the sbt build still faster (sorry,
it's been a long time since I did a build with sbt
The sbt executable that is in the Spark repo can be used to build Spark without
any other setup (it will download the sbt jars etc.).
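For example (the script location matches the Spark source layout of that era, and the profile flags are illustrative; pick the ones matching your Hadoop version):

```shell
# Run from the top of the Spark checkout; the wrapper script downloads the
# sbt launcher on first use, so no separate sbt installation is needed
sbt/sbt -Pyarn -Phadoop-2.4 assembly
```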
Thanks,
Hari
On Mon, Oct 20, 2014 at 5:16 PM, Sean Owen so...@cloudera.com wrote:
Maven is at least built in to OS X (well, with dev tools). You don't
even
Sean - I think only the ones in 1726 are enough. It is weird that any
class that uses the test-jar actually requires the streaming jar to be
added explicitly. Shouldn't Maven take care of this?
I posted some comments on the PR.
--
Thanks,
Hari
Sean Owen so...@cloudera.com
August
Jay, running sbt compile or assembly should generate the sources.
On Monday, August 11, 2014, Devl Devel devl.developm...@gmail.com wrote:
Hi
So far I've been managing to build Spark from source but since a change in
spark-streaming-flume I have no idea how to generate classes (e.g.
Add this to your .bash_profile (or .bashrc) - that will fix it.
export _JAVA_OPTIONS=-Djava.awt.headless=true
Hari
On Sun, Jul 20, 2014 at 1:56 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
I just created SPARK-2602
https://issues.apache.org/jira/browse/SPARK-2602 to
track this