plicated
problems.
On Wed, Jun 18, 2014 at 4:05 PM, Jeremy Lee
wrote:
> Ah, right. So only the launch script has changed. Everything else is still
> essentially binary compatible?
>
> Well, that makes it too easy! Thanks!
>
>
> On Wed, Jun 18, 2014 at 2:35 PM, Patrick Wendell wrote:
> On Tue, Jun 17, 2014 at 9:29 PM, Jeremy Lee
> wrote:
> > I am about to spin up some new clusters, so I may give that a go... any
> > special instructions for making them work? I assume I use the
> > "--spark-git-repo=" option on the spark-ec2 command. Is it as
nches of Spark. We're likely to
> > make a 1.0.1 release soon (this patch being one of the main reasons),
> > but if you are itching for this sooner, you can just checkout the head
> > of branch-1.0 and you will be able to use r3.XXX instances.
> >
> > - Patrick
>
ce if there had been some
progress on that issue. Let me know if I can help with testing and whatnot.
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers
fused about sample code from three
versions ago.
I'm even thinking of learning maven, if it means I never have to use sbt
again. Does it mean that?
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers
I shut down my first (working) cluster and brought up a fresh one... and
It's been a bit of a horror and I need to sleep now. Should I be worried
about these errors? Or did I just have the old log4j.config tuned so I
didn't see them?
I
14/06/08 16:32:52 ERROR scheduler.JobScheduler: Error running
I read it more carefully, and window() might actually work for some other
stuff like logs. (assuming I can have multiple windows with entirely
different attributes on a single stream..)
Thanks for that!
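For reference, multiple windows with different durations and logic over a single DStream could look like the sketch below. This is an assumption-laden illustration, not code from this thread: the socket source, host/port, durations, and the "ERROR" filter are all placeholders. Note that every window and slide duration must be a multiple of the batch interval (5 seconds here).

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Sketch: two independent windows over the same input stream.
// Source, durations, and filter predicate are placeholder assumptions.
object MultiWindowSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("MultiWindowSketch")
    val ssc  = new StreamingContext(conf, Seconds(5)) // 5s batch interval

    val lines = ssc.socketTextStream("localhost", 9999)

    // Window 1: last 30s of data, recomputed every 10s
    val recent = lines.window(Seconds(30), Seconds(10))
    recent.count().print()

    // Window 2: last 5 minutes, recomputed every 60s, different logic
    val errors = lines.filter(_.contains("ERROR"))
                      .window(Seconds(300), Seconds(60))
    errors.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```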
On Sun, Jun 8, 2014 at 11:11 PM, Jeremy Lee
wrote:
> Yes.. but from what I underst
ces (I gave them tiny EBS) and I think I crashed them.
On Sat, Jun 7, 2014 at 10:23 PM, Gino Bustelo wrote:
> Have you thought of using window?
>
> Gino B.
>
> > On Jun 6, 2014, at 11:49 PM, Jeremy Lee
> wrote:
> >
> >
> > It's going well enough that
on events?
What's the best practise for keeping persistent data for a streaming app?
(Across restarts) And to clean up on termination?
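One common answer to the restart question is DStream checkpointing with `StreamingContext.getOrCreate`: on a clean start it builds a fresh context, and after a restart it rebuilds the context (and state) from the checkpoint directory. A hedged sketch, assuming an HDFS checkpoint path and a 5-second batch interval (both placeholders):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Sketch: survive driver restarts via checkpointing.
// The checkpoint path is a placeholder, not from this thread.
object CheckpointSketch {
  val checkpointDir = "hdfs:///spark/checkpoints/myapp"

  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("CheckpointSketch")
    val ssc  = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint(checkpointDir)
    // ... define the DStream graph here, before returning ...
    ssc
  }

  def main(args: Array[String]): Unit = {
    // Recovers from checkpointDir if it exists, else calls createContext()
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}
```

Cleanup on termination is the part checkpointing does not cover; deleting the checkpoint directory on intentional shutdown would have to be done explicitly.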
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers
le, but nothing is in it other than a "_temporary" subdir.
>
> I'm sure I'm confused here, but not sure where. Help?
>
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers
Nope, sorry, nevermind!
I looked at the source, and it was pretty obvious that it didn't implement
that yet, so I've ripped the classes out and am mutating them into new
receivers right now...
... starting to get the hang of this.
On Fri, Jun 6, 2014 at 1:07 PM, Jeremy Lee
wrot
Otherwise, yes, this is now the fun part!
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers
active network connections. discuss.)
On Thu, Jun 5, 2014 at 5:46 PM, Nick Pentreath
wrote:
> Great - well we do hope we hear from you, since the user list is for
> interesting success stories and anecdotes, as well as blog posts etc too :)
>
>
> On Thu, Jun 5, 2014 at 9:40 A
e very easy (mvn
> package). I can send a pom.xml for a skeleton project if you need
> —
> Sent from Mailbox <https://www.dropbox.com/mailbox>
>
>
> On Thu, Jun 5, 2014 at 6:59 AM, Jeremy Lee wrote:
>
>> Hmm.. That's not working so well for me. First, I
even get it to
build.
*sigh*
Is it going to be easier to just copy the external/ source code into my own
project? Because I will... especially if creating "Uberjars" takes this
long every... single... time...
On Thu, Jun 5, 2014 at 8:52 AM, Jeremy Lee
wrote:
> Thanks Patrick!
>
>
airly controversial on there, and it got me thinking.
>>>>
>>>> Scala appears to be the preferred language to work with in Spark, and
>>>> Spark itself is written in Scala, right?
>>>>
>>>> I know that often times a successful project evolves gradually out of
>>>> something small, and that the choice of programming language may not always
>>>> have been made consciously at the outset.
>>>>
>>>> But pretending that it was, why is Scala the preferred language of
>>>> Spark?
>>>>
>>>> Nick
>>>>
>>>>
>>>> --
>>>> View this message in context: Why Scala?
>>>> <http://apache-spark-user-list.1001560.n3.nabble.com/Why-Scala-tp6536.html>
>>>> Sent from the Apache Spark User List mailing list archive
>>>> <http://apache-spark-user-list.1001560.n3.nabble.com/> at Nabble.com
>>>> <http://nabble.com/>.
>>>>
>>>
>>>
>>>
>>
>>
>
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers
0-1.0.jar
> >>
> >> Seems redundant to me since I thought that the JAR as argument is
> copied to
> >> driver and made available. But this solved it for me so perhaps give it
> a
> >> try?
> >>
> >>
> >>
> >> On Wed, Jun
earning scala... Great Turing's Ghost, it's the dream
> language we've theorized about for years! I hadn't realized!
>
> Indeed, glad you’re enjoying it.
>
"Enjoying", not yet, alas. I'm sure I'll get there. But I do understand the
implications of a mixed functional-imperative language with closures and
lambdas. That is serious voodoo.
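A plain-Scala sketch of that mix: a lambda that closes over mutable state from its enclosing scope, sitting alongside purely functional combinators. All names here are illustrative, not from the thread.

```scala
// Closures in a mixed functional-imperative language:
// `bump` captures and mutates `seen` from its defining scope,
// while `filter` below stays purely functional.
object ClosureSketch {
  def countWithClosure(xs: Seq[Int]): Int = {
    var seen = 0                       // imperative: mutable local state
    val bump = (_: Int) => seen += 1   // lambda closing over `seen`
    xs.foreach(bump)
    seen
  }

  def main(args: Array[String]): Unit = {
    val evens = (1 to 10).filter(_ % 2 == 0)  // functional style
    println(countWithClosure(evens))          // prints 5
  }
}
```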
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers
list the jars to be distributed. (Is that
deprecated?)
One part of the documentation says:
"Once you have an assembled jar you can call the bin/spark-submit script
as shown here while passing your jar."
but another says:
"application-jar: Path to a bundled jar including your application and all
dependencies. The URL must be globally visible inside of your cluster, for
instance, an hdfs:// path or a file:// path that is present on all nodes."
I suppose both could be correct if you take a certain point of view.
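If the jar-listing API in question is `SparkConf.setJars`, a minimal sketch looks like the following. The assembly-jar path is a placeholder; this predates spark-submit's `application-jar` handling, which largely supersedes it.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of listing jars programmatically via SparkConf.setJars.
// The jar path is a placeholder assumption.
object SetJarsSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("SetJarsSketch")
      .setJars(Seq("target/myapp-assembly-1.0.jar")) // shipped to executors
    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 100).reduce(_ + _))  // sanity check: 5050
    sc.stop()
  }
}
```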
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers
s! I hadn't realized!
On Mon, Jun 2, 2014 at 12:05 PM, Matei Zaharia
wrote:
> FYI, I opened https://issues.apache.org/jira/browse/SPARK-1990 to track
> this.
>
> Matei
>
>
> On Jun 1, 2014, at 6:14 PM, Jeremy Lee
> wrote:
>
> Sort of.. there were two separ
enough version of python.
>
> Spark-ec2 itself has a flag "-a" that allows you to give a specific
> AMI. This flag is just an internal tool that we use for testing when
> we spin new AMI's. Users can't set that to an arbitrary AMI because we
> tightly control things lik
4 INFO master.Master:
>> akka.tcp://spark@ip-10-100-75-70.ec2.internal:38485 got disassociated,
>> removing it.
>> 14/05/30 18:05:54 ERROR remote.EndpointWriter: AssociationError
>> [akka.tcp://sparkMaster@ip-10-100-184-45.ec2.internal:7077]
>> -> [akka.tcp://spark@ip-10-100-75-70.ec2.internal:38485]: Error
>> [Association failed with
>> [akka.tcp://spark@ip-10-100-75-70.ec2.internal:38485]] [
>> akka.remote.EndpointAssociationException: Association failed with [
>> akka.tcp://spark@ip-10-100-75-70.ec2.internal:38485]
>> Caused by:
>> akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2:
>> Connection refused: ip-10-100-75-70.ec2.internal/10.100.75.70:38485
>>
>>
>>
>
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers
>> Spot instance requests are not supported for this AMI.
>>
>> SuSE Linux Enterprise Server 11 sp3 (HVM) - ami-1a88bb5f
>> Not tested - costs 10x more for spot instances, not economically viable.
>>
>> Ubuntu Server 14.04 LTS (HVM) - ami-f64f77b3
>> Provisions ser
nt.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> at
>
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
>
>
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-EC2-tp6638.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers
"...can have a spark cluster up and running in
five minutes." But it's been three days for me so far. I'm about to bite
the bullet and start building my own AMI's from scratch... if anyone can
save me from that, I'd be most grateful.
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers
;hvm"
Clearly a masterpiece of hacking. :-) I haven't tested all of them. The r3
set seems to act like i2.
On Sun, Jun 1, 2014 at 12:45 AM, Jeremy Lee
wrote:
> Hi there, Patrick. Thanks for the reply...
>
> It wouldn't surprise me that AWS Ubuntu has Python 2.7. Ub
hon 2.7+. I regularly run them from the AWS Ubuntu 12.04 AMI... that
> might be a good place to start. But if there is a straightforward way to
> make them compatible with 2.6 we should do that.
>
> For r3.large, we can add that to the script. It's a newer type. Any
> interest in con
re Dead-On-Arrival when run
according to the instructions. Sorry.
Any suggestions on how to proceed? I'll keep trying to fix the webserver,
but (a) changes to httpd.conf get blown away by "resume", and (b) anything
I do has to be redone every time I provision another cluster. Ugh.
--
Jeremy Lee BCompSci(Hons)
The Unorthodox Engineers