On Thu, Jun 21, 2018 at 4:51 PM, Chawla, Sumit wrote:
Hi,
I have been trying to do this simple operation: I want to land all
values with one key in the same partition, and not have any different key in
the same partition. Is this possible? I am getting b and c
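What's being asked for is stronger than ordinary hash partitioning: not just "all values for a key in one partition," but "exactly one key per partition." Here is a minimal pure-Python sketch of that idea; in PySpark the real mechanism is rdd.partitionBy(numPartitions, partitionFunc) with a function mapping each key to its own index, and the names below (ExactKeyPartitioner, assign) are illustrative, not Spark API.

```python
# Sketch of a one-partition-per-key partitioner. All values for a key
# land in the same partition, and no two distinct keys share one.

class ExactKeyPartitioner:
    """Assigns one partition per distinct key (hypothetical helper)."""

    def __init__(self, keys):
        # Fix the mapping up front so partition ids are deterministic.
        self._index = {k: i for i, k in enumerate(sorted(keys))}

    @property
    def num_partitions(self):
        return len(self._index)

    def get_partition(self, key):
        return self._index[key]


def assign(pairs, partitioner):
    """Group (key, value) pairs into partitions, as partitionBy would."""
    parts = [[] for _ in range(partitioner.num_partitions)]
    for key, value in pairs:
        parts[partitioner.get_partition(key)].append((key, value))
    return parts


data = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", 5)]
p = ExactKeyPartitioner({k for k, _ in data})
partitions = assign(data, p)
# Each partition now holds exactly one key's values.
```

Note this requires knowing the distinct keys up front (or bounding them), which is why Spark's built-in HashPartitioner doesn't give this guarantee: it can map several keys to one partition.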
In researching and discussing these issues with Cloudera and others, we've
been told that only one mechanism is supported for starting Spark jobs: the
*spark-submit* scripts.
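If spark-submit really is the only supported entry point, a programmatic launcher mostly reduces to building the command line and executing it. A small sketch, assuming spark-submit is on PATH; --master, --deploy-mode, and --class are standard spark-submit flags, while the jar path and class name here are placeholders:

```python
# Sketch: building a spark-submit invocation programmatically instead
# of instantiating a SparkContext in-process.
import subprocess

def build_spark_submit(app_jar, main_class, master="local[*]",
                       deploy_mode="client", app_args=()):
    """Return the argv list for a spark-submit invocation."""
    return [
        "spark-submit",
        "--master", master,
        "--deploy-mode", deploy_mode,
        "--class", main_class,
        app_jar,
        *app_args,
    ]

cmd = build_spark_submit("app.jar", "com.example.Main",
                         app_args=["--input", "data/"])
# subprocess.run(cmd, check=True)  # uncomment where spark-submit exists
```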
Is this new? We've been submitting jobs directly from a programmatically
created SparkContext (instead of through spark-submit).
Speaking of this - is there a standard way of writing unit tests that
require a SparkContext?
We've ended up copying the code of SharedSparkContext into our own
testing hierarchy, but it occurs to me someone must have solved this already.
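The pattern behind SharedSparkContext is just a class-scoped fixture: create one context for the whole suite, reuse it in every test, stop it once at the end. A sketch in unittest terms; FakeContext is a stand-in so the example runs without pyspark installed - with pyspark it would be SparkContext("local[2]", "test") and its .stop():

```python
# Sketch of the SharedSparkContext pattern: one expensive shared
# resource per test class, torn down once after all tests.
import unittest

class FakeContext:
    """Stand-in for SparkContext so the sketch runs without pyspark."""
    def __init__(self):
        self.stopped = False
    def parallelize(self, data):
        return list(data)
    def stop(self):
        self.stopped = True

class SharedContextTest(unittest.TestCase):
    sc = None

    @classmethod
    def setUpClass(cls):
        cls.sc = FakeContext()   # one context for the whole suite

    @classmethod
    def tearDownClass(cls):
        cls.sc.stop()            # stopped once, after the last test

    def test_parallelize(self):
        self.assertEqual(self.sc.parallelize([1, 2, 3]), [1, 2, 3])

suite = unittest.TestLoader().loadTestsFromTestCase(SharedContextTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The class-level fixture matters precisely because of the one-context-per-JVM limitation discussed below: per-test setUp/tearDown of a real SparkContext is both slow and prone to "multiple contexts" errors.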
Thanks, Marcelo
Instantiating SparkContext directly works. Well, sorta: it has
limitations. For example, see discussions about Spark not really liking
multiple contexts in the same JVM. It also does not work in cluster
deploy mode.
That's fine - when one is doing something out of the ordinary. I've put
this up in https://github.com/apache/spark/pull/5565, and
would very much appreciate comments.
Thanks,
Nathan
On Thu, Dec 19, 2013 at 12:42 AM, Reynold Xin r...@apache.org wrote:
On Wed, Dec 18, 2013 at 12:17 PM, Nathan Kronenfeld
nkronenf...@oculusinfo.com wrote:
Could I get someone to look at PR 5140, please? It's been languishing for
more than two weeks.
--
Nathan Kronenfeld
Senior Visualization Developer
Oculus Info Inc
2 Berkeley Street, Suite 600,
Toronto, Ontario M5A 4J5
Phone: +1-416-203-3003 x 238
Email: nkronenf...@oculusinfo.com
a hole in the wall - does anyone know what I can do
to fix this?
Thanks,
-Nathan
Is there anything that might be changing the version of Jetty used by Spark?
It depends a lot on how you are building things.
It would be good to specify exactly how you're building here.
On Thu, Jul 17, 2014 at 3:43 PM, Nathan Kronenfeld
nkronenf...@oculusinfo.com wrote:
I'm trying to compile the latest code
er, that line being in toDebugString, where it really shouldn't affect
anything (no signature changes or the like)
On Thu, Jul 17, 2014 at 10:58 AM, Nathan Kronenfeld
nkronenf...@oculusinfo.com wrote:
My full build command is:
./sbt/sbt -Dhadoop.version=2.0.0-mr1-cdh4.6.0 clean assembly
Is this already done in some other repo about which I don't know, perhaps?
I know it would save us a lot of time and grief simply to be able to point
a project we build at the right version, and not have to rebuild and deploy
Spark manually.