Hi,
Yes, I managed to register custom metrics by creating an
implementation of org.apache.spark.metrics.source.Source and registering
it with the metrics subsystem.
Source is [Spark] private, so you need to create it under an org.apache.spark
package. In my case, I'm dealing with Spark
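A minimal sketch of what such a Source implementation could look like, assuming a Spark 1.x-era private[spark] Source trait and the Codahale (Dropwizard) MetricRegistry that Spark's metrics subsystem is built on; the class and metric names here are hypothetical:

```scala
// Must live under org.apache.spark because Source is private[spark].
package org.apache.spark.metrics.source

import com.codahale.metrics.{Counter, MetricRegistry}

// Hypothetical application-level metrics source. The Source trait only
// requires a sourceName and a metricRegistry.
class MyAppSource extends Source {
  override val sourceName: String = "myApp"
  override val metricRegistry: MetricRegistry = new MetricRegistry()

  // Example metric: a counter for bytes produced by the application.
  val bytesProduced: Counter =
    metricRegistry.counter(MetricRegistry.name("bytesProduced"))
}
```

It would then need to be registered with the running metrics system, e.g. via SparkEnv's MetricsSystem (also Spark-private, hence the package trick above).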
Hi,
In HBaseConverters.scala
https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/pythonconverters/HBaseConverters.scala
, the Python converter HBaseResultToStringConverter returns only the value of
the first column in the result. In my opinion, it limits
Hi Gerard,
Thanks for the answer! I had a good look at it, but I couldn't figure out
whether one can use that to emit metrics from application code.
Suppose I wanted to monitor the rate of bytes I produce, like so:
stream
.map { input =>
val bytes = produce(input)
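A sketch of how that snippet could mark a Codahale Meter, assuming a Meter has been registered with Spark's metrics system via a custom Source; produce(...) and all names here are the application's own / hypothetical. Since the closure runs on the executors, the Meter has to be reachable there, e.g. through a lazily initialized singleton rather than a serialized field:

```scala
import com.codahale.metrics.{Meter, MetricRegistry}

// Lazily initialized per JVM, so each executor gets its own Meter
// instead of trying to serialize one from the driver.
object ProducedBytes {
  val registry = new MetricRegistry()
  lazy val meter: Meter = registry.meter("bytesProduced")
}

stream.map { input =>
  val bytes = produce(input)            // application code (hypothetical)
  ProducedBytes.meter.mark(bytes.length) // record bytes produced
  bytes
}
```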
We are planning to support HBase as a native data source to Spark SQL in 1.3
(SPARK-3880).
More details will come soon.
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Monday, January 05, 2015 7:37 AM
To: tgbaggio
Cc: dev@spark.apache.org
Subject: Re: python
HBaseConverter is in Spark source tree. Therefore I think it makes sense
for this improvement to be accepted so that the example is more useful.
Cheers
On Mon, Jan 5, 2015 at 7:54 AM, Nick Pentreath nick.pentre...@gmail.com
wrote:
Hey
These converters are actually just intended to be
The pull request builder and SCM-polling builds appear to be working fine,
but the links in pull request comments won't work because the AMP Lab
webserver is still down. In the meantime, though, you can continue to
access Jenkins through https://hadrian.ist.berkeley.edu/jenkins/
On Mon, Jan 5,
Hey François,
Well, at a high-level here is what I thought about the diagram.
- ReceiverSupervisor handles only one Receiver.
- BlockGenerator is part of ReceiverSupervisor, not ReceivedBlockHandler.
- The blocks are inserted into the BlockManager and, if enabled, the
WriteAheadLogManager in parallel, not
Hi Spark devs,
I'm interested in having a committer look at a PR [1] for Mesos, but
there's not an entry for Mesos in the maintainers specialties on the wiki
[2]. Which Spark committers have expertise in the Mesos features?
Thanks!
Andrew
[1] https://github.com/apache/spark/pull/3074
[2]
Absolutely; as I mentioned, by all means submit a PR - I just wanted to point
out that no specific converter is officially supported, although the
interface of course is.
I'm happy to review a PR - just ping me when ready.
—
Sent from Mailbox
On Mon, Jan 5, 2015 at 7:06 PM, Ted Yu
Hello,
I'm using Spark 1.2.0, and when running an application, if I go into the UI
and then into the jobs tab (/jobs/), the job durations are relevant and the
posted durations look OK.
However, when I open the history (history/app-xyz/jobs/) for that job,
the durations are wrong, showing milliseconds
Thanks for reporting this - it definitely sounds like a bug. Please
open a JIRA for it. My guess is that we define the start or end time
of the job based on the current time instead of looking at data
encoded in the underlying event stream. That would cause it to not
work properly when loading
UC Berkeley had some major maintenance done this past weekend, and long
story short, not everything came back. Our primary webserver's NFS is down,
which means we're not serving websites, so the redirect to
Jenkins is failing.
Jenkins is still up and building some jobs, but we will
FYI, ApacheCon North America call for papers is up.
Matei
Begin forwarded message:
Date: January 5, 2015 at 9:40:41 AM PST
From: Rich Bowen rbo...@rcbowen.com
Reply-To: dev d...@community.apache.org
To: dev d...@community.apache.org
Subject: ApacheCon North America 2015 Call For Papers
Hey
These converters are actually just intended to be examples of how to set up a
custom converter for a specific input format. The converter interface is there
to provide flexibility where needed, although with the new Spark SQL data
source interface the intention is that most common use
In my opinion this would be useful - there was another thread where returning
only the value of the first column in the result was mentioned.
Please create a Spark JIRA and a pull request.
Cheers
On Mon, Jan 5, 2015 at 6:42 AM, tgbaggio gen.tan...@gmail.com wrote:
Hi,
In HBaseConverter.scala