Yeah from what I remember it was set defensively. I don't know of a good
way to check if the master is up though. I guess we could poll the Master
Web UI and see if we get a 200/ok response
Shivaram
On Fri, Apr 10, 2015 at 8:24 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
Check this
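The health check discussed above (poll the master web UI until it answers with HTTP 200) can be sketched as below. Note the URL and port are assumptions for illustration: the standalone master web UI commonly listens on 8080, while 4040 is the default per-application UI port; adjust for your cluster.

```python
import time
import urllib.request


def wait_for_master(url="http://master-node:8080", timeout=300, interval=5):
    """Poll the Spark master web UI until it returns HTTP 200, or time out.

    Returns True once a 200 response is seen, False if the deadline passes.
    The default URL/port is a placeholder, not taken from the thread.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # master not up yet: connection refused, DNS failure, timeout
        time.sleep(interval)
    return False
```

A launcher script could call this after starting the master and only then begin registering workers.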
+1 (non-binding)
On Sat, Apr 11, 2015 at 11:48 AM Krishna Sankar ksanka...@gmail.com wrote:
+1. All tests OK (same as RC2)
Cheers
k/
On Fri, Apr 10, 2015 at 11:05 PM, Patrick Wendell pwend...@gmail.com
wrote:
Please vote on releasing the following candidate as Apache Spark version
So basically, to tell if the master is ready to accept slaves, just poll
http://master-node:4040 for an HTTP 200 response?
On Sat, Apr 11, 2015 at 2:42 PM Shivaram Venkataraman
shiva...@eecs.berkeley.edu wrote:
Welcome, Dmitriy, to the Spark dev list!
On Sat, Apr 11, 2015 at 1:14 AM, Dmitriy Setrakyan dsetrak...@apache.org
wrote:
Hello Everyone,
I am one of the committers to Apache Ignite and have noticed some talks on
this dev list about integrating Ignite In-Memory File System (IgniteFS)
with
Hi Dmitriy,
Thanks for the input. As per my previous email, I think it would be good to
have a bridge project that, for example, creates an IgniteFS RDD, similar to
the JDBC or HDFS ones, from which we can extract blocks and populate RDD
partitions. I'll post this proposal on your list.
Thanks
Devl
Hello Everyone,
I am one of the committers to Apache Ignite and have noticed some talks on
this dev list about integrating Ignite In-Memory File System (IgniteFS)
with Spark. We definitely like the idea. If you have any questions about
Apache Ignite at all, feel free to forward them to the Ignite
+1
On Fri, Apr 10, 2015 at 11:07 PM -0700, Patrick Wendell pwend...@gmail.com
wrote:
Please vote on releasing the following candidate as Apache Spark version 1.3.1!
The tag to be voted on is v1.3.1-rc2 (commit 3e83913):
Hi,
Suppose I create a dataRDD which extends RDD[Row], where each row is a
GenericMutableRow(Array(Int, Array[Byte])). The same Array[Byte] object is
reused across rows but holds different content each time. When I convert it to
a DataFrame and save it as a Parquet file, the file's row group statistic (max
+1, same result as last time.
On Sat, Apr 11, 2015 at 7:05 AM, Patrick Wendell pwend...@gmail.com wrote:
Please vote on releasing the following candidate as Apache Spark version
1.3.1!
The tag to be voted on is v1.3.1-rc2 (commit 3e83913):
From SparkUI.scala:

  def getUIPort(conf: SparkConf): Int = {
    conf.getInt("spark.ui.port", SparkUI.DEFAULT_PORT)
  }
Better to retrieve the effective UI port before probing.
Cheers
On Sat, Apr 11, 2015 at 2:38 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
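Ted's point about the effective UI port can be sketched in Python as follows. This mirrors the getUIPort logic quoted above (use the configured spark.ui.port if present, otherwise fall back to the default); the conf file path, its layout, and the default of 4040 (SparkUI.DEFAULT_PORT) are assumptions for illustration.

```python
def get_ui_port(conf_path="conf/spark-defaults.conf", default=4040):
    """Return the effective spark.ui.port, falling back to the default.

    Hypothetical helper: parses "key value" or "key=value" lines from a
    spark-defaults.conf-style file; not part of any Spark API.
    """
    try:
        with open(conf_path) as f:
            for line in f:
                line = line.strip()
                if line.startswith("spark.ui.port"):
                    return int(line.replace("=", " ").split()[1])
    except FileNotFoundError:
        pass  # no conf file: fall through to the default
    return default
```

A probe script would then poll http://master-node:PORT using the port this returns, rather than hard-coding 4040.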