Hi,
I'd like to allow using log4j2 in executor code.
As Spark contains dependencies on log4j 1.2, I would like to support building
Spark with log4j2 instead of log4j 1.2.
To accomplish that, I suggest creating a new profile for log4j2 in
spark-parent.
The default profile (log4j12) would include the existing log4j 1.2
dependencies.
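A minimal sketch of what such a profile could look like in spark-parent's
pom.xml; the version number and the choice of the log4j-slf4j-impl binding
plus the log4j-1.2-api bridge are my assumptions, not a tested patch:

<profile>
  <id>log4j2</id>
  <dependencies>
    <!-- route slf4j calls to log4j2 instead of the log4j 1.2 binding -->
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-slf4j-impl</artifactId>
      <version>2.2</version>
    </dependency>
    <!-- bridge for code that still calls the log4j 1.2 API directly -->
    <dependency>
      <groupId>org.apache.logging.log4j</groupId>
      <artifactId>log4j-1.2-api</artifactId>
      <version>2.2</version>
    </dependency>
  </dependencies>
</profile>

A build would then select it with mvn -Plog4j2, while the default profile
keeps today's log4j 1.2 behavior.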
Hello Everyone,
I'm a senior year computer engineering student in Turkey.
My main areas of interest are cloud computing and machine learning.
I've been working with Apache Spark through its Scala API for a few months. My
projects involved using MLlib for a movie recommendation system and a stock
How do these proposals affect PySpark? I think compatibility with PySpark
through Py4J should be considered.
On Mon, Mar 9, 2015 at 8:39 PM, Patrick Wendell pwend...@gmail.com wrote:
Does this matter for our own internal types in Spark? I don't think
any of these types are designed to be used
Looks like GitHub is functioning again (I no longer encounter this problem
when pushing to the hbase repo).
Do you want to give it a try?
Cheers
On Tue, Mar 10, 2015 at 6:54 PM, Michael Armbrust mich...@databricks.com wrote:
FYI: https://issues.apache.org/jira/browse/INFRA-9259
(I have been able to push over the last few hours and see the commits in github)
I'm trying to understand the block allocation mechanism Spark Streaming uses
to generate a batch's jobs (the JobSet).
JobGenerator.generateJobs tries to allocate the received blocks to the batch;
effectively, ReceivedBlockTracker.allocateBlocksToBatch builds a
streamIdToBlocks map, where stream IDs (Int) are mapped to the sequence of
ReceivedBlockInfo received for that batch.
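To check my understanding, here is a self-contained toy sketch of that
allocation step (the names mirror Spark's internals, but this is my own
illustration, not the actual source):

import scala.collection.mutable

// Stub for the block metadata a receiver reports; the real
// ReceivedBlockInfo carries much more than an ID.
case class ReceivedBlockInfo(blockId: Long)

class BlockTrackerSketch(streamIds: Seq[Int]) {
  // Each receiver stream owns a queue of blocks that arrived since the
  // last batch was allocated.
  private val queues: Map[Int, mutable.Queue[ReceivedBlockInfo]] =
    streamIds.map(id => id -> mutable.Queue.empty[ReceivedBlockInfo]).toMap

  def addBlock(streamId: Int, block: ReceivedBlockInfo): Unit =
    queues(streamId).enqueue(block)

  // The allocation step: drain every stream's queue, producing the
  // streamIdToBlocks map (stream ID -> blocks belonging to this batch).
  def allocateBlocksToBatch(): Map[Int, Seq[ReceivedBlockInfo]] =
    streamIds.map(id => id -> queues(id).dequeueAll(_ => true).toSeq).toMap
}

In the real ReceivedBlockTracker that map is wrapped in AllocatedBlocks,
written to the write-ahead log, and stored in timeToAllocatedBlocks keyed by
the batch time, so the batch's jobs can later look their blocks up.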