[jira] [Commented] (SPARK-8960) Style cleanup of spark_ec2.py

2015-07-11 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14623513#comment-14623513
 ] 

Matthew Goodman commented on SPARK-8960:


http://mail-archives.apache.org/mod_mbox/spark-dev/201507.mbox/browser

I am a bit surprised there isn't more traffic on this topic.

> Style cleanup of spark_ec2.py
> -
>
> Key: SPARK-8960
> URL: https://issues.apache.org/jira/browse/SPARK-8960
> Project: Spark
>  Issue Type: Task
>  Components: EC2
>Affects Versions: 1.4.0
>Reporter: Daniel Darabos
>Priority: Trivial
>
> The spark_ec2.py script could use some cleanup I think. There are simple 
> style issues like mixing single and double quotes, but also some rather 
> un-Pythonic constructs (e.g. 
> https://github.com/apache/spark/pull/6336#commitcomment-12088624 that sparked 
> this JIRA). Whenever I read it, I always find something that is too minor for 
> a pull request/JIRA, but I'd fix it if it was my code. Perhaps we can address 
> such issues in this JIRA.
> The intention is not to introduce any behavioral changes. It's hard to verify 
> this without testing, so perhaps we should also add some kind of test.
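
As a concrete illustration of the kind of change meant above, here is a small, hypothetical before/after in the spirit of the linked commit comment (this is not an excerpt from spark_ec2.py; the names below are stand-ins):

{code:title=Illustrative style cleanup (hypothetical snippet)|borderStyle=solid}
# Before: redundant .keys() lookup, mixed quote styles, string concatenation
#   if instance_type in EC2_INSTANCE_TYPES.keys():
#       print "Instance type " + instance_type + ' is supported.'

# After: direct membership test, consistent quotes, str.format()
EC2_INSTANCE_TYPES = {"m3.large": "hvm"}  # stand-in mapping for the example
instance_type = "m3.large"

if instance_type in EC2_INSTANCE_TYPES:
    print("Instance type {t} is supported.".format(t=instance_type))
{code}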






[jira] [Commented] (SPARK-8960) Style cleanup of spark_ec2.py

2015-07-11 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-8960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14623481#comment-14623481
 ] 

Matthew Goodman commented on SPARK-8960:


I would be happy to help with this.  I was a bit confused, as the thread about the spark/spark-ec2/mesos pieces never seemed to converge on anything.  I will chime in there now.

> Style cleanup of spark_ec2.py
> -
>
> Key: SPARK-8960
> URL: https://issues.apache.org/jira/browse/SPARK-8960
> Project: Spark
>  Issue Type: Task
>  Components: EC2
>Affects Versions: 1.4.0
>Reporter: Daniel Darabos
>Priority: Trivial
>
> The spark_ec2.py script could use some cleanup I think. There are simple 
> style issues like mixing single and double quotes, but also some rather 
> un-Pythonic constructs (e.g. 
> https://github.com/apache/spark/pull/6336#commitcomment-12088624 that sparked 
> this JIRA). Whenever I read it, I always find something that is too minor for 
> a pull request/JIRA, but I'd fix it if it was my code. Perhaps we can address 
> such issues in this JIRA.
> The intention is not to introduce any behavioral changes. It's hard to verify 
> this without testing, so perhaps we should also add some kind of test.






[jira] [Commented] (SPARK-7909) spark-ec2 and associated tools not py3 ready

2015-06-01 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14567685#comment-14567685
 ] 

Matthew Goodman commented on SPARK-7909:


Awesome, thanks for all the help on this.  One (possibly unrelated) issue remains: httpd seems to fail to start up, giving the following traceback:

{code:title=HTTPD Failure Traceback|borderStyle=solid}
Starting httpd: httpd: Syntax error on line 154 of /etc/httpd/conf/httpd.conf: 
Cannot load /etc/httpd/modules/mod_authz_core.so into server: 
/etc/httpd/modules/mod_authz_core.so: cannot open shared object file: No such 
file or directory
{code}

Should I send in a PR [for this 
change|https://github.com/3Scan/spark-ec2/commit/3416dd07c492b0cddcc98c4fa83f9e4284ed8fc9]?
  

> spark-ec2 and associated tools not py3 ready
> 
>
> Key: SPARK-7909
> URL: https://issues.apache.org/jira/browse/SPARK-7909
> Project: Spark
>  Issue Type: Improvement
>  Components: EC2
> Environment: ec2 python3
>Reporter: Matthew Goodman
>
> At present there is not a possible permutation of tools that supports Python3 
> on both the launching computer and running cluster.  There are a couple 
> problems involved:
>  - There is no prebuilt spark binary with python3 support.
>  - spark-ec2/spark/init.sh contains inline py3 unfriendly print statements
>  - Config files for cluster processes don't seem to make it to all nodes in a 
> working format.
> I have fixes for some of this, but the config and running context debugging 
> remains elusive to me.  






[jira] [Commented] (SPARK-7909) spark-ec2 and associated tools not py3 ready

2015-05-28 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564166#comment-14564166
 ] 

Matthew Goodman commented on SPARK-7909:


Using the prebuilt binaries from the links provided yields a working cluster.  Is there a timeline for when the Spark 1.4.0 binaries will land in the s3 bucket?  I can add the link to the spark/init.sh script, but it will bounce until the binary is actually placed in the bucket.

In either case I suspect the naming convention will be similar, so would a PR 
for the changes outlined above be a good step at this stage?

> spark-ec2 and associated tools not py3 ready
> 
>
> Key: SPARK-7909
> URL: https://issues.apache.org/jira/browse/SPARK-7909
> Project: Spark
>  Issue Type: Improvement
>  Components: EC2
> Environment: ec2 python3
>Reporter: Matthew Goodman
>
> At present there is not a possible permutation of tools that supports Python3 
> on both the launching computer and running cluster.  There are a couple 
> problems involved:
>  - There is no prebuilt spark binary with python3 support.
>  - spark-ec2/spark/init.sh contains inline py3 unfriendly print statements
>  - Config files for cluster processes don't seem to make it to all nodes in a 
> working format.
> I have fixes for some of this, but the config and running context debugging 
> remains elusive to me.  






[jira] [Commented] (SPARK-7909) spark-ec2 and associated tools not py3 ready

2015-05-28 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563609#comment-14563609
 ] 

Matthew Goodman commented on SPARK-7909:


There are 11 folders in /root/spark/work/app-20150528200603-/, all with the same traceback below, differing only in the time of the error:
{code:title=Spark worker Traceback|borderStyle=solid}
15/05/28 20:06:04 INFO executor.CoarseGrainedExecutorBackend: Registered signal handlers for [TERM, HUP, INT]
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:146)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:245)
    at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)
Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
    at org.apache.hadoop.security.Groups.(Groups.java:55)
    at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:182)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:235)
    at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:249)
    at org.apache.spark.deploy.SparkHadoopUtil.(SparkHadoopUtil.scala:50)
    at org.apache.spark.deploy.SparkHadoopUtil$.(SparkHadoopUtil.scala:353)
    at org.apache.spark.deploy.SparkHadoopUtil$.(SparkHadoopUtil.scala)
    ... 3 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
    ... 10 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
    at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
    at org.apache.hadoop.security.JniBasedUnixGroupsMapping.(JniBasedUnixGroupsMapping.java:49)
    at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.(JniBasedUnixGroupsMappingWithFallback.java:38)
    ... 15 more
{code}

My launch script is as follows:
{code:title=Spark Launch Call|borderStyle=solid}
bash spark-ec2 --spark-version=ab62d73ddb973c25de043e8e9ade7800adf244e8 
--spark-ec2-git-repo=https://github.com/3scan/spark-ec2 
--spark-ec2-git-branch=branch-1.4 --key-pair=blahblahblah 
--identity-file=blahblahblah.pem --region us-west-2 --user-data 
/home/meawoppl/repos/3scan-analysis/spark/linux-bootstrap.sh login test-cluster
{code}

I am going to try the prebuilt Spark next.  I suspect this is related to the compiled/checked-out version that I am running, but I'm not sure.


> spark-ec2 and associated tools not py3 ready
> 
>
> Key: SPARK-7909
> URL: https://issues.apache.org/jira/browse/SPARK-7909
> Project: Spark
>  Issue Type: Improvement
>  Components: EC2
> Environment: ec2 python3
>Reporter: Matthew Goodman
>
> At present there is not a possible permutation of tools that supports Python3 
> on both the launching computer and running cluster.  There are a couple 
> problems involved:
>  - There is no prebuilt spark binary with python3 support.
>  - spark-ec2/spark/init.sh contains inline py3 unfriendly print statements
>  - Config files for cluster processes don't seem to make it to all nodes in a 
> working format.
> I have fixes for some of this, but the config and running context debugging 
> remains elusive to me.  






[jira] [Commented] (SPARK-7909) spark-ec2 and associated tools not py3 ready

2015-05-28 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14563488#comment-14563488
 ] 

Matthew Goodman commented on SPARK-7909:


Thanks for the leads.  I had some major problems with JIRA yesterday; was I alone in that?

[~shivaram] Where are the worker logs stored?

[~davies] I didn't see any of the Spark RC binaries for EC2 in the s3 bucket:
https://github.com/mesos/spark-ec2/blob/branch-1.4/spark/init.sh
http://s3.amazonaws.com/spark-related-packages/

Is there somewhere else they are stored?  

I recall seeing the RC branches on GitHub, but they appear to have just disappeared, and branch-1.4 no longer has an RC.  Is the release happening today or some such?

The first thing I did was [add the 1.3.1|https://github.com/3Scan/spark-ec2/commit/08d210dc8d44c07383e46fcd303c8f0c20828bcf] build that I found there and try that.  It doesn't include any of the py3 support and crashes fast and early.  Next I had it build from a checkout of the current master, which is where the above errors come from.  To get a source build to work, I had to make [this change|https://github.com/3Scan/spark-ec2/commit/3416dd07c492b0cddcc98c4fa83f9e4284ed8fc9], and at least one other . . . to be determined once the above trace is sorted out.

> spark-ec2 and associated tools not py3 ready
> 
>
> Key: SPARK-7909
> URL: https://issues.apache.org/jira/browse/SPARK-7909
> Project: Spark
>  Issue Type: Improvement
>  Components: EC2
> Environment: ec2 python3
>Reporter: Matthew Goodman
>
> At present there is not a possible permutation of tools that supports Python3 
> on both the launching computer and running cluster.  There are a couple 
> problems involved:
>  - There is no prebuilt spark binary with python3 support.
>  - spark-ec2/spark/init.sh contains inline py3 unfriendly print statements
>  - Config files for cluster processes don't seem to make it to all nodes in a 
> working format.
> I have fixes for some of this, but the config and running context debugging 
> remains elusive to me.  






[jira] [Commented] (SPARK-7806) spark-ec2 launch script fails for Python3

2015-05-27 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562132#comment-14562132
 ] 

Matthew Goodman commented on SPARK-7806:


I started one here.  https://issues.apache.org/jira/browse/SPARK-7909

I feel like I am a half hour from another PR, but I am having some issues 
getting everything spun up nicely.  

> spark-ec2 launch script fails for Python3
> -
>
> Key: SPARK-7806
> URL: https://issues.apache.org/jira/browse/SPARK-7806
> Project: Spark
>  Issue Type: Bug
>  Components: EC2, PySpark
>Affects Versions: 1.3.1
> Environment: All platforms.  
>Reporter: Matthew Goodman
>Assignee: Matthew Goodman
>Priority: Minor
> Fix For: 1.4.0
>
>
> Depending on the options used the spark-ec2 script will terminate 
> ungracefully.  
> Relevant buglets include:
>  - urlopen() returning bytes vs. string
>  - floor division change for partition calculation
>  - filter() iteration behavior change in module calculation
> I have a fixed version that I wish to contribute.  
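
For concreteness, a hedged sketch of the three buglets and the usual 2/3-compatible fixes (illustrative only; the actual call sites and names in spark_ec2.py may differ):

{code:title=Python 2 vs. 3 buglets (illustrative)|borderStyle=solid}
from __future__ import division, print_function

# 1. urlopen() returns bytes on Python 3 but str on Python 2; decode before
#    doing string handling on the response.  (Pattern only; no network call
#    is made here, and read_text is a hypothetical helper.)
def read_text(response):
    data = response.read()
    return data.decode("utf-8") if isinstance(data, bytes) else data

# 2. "/" became true division on Python 3; use "//" where an integer count
#    (e.g. a partition count) is expected.
total_cores, cores_per_partition = 10, 3
num_partitions = total_cores // cores_per_partition  # 3 on both Python 2 and 3

# 3. filter() returns a lazy iterator on Python 3; materialize it before
#    taking len() or iterating more than once.
modules = list(filter(lambda m: m != "ganglia", ["spark", "ganglia", "hdfs"]))

print(num_partitions, modules)
{code}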






[jira] [Commented] (SPARK-7909) spark-ec2 and associated tools not py3 ready

2015-05-27 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14562121#comment-14562121
 ] 

Matthew Goodman commented on SPARK-7909:


I have almost everything working, but I am getting hung up on getting the pyspark binary to launch things correctly.  After logging in to EC2:

{code:title=PySpark Output Loop|borderStyle=solid}
root@ip-172-31-6-84 ~]$ ./spark/bin/pyspark
Python 3.4.3 |Continuum Analytics, Inc.| (default, Mar  6 2015, 12:03:53) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/05/28 00:58:28 INFO SparkContext: Running Spark version 1.4.0-SNAPSHOT
15/05/28 00:58:28 WARN NativeCodeLoader: Unable to load native-hadoop library 
for your platform... using builtin-java classes where applicable
15/05/28 00:58:28 INFO SecurityManager: Changing view acls to: root
15/05/28 00:58:28 INFO SecurityManager: Changing modify acls to: root
15/05/28 00:58:28 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(root); users with 
modify permissions: Set(root)
15/05/28 00:58:29 INFO Slf4jLogger: Slf4jLogger started
15/05/28 00:58:29 INFO Remoting: Starting remoting
15/05/28 00:58:29 INFO Remoting: Remoting started; listening on addresses 
:[akka.tcp://sparkDriver@172.31.6.84:59125]
15/05/28 00:58:30 INFO Utils: Successfully started service 'sparkDriver' on 
port 59125.
15/05/28 00:58:30 INFO SparkEnv: Registering MapOutputTracker
15/05/28 00:58:30 INFO SparkEnv: Registering BlockManagerMaster
15/05/28 00:58:30 INFO DiskBlockManager: Created local directory at 
/mnt/spark/spark-985d5a6c-150e-40ad-875f-351733a40276/blockmgr-e36c9174-ff48-42e1-bbd0-c2b0649ab751
15/05/28 00:58:30 INFO DiskBlockManager: Created local directory at 
/mnt2/spark/spark-fb2a7e42-2998-4ad2-be5f-d25472727d57/blockmgr-660e39f5-3561-4bec-a042-7cab1ea8cf54
15/05/28 00:58:30 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/05/28 00:58:30 INFO HttpFileServer: HTTP File server directory is 
/mnt/spark/spark-985d5a6c-150e-40ad-875f-351733a40276/httpd-40a7ad26-25d9-482b-bc7f-68d9f126b32d
15/05/28 00:58:30 INFO HttpServer: Starting HTTP Server
15/05/28 00:58:30 INFO Server: jetty-8.y.z-SNAPSHOT
15/05/28 00:58:30 INFO AbstractConnector: Started SocketConnector@0.0.0.0:46564
15/05/28 00:58:30 INFO Utils: Successfully started service 'HTTP file server' 
on port 46564.
15/05/28 00:58:30 INFO SparkEnv: Registering OutputCommitCoordinator
15/05/28 00:58:30 INFO Server: jetty-8.y.z-SNAPSHOT
15/05/28 00:58:30 INFO AbstractConnector: Started 
SelectChannelConnector@0.0.0.0:4040
15/05/28 00:58:30 INFO Utils: Successfully started service 'SparkUI' on port 
4040.
15/05/28 00:58:30 INFO SparkUI: Started SparkUI at 
http://ec2-52-24-65-198.us-west-2.compute.amazonaws.com:4040
15/05/28 00:58:30 INFO AppClient$ClientActor: Connecting to master 
akka.tcp://sparkmas...@ec2-52-24-65-198.us-west-2.compute.amazonaws.com:7077/user/Master...
15/05/28 00:58:31 INFO SparkDeploySchedulerBackend: Connected to Spark cluster 
with app ID app-20150528005831-0005
15/05/28 00:58:31 INFO AppClient$ClientActor: Executor added: 
app-20150528005831-0005/0 on worker-20150527230803-172.31.13.150-50730 
(172.31.13.150:50730) with 2 cores
15/05/28 00:58:31 INFO SparkDeploySchedulerBackend: Granted executor ID 
app-20150528005831-0005/0 on hostPort 172.31.13.150:50730 with 2 cores, 6.0 GB 
RAM
15/05/28 00:58:31 INFO AppClient$ClientActor: Executor updated: 
app-20150528005831-0005/0 is now LOADING
15/05/28 00:58:31 INFO AppClient$ClientActor: Executor updated: 
app-20150528005831-0005/0 is now RUNNING
15/05/28 00:58:31 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 34430.
15/05/28 00:58:31 INFO NettyBlockTransferService: Server created on 34430
15/05/28 00:58:31 INFO BlockManagerMaster: Trying to register BlockManager
15/05/28 00:58:31 INFO BlockManagerMasterEndpoint: Registering block manager 
172.31.6.84:34430 with 265.4 MB RAM, BlockManagerId(driver, 172.31.6.84, 34430)
15/05/28 00:58:31 INFO BlockManagerMaster: Registered BlockManager
15/05/28 00:58:31 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready 
for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.4.0-SNAPSHOT
      /_/

Using Python version 3.4.3 (default, Mar  6 2015 12:03:53)
SparkContext available as sc, SQLContext available as sqlContext.
>>> 15/05/28 00:58:32 INFO AppClient$ClientActor: Executor updated: 
>>> app-20150528005831-0005/0 is now EXITED (Command exited with code 1)
15/05/28 00:58:32 INFO SparkDeploySchedulerBackend: Executor app-2015052800
{code}

[jira] [Created] (SPARK-7909) spark-ec2 and associated tools not py3 ready

2015-05-27 Thread Matthew Goodman (JIRA)
Matthew Goodman created SPARK-7909:
--

 Summary: spark-ec2 and associated tools not py3 ready
 Key: SPARK-7909
 URL: https://issues.apache.org/jira/browse/SPARK-7909
 Project: Spark
  Issue Type: Improvement
  Components: EC2
 Environment: ec2 python3
Reporter: Matthew Goodman


At present there is not a possible permutation of tools that supports Python3 
on both the launching computer and running cluster.  There are a couple 
problems involved:
 - There is no prebuilt spark binary with python3 support.
 - spark-ec2/spark/init.sh contains inline py3 unfriendly print statements
 - Config files for cluster processes don't seem to make it to all nodes in a 
working format.

I have fixes for some of this, but the config and running context debugging 
remains elusive to me.  






[jira] [Commented] (SPARK-7806) spark-ec2 launch script fails for Python3

2015-05-27 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14561657#comment-14561657
 ] 

Matthew Goodman commented on SPARK-7806:


There are a couple of lingering issues server side that I am triaging today:
 - The config for the http server seems slightly broken, though this may be unrelated.
 - Ganglia has a similar issue, but it may be part of the same config problem.
 - The per-module init.sh and setup.sh seem to behave subtly wrong on the master node.

Also notably, the macro-expansion of the Spark config "templates" seems very brittle and is duplicated in a couple of places.
Should I reopen this issue or start a new one?
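
For reference, the "macro-expansion" referred to here is the simple placeholder substitution spark-ec2 performs over its config templates.  A rough sketch of that pattern follows (this is not the actual deploy_templates.py code; the variable names are made up):

{code:title=Config template macro-expansion sketch|borderStyle=solid}
# Every "{{key}}" token in a template file gets replaced with its computed value.
template_vars = {
    "active_master": "ec2-52-0-0-1.us-west-2.compute.amazonaws.com",  # hypothetical
    "spark_worker_mem": "6g",
}

def expand(text, variables):
    for key, value in variables.items():
        text = text.replace("{{" + key + "}}", value)
    return text

print(expand("spark.executor.memory {{spark_worker_mem}}", template_vars))
{code}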

> spark-ec2 launch script fails for Python3
> -
>
> Key: SPARK-7806
> URL: https://issues.apache.org/jira/browse/SPARK-7806
> Project: Spark
>  Issue Type: Bug
>  Components: EC2, PySpark
>Affects Versions: 1.3.1
> Environment: All platforms.  
>Reporter: Matthew Goodman
>Assignee: Matthew Goodman
>Priority: Minor
> Fix For: 1.4.0
>
>
> Depending on the options used the spark-ec2 script will terminate 
> ungracefully.  
> Relevant buglets include:
>  - urlopen() returning bytes vs. string
>  - floor division change for partition calculation
>  - filter() iteration behavior change in module calculation
> I have a fixed version that I wish to contribute.  






[jira] [Comment Edited] (SPARK-7806) spark-ec2 launch script fails for Python3

2015-05-26 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559502#comment-14559502
 ] 

Matthew Goodman edited comment on SPARK-7806 at 5/26/15 6:04 PM:
-

This is mostly fixed by the PR above.  One single-line change is still needed in deploy_templates.py: its single print call needs to be wrapped, here: 
https://github.com/mesos/spark-ec2/blob/branch-1.4/deploy_templates.py#L88




was (Author: meawoppl):
This is mostly fixed by the PR above.  One single-line change is still needed in deploy_templates.py: its single print call needs to be wrapped, here: 
https://github.com/mesos/spark-ec2/blob/branch-1.4/deploy_templates.py#88



> spark-ec2 launch script fails for Python3
> -
>
> Key: SPARK-7806
> URL: https://issues.apache.org/jira/browse/SPARK-7806
> Project: Spark
>  Issue Type: Bug
>  Components: EC2, PySpark
>Affects Versions: 1.3.1
> Environment: All platforms.  
>Reporter: Matthew Goodman
>Priority: Minor
>
> Depending on the options used the spark-ec2 script will terminate 
> ungracefully.  
> Relevant buglets include:
>  - urlopen() returning bytes vs. string
>  - floor division change for partition calculation
>  - filter() iteration behavior change in module calculation
> I have a fixed version that I wish to contribute.  






[jira] [Commented] (SPARK-7806) spark-ec2 launch script fails for Python3

2015-05-26 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559502#comment-14559502
 ] 

Matthew Goodman commented on SPARK-7806:


This is mostly fixed by the PR above.  One single-line change is still needed in deploy_templates.py: its single print call needs to be wrapped, here: 
https://github.com/mesos/spark-ec2/blob/branch-1.4/deploy_templates.py
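
A minimal sketch of the kind of wrapping meant here, assuming the offending line is a bare Python 2 print of the rendered template text (the variable name below is hypothetical):

{code:title=Py2/py3-compatible print wrapping (illustrative)|borderStyle=solid}
# Python 2-only form, a syntax error under Python 3:
#     print template_output
# Wrapped form; with the __future__ import the same call also behaves
# identically on Python 2.6+:
from __future__ import print_function

template_output = "spark.master  spark://example-master:7077"  # hypothetical line
print(template_output)
{code}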



> spark-ec2 launch script fails for Python3
> -
>
> Key: SPARK-7806
> URL: https://issues.apache.org/jira/browse/SPARK-7806
> Project: Spark
>  Issue Type: Bug
>  Components: EC2, PySpark
>Affects Versions: 1.3.1
> Environment: All platforms.  
>Reporter: Matthew Goodman
>Priority: Minor
>
> Depending on the options used the spark-ec2 script will terminate 
> ungracefully.  
> Relevant buglets include:
>  - urlopen() returning bytes vs. string
>  - floor division change for partition calculation
>  - filter() iteration behavior change in module calculation
> I have a fixed version that I wish to contribute.  






[jira] [Comment Edited] (SPARK-7806) spark-ec2 launch script fails for Python3

2015-05-26 Thread Matthew Goodman (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559502#comment-14559502
 ] 

Matthew Goodman edited comment on SPARK-7806 at 5/26/15 6:03 PM:
-

This is mostly fixed by the PR above.  One single-line change is still needed in deploy_templates.py: its single print call needs to be wrapped, here: 
https://github.com/mesos/spark-ec2/blob/branch-1.4/deploy_templates.py#88




was (Author: meawoppl):
This is mostly fixed by the PR above.  One single-line change is still needed in deploy_templates.py: its single print call needs to be wrapped, here: 
https://github.com/mesos/spark-ec2/blob/branch-1.4/deploy_templates.py



> spark-ec2 launch script fails for Python3
> -
>
> Key: SPARK-7806
> URL: https://issues.apache.org/jira/browse/SPARK-7806
> Project: Spark
>  Issue Type: Bug
>  Components: EC2, PySpark
>Affects Versions: 1.3.1
> Environment: All platforms.  
>Reporter: Matthew Goodman
>Priority: Minor
>
> Depending on the options used the spark-ec2 script will terminate 
> ungracefully.  
> Relevant buglets include:
>  - urlopen() returning bytes vs. string
>  - floor division change for partition calculation
>  - filter() iteration behavior change in module calculation
> I have a fixed version that I wish to contribute.  






[jira] [Updated] (SPARK-7806) spark-ec2 launch script fails for Python3

2015-05-21 Thread Matthew Goodman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Goodman updated SPARK-7806:
---
Description: 
Depending on the options used the spark-ec2 script will terminate ungracefully. 
 

Relevant buglets include:
 - urlopen() returning bytes vs. string
 - floor division change for partition calculation
 - filter() iteration behavior change in module calculation

I have a fixed version that I wish to contribute.  

  was:
Depending on the options used the spark-ec2 script will terminate ungracefully. 
 
I have a fixed version that I wish to contribute.  
Relevant buglets include:
 - urlopen() returning bytes vs. string
 - floor division change for partition calculation
 - filter() iteration behavior change in module calculation


> spark-ec2 launch script fails for Python3
> -
>
> Key: SPARK-7806
> URL: https://issues.apache.org/jira/browse/SPARK-7806
> Project: Spark
>  Issue Type: Bug
>  Components: EC2, PySpark
>Affects Versions: 1.3.1
> Environment: All platforms.  
>Reporter: Matthew Goodman
>Priority: Minor
>
> Depending on the options used the spark-ec2 script will terminate 
> ungracefully.  
> Relevant buglets include:
>  - urlopen() returning bytes vs. string
>  - floor division change for partition calculation
>  - filter() iteration behavior change in module calculation
> I have a fixed version that I wish to contribute.  






[jira] [Updated] (SPARK-7806) spark-ec2 launch script fails for Python3

2015-05-21 Thread Matthew Goodman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Goodman updated SPARK-7806:
---
Description: 
Depending on the options used the spark-ec2 script will terminate ungracefully. 
 
I have a fixed version that I wish to contribute.  
Relevant buglets include:
 - urlopen() returning bytes vs. string
 - floor division change for partition calculation
 - filter() iteration behavior change in module calculation

> spark-ec2 launch script fails for Python3
> -
>
> Key: SPARK-7806
> URL: https://issues.apache.org/jira/browse/SPARK-7806
> Project: Spark
>  Issue Type: Bug
>  Components: EC2, PySpark
>Affects Versions: 1.3.1
> Environment: All platforms.  
>Reporter: Matthew Goodman
>Priority: Minor
>
> Depending on the options used the spark-ec2 script will terminate 
> ungracefully.  
> I have a fixed version that I wish to contribute.  
> Relevant buglets include:
>  - urlopen() returning bytes vs. string
>  - floor division change for partition calculation
>  - filter() iteration behavior change in module calculation






[jira] [Created] (SPARK-7806) spark-ec2 launch script fails for Python3

2015-05-21 Thread Matthew Goodman (JIRA)
Matthew Goodman created SPARK-7806:
--

 Summary: spark-ec2 launch script fails for Python3
 Key: SPARK-7806
 URL: https://issues.apache.org/jira/browse/SPARK-7806
 Project: Spark
  Issue Type: Bug
  Components: EC2, PySpark
Affects Versions: 1.3.1
 Environment: All platforms.  
Reporter: Matthew Goodman
Priority: Minor





