Thanks & Regards, Meethu M
-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
[
https://issues.apache.org/jira/browse/SPARK-25452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628433#comment-16628433
]
Meethu Mathew commented on SPARK-25452:
---
This is not a duplicate of -SPARK-24829.-
!image-2018
[
https://issues.apache.org/jira/browse/SPARK-25452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Meethu Mathew updated SPARK-25452:
--
Attachment: image-2018-09-26-14-14-47-504.png
> Query with where clause is giving unexpec
n, it's
working with numbers that have more than one decimal place
[image: image.png]
Is this a bug?
Regards,
Meethu Mathew
Meethu Mathew created ZEPPELIN-3126:
---
Summary: More than 2 notebooks in R failing with error sparkr
interpreter not responding
Key: ZEPPELIN-3126
URL: https://issues.apache.org/jira/browse/ZEPPELIN-3126
/usr/lib/R/bin/exec/R
--no-save --no-restore -f /tmp/zeppelin_sparkr-4152305170353311178.R --args
1642312173 58063 /home/meethu/spark-1.6.1-bin-hadoop2.6/R/lib 10601
meethu 6745 6470 0 12:10 pts/1 00:00:00 /usr/lib/R/bin/exec/R
--no-save --no-restore -f /tmp/zeppelin_sparkr-5046601627391341672
hird model run using the sparkr interpreter,
the error is thrown. We suspect this is a limitation of Zeppelin.
Please help us solve this issue.
Regards,
Meethu Mathew
Hi Moon,
Yes, it's fixed in 0.7.1. Thank you.
Regards,
Meethu Mathew
On Wed, Apr 26, 2017 at 10:42 PM, moon soo Lee <m...@apache.org> wrote:
> Some bugs related to interpreter process management has been fixed in
> 0.7.1 release [1]. Could you try 0.7.1 or master branch and see
, it creates another
SparkContext, and then the previous SparkContext becomes a dead process and
exits.
Is this a bug in Zeppelin, or is there another proper way to unbind the
Zeppelin framework?
Zeppelin version is 0.7.0
Regards,
Meethu Mathew
aining['msg'])
This Python code works and I am getting the result. In version 0.7.0, I was
getting the output without using the unicode function.
Hope the problem is clear now.
Regards,
Meethu Mathew
On Fri, Apr 21, 2017 at 3:07 AM, Felix Cheung <felixcheun...@hotmail.com>
wrote:
> And are t
in range(128)
All this code works in the 0.7.0 version. There is no change in the
dataset or the code. Is there any change in the encoding type in the new
version of Zeppelin?
Regards,
Meethu Mathew
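The failure quoted above is the classic Python 2 ASCII-decode error. A minimal stand-alone illustration (written in Python 3 syntax; the byte string is invented for the example, not taken from the original dataset):

```python
# Bytes outside range(128) cannot be decoded as ASCII, which is the error
# quoted in the thread; decoding with the correct codec (or wrapping with
# unicode() in Python 2, as mentioned above) avoids it.
raw = "caf\u00e9".encode("utf-8")      # b'caf\xc3\xa9', contains byte 0xc3
try:
    raw.decode("ascii")
except UnicodeDecodeError as err:
    print("ordinal not in range(128)" in str(err))  # True
print(raw.decode("utf-8"))  # café
```

If a newer Zeppelin release changed how paragraph text is decoded, code that implicitly relied on ASCII decoding would start failing exactly like this.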
8, I
tried
hc = HiveContext.getOrCreate(sc)
but it is still returning
.
My pyspark shell and jupyter notebook are returning
without doing anything.
How to get
in the zeppelin notebook ?
Regards,
Meethu Mathew
?
Regards,
Meethu Mathew
Meethu Mathew created ZEPPELIN-2313:
---
Summary: Run-a-paragraph-synchronously response documented
incorrectly
Key: ZEPPELIN-2313
URL: https://issues.apache.org/jira/browse/ZEPPELIN-2313
Project
Meethu Mathew created ZEPPELIN-2312:
---
Summary: Allow undoing edits in a paragraph once it is executed and
undoing a deleted paragraph
Key: ZEPPELIN-2312
URL: https://issues.apache.org/jira/browse/ZEPPELIN-2312
Meethu Mathew created ZEPPELIN-2305:
---
Summary: overall experience on auto-completion needs to improve.
Key: ZEPPELIN-2305
URL: https://issues.apache.org/jira/browse/ZEPPELIN-2305
Project: Zeppelin
.
Please improve the suggestion functionality.
Regards,
Meethu Mathew
mmons-csv-1.4.jar --files
/home/me/models/Churn/package/build/dist/fly_libs-1.1-py2.7.egg"
Any progress in this ticket ZEPPELIN-2136
<https://issues.apache.org/jira/browse/ZEPPELIN-2136> ?
Regards,
Meethu Mathew
Hi,
The output of the following code prints unexpected dots in the result if there
is a comment in the code. Is this a bug in Zeppelin?
*Code :*
%python
v = [1,2,3]
#comment 1
#comment
print v
*output*
... ... [1, 2, 3]
Regards,
Meethu Mathew
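For what it's worth, the stray dots look like the Python interactive interpreter's continuation prompts (echoed once per comment line) rather than part of the result. The same code run non-interactively prints cleanly; a sketch in Python 3 syntax (print as a function, unlike the original paragraph):

```python
# As a plain script the comments produce no output at all; the "... ..."
# seen in Zeppelin is most likely REPL prompt residue, not data.
v = [1, 2, 3]
# comment 1
# comment
print(v)  # prints: [1, 2, 3]
```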
Hi,
I have noticed the same problem
Regards,
Meethu Mathew
On Mon, Mar 13, 2017 at 9:56 AM, Xiaohui Liu <hero...@gmail.com> wrote:
> Hi,
>
> We used 0.7.1-snapshot with our Mesos cluster, almost all our needed
> features (ldap login, notebook acl control, livy/pyspark/r
}/webapps/webapp and it worked.
But files or folders added to this folder, which is
the ZEPPELIN_WAR_TEMPDIR, are deleted after a restart.
How can I add images in the markdown interpreter without using other
webservers?
Regards,
Meethu Mathew
Meethu Mathew created ZEPPELIN-2141:
---
Summary: sc.addPyFile("hdfs://path/to file) in zeppelin causing
UnKnownHostException
Key: ZEPPELIN-2141
URL: https://issues.apache.org/jira/browse/ZEPPELIN
Meethu Mathew created ZEPPELIN-2136:
---
Summary: --files in SPARK_SUBMIT_OPTIONS not working
Key: ZEPPELIN-2136
URL: https://issues.apache.org/jira/browse/ZEPPELIN-2136
Project: Zeppelin
Hi,
Add HADOOP_HOME=/path/to/hadoop/folder in /etc/default/mesos-slave in all
mesos agents and restart mesos
Regards,
Meethu Mathew
On Thu, Nov 10, 2016 at 4:57 PM, Yu Wei <yu20...@hotmail.com> wrote:
> Hi Guys,
>
> I failed to launch spark jobs on mesos. Actually I su
Meethu Mathew created ZEPPELIN-1562:
---
Summary: Wrong documentation in the 'Run a paragraph synchronously'
REST API
Key: ZEPPELIN-1562
URL: https://issues.apache.org/jira/browse/ZEPPELIN-1562
Project
[
https://issues.apache.org/jira/browse/SPARK-12755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15282452#comment-15282452
]
Meethu Mathew commented on SPARK-12755:
---
Hi,
I am facing similar issues again in 1.6.1 standalone
[
https://issues.apache.org/jira/browse/SPARK-11227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15266237#comment-15266237
]
Meethu Mathew commented on SPARK-11227:
---
I am also facing the same issue when HA is setup
[
https://issues.apache.org/jira/browse/SPARK-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15121332#comment-15121332
]
Meethu Mathew commented on SPARK-8402:
--
[~mengxr] [~josephkb] This ticket is in idle state for a long
[
https://issues.apache.org/jira/browse/SPARK-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Meethu Mathew updated SPARK-8402:
-
Summary: Add DP means clustering to MLlib (was: DP means clustering )
> Add DP means cluster
[
https://issues.apache.org/jira/browse/SPARK-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15050263#comment-15050263
]
Meethu Mathew commented on SPARK-6612:
--
[~mengxr] This issue is resolved. But it seems Apache Spark
[
https://issues.apache.org/jira/browse/SPARK-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15028657#comment-15028657
]
Meethu Mathew commented on SPARK-2572:
--
[~srowen] We are facing this issue with Mesos fine grained
Hi all, Can somebody point me to the implementation of predict() in
LogisticRegressionModel of spark mllib? I could find a predictPoint() in the
class LogisticRegressionModel, but where is predict()?
Thanks & Regards, Meethu M
Hi,
We are using Mesos fine-grained mode because we can have multiple instances of
Spark sharing machines, and each application gets resources dynamically
allocated. Thanks & Regards, Meethu M
On Wednesday, 4 November 2015 5:24 AM, Reynold Xin
wrote:
If you
[
https://issues.apache.org/jira/browse/SPARK-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14958374#comment-14958374
]
Meethu Mathew commented on SPARK-6724:
--
I am not able to take this PR forward. Can somebody take
Hi,
In https://cwiki.apache.org/confluence/display/SPARK/Wiki+Homepage, the
current release window has not been changed from 1.5. Can anybody give an
idea of the expected dates for the 1.6 version?
Regards,
Meethu Mathew
Senior Engineer
Flytxt
Try coalesce(1) before writing. Thanks & Regards, Meethu M
On Tuesday, 15 September 2015 6:49 AM, java8964
wrote:
[
https://issues.apache.org/jira/browse/SPARK-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740160#comment-14740160
]
Meethu Mathew commented on SPARK-6724:
--
[~josephkb] I will take a look into it and update the PR
[
https://issues.apache.org/jira/browse/SPARK-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Meethu Mathew updated SPARK-8402:
-
Description:
At present, all the clustering algorithms in MLlib require the number of
clusters
[
https://issues.apache.org/jira/browse/SPARK-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14722999#comment-14722999
]
Meethu Mathew commented on SPARK-6724:
--
[~josephkb] Could you plz give your opinion
On Wed, Aug 12, 2015 at 3:08 PM, Burak Yavuz brk...@gmail.com wrote:
Are you running from master? Could you delete line 222 of
make-distribution.sh? We updated when we build sparkr.zip. I'll submit a fix for
it for 1.5 and master.
Burak
On Wed, Aug 12, 2015 at 3:31 AM, MEETHU MATHEW meethu2
Hi, Try using coalesce(1) before calling saveAsTextFile(). Thanks & Regards,
Meethu M
On Wednesday, 5 August 2015 7:53 AM, Brandon White
bwwintheho...@gmail.com wrote:
What is the best way to make saveAsTextFile save as only a single file?
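A conceptual sketch of why coalesce(1) answers this: saveAsTextFile writes one part-NNNNN file per partition, so collapsing the RDD to a single partition first yields a single output file. The functions below are plain-Python stand-ins for illustration, not the Spark API:

```python
# Toy model: each inner list is one partition; saving writes one
# part file per partition.
def save_as_text_file(partitions):
    return ["part-%05d" % i for i in range(len(partitions))]

def coalesce(partitions, n):
    # Naive redistribution of existing partitions into n partitions,
    # mimicking what Spark's coalesce does at the partition level.
    merged = [[] for _ in range(n)]
    for i, part in enumerate(partitions):
        merged[i % n].extend(part)
    return merged

data = [[1, 2], [3], [4, 5]]
print(save_as_text_file(data))               # ['part-00000', 'part-00001', 'part-00002']
print(save_as_text_file(coalesce(data, 1)))  # ['part-00000']
```

Note that in real Spark, coalescing to one partition funnels all data through a single task, so this trades parallelism for a single file.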
Hi,
I am getting the assertion error while trying to run build/sbt unidoc, the same
as you described in "Building scaladoc using build/sbt unidoc failure". Could
you tell me how you got it working?
Building scaladoc using build/sbt unidoc failure: Hello, I am trying to build
[
https://issues.apache.org/jira/browse/SPARK-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14591197#comment-14591197
]
Meethu Mathew commented on SPARK-8402:
--
Could you please assign the ticket to me
, Evan R. Sparks, Andre Wibisono.
I have raised a JIRA ticket at
https://issues.apache.org/jira/browse/SPARK-8402
Suggestions and guidance are welcome.
Regards,
Meethu Mathew
Senior Engineer
Flytxt
www.flytxt.com | Visit our blog http://blog.flytxt.com/ | Follow us
http://www.twitter.com/flytxt
Meethu Mathew created SPARK-8402:
Summary: DP means clustering
Key: SPARK-8402
URL: https://issues.apache.org/jira/browse/SPARK-8402
Project: Spark
Issue Type: New Feature
[
https://issues.apache.org/jira/browse/SPARK-8402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14589392#comment-14589392
]
Meethu Mathew commented on SPARK-8402:
--
Could anyone please assign this ticket to me
[
https://issues.apache.org/jira/browse/SPARK-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578751#comment-14578751
]
Meethu Mathew commented on SPARK-8018:
--
Should I add a new test for this in the test
Hi,
I added
createDependencyReducedPom in my pom.xml and the problem is solved.
<!-- Work around MSHADE-148 -->
<createDependencyReducedPom>false</createDependencyReducedPom>
Thank you @Steve and @Ted
Regards,
Meethu Mathew
Senior Engineer
Flytxt
On Thu, Jun 4, 2015 at 9:51
[
https://issues.apache.org/jira/browse/SPARK-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572489#comment-14572489
]
Meethu Mathew commented on SPARK-8018:
--
[~josephkb][~mengxr] Thank you
issue.
Regards,
Meethu Mathew
Senior Engineer
Flytxt
[
https://issues.apache.org/jira/browse/SPARK-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14572489#comment-14572489
]
Meethu Mathew edited comment on SPARK-8018 at 6/4/15 10:11 AM
Try using coalesce. Thanks & Regards,
Meethu M
On Wednesday, 3 June 2015 11:26 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) deepuj...@gmail.com
wrote:
I am running a series of Spark functions with 9000 executors, and it's resulting
in 9000+ files, which is exceeding the namespace file count quota.
How can Spark
[
https://issues.apache.org/jira/browse/SPARK-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14570620#comment-14570620
]
Meethu Mathew commented on SPARK-8018:
--
[~josephkb] For initialization using
version of Spark, mostly we specify a Hadoop version (which is not the
default one). In this case, make-distribution.sh should be supplied the
same Maven options we used for building Spark. This is not specified in
the documentation. Please correct me if I am wrong.
Regards,
Meethu Mathew
operation in multiple threads within
a function, or do you want to run multiple jobs using multiple threads? I am
wondering why the Python thread module can't be used. Or have you already given
it a try?
On 18 May 2015 16:39, MEETHU MATHEW meethu2...@yahoo.co.in wrote:
Hi Akhil,
The python wrapper
...@sigmoidanalytics.com
wrote:
Did you happened to have a look at the spark job server? Someone wrote a
python wrapper around it, give it a try.
ThanksBest Regards
On Thu, May 14, 2015 at 11:10 AM, MEETHU MATHEW meethu2...@yahoo.co.in wrote:
Hi all,
Quote Inside a given Spark application (SparkContext instance
Hi, I think you can't supply an initial set of centroids to kmeans. Thanks &
Regards,
Meethu M
On Friday, 15 May 2015 12:37 AM, Suman Somasundar
suman.somasun...@oracle.com wrote:
[
https://issues.apache.org/jira/browse/SPARK-7651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14544910#comment-14544910
]
Meethu Mathew commented on SPARK-7651:
--
[~josephkb] Yea, I wil fix it asap.
PySpark
[
https://issues.apache.org/jira/browse/SPARK-7651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14544924#comment-14544924
]
Meethu Mathew commented on SPARK-7651:
--
Could you please tell me where I should make
[
https://issues.apache.org/jira/browse/SPARK-7651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14544929#comment-14544929
]
Meethu Mathew commented on SPARK-7651:
--
Ok thank you
PySpark GMM predict
[
https://issues.apache.org/jira/browse/SPARK-7651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14544930#comment-14544930
]
Meethu Mathew commented on SPARK-7651:
--
Ok thank you
PySpark GMM predict
Hi all,
Quote Inside a given Spark application (SparkContext instance), multiple
parallel jobs can run simultaneously if they were submitted from separate
threads.
How to run multiple jobs in one SPARKCONTEXT using separate threads in pyspark?
I found some examples in scala and java, but
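A sketch of the threading pattern the quoted documentation describes, using concurrent.futures with a stand-in function. In a real PySpark program, run_job would invoke an action against the shared SparkContext; the sc call shown in the comment is an assumption for illustration, not code from this thread:

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(job_id):
    # Stand-in for a Spark action submitted from this thread, e.g.
    # sc.parallelize(range(1000)).count() against the shared SparkContext.
    return job_id * job_id

# Each run_job call is submitted from a separate thread, which is what lets
# a single SparkContext schedule several jobs concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_job, range(4)))
print(results)  # [0, 1, 4, 9]
```

Because the jobs only coordinate through the driver, plain Python threads (or a pool, as here) are enough; the Spark scheduler handles fair sharing between the concurrently submitted jobs.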
Hi,
Is it really necessary to run mvn --projects assembly/ -DskipTests
install? Could you please explain why this is needed?
I got the changes after running mvn --projects streaming/ -DskipTests
package.
Regards,
Meethu
On Monday 04 May 2015 02:20 PM,
Hi all,
I started spark-shell in spark-1.3.0 and did some actions. The UI was showing 8
cores under the running applications tab. But when I exited the spark-shell
using exit, the application moved to the completed applications tab and the
number of cores was 0. Again when I exited the
[
https://issues.apache.org/jira/browse/SPARK-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14379333#comment-14379333
]
Meethu Mathew commented on SPARK-6485:
--
As you had mentioned here https
[
https://issues.apache.org/jira/browse/SPARK-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14358529#comment-14358529
]
Meethu Mathew commented on SPARK-6227:
--
[~mengxr] Please give your inputs
[
https://issues.apache.org/jira/browse/SPARK-6227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356428#comment-14356428
]
Meethu Mathew commented on SPARK-6227:
--
Interested in working on this ticket. Could
Hi,
I am trying to run examples of Spark (master branch from git) from
IntelliJ (14.0.2) but facing errors. These are the steps I followed:
1. git clone the master branch of Apache Spark.
2. Build it using mvn -DskipTests clean install.
3. In IntelliJ select Import Projects and choose the pom.xml.
Hi,
I am not able to read from HDFS (Intel distribution Hadoop, Hadoop version
1.0.3) from spark-shell (Spark version 1.2.1). I built Spark using the
command mvn -Dhadoop.version=1.0.3 clean package, started spark-shell, and
read an HDFS file using sc.textFile(), and the exception is
WARN
Hi,
The mail id given in
https://cwiki.apache.org/confluence/display/SPARK/Powered+By+Spark seems
to be failing. Can anyone tell me how to get added to the Powered By Spark
list?
--
Regards,
*Meethu*
[
https://issues.apache.org/jira/browse/SPARK-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306593#comment-14306593
]
Meethu Mathew commented on SPARK-5609:
--
Please assign the ticket to me
[
https://issues.apache.org/jira/browse/SPARK-5609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306593#comment-14306593
]
Meethu Mathew edited comment on SPARK-5609 at 2/5/15 4:03 AM
Hi,
The test suites in the KMeans class in clustering.py are not updated to
take the seed value and hence are failing.
Shall I make the changes and submit it along with my PR( Python API for
Gaussian Mixture Model) or create a JIRA ?
Regards,
Meethu
Hi,
Sorry it was my mistake. My code was not properly built.
Regards,
Meethu
_http://www.linkedin.com/home?trk=hb_tab_home_top_
On Thursday 22 January 2015 10:39 AM, Meethu Mathew wrote:
Hi,
The test suites in the Kmeans class in clustering.py is not updated to
take the seed value
[
https://issues.apache.org/jira/browse/SPARK-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14286942#comment-14286942
]
Meethu Mathew commented on SPARK-5012:
--
[~tgaloppo] Thank you..Will update this PR
[
https://issues.apache.org/jira/browse/SPARK-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14279811#comment-14279811
]
Meethu Mathew commented on SPARK-5012:
--
Once SPARK-5019 is resolved, we will make
Hi all,
In the Python-to-Java object conversion done in the method _py2java in
spark/python/pyspark/mllib/common.py, why are we doing individual
conversions using MapConverter and ListConverter? The same can be achieved
using
bytearray(PickleSerializer().dumps(obj))
obj =
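The proposal above boils down to a pickle round trip. A self-contained sketch using the stdlib pickle module directly (PySpark's PickleSerializer wraps this same machinery; the parameter dict below is invented for illustration):

```python
import pickle

# Hypothetical model parameters standing in for the MLlib objects the
# thread discusses; any picklable Python object works the same way.
params = {"mu": [0.8786, -0.7855], "k": 2}
payload = bytearray(pickle.dumps(params))  # bytes handed across to the JVM side
restored = pickle.loads(bytes(payload))
print(restored == params)  # True
```

The trade-off versus Py4J's per-type converters is that the JVM side must then unpickle the payload itself, rather than receiving ready-made Java Maps and Lists.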
[
https://issues.apache.org/jira/browse/SPARK-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14273561#comment-14273561
]
Meethu Mathew commented on SPARK-5012:
--
I added a new class GaussianMixtureModel
PM, Davies Liu wrote:
On Sun, Jan 11, 2015 at 10:21 PM, Meethu Mathew
meethu.mat...@flytxt.com wrote:
Hi,
This is the code I am running.
mu = (Vectors.dense([0.8786, -0.7855]),Vectors.dense([-0.1863, 0.7799]))
membershipMatrix = callMLlibFunc(findPredict, rdd.map(_convert_to_vector),
mu)
What's
here?
On Sun, Jan 11, 2015 at 9:28 PM, Meethu Mathew meethu.mat...@flytxt.com wrote:
Hi,
Thanks Davies .
I added a new class GaussianMixtureModel in clustering.py and the method
predict in it, and I was trying to pass a numpy array from this method. I
converted it to DenseVector and it's solved now
, but now the exception is
'list' object has no attribute '_get_object_id'
and when I give a tuple input (Vectors.dense([0.8786,
-0.7855]),Vectors.dense([-0.1863, 0.7799])) exception is like
'numpy.ndarray' object has no attribute '_get_object_id'
Regards,
*Meethu Mathew*
*Engineer
[
https://issues.apache.org/jira/browse/SPARK-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261923#comment-14261923
]
Meethu Mathew commented on SPARK-5012:
--
The python implementation of the algorithm
[
https://issues.apache.org/jira/browse/SPARK-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261936#comment-14261936
]
Meethu Mathew commented on SPARK-5015:
--
Instead of using a random seed, using
[
https://issues.apache.org/jira/browse/SPARK-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14261946#comment-14261946
]
Meethu Mathew commented on SPARK-5015:
--
We would try to experiment with both
use the numpy functions. Will it take too much time?
I have found some scripts that are not from MLlib and were created by other
developers (credits to Meethu Mathew from Flytxt, thanks for giving me
insights! :))
Many thanks and look forward to getting feedbacks!
Best, Danqing
GMMSpark.py (7K
[
https://issues.apache.org/jira/browse/SPARK-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14242486#comment-14242486
]
Meethu Mathew commented on SPARK-4156:
--
[~tgaloppo] The current version of the code
Hi, Try this. Change spark-mllib to spark-mllib_2.10:
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.10" % "1.1.1",
  "org.apache.spark" % "spark-mllib_2.10" % "1.1.1"
)
Thanks Regards,
Meethu M
On Friday, 12 December 2014 12:22 PM, amin mohebbi
aminn_...@yahoo.com.INVALID wrote:
PM, MEETHU MATHEW
meethu2...@yahoo.co.in wrote:
Hi, I have a similar problem. I modified the code in mllib and examples. I did
mvn install -pl mllib
mvn install -pl examples
But when I run the program in examples using run-example, the older version of
mllib (before the changes were made
[
https://issues.apache.org/jira/browse/SPARK-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14231226#comment-14231226
]
Meethu Mathew commented on SPARK-4156:
--
We had run the GMM code on two public
[
https://issues.apache.org/jira/browse/SPARK-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14232639#comment-14232639
]
Meethu Mathew commented on SPARK-4156:
--
we considered only diagonal covariance matrix
Hi, I have a similar problem. I modified the code in mllib and examples. I did
mvn install -pl mllib
mvn install -pl examples
But when I run the program in examples using run-example, the older version of
mllib (before the changes were made) is getting executed. How do I get the
changes made in mllib
[
https://issues.apache.org/jira/browse/SPARK-3588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14224091#comment-14224091
]
Meethu Mathew commented on SPARK-3588:
--
[~mengxr] We have completed the pyspark
Hi Ashutosh,
Please edit the README file. I think the following function call has changed
now:
model = OutlierWithAVFModel.outliers(master: String, input dir: String,
percentage: Double)
Regards,
*Meethu Mathew*
*Engineer*
*Flytxt*
_http://www.linkedin.com/home?trk=hb_tab_home_top_
from a file? sc.textFile will simply give us an RDD; how do we make it
a Vector[String]?
Could you please share any code snippet of this conversion if you have one.
Regards,
Meethu Mathew
On Friday 14 November 2014 10:02 AM, Meethu Mathew wrote:
Hi Ashutosh,
Please edit the README file. I think
Hi,
I was also trying ISpark, but I couldn't even start the notebook. I am getting
the following error.
ERROR:tornado.access:500 POST /api/sessions (127.0.0.1) 10.15ms
referer=http://localhost:/notebooks/Scala/Untitled0.ipynb
How did you start the notebook?
Thanks Regards,
Meethu M
Hi,
This question was asked earlier and I did it in the way specified. I am getting
java.lang.ClassNotFoundException.
Can somebody explain all the steps required to build a Spark app using IntelliJ
(latest version), starting from creating the project to running it. I searched
a lot but couldn't
Try to set --total-executor-cores to limit how many total cores it can use.
Thanks Regards,
Meethu M
On Thursday, 2 October 2014 2:39 AM, Akshat Aranya aara...@gmail.com wrote:
I guess one way to do so would be to run 1 worker per node, like say, instead
of running 1 worker and giving
Hi all,
My code was working fine in Spark 1.0.2, but after upgrading to 1.1.0, it's
throwing exceptions and tasks are getting failed.
The code contains some map and filter transformations followed by groupByKey
(reduceByKey in another code). What I could find out is that the code works
fine
[
https://issues.apache.org/jira/browse/SPARK-3588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154434#comment-14154434
]
Meethu Mathew commented on SPARK-3588:
--
Ok. We will start implementing the Scala