[
https://issues.apache.org/jira/browse/SPARK-3434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14167152#comment-14167152
]
Burak Yavuz commented on SPARK-3434:
[~ConcreteVitamin], any updates? Anything I
Branch: refs/heads/QA_4_2
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 2079e9cd9abf4d76e50494ce4bf8f7c1d4999164
https://github.com/phpmyadmin/phpmyadmin/commit/2079e9cd9abf4d76e50494ce4bf8f7c1d4999164
Author: Burak Yavuz
Date: 2014-10-06 (Mon, 06 Oct 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/localized_docs
Commit: 1169c49661f124d4d617d1316d62404d598d30bf
https://github.com/phpmyadmin/localized_docs/commit/1169c49661f124d4d617d1316d62404d598d30bf
Author: Burak Yavuz
Date: 2014-10-02 (Thu, 02 Oct 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 5288df43097df61237fe4d9320a56b0886ed11db
https://github.com/phpmyadmin/phpmyadmin/commit/5288df43097df61237fe4d9320a56b0886ed11db
Author: Burak Yavuz
Date: 2014-10-02 (Thu, 02 Oct 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/localized_docs
Commit: 38df143ca748c7a5236c70cb0c715ea948195184
https://github.com/phpmyadmin/localized_docs/commit/38df143ca748c7a5236c70cb0c715ea948195184
Author: Burak Yavuz
Date: 2014-10-02 (Thu, 02 Oct 2014
Hi,
It appears that the step size is so high that the model is diverging with the
added noise.
Could you try setting the step size to 0.1 or 0.01?
Best,
Burak
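To illustrate the step-size point (a minimal, framework-free sketch, not code from this thread):

```python
# Minimal 1-D gradient descent on f(w) = (w - 3)^2: with a large step size
# the iterates overshoot and diverge; with a small one (0.1) they converge.
def descend(step_size, iters=50):
    w = 0.0
    for _ in range(iters):
        grad = 2.0 * (w - 3.0)  # derivative of (w - 3)^2
        w -= step_size * grad
    return w

print(descend(1.5))  # diverges: magnitude blows up
print(descend(0.1))  # converges to ~3.0
```

The same dynamic plays out in SGD: noise plus an oversized step keeps kicking the weights away from the minimum.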
- Original Message -
From: "Krishna Sankar"
To: user@spark.apache.org
Sent: Wednesday, October 1, 2014 12:43:20 PM
Subj
[
https://issues.apache.org/jira/browse/SPARK-3631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14143484#comment-14143484
]
Burak Yavuz commented on SPARK-3631:
Thanks for setting this up [~aash]! [~pwen
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/localized_docs
Commit: 1c004d7e341e8e0d4b5c17dcdc64181220725193
https://github.com/phpmyadmin/localized_docs/commit/1c004d7e341e8e0d4b5c17dcdc64181220725193
Author: Burak Yavuz
Date: 2014-09-22 (Mon, 22 Sep 2014
Hi Gilberto,
Could you please attach the driver logs as well, so that we can pinpoint what's
going wrong? Could you also add the flag
`--driver-memory 4g` while submitting your application and try that as well?
Best,
Burak
- Original Message -
From: "Gilberto Lira"
To: user@spark.apach
Hi,
I believe it's because you're trying to use a Function over an RDD inside
another RDD, which is not possible. Instead of using a
`Function>`, could you try a plain Function, with
`public Void call(Float arg0) throws Exception { `
and
`System.out.println(arg0)`
instead. I'm not perfectly sure of the semantics i
Hi,
spark-1.0.1/examples/src/main/python/kmeans.py => Naive example for users to
understand how to code in Spark
spark-1.0.1/python/pyspark/mllib/clustering.py => Use this!!!
Bonus: spark-1.0.1/examples/src/main/python/mllib/kmeans.py => Example on how
to call KMeans. Feel free to use it as a t
Hi,
The spacing between the inputs should be a single space, not a tab. I suspect
your inputs have tabs between them instead of a single space, which is why the
parser cannot parse the input.
Best,
Burak
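A tiny sketch of that failure mode (hypothetical parser, not the actual one from the thread): splitting on single spaces leaves a tab-separated line as one unparseable token.

```python
# A space-delimited parser chokes on tab-separated input: split(" ") leaves
# the whole tab-separated line as a single token that float() rejects.
def parse_point(line):
    return [float(p) for p in line.split(" ")]

print(parse_point("1.0 2.0 3.0"))  # parses fine: [1.0, 2.0, 3.0]
print(len("1.0\t2.0\t3.0".split(" ")))  # 1: tabs leave a single token
```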
- Original Message -
From: "Sameer Tilak"
To: user@spark.apache.org
Sent: Wednesda
Hi,
Could you try repartitioning the data with .repartition(# of cores on machine),
or, while reading the data, supplying the minimum number of partitions, as in
sc.textFile(path, # of cores on machine)?
It may be that the whole dataset is stored in a single block. If it is billions of
rows, then the indexing
I believe it will be in the main repo.
Burak
- Original Message -
From: "Kyle Ellrott"
To: "Burak Yavuz"
Cc: dev@spark.apache.org
Sent: Wednesday, September 17, 2014 9:48:54 AM
Subject: Re: [mllib] State of Multi-Model training
This sounds like a pretty major re
To properly perform PCA, you must left multiply the resulting DenseMatrix with
the original RowMatrix. The result will also be a RowMatrix,
therefore you can easily access the values by .values, and train KMeans on that.
Don't forget to Broadcast the DenseMatrix returned from
RowMatrix.computePr
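In NumPy terms (an illustrative sketch, not the MLlib API), the projection step described above looks like this: multiply the centered data matrix by the principal-components matrix, then hand the projected rows to KMeans.

```python
import numpy as np

# Sketch of the PCA projection step: data (rows) times principal components.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))              # stands in for the RowMatrix
Xc = X - X.mean(axis=0)                    # center the columns
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:2].T                      # 5 x 2, the "DenseMatrix"
projected = Xc @ components                # 100 x 2, ready for KMeans
print(projected.shape)
```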
age, except in Spark Streaming, and some MLlib algorithms.
If you can help with the guide, I think it would be a nice feature to have!
Burak
- Original Message -
From: "Andrew Ash"
To: "Burak Yavuz"
Cc: "Макар Красноперов" , "user"
Sent: Wednesday
etting the directory will not be enough.
Best,
Burak
- Original Message -
From: "Andrew Ash"
To: "Burak Yavuz"
Cc: "Макар Красноперов" , "user"
Sent: Wednesday, September 17, 2014 10:19:42 AM
Subject: Re: Spark and disk usage.
Hi Burak,
Most discussion
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/localized_docs
Commit: a01814147d950fa4fa4a4a9006a7c5690a9701b6
https://github.com/phpmyadmin/localized_docs/commit/a01814147d950fa4fa4a4a9006a7c5690a9701b6
Author: Burak Yavuz
Date: 2014-09-17 (Wed, 17 Sep 2014
Hi,
The files you mentioned are temporary files written by Spark during shuffling.
ALS will write a LOT of those files as it is a shuffle heavy algorithm.
Those files will be deleted after your program completes as Spark looks for
those files in case a fault occurs. Having those files ready allo
ny feedback from you and the rest of the
community!
Best,
Burak
- Original Message -
From: "Kyle Ellrott"
To: "Burak Yavuz"
Cc: dev@spark.apache.org
Sent: Tuesday, September 16, 2014 9:41:45 PM
Subject: Re: [mllib] State of Multi-Model training
I'd be intereste
Hi Kyle,
I'm actively working on it now. It's pretty close to completion, I'm just
trying to figure out bottlenecks and optimize as much as possible.
As Phase 1, I implemented multi model training on Gradient Descent. Instead of
performing Vector-Vector operations on rows (examples) and weights,
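The idea can be sketched in NumPy (illustrative only; names and shapes are my assumptions, not the actual Spark implementation): stack k candidate weight vectors as columns of one matrix, so each gradient step becomes a single matrix-matrix product instead of k separate vector passes.

```python
import numpy as np

# Train k least-squares models in one loop: each gradient step is one
# matrix-matrix product rather than k independent vector-vector passes.
rng = np.random.default_rng(1)
n, d, k = 200, 4, 3                        # examples, features, models
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w
W = rng.normal(size=(d, k))                # k weight vectors as columns
for _ in range(1000):
    residuals = X @ W - y[:, None]         # n x k: all models at once
    W -= 0.01 * (X.T @ residuals) / n      # d x k gradient
print(np.abs(W - true_w[:, None]).max())   # every column approaches true_w
```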
Hi,
I'm not a master of SparkSQL, but from what I understand, the problem is that
you're trying to access an RDD
inside an RDD here: val xyz = file.map(line => ***
extractCurRate(sqlContext.sql("select rate ... *** and
here: xyz = file.map(line => *** extractCurRate(sqlContext.sql("select rate
Hi,
val test = persons.value
.map{tuple => (tuple._1, tuple._2
.filter{event => *inactiveIDs.filter(event2 => event2._1 ==
tuple._1).count() != 0})}
Your problem is right between the asterisks. You can't perform an RDD operation
inside another RDD operation, because RDDs can't be serialized
Hi Nicolas,
It seems that you are starting to lose executors, and then the job starts to
fail. Could you please share more information about your application
so that we can help you debug it, such as what you're trying to do, along with
your driver logs?
Best,
Burak
- Original Message -
F
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/localized_docs
Commit: 6d551e2fce7ea6e02e1194acc6a800a1af836b5b
https://github.com/phpmyadmin/localized_docs/commit/6d551e2fce7ea6e02e1194acc6a800a1af836b5b
Author: Burak Yavuz
Date: 2014-09-07 (Sun, 07 Sep 2014
[
https://issues.apache.org/jira/browse/SPARK-3418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz updated SPARK-3418:
---
Summary: [MLlib] Additional BLAS and Local Sparse Matrix support (was:
Additional BLAS and Local
Burak Yavuz created SPARK-3418:
--
Summary: Additional BLAS and Local Sparse Matrix support
Key: SPARK-3418
URL: https://issues.apache.org/jira/browse/SPARK-3418
Project: Spark
Issue Type: New
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/localized_docs
Commit: 1e0179a5b88ed87450de23a73fac265a686d0476
https://github.com/phpmyadmin/localized_docs/commit/1e0179a5b88ed87450de23a73fac265a686d0476
Author: Burak Yavuz
Date: 2014-08-30 (Sat, 30 Aug 2014
[
https://issues.apache.org/jira/browse/SPARK-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14114873#comment-14114873
]
Burak Yavuz commented on SPARK-3280:
I don't have as detailed a comparison
[
https://issues.apache.org/jira/browse/SPARK-3280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz updated SPARK-3280:
---
Attachment: hash-sort-comp.png
> Made sort-based shuffle the default implementat
Hi,
By default, Spark uses approximately 60% of the executor heap memory to store
RDDs. That's why you see 8.6 GB instead of 16 GB. The 95.5 is therefore the sum
of all the 8.6 GB executor allocations plus the driver memory.
Best,
Burak
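As a back-of-the-envelope check (assuming the Spark 1.x defaults of a 0.6 storage fraction and a 0.9 safety fraction; these values are an assumption, not stated in the thread):

```python
executor_heap_gb = 16.0
# Spark 1.x defaults (assumed): 60% storage fraction x 90% safety margin
storage_gb = executor_heap_gb * 0.6 * 0.9
print(round(storage_gb, 1))  # 8.6
```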
- Original Message -
From: "SK"
To: u...@spark.incubator.ap
Yeah, saveAsTextFile is an RDD-specific method. If you really want to use that
method, just turn the map into an RDD:
`sc.parallelize(x.toSeq).saveAsTextFile(...)`
Reading through the API docs will show you many more alternative solutions!
Best,
Burak
- Original Message -
From: "SK"
+1. Tested MLlib algorithms on Amazon EC2, algorithms show speed-ups between
1.5-5x compared to the 1.0.2 release.
- Original Message -
From: "Patrick Wendell"
To: dev@spark.apache.org
Sent: Thursday, August 28, 2014 8:32:11 PM
Subject: Re: [VOTE] Release Apache Spark 1.1.0 (RC2)
I'll k
Hi Sameer,
I've faced this issue before. They don't show up on
http://s3.amazonaws.com/big-data-benchmark/. But you can directly use:
`sc.textFile("s3n://big-data-benchmark/pavlo/text/tiny/crawl")`
The gotcha is that you also need to supply which dataset you want: crawl,
uservisits, or rankings
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: d0e0ed047816fa84ce213df88be75670c765eeb5
https://github.com/phpmyadmin/phpmyadmin/commit/d0e0ed047816fa84ce213df88be75670c765eeb5
Author: Burak Yavuz
Date: 2014-08-27 (Wed, 27 Aug 2014
Hi David,
Your job is probably hanging on the groupByKey step. Either GC is kicking
in and the process starts to hang, or the data is unbalanced and you end up with
stragglers (once GC kicks in, you'll start to get the connection errors you
shared). If you don't care about the list of value
Hi,
The error doesn't occur during saveAsTextFile but rather during the groupByKey,
as far as I can tell. We strongly urge users not to use groupByKey
if they don't have to. What I would suggest is the following work-around:
sc.textFile(baseFile).map { line =>
val fields = line.split("\t")
(
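The quoted work-around is cut off in the archive; as a framework-free sketch of the underlying idea (combine values per key incrementally, as reduceByKey does, instead of materializing whole groups):

```python
# Combine values per key as they stream by, never holding an entire group in
# memory at once -- the reason reduceByKey scales better than groupByKey.
def reduce_by_key(pairs, combine):
    acc = {}
    for key, value in pairs:
        acc[key] = combine(acc[key], value) if key in acc else value
    return acc

pairs = [("a", 1), ("b", 2), ("a", 3), ("b", 4), ("a", 5)]
print(reduce_by_key(pairs, lambda x, y: x + y))  # {'a': 9, 'b': 6}
```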
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 9cedf0a58feaad7604cdf7d09828854c11c630e6
https://github.com/phpmyadmin/phpmyadmin/commit/9cedf0a58feaad7604cdf7d09828854c11c630e6
Author: Burak Yavuz
Date: 2014-08-25 (Mon, 25 Aug 2014
Spearman's correlation requires the calculation of ranks for columns. You can
check out the code here and slice out the part you need:
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/stat/correlation/SpearmanCorrelation.scala
Best,
Burak
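For reference, the rank-then-correlate idea in plain Python (a simplified sketch with no tie handling, unlike the linked implementation):

```python
# Spearman's rho = Pearson correlation computed on the ranks of each column.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def spearman(a, b):
    return pearson(ranks(a), ranks(b))

print(spearman([1, 2, 3, 4], [10, 20, 30, 40]))  # ~1.0: perfectly monotone
print(spearman([1, 2, 3, 4], [40, 30, 20, 10]))  # ~-1.0
```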
- Original Message
You can check out this pull request: https://github.com/apache/spark/pull/476
LDA is on the roadmap for the 1.2 release, hopefully we will officially support
it then!
Best,
Burak
- Original Message -
From: "Denny Lee"
To: user@spark.apache.org
Sent: Thursday, August 21, 2014 10:10:35 P
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 0d8c331a97e2861765cff4df7aabe5e757cac4bb
https://github.com/phpmyadmin/phpmyadmin/commit/0d8c331a97e2861765cff4df7aabe5e757cac4bb
Author: Burak Yavuz
Date: 2014-08-16 (Sat, 16 Aug 2014
phpmyadmin/phpmyadmin/commit/21a01002926cd479b2e2592b4fbea827509fed14
Author: Burak Yavuz
Date: 2014-08-16 (Sat, 16 Aug 2014)
Changed paths:
M po/tr.po
Log Message:
---
Translated using Weblate (Turkish)
Currently translated at 100.0% (2964 of 2964)
[ci skip]
Compa
[
https://issues.apache.org/jira/browse/SPARK-3080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz updated SPARK-3080:
---
Description:
The stack trace is below:
{quote}
java.lang.ArrayIndexOutOfBoundsException: 2716
[
https://issues.apache.org/jira/browse/SPARK-3080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz updated SPARK-3080:
---
Description:
The stack trace is below:
{quote}
java.lang.ArrayIndexOutOfBoundsException: 2716
Burak Yavuz created SPARK-3080:
--
Summary: ArrayIndexOutOfBoundsException in ALS for Large datasets
Key: SPARK-3080
URL: https://issues.apache.org/jira/browse/SPARK-3080
Project: Spark
Issue
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 2b61fb1281e094a885f580f83d8381c7cca8bb04
https://github.com/phpmyadmin/phpmyadmin/commit/2b61fb1281e094a885f580f83d8381c7cca8bb04
Author: Burak Yavuz
Date: 2014-08-13 (Wed, 13 Aug 2014
[
https://issues.apache.org/jira/browse/SPARK-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz resolved SPARK-2831.
Resolution: Fixed
> performance tests for linear classification meth
[
https://issues.apache.org/jira/browse/SPARK-2829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz resolved SPARK-2829.
Resolution: Fixed
> Implement MLlib performance tests in spark-p
[
https://issues.apache.org/jira/browse/SPARK-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz resolved SPARK-2837.
Resolution: Done
> performance tests for ALS
> -
>
>
[
https://issues.apache.org/jira/browse/SPARK-2836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz closed SPARK-2836.
--
Resolution: Fixed
> performance tests for k-me
[
https://issues.apache.org/jira/browse/SPARK-2834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz resolved SPARK-2834.
Resolution: Fixed
> performance tests for linear algebra functi
[
https://issues.apache.org/jira/browse/SPARK-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz resolved SPARK-2833.
Resolution: Fixed
> performance tests for linear regress
Hi,
// Initialize the optimizer using logistic regression as the loss function with
// L2 regularization
val lbfgs = new LBFGS(new LogisticGradient(), new SquaredL2Updater())
// Set the hyperparameters
lbfgs.setMaxNumIterations(numIterations).setRegParam(regParam).setConvergenceTol(tol).setNumCorrections(numCor)
[
https://issues.apache.org/jira/browse/SPARK-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz updated SPARK-2916:
---
Description:
While running any of the regression algorithms with gradient descent, the
[
https://issues.apache.org/jira/browse/SPARK-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz updated SPARK-2916:
---
Component/s: Spark Core
> [MLlib] While running regression tests with dense vectors of len
[
https://issues.apache.org/jira/browse/SPARK-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz updated SPARK-2916:
---
Description:
While running any of the regression algorithms with gradient descent, the
[
https://issues.apache.org/jira/browse/SPARK-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14090498#comment-14090498
]
Burak Yavuz commented on SPARK-2916:
will do
> [MLlib] While running reg
[
https://issues.apache.org/jira/browse/SPARK-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz updated SPARK-2916:
---
Description:
While running any of the regression algorithms with gradient descent, the
Burak Yavuz created SPARK-2916:
--
Summary: While running regression tests with dense vectors of
length greater than 1000, the treeAggregate blows up after several iterations
Key: SPARK-2916
URL: https
[
https://issues.apache.org/jira/browse/SPARK-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz updated SPARK-2916:
---
Summary: [MLlib] While running regression tests with dense vectors of
length greater than 1000, the
The following code will allow you to run Logistic Regression using L-BFGS:
val lbfgs = new LBFGS(new LogisticGradient(), new SquaredL2Updater())
lbfgs.setMaxNumIterations(numIterations).setRegParam(regParam).setConvergenceTol(tol).setNumCorrections(numCor)
val weights = lbfgs.optimize(data, initi
Hi Jay,
I've had the same problem you've been having in Question 1 with a synthetic
dataset. I thought I wasn't producing the dataset well enough. This seems to
be a bug. I will open a JIRA for it.
Instead of using:
ratings.map{ case Rating(u,m,r) => {
val pred = model.predict(u, m)
(r
Hi,
Could you try running spark-shell with the flag --driver-memory 2g or more if
you have more RAM available and try again?
Thanks,
Burak
- Original Message -
From: "AlexanderRiggers"
To: u...@spark.incubator.apache.org
Sent: Thursday, August 7, 2014 7:37:40 AM
Subject: KMeans Input F
Hi,
That is interesting. Would you please share some code on how you are setting
the regularization type, regularization parameters and running Logistic
Regression?
Thanks,
Burak
- Original Message -
From: "SK"
To: u...@spark.incubator.apache.org
Sent: Wednesday, August 6, 2014 6:18:4
Hi,
Could you please send the link for the example you are talking about?
minPartitions and numFeatures do not exist in the current API
for NaiveBayes as far as I know. So, I don't know how to answer your second
question.
Regarding your first question, guessing blindly, it should be related to
Branch: refs/heads/QA_4_2
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: ae0d3d038db1fde4baaa6c02ce7ca2aa11f76b74
https://github.com/phpmyadmin/phpmyadmin/commit/ae0d3d038db1fde4baaa6c02ce7ca2aa11f76b74
Author: Burak Yavuz
Date: 2014-08-06 (Wed, 06 Aug 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/localized_docs
Commit: 96fe3575d28d71967ff6d906c4cc1c720014427e
https://github.com/phpmyadmin/localized_docs/commit/96fe3575d28d71967ff6d906c4cc1c720014427e
Author: Burak Yavuz
Date: 2014-08-06 (Wed, 06 Aug 2014
Hi Guru,
Take a look at:
https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark
It has all the information you need on how to contribute to Spark. Also take a
look at:
https://issues.apache.org/jira/browse/SPARK/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: eab70ea7b0e1e17034ffb90fe246cc836e76fd97
https://github.com/phpmyadmin/phpmyadmin/commit/eab70ea7b0e1e17034ffb90fe246cc836e76fd97
Author: Burak Yavuz
Date: 2014-08-05 (Tue, 05 Aug 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: bc01c12eefc26e03088e30f36fe84cd1e727379c
https://github.com/phpmyadmin/phpmyadmin/commit/bc01c12eefc26e03088e30f36fe84cd1e727379c
Author: Burak Yavuz
Date: 2014-08-04 (Mon, 04 Aug 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 80197c034781461ce1b3a91d2b5a6e2853c926e0
https://github.com/phpmyadmin/phpmyadmin/commit/80197c034781461ce1b3a91d2b5a6e2853c926e0
Author: Burak Yavuz
Date: 2014-08-03 (Sun, 03 Aug 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: efb24cbafefcaeb708321c58e5be6d89dfc1c1e3
https://github.com/phpmyadmin/phpmyadmin/commit/efb24cbafefcaeb708321c58e5be6d89dfc1c1e3
Author: Burak Yavuz
Date: 2014-08-02 (Sat, 02 Aug 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/localized_docs
Commit: a6e1769df029c2df4e2858ac4bdefb788f789d75
https://github.com/phpmyadmin/localized_docs/commit/a6e1769df029c2df4e2858ac4bdefb788f789d75
Author: Burak Yavuz
Date: 2014-08-02 (Sat, 02 Aug 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: ce4ae5d1e5ef1411032740b069530219a263c020
https://github.com/phpmyadmin/phpmyadmin/commit/ce4ae5d1e5ef1411032740b069530219a263c020
Author: Burak Yavuz
Date: 2014-08-02 (Sat, 02 Aug 2014
/phpmyadmin/commit/bf790561ab463731d6fc70de39d066c6d70f4e3b
Author: Burak Yavuz
Date: 2014-08-02 (Sat, 02 Aug 2014)
Changed paths:
M po/tr.po
Log Message:
---
Translated using Weblate (Turkish)
Currently translated at 99.9% (2956 of 2958)
[ci skip]
Compare:
https
[
https://issues.apache.org/jira/browse/SPARK-2801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Burak Yavuz updated SPARK-2801:
---
Description:
The RandomRDDGenerators only output RDD[Double]. The DistributionGenerator will
be
Burak Yavuz created SPARK-2801:
--
Summary: Generalize RandomRDD Generator output to generic type
Key: SPARK-2801
URL: https://issues.apache.org/jira/browse/SPARK-2801
Project: Spark
Issue Type
[
https://issues.apache.org/jira/browse/SPARK-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14068771#comment-14068771
]
Burak Yavuz commented on SPARK-2434:
Hi Michael,
I did what was required alr
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/localized_docs
Commit: e78eb3f0773866170394b6b6dff4157493a14c5f
https://github.com/phpmyadmin/localized_docs/commit/e78eb3f0773866170394b6b6dff4157493a14c5f
Author: Burak Yavuz
Date: 2014-07-17 (Thu, 17 Jul 2014
Hi Tom,
Actually I was mistaken, sorry about that. Indeed on the website, the keys for
the datasets you mention are not showing up. However,
they are still accessible through the spark-shell, which means that they are
there.
So in order to answer your questions:
- Are the tiny and 1node sets s
Hi Tom,
If you wish to load the file in Spark directly, you can use
sc.textFile("s3n://big-data-benchmark/pavlo/...") where sc is your
SparkContext. This can be
done because the files should be publicly available and you don't need AWS
Credentials to access them.
If you want to download the fi
/568f6ea5ecd4324629b94c74009bcb10a0051067
Author: Burak Yavuz
Date: 2014-07-15 (Tue, 15 Jul 2014)
Changed paths:
M po/tr.po
Log Message:
---
Translated using Weblate (Turkish)
Currently translated at 99.3% (2912 of 2930)
Compare:
https://github.com/phpmyadmin
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 2a6515e68c0ce41789bdbdcd04386cef24067efe
https://github.com/phpmyadmin/phpmyadmin/commit/2a6515e68c0ce41789bdbdcd04386cef24067efe
Author: Burak Yavuz
Date: 2014-07-14 (Mon, 14 Jul 2014
Someone can correct me if I'm wrong, but unfortunately for now, once a
streaming context is stopped, it can't be restarted.
- Original Message -
From: "Nick Chammas"
To: u...@spark.incubator.apache.org
Sent: Wednesday, July 9, 2014 6:11:51 PM
Subject: Restarting a Streaming Context
So
Hi,
The roadmap for the 1.1 release and MLLib includes algorithms such as:
Non-negative matrix factorization, Sparse SVD, Multiclass
decision tree, Random Forests (?)
and optimizers such as:
ADMM, Accelerated gradient methods
also a statistical toolbox that includes:
descriptive statistics, sa
Branch: refs/heads/QA_4_2
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: afdc05ab6e58fa99e605ec997a77adbea8d9df45
https://github.com/phpmyadmin/phpmyadmin/commit/afdc05ab6e58fa99e605ec997a77adbea8d9df45
Author: Burak Yavuz
Date: 2014-07-08 (Tue, 08 Jul 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: f07daed129b858d583226b96ee6043f769d21622
https://github.com/phpmyadmin/phpmyadmin/commit/f07daed129b858d583226b96ee6043f769d21622
Author: Burak Yavuz
Date: 2014-07-01 (Tue, 01 Jul 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/localized_docs
Commit: 90092e5cff92c00ebb5d71a98ad54f0e89705389
https://github.com/phpmyadmin/localized_docs/commit/90092e5cff92c00ebb5d71a98ad54f0e89705389
Author: Burak Yavuz
Date: 2014-06-30 (Mon, 30 Jun 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 3cd16f17f0af981873f4cc2ed80b00be6200e17e
https://github.com/phpmyadmin/phpmyadmin/commit/3cd16f17f0af981873f4cc2ed80b00be6200e17e
Author: Burak Yavuz
Date: 2014-06-14 (Sat, 14 Jun 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 8371ae9ea39b242648b080adcede711d524d7e6b
https://github.com/phpmyadmin/phpmyadmin/commit/8371ae9ea39b242648b080adcede711d524d7e6b
Author: Burak Yavuz
Date: 2014-06-10 (Tue, 10 Jun 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 3cd3f1ba13d45531c3554c5ce6f12a05fe850a9b
https://github.com/phpmyadmin/phpmyadmin/commit/3cd3f1ba13d45531c3554c5ce6f12a05fe850a9b
Author: Burak Yavuz
Date: 2014-06-10 (Tue, 10 Jun 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 04aca7494c96df7d2b1ac31f226e4ff55b486f30
https://github.com/phpmyadmin/phpmyadmin/commit/04aca7494c96df7d2b1ac31f226e4ff55b486f30
Author: Burak Yavuz
Date: 2014-06-09 (Mon, 09 Jun 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 676f405dd0b442c61555160ed34addfa984ac7c4
https://github.com/phpmyadmin/phpmyadmin/commit/676f405dd0b442c61555160ed34addfa984ac7c4
Author: Burak Yavuz
Date: 2014-06-08 (Sun, 08 Jun 2014
: d1bcd086f400a067ea535db63aa2f94da4d9698b
https://github.com/phpmyadmin/phpmyadmin/commit/d1bcd086f400a067ea535db63aa2f94da4d9698b
Author: Burak Yavuz
Date: 2014-06-08 (Sun, 08 Jun 2014)
Changed paths:
M po/tr.po
Log Message:
---
Translated using Weblate (Turkish
Branch: refs/heads/QA_4_2
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 4099c29119d462a07e468d28b225c77fa583ee03
https://github.com/phpmyadmin/phpmyadmin/commit/4099c29119d462a07e468d28b225c77fa583ee03
Author: Burak Yavuz
Date: 2014-06-05 (Thu, 05 Jun 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/localized_docs
Commit: c285a8f4c164a08794b08121b0c26775b073c504
https://github.com/phpmyadmin/localized_docs/commit/c285a8f4c164a08794b08121b0c26775b073c504
Author: Burak Yavuz
Date: 2014-05-29 (Thu, 29 May 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 6ddbfab662d54e6a23a090c35f506ba436d4d340
https://github.com/phpmyadmin/phpmyadmin/commit/6ddbfab662d54e6a23a090c35f506ba436d4d340
Author: Burak Yavuz
Date: 2014-05-28 (Wed, 28 May 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: 42a694db77692911ada324ac54c47e512b98174b
https://github.com/phpmyadmin/phpmyadmin/commit/42a694db77692911ada324ac54c47e512b98174b
Author: Burak Yavuz
Date: 2014-05-28 (Wed, 28 May 2014
Branch: refs/heads/QA_4_2
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: dccd7e2a4e114136285d05dde157d6152dc8ac4d
https://github.com/phpmyadmin/phpmyadmin/commit/dccd7e2a4e114136285d05dde157d6152dc8ac4d
Author: Burak Yavuz
Date: 2014-05-26 (Mon, 26 May 2014
Branch: refs/heads/master
Home: https://github.com/phpmyadmin/phpmyadmin
Commit: f96d850202d2851802097a88267243c5a3bebd18
https://github.com/phpmyadmin/phpmyadmin/commit/f96d850202d2851802097a88267243c5a3bebd18
Author: Burak Yavuz
Date: 2014-05-25 (Sun, 25 May 2014
901 - 1000 of 1161 matches