So there are static costs associated with parsing the queries and structuring the
operators, but they should not be that significant.
Another thing is that in Shark all the data is passed through a parser,
serialized, passed through a filter, and sent to the driver.
In Spark, data is simply read as text and run through contains
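A miniature illustration (plain Python, not either system's actual code) of the "read as text, run through contains" path described above — a plain substring predicate per line, with no parse or serialization steps:

```python
# Illustrative only: filtering text lines with a substring test,
# as opposed to parsing and serializing each record.
lines = ["alice,1", "bob,2", "carol,3"]
matches = [line for line in lines if "a" in line]
print(matches)  # -> ['alice,1', 'carol,3']
```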
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/93#issuecomment-36976663
Merged build finished.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/93#issuecomment-36976664
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13039/
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/18#issuecomment-36977010
Is MLI-2 not a good JIRA issue to use for this?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/18#issuecomment-36977058
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/79#issuecomment-36977336
Merged build started.
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/96
[SPARK-1194] Fix the same-RDD rule for cache replacement
SPARK-1194: https://spark-project.atlassian.net/browse/SPARK-1194
In the current implementation, when selecting candidate blocks to
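For illustration, a rough sketch (hypothetical names, in Python rather than the Scala MemoryStore code) of what a "same-RDD rule" for choosing eviction candidates can look like: when making room for an incoming block, skip blocks belonging to the same RDD, since evicting a sibling to cache this block just thrashes that RDD's own cache.

```python
def select_blocks_to_drop(cached, incoming_rdd_id, space_needed):
    """cached: list of (block_id, rdd_id, size) in eviction order (e.g. LRU)."""
    selected, freed = [], 0
    for block_id, rdd_id, size in cached:
        if freed >= space_needed:
            break
        if rdd_id == incoming_rdd_id:
            continue  # same-RDD rule: never evict siblings of the new block
        selected.append(block_id)
        freed += size
    return selected if freed >= space_needed else None  # None: cannot fit

cached = [("rdd_1_0", 1, 100), ("rdd_2_0", 2, 100), ("rdd_1_1", 1, 100)]
print(select_blocks_to_drop(cached, 1, 100))  # -> ['rdd_2_0']
```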
Hi Xiangrui,
I think it doesn't matter whether we use Fortran/Breeze/RISO for the
optimizers, since optimization only takes 1% of the time. Most of the
time is spent in the gradientSum and lossSum parallel computation.
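The split described above can be sketched as follows (illustrative Python, not MLlib's actual code): for logistic loss, every data point contributes one gradient term and one loss term, so the aggregation touches the whole data set, while the optimizer step that consumes the two sums is cheap.

```python
import math

def gradient_and_loss(point, weights):
    # point = (label, features); logistic loss for label in {0, 1}
    label, features = point
    margin = sum(w * x for w, x in zip(weights, features))
    prob = 1.0 / (1.0 + math.exp(-margin))
    grad = [(prob - label) * x for x in features]
    loss = -(label * math.log(prob) + (1 - label) * math.log(1 - prob))
    return grad, loss

def aggregate(data, weights):
    # In Spark this sum runs as a parallel reduce over partitions;
    # the optimizer update that uses the result is O(dim) and cheap.
    grad_sum = [0.0] * len(weights)
    loss_sum = 0.0
    for point in data:
        g, l = gradient_and_loss(point, weights)
        grad_sum = [a + b for a, b in zip(grad_sum, g)]
        loss_sum += l
    return grad_sum, loss_sum
```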
Sincerely,
DB Tsai
Machine Learning Engineer
Alpine Data Labs
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/96#issuecomment-36980467
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/96#issuecomment-36980466
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/79#issuecomment-36980445
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13041/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/79#issuecomment-36980547
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/79#issuecomment-36980553
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/18#issuecomment-36980443
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/79#issuecomment-36980442
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-36983520
Build triggered.
Thanks Mayur - based on the doc-comments in the source, it looks like this will
work for the case. I will confirm.
the dreamers of the day are dangerous men, for they may act their dream
with open eyes, and make it possible
On Fri, Mar 7, 2014 at 2:21 AM, Mayur Rustagi
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/79#issuecomment-37012316
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37013190
Merged build triggered.
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/97
Spark 1162 Implemented takeOrdered in pyspark.
Since Python does not have a library for a max heap, and the usual tricks like
inverting values etc. do not work for all cases, the best thing I could
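For a sense of the semantics (a sketch, not the PR's actual maxheapq-based implementation), takeOrdered can be emulated with heapq.nsmallest per partition followed by a merge; nsmallest's key parameter sidesteps the value-inversion trick for non-numeric data:

```python
import heapq

def take_ordered(partitions, n, key=None):
    # per-partition top-n, computed independently (in Spark, in parallel)
    partials = [heapq.nsmallest(n, part, key=key) for part in partitions]
    # merge the partial results and take the global n smallest
    merged = [x for partial in partials for x in partial]
    return heapq.nsmallest(n, merged, key=key)

# Example: three "partitions" of comparable values
parts = [[9, 1, 5], [7, 3, 8], [2, 6, 4]]
print(take_ordered(parts, 3))  # -> [1, 2, 3]
```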
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37013191
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37016128
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37016129
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13045/
GitHub user guojc opened a pull request:
https://github.com/apache/spark/pull/98
Add timeout for fetch file
Currently, when fetching a file, the connection's connect timeout and read
timeout are based on the default JVM settings; in this change, I change it to
use
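The same idea in a Python sketch (the patch itself is Scala, and the config name below is hypothetical): pass explicit timeouts instead of relying on the runtime default, so a hung server fails fast instead of blocking the fetch indefinitely.

```python
import urllib.request

FETCH_TIMEOUT_SECS = 60  # hypothetical config value, not an actual Spark setting

def fetch_file(url, dest_path, timeout=FETCH_TIMEOUT_SECS):
    # timeout covers both connecting and each blocking read on the socket
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        with open(dest_path, "wb") as out:
            while True:
                chunk = resp.read(8192)
                if not chunk:
                    break
                out.write(chunk)
```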
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/98#issuecomment-37033983
Can one of the admins verify this patch?
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/96#discussion_r10386811
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -236,13 +236,23 @@ private class MemoryStore(blockManager: BlockManager,
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/96#discussion_r10388297
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -236,13 +236,23 @@ private class MemoryStore(blockManager: BlockManager,
Hi Xiangrui,
I used lambda = 0.1... It is possible that 2 users ranked movies in a
very similar way...
I agree that increasing lambda will solve the problem, but you agree this is
not a solution... lambda should be tuned based on sparsity / other criteria
and not to make a linearly dependent
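The linear-dependence point shows up in a toy example (illustrative Python, not the MLlib ALS code): with two identical users, the least-squares normal matrix A^T A is singular (determinant 0), and only the lambda * I term makes the solve well-posed.

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[5.0, 3.0],
     [5.0, 3.0]]  # two users with identical ratings -> dependent rows

# normal matrix A^T A for the 2-movie case
gram = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(det2(gram))  # 0.0: singular, the unregularized solve fails

lam = 0.1
regularized = [[gram[i][j] + (lam if i == j else 0.0) for j in range(2)]
               for i in range(2)]
print(det2(regularized))  # positive: invertible with regularization
```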
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/96#discussion_r10388411
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -236,13 +236,23 @@ private class MemoryStore(blockManager: BlockManager,
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37040297
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13046/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/94#issuecomment-37040303
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13047/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/94#issuecomment-37040302
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37041120
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37041118
Merged build triggered.
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/96#discussion_r10390021
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -236,13 +236,23 @@ private class MemoryStore(blockManager: BlockManager,
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37046661
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/96#issuecomment-37046789
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/98#issuecomment-37052716
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/96#issuecomment-37052691
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/98#issuecomment-37052715
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/96#issuecomment-37052692
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13049/
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/98#issuecomment-37052776
@guojc hey I'm wondering - if the default is -1 (unlimited, no timeout)
then why is it removing your task set due to failure? If there is no timeout
then won't it just
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/94#issuecomment-37053143
LGTM thanks for improving the existing code here.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/99#issuecomment-37053200
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/99#issuecomment-37053201
Merged build started.
GitHub user aarondav opened a pull request:
https://github.com/apache/spark/pull/99
SPARK-929: Fully deprecate usage of SPARK_MEM
(Continued from old repo, prior discussion at
https://github.com/apache/incubator-spark/pull/615)
This patch cements our deprecation of the
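The usual shape of such a deprecation (a generic sketch with hypothetical variable names, not the patch itself) is to keep honoring the old variable while warning and preferring a newer, more specific setting:

```python
import os
import warnings

def resolve_memory(default="512m"):
    new = os.environ.get("SPARK_EXECUTOR_MEMORY")  # hypothetical newer setting
    old = os.environ.get("SPARK_MEM")
    if old is not None:
        # old variable still works, but users are nudged toward the new one
        warnings.warn("SPARK_MEM is deprecated; prefer the newer memory settings",
                      DeprecationWarning)
    if new is not None:
        return new
    if old is not None:
        return old
    return default
```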
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/94#issuecomment-37053538
thanks tom, merged this into master.
Github user guojc commented on the pull request:
https://github.com/apache/spark/pull/98#issuecomment-37054016
I'm not sure about the behavior of the default -1; as
http://docs.oracle.com/javase/7/docs/api/java/net/URLConnection.html#setReadTimeout%28int%29
says, 0 is for infinity. But we do
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/80#issuecomment-37054161
@ScrapCodes I think the original scaladoc explains that this performs a
shuffle, but you didn't copy this code in any of the python/java docs. Would
you mind adding that?
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/96#discussion_r10394468
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -236,13 +236,18 @@ private class MemoryStore(blockManager: BlockManager,
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/96#discussion_r10394826
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -236,13 +236,18 @@ private class MemoryStore(blockManager: BlockManager,
Github user berngp commented on the pull request:
https://github.com/apache/spark/pull/84#issuecomment-37055758
@pwendell, @aarondav, @sryza a couple of questions.
1. Based on [SPARK-929], would it make sense to also include
--spark-daemon-memory as an optional argument?
2. Should I
Hey guys,
This is a follow-up to this semi-recent thread:
http://apache-spark-developers-list.1001551.n3.nabble.com/0-9-0-forces-log4j-usage-td532.html
0.9.0 final is causing issues for us as well because we use Logback as
our backend and Spark requires Log4j now.
I see Patrick has a PR #560 to
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37057167
Jenkins, retest this please
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/99#issuecomment-37058576
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/96#issuecomment-37058765
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37058828
Build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37058830
Build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/42#issuecomment-37064296
Build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/96#issuecomment-37064310
Merged build finished.
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/86#discussion_r10399046
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/100#issuecomment-37079388
Merged build triggered.
GitHub user pwendell opened a pull request:
https://github.com/apache/spark/pull/100
SPARK-782 Clean up for ASM dependency.
This makes two changes.
1) Spark uses the shaded version of asm that is (conveniently) published
with Kryo.
2) Existing exclude rules
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/100#issuecomment-37079425
Come to think of it, we may want to stop excluding asm now since we don't
directly use it anymore (therefore there can be no conflicts w/ Spark).
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/86#discussion_r10405655
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF)
import java.io.IOException;
import java.io.Serializable;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.regex.Matcher;
import
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/101#issuecomment-37082403
Merged build started.
Github user hsaputra commented on a diff in the pull request:
https://github.com/apache/spark/pull/86#discussion_r10406142
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/100#issuecomment-37084570
Will this also work on Java 8?
Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/97#discussion_r10406957
--- Diff: python/pyspark/maxheapq.py ---
@@ -0,0 +1,115 @@
+# -*- coding: latin-1 -*-
+
+Heap queue algorithm (a.k.a. priority queue).
+
+#
Github user koertkuipers commented on the pull request:
https://github.com/apache/spark/pull/100#issuecomment-37086194
ah got it, thanks. so asm 3.x will be on the classpath whether we like it or
not. and we remove all other asm dependencies here, except for a kryo version.
will
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/92#issuecomment-37086543
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13057/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/92#issuecomment-37086542
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37086581
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/101#issuecomment-37086647
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/80#issuecomment-37086649
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/80#issuecomment-37086650
Merged build started.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/80#issuecomment-37086967
Thanks, merging this.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/100#issuecomment-37087149
@koertkuipers so I looked at chill and they don't use ASM except inside of
the ClosureCleaner (which they actually borrowed from Spark). Since we don't
use chill's
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/80
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/101#issuecomment-37087798
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37087805
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/80#issuecomment-37087803
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13060/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37087802
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/80#issuecomment-37087801
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37087806
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/101#issuecomment-37088861
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/101#issuecomment-37088862
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37088856
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13061/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/97#issuecomment-37088855
Merged build finished.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37088913
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37088937
One or more automated tests failed
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13063/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37088936
Merged build finished.
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/86#issuecomment-37088962
Newest patch includes tests and doc. @pwendell, do you have a link to the
addJar patch? If it's definitely going to happen, I'll take out the
classloader stuff here.
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/91#issuecomment-37089032
Upmerged
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/96#issuecomment-37089290
@pwendell Regression test case added, also ensured that the old
implementation fails on this test case.
GitHub user sryza opened a pull request:
https://github.com/apache/spark/pull/102
SPARK-1064
This reopens PR 649 from incubator-spark against the new repo
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/sryza/spark
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/101#issuecomment-37089804
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/13062/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/91#issuecomment-37089817
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/91#issuecomment-37089816
Merged build triggered.