Github user rxin commented on the issue:
https://github.com/apache/spark/pull/7
Please remove the 0 semantics. IMO the zero vs. negative number difference
is too subtle. Java's String is the only API I've found that supports it; Python doesn't
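For context, the limit semantics in question, illustrated with java.lang.String.split (a sketch for illustration, not code from the PR):

```scala
// Java's String.split is the lone precedent noted above for distinguishing
// zero from negative limits: a negative limit keeps trailing empty strings,
// while a limit of 0 silently drops them.
val s = "a,b,,"
s.split(",", -1).toSeq  // Seq("a", "b", "", "") -- trailing empties kept
s.split(",", 0).toSeq   // Seq("a", "b")         -- trailing empties dropped
s.split(",", 2).toSeq   // Seq("a", "b,,")       -- at most two pieces
```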
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/7#discussion_r214135400
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala ---
@@ -229,33 +229,58 @@ case class RLike(left
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/7#discussion_r214131195
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala ---
@@ -229,33 +229,58 @@ case class RLike(left
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/22010
Thanks for pinging. Please don't merge this until you've addressed the OOM
issue. The aggregators were created to handle incoming data larger than the size of
memory. We should never use a Scala or Java
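To make the concern concrete, a sketch (not the PR's diff) of the difference between buffering in a plain collection and going through the spillable aggregator path:

```scala
import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

// Sketch only: an in-memory Scala Set pins every unique element of a
// partition on the heap, while the aggregator-based formulation (what
// RDD.distinct actually does) can spill to disk under memory pressure.
def distinctInMemory[T: ClassTag](rdd: RDD[T]): RDD[T] =
  rdd.mapPartitions(_.toSet.iterator)  // OOMs once the uniques exceed memory

def distinctViaAggregator[T: ClassTag](rdd: RDD[T]): RDD[T] =
  rdd.map(x => (x, null)).reduceByKey((x, _) => x).map(_._1)
```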
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22010#discussion_r214103667
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -396,7 +396,16 @@ abstract class RDD[T: ClassTag](
* Return a new RDD containing
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22010#discussion_r214103223
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -396,7 +396,16 @@ abstract class RDD[T: ClassTag](
* Return a new RDD containing
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/22258
Can you remove "vulnerability" from the title? Otherwise it sounds like there is a
security vulnerability here.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22205#discussion_r213100428
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala ---
@@ -130,6 +130,10 @@ abstract class Optimizer
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18447
Yea I'd probably reject this for now, until we see bigger needs for it.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/22162#discussion_r213026874
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -815,6 +815,24 @@ class Dataset[T] private[sql](
println(showString
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/7#discussion_r212815703
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala ---
@@ -232,30 +232,41 @@ case class RLike(left
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/7#discussion_r212815685
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala ---
@@ -232,30 +232,41 @@ case class RLike(left
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/22185
cc @rdblue @cloud-fan @gatorsmile
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/22185
[SPARK-25127] DataSourceV2: Remove SupportsPushDownCatalystFilters
## What changes were proposed in this pull request?
They depend on internal Expression APIs. Let's see how far we can get
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16600
Can you close this pr?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/22065
Are we talking about a 0.7% margin improvement? It doesn't seem like it's
worth the complexity.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/22157
Do we have a similar issue for Parquet?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/22160
Can you add to the PR description why we are reverting? Just copy-paste
what you had above. Thanks.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/22134
I think it's premature to introduce this. The extra layer of abstraction
actually makes it more difficult to reason about what's going on. We don't have
that many data sources that require flexibility
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21944
Thanks, Mahmoud!
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21951
LGTM.
On Thu, Aug 2, 2018 at 1:14 AM Xiao Li wrote:
> This will simplify the code and improve the readability. We can do the
> same in the other expression.
>
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21951
Why would we want to use the DSL here? Do we use it in other expressions?
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/21938
[SPARK-24982][SQL] UDAF resolution should not throw AssertionError
## What changes were proposed in this pull request?
When a user calls a UDAF with the wrong number of arguments, Spark previously
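A sketch of the kind of check the title describes (illustrative names and wording, not the actual patch):

```scala
// Illustrative sketch, not the actual patch. AnalysisException's constructor
// is protected[sql], so this is written as if it lives in Spark's sql package
// tree, which is where UDAF resolution happens anyway.
package org.apache.spark.sql.catalyst.analysis

import org.apache.spark.sql.AnalysisException

object UdafArgumentCheck {
  def checkArgumentCount(name: String, expected: Int, actual: Int): Unit = {
    if (actual != expected) {
      // A user-facing analysis error, instead of an internal AssertionError.
      throw new AnalysisException(
        s"Invalid number of arguments for function $name: expected $expected, found $actual")
    }
  }
}
```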
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21934#discussion_r206681479
--- Diff: sql/core/src/test/resources/sql-tests/results/table-valued-functions.sql.out ---
@@ -83,8 +83,13 @@ select * from range(1, null)
-- !query 6
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21934
Jenkins, retest this please.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21934
cc @gatorsmile @ericl who originally wrote this.
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/21934
[SPARK-24951][SQL] Table valued functions should throw AnalysisException
## What changes were proposed in this pull request?
Previously TVF resolution could throw IllegalArgumentException
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21932
Do we really need this? It's almost always the case for resolution that
you'd want to go bottom-up, so I thought Michael's original design of just calling
it resolveOperators makes a lot of sense
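A simplified model of the bottom-up traversal in question (not Spark source; the real resolveOperators also skips subtrees that are already analyzed):

```scala
// Simplified model: resolution naturally goes bottom-up, because a node can
// only be resolved once its children are.
trait Plan {
  def children: Seq[Plan]
  def withChildren(newChildren: Seq[Plan]): Plan
  def resolveOperatorsUp(rule: PartialFunction[Plan, Plan]): Plan = {
    val afterChildren = withChildren(children.map(_.resolveOperatorsUp(rule)))
    rule.applyOrElse(afterChildren, identity[Plan])  // this node goes last
  }
}
```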
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21923
Are there more specific use cases? I always feel it'd be impossible to
design APIs without seeing a couple of different use cases
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21922
what are the actual changes?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21318
LGTM.
On Fri, Jul 27, 2018 at 10:58 PM Hyukjin Kwon wrote:
> @rxin <https://github.com/rxin> re: #21318 (comment)
> <https://github.com/apache/s
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21318#discussion_r20582
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -39,7 +39,21 @@ import org.apache.spark.util.Utils
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21897
cc @gatorsmile
cc @hvanhovell why did we expose these types as public Scala APIs? I feel
they should not have been public. If they are public, we should have a more
generic VarcharType
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/21897
[minor] Improve documentation for HiveStringType's
The diff should be self-explanatory.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/rxin
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21706#discussion_r205851385
--- Diff: sql/core/src/test/resources/sql-tests/inputs/cast.sql ---
@@ -42,4 +42,38 @@ SELECT CAST('9223372036854775808' AS long);
DESC FUNCTION
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21896
cc @gatorsmile @cloud-fan
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/21896
[SPARK-24865][SQL] Remove AnalysisBarrier addendum
## What changes were proposed in this pull request?
I didn't want to pollute the diff in the previous PR and left some TODOs.
This is a follow
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21318
Yup will do.
On Fri, Jul 27, 2018 at 10:23 AM Sean Owen wrote:
> Just browsing old PRs .. want to finish this one up @rxin
> <https://github.com/rxin>?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21699
I'm OK with it.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21873#discussion_r205252848
--- Diff: scalastyle-config.xml ---
@@ -150,6 +150,19 @@ This file is divided into 3 sections:
// scalastyle:on println
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21758
What's the failure mode if there are not enough slots for the barrier mode?
We should throw an exception right
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21758#discussion_r205250930
--- Diff: core/src/main/scala/org/apache/spark/scheduler/ActiveJob.scala ---
@@ -60,4 +60,10 @@ private[spark] class ActiveJob(
val finished
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21758#discussion_r205250352
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -1839,6 +1847,20 @@ abstract class RDD[T: ClassTag](
def toJavaRDD() : JavaRDD[T
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21758#discussion_r205249547
--- Diff: core/src/main/scala/org/apache/spark/rdd/MapPartitionsRDD.scala ---
@@ -27,7 +27,8 @@ import org.apache.spark.{Partition, TaskContext}
private
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21758#discussion_r205249449
--- Diff: core/src/main/scala/org/apache/spark/BarrierTaskContext.scala ---
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21758#discussion_r205249225
--- Diff: core/src/main/scala/org/apache/spark/BarrierTaskInfo.scala ---
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21758#discussion_r205249297
--- Diff: core/src/main/scala/org/apache/spark/BarrierTaskInfo.scala ---
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21875
Can you add JDBC to the title?
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21866#discussion_r204961291
--- Diff: external/avro/src/main/scala/org/apache/spark/sql/avro/AvroFileFormat.scala ---
@@ -56,7 +56,7 @@ private[avro] class AvroFileFormat extends
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21867#discussion_r204959300
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
@@ -731,7 +731,14 @@ private[spark] class BlockManager
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21822#discussion_r204957474
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala ---
@@ -751,7 +751,8 @@ object TypeCoercion
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21822
I changed the way we do the checks in tests to use a thread local rather
than checking the stack trace, so they should run faster now. Also added test
cases for the various new methods. Also moved
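A minimal sketch of that swap (names assumed, not the actual patch):

```scala
// Instead of walking Thread.currentThread.getStackTrace to ask "are we inside
// the analyzer?", set a thread-local flag around the analysis phase.
object AnalysisFlag {
  private val inAnalyzer = new ThreadLocal[Boolean] {
    override def initialValue(): Boolean = false
  }
  def markInAnalyzer[T](body: => T): T = {
    inAnalyzer.set(true)
    try body finally inAnalyzer.set(false)
  }
  def isInAnalyzer: Boolean = inAnalyzer.get()  // O(1), unlike a stack walk
}
```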
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21822#discussion_r204955869
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---
@@ -787,6 +782,7 @@ class Analyzer(
right
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21845
If that's the only one, I think that PR itself needs to be fixed (it
significantly increases test runtime), and I wouldn't increase the time
here.
On Mon, Jul 23, 2018 at 11:44 PM
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21822
Yea the extra check in test cases might've contributed to the longer test
time. Let me think about how to reduce it.
On Mon, Jul 23, 2018 at 11:28 PM Hyukjin Kwon wrote:
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21845
Are more pull requests failing due to time out right now?
On Mon, Jul 23, 2018 at 6:30 PM Hyukjin Kwon wrote:
> @rxin <https://github.com/rxin>, btw you want me close
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21758#discussion_r204504127
--- Diff: core/src/main/scala/org/apache/spark/BarrierTaskInfo.scala ---
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21826
No we can't, because you can still use string concat in filters, e.g.
colA || colB == "ab"
What is
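Spelled out as a runnable sketch (the session and table names are assumptions):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, concat}

// || (string concat) is an ordinary expression, so it is just as legal in a
// WHERE clause as in a select list; `t` is an assumed table with string
// columns colA and colB.
val spark = SparkSession.builder().master("local[*]").appName("concat-demo").getOrCreate()
spark.sql("SELECT * FROM t WHERE colA || colB = 'ab'")
spark.table("t").filter(concat(col("colA"), col("colB")) === "ab")  // DSL equivalent
```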
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21845
This helps, but it is not sustainable to keep increasing the threshold.
What we need to do is look at the test time distribution, figure out which
test suites are unnecessarily long, and actually cut
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21822
Jenkins, retest this please.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21822
retest this please
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21802
Do we really need full codegen for all of these collection functions? They
seem pretty slow and specialization with full codegen won't help perf that much
(and might even hurt by blowing up the code
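For readers outside the codebase, a schematic of the trade-off (not Spark source):

```scala
// Schematic only: an interpreted expression just overrides eval(), while full
// codegen additionally emits Java source that is compiled per query. For
// functions dominated by per-element array work, generated code mostly adds
// compile time and code size rather than removing the real cost.
trait Expr { def eval(input: Any): Any }

case class ReverseArray(child: Expr) extends Expr {
  override def eval(input: Any): Any = child.eval(input) match {
    case null           => null
    case arr: Array[_]  => arr.reverse
    case other          => other  // not an array; pass through in this sketch
  }
}
```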
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21826
cc @gatorsmile @cloud-fan @HyukjinKwon is this a good thing to do?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21826
Jenkins, test this please.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21822
retest this please
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21822#discussion_r204163484
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala ---
@@ -33,6 +49,116 @@ abstract class LogicalPlan
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21822#discussion_r204163424
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala ---
@@ -23,8 +23,24 @@ import
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21822#discussion_r204163328
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---
@@ -2390,16 +2375,21 @@ class Analyzer(
* scoping
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21822#discussion_r204160853
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala ---
@@ -533,7 +537,8 @@ trait CheckAnalysis extends
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21822#discussion_r204160150
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---
@@ -787,6 +782,7 @@ class Analyzer(
right
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21803
Should we do schema.toDDL, or StructType.toDDL(schema)?
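Concretely, the two shapes being weighed (the instance method is what exists in Spark today; the companion-object form is the alternative raised):

```scala
import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

val schema = StructType(Seq(
  StructField("a", IntegerType),
  StructField("b", StringType)))

schema.toDDL                 // instance style, e.g. "`a` INT,`b` STRING"
// StructType.toDDL(schema)  // static style, the alternative raised above
```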
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21822#discussion_r203918981
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala ---
@@ -533,7 +537,8 @@ trait CheckAnalysis extends
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18784
Let's remove it in 3.0 then. We can do it after 2.4 release.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21742#discussion_r203496489
--- Diff: external/avro/src/main/scala/org/apache/spark/sql/avro/package.scala ---
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21766
Why did you need this change? Given it's very difficult to revert the
change (or introduce a proper numeric type if ever needed in the future), I
would not merge this pull request unless
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21568
To me it is actually confusing to have the decimal one in there at all, by
defining a list of queries that are reused for different functional
testing. It is very easy to just ignore the subtle
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21568
What are the use cases other than decimal? I am not sure if we need to
build a lot of infrastructure just for one or two use cases
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21568
If they produce different results, why do you need any infrastructure for
them? They are just part of the normal test flow.
If they produce the same result and you don't want to define
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21568
Can you just define a config matrix at the beginning of the file, so that each
file is run with the config matrix
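One possible shape for that, sketched (the harness helper and the config key are assumptions, not Spark's actual test infrastructure):

```scala
// Declare the matrix once per file and run the whole file under each
// combination, instead of encoding configs into file names.
val configMatrix: Seq[Seq[(String, String)]] = Seq(
  Seq("spark.sql.decimalOperations.allowPrecisionLoss" -> "true"),
  Seq("spark.sql.decimalOperations.allowPrecisionLoss" -> "false"))

def runFile(file: String, configs: Seq[(String, String)]): Unit =
  println(s"run $file with ${configs.map { case (k, v) => s"$k=$v" }.mkString(", ")}")

configMatrix.foreach(runFile("decimalArithmeticOperations.sql", _))
```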
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21568
I think it's super confusing to have the config names encoded in file
names. It makes the names super long and difficult to read, hard to
verify what was set, and difficult to get multiple
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21705#discussion_r199940775
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/StaticSQLConf.scala ---
@@ -66,6 +66,12 @@ object StaticSQLConf {
.checkValue
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21686
Thanks. Awesome. This matches what I had in mind then.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21459
SGTM.
On Mon, Jul 2, 2018 at 4:38 PM DB Tsai wrote:
> There are three approvals from the committers, and the changes are pretty
> trivial to revert if we see any perfo
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21686
Does this actually work in SQL? How does it work when we don't have a data
type that's a schema?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21626
It is on the public list: https://issues.apache.org/jira/browse/SPARK-24642
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/21598#discussion_r198364343
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -1324,6 +1324,12 @@ object SQLConf {
"Other column v
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21482
OK, I double-checked. I don't think we should be adding this functionality,
since different databases implement it differently, and it is somewhat
difficult to create Infinity in Spark SQL given we
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21482
Hey I have an additional thought on this. Will leave it in the next ten
mins.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21598
Here: https://en.wikipedia.org/wiki/Bug_compatibility
On Tue, Jun 26, 2018 at 9:28 AM Reynold Xin wrote:
> It's actually common software engineering practice to keep "bug
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21598
It's actually common software engineering practice to keep "buggy"
semantics if a bug has been out there long enough and a lot of applications
depend on the semantics.
On Tue, Jun
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21598
Do we have other "legacy" configs that we haven't released and can change
to match this prefix? It's pretty nice to have a single prefix for
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21598
This is not a "bug" and there is no "right" behavior in APIs. It's been
defined as -1 since the very beginning (when was it added?), so we can't just
change the default value
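For concreteness, the semantics at stake (the function and config name are inferred from context, not stated in the thread):

```scala
import org.apache.spark.sql.SparkSession

// Historically size(NULL) returned -1; the new NULL-returning behavior is
// gated behind a legacy flag so the old default is preserved.
val spark = SparkSession.builder().master("local[*]").appName("size-null").getOrCreate()
spark.sql("SET spark.sql.legacy.sizeOfNull=true")
spark.sql("SELECT size(cast(null AS array<int>))").show()  // -1 (legacy behavior)
spark.sql("SET spark.sql.legacy.sizeOfNull=false")
spark.sql("SELECT size(cast(null AS array<int>))").show()  // NULL (new behavior)
```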
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21544
Thanks. Merging in master.
Repository: spark
Updated Branches:
refs/heads/master e4fee395e -> c7c0b086a
add one supported type missing from the javadoc
## What changes were proposed in this pull request?
The supported java.math.BigInteger type is not mentioned in the javadoc of
Encoders.bean()
## How was this patch
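For illustration, the bean-field support the commit documents (a sketch; the class name is assumed):

```scala
import java.math.BigInteger
import org.apache.spark.sql.Encoders

// java.math.BigInteger is a supported bean-field type for Encoders.bean
// (it maps to DecimalType(38, 0)); the commit adds it to the javadoc.
class Account extends Serializable {
  private var id: BigInteger = _
  def getId: BigInteger = id
  def setId(v: BigInteger): Unit = { id = v }
}

val enc = Encoders.bean(classOf[Account])
println(enc.schema)  // StructType with a DecimalType(38,0) field
```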
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21568
I'm confused by the description. What does this PR actually do?
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21574
Does this move actually make sense? It'd destroy stats estimation for
partition pruning.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21502
How does this solve the problem you described? If the container is gone,
the process is gone, and users can't destroy things anymore
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/19498
LGTM
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21482
Thanks, Henry. In general I'm not a huge fan of adding something because
hypothetically somebody might want it. Also if you want this to be compatible
with Impala, wouldn't you want to name
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/21482
@henryr 1.0/0.0 also returns null in Spark SQL ...
```
scala> sql("select cast(1.0 as double)/cast(0 as double)").show()
// table output truncated in the archive; the result shown is null
```