[
https://issues.apache.org/jira/browse/SPARK-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei resolved SPARK-5756.
Resolution: Fixed
Analyzer should not throw scala.NotImplementedError for illegitimate sql
wangfei created SPARK-5756:
--
Summary: Analyzer should not throw scala.NotImplementedError for
legitimate sql
Key: SPARK-5756
URL: https://issues.apache.org/jira/browse/SPARK-5756
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-5756:
---
Summary: Analyzer should not throw scala.NotImplementedError for
illegitimate sql (was: Analyzer should not throw scala.NotImplementedError for legitimate sql)
wangfei created SPARK-5649:
--
Summary: Throw exception when a datatype cast cannot be applied
Key: SPARK-5649
URL: https://issues.apache.org/jira/browse/SPARK-5649
Project: Spark
Issue Type: Improvement
wangfei created SPARK-5617:
--
Summary: test failure of SQLQuerySuite
Key: SPARK-5617
URL: https://issues.apache.org/jira/browse/SPARK-5617
Project: Spark
Issue Type: Bug
Components: SQL
wangfei created SPARK-5592:
--
Summary: java.net.URISyntaxException when insert data to a
partitioned table
Key: SPARK-5592
URL: https://issues.apache.org/jira/browse/SPARK-5592
Project: Spark
wangfei created SPARK-5591:
--
Summary: NoSuchObjectException for CTAS
Key: SPARK-5591
URL: https://issues.apache.org/jira/browse/SPARK-5591
Project: Spark
Issue Type: Improvement
wangfei created SPARK-5583:
--
Summary: Support unique join in hive context
Key: SPARK-5583
URL: https://issues.apache.org/jira/browse/SPARK-5583
Project: Spark
Issue Type: Improvement
wangfei created SPARK-5587:
--
Summary: Support change database owner
Key: SPARK-5587
URL: https://issues.apache.org/jira/browse/SPARK-5587
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-5383:
---
Summary: support alias for udfs with multi output columns (was: Multi
alias names support)
[
https://issues.apache.org/jira/browse/SPARK-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-5383:
---
Description:
when a udf outputs multiple columns, we currently can not use aliases for
them in spark-sql, see this
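The requested syntax might look like this sketch; the UDTF name `explode_map` and table `src` are hypothetical illustrations, not taken from the issue:

```sql
-- A UDF producing two output columns; the issue asks that
-- both columns be nameable with a single AS clause:
SELECT explode_map(value) AS (k, v) FROM src;
```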
wangfei created SPARK-5383:
--
Summary: Multi alias names support
Key: SPARK-5383
URL: https://issues.apache.org/jira/browse/SPARK-5383
Project: Spark
Issue Type: Improvement
Components:
wangfei created SPARK-5367:
--
Summary: support star expression in udf
Key: SPARK-5367
URL: https://issues.apache.org/jira/browse/SPARK-5367
Project: Spark
Issue Type: Bug
Components: SQL
[
https://issues.apache.org/jira/browse/SPARK-5367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-5367:
---
Description:
spark sql does not currently support star expressions in udfs; the following
sql will get an error
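The failing query itself is not preserved in this digest; a hypothetical example of a star expression passed to a udf would be:

```sql
-- Star expression as a UDF argument; per the issue, spark sql
-- rejected this form at the time:
SELECT concat(*) FROM src;
```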
[
https://issues.apache.org/jira/browse/SPARK-5373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-5373:
---
Description: `select key, count(*) from src group by key, 1` will get the
wrong answer! (was: select key,
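The failing query from the description, assuming Hive's usual `src(key, value)` test table:

```sql
-- Grouping by the literal 1 alongside key produced an incorrect result:
SELECT key, count(*) FROM src GROUP BY key, 1;
```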
wangfei created SPARK-5373:
--
Summary: literal in agg grouping expressions leads to incorrect
result
Key: SPARK-5373
URL: https://issues.apache.org/jira/browse/SPARK-5373
Project: Spark
Issue
wangfei created SPARK-5285:
--
Summary: Remove GroupExpression in catalyst
Key: SPARK-5285
URL: https://issues.apache.org/jira/browse/SPARK-5285
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-5251:
---
Target Version/s: 1.3.0
Using `tableIdentifier` in hive metastore
[
https://issues.apache.org/jira/browse/SPARK-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-5251:
---
Target Version/s: (was: 1.3.0)
Using `tableIdentifier` in hive metastore
wangfei created SPARK-5251:
--
Summary: Using `tableIdentifier` in hive metastore
Key: SPARK-5251
URL: https://issues.apache.org/jira/browse/SPARK-5251
Project: Spark
Issue Type: Improvement
wangfei created SPARK-5240:
--
Summary: Adding `createDataSourceTable` interface to Catalog
Key: SPARK-5240
URL: https://issues.apache.org/jira/browse/SPARK-5240
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14272862#comment-14272862
]
wangfei commented on SPARK-4861:
[~yhuai] of course, if possible, but I have not found a way
[
https://issues.apache.org/jira/browse/SPARK-4572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270960#comment-14270960
]
wangfei commented on SPARK-4572:
Which version did you get this error in? It should be fixed
[
https://issues.apache.org/jira/browse/SPARK-4574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270967#comment-14270967
]
wangfei commented on SPARK-4574:
[~pwendell] get it, thanks.
[
https://issues.apache.org/jira/browse/SPARK-1442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270973#comment-14270973
]
wangfei commented on SPARK-1442:
Why were the two PRs both closed?
Add Window function support
[
https://issues.apache.org/jira/browse/SPARK-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei closed SPARK-4673.
--
Resolution: Fixed
Since coalesce(1) leads to running with a single thread, it does not always
speed up limit; so closing.
[
https://issues.apache.org/jira/browse/SPARK-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei closed SPARK-5000.
--
Resolution: Fixed
Alias support string literal in spark sql
-
[
https://issues.apache.org/jira/browse/SPARK-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14270718#comment-14270718
]
wangfei commented on SPARK-5000:
Backticks can do this, so closing this one.
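As the comment notes, backtick quoting already covers what string-literal aliases would provide; a sketch against a hypothetical `src` table:

```sql
-- A backtick-quoted alias may contain spaces, so the string-literal
-- form (AS 'key alias') is unnecessary:
SELECT key AS `key alias` FROM src;
```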
wangfei created SPARK-5165:
--
Summary: Add support for rollup and cube in sqlcontext
Key: SPARK-5165
URL: https://issues.apache.org/jira/browse/SPARK-5165
Project: Spark
Issue Type: New Feature
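The feature request targets HiveQL-style grouping extensions in SQLContext, for example (table name hypothetical):

```sql
-- WITH ROLLUP adds per-key subtotals plus a grand-total row:
SELECT key, count(*) FROM src GROUP BY key WITH ROLLUP;
```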
wangfei created SPARK-5029:
--
Summary: Enable from follow multiple brackets
Key: SPARK-5029
URL: https://issues.apache.org/jira/browse/SPARK-5029
Project: Spark
Issue Type: Improvement
wangfei created SPARK-5000:
--
Summary: Alias support string literal in spark sql parser
Key: SPARK-5000
URL: https://issues.apache.org/jira/browse/SPARK-5000
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-5000:
---
Summary: Alias support string literal in spark sql (was: Alias support
string literal in spark sql parser)
wangfei created SPARK-4984:
--
Summary: add a pop-up containing the full job description when
it is very long
Key: SPARK-4984
URL: https://issues.apache.org/jira/browse/SPARK-4984
Project: Spark
wangfei created SPARK-4975:
--
Summary: HiveInspectorSuite test failure
Key: SPARK-4975
URL: https://issues.apache.org/jira/browse/SPARK-4975
Project: Spark
Issue Type: Bug
Components: SQL
wangfei created SPARK-4935:
--
Summary: When hive.cli.print.header is configured, spark-sql aborts
if passed an invalid sql
Key: SPARK-4935
URL: https://issues.apache.org/jira/browse/SPARK-4935
Project: Spark
wangfei created SPARK-4937:
--
Summary: Adding optimization to simplify the filter condition
Key: SPARK-4937
URL: https://issues.apache.org/jira/browse/SPARK-4937
Project: Spark
Issue Type:
wangfei created SPARK-4938:
--
Summary: Adding optimization to simplify the filter condition
Key: SPARK-4938
URL: https://issues.apache.org/jira/browse/SPARK-4938
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-4938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14257042#comment-14257042
]
wangfei commented on SPARK-4938:
Duplicate
wangfei created SPARK-4861:
--
Summary: Refactor command in spark sql
Key: SPARK-4861
URL: https://issues.apache.org/jira/browse/SPARK-4861
Project: Spark
Issue Type: Improvement
wangfei created SPARK-4845:
--
Summary: Adding a parallelismRatio to control the number of partitions
of ShuffledRDD
Key: SPARK-4845
URL: https://issues.apache.org/jira/browse/SPARK-4845
Project: Spark
wangfei created SPARK-4695:
--
Summary: Get result using executeCollect in spark sql
Key: SPARK-4695
URL: https://issues.apache.org/jira/browse/SPARK-4695
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-4695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4695:
---
Issue Type: Improvement (was: Bug)
Get result using executeCollect in spark sql
wangfei created SPARK-4673:
--
Summary: Optimizing limit using coalesce
Key: SPARK-4673
URL: https://issues.apache.org/jira/browse/SPARK-4673
Project: Spark
Issue Type: Bug
Components: SQL
wangfei created SPARK-4618:
--
Summary: Make foreign DDL commands options case-insensitive
Key: SPARK-4618
URL: https://issues.apache.org/jira/browse/SPARK-4618
Project: Spark
Issue Type: Improvement
wangfei created SPARK-4574:
--
Summary: Adding support for defining schema in foreign DDL
commands.
Key: SPARK-4574
URL: https://issues.apache.org/jira/browse/SPARK-4574
Project: Spark
Issue Type:
wangfei created SPARK-4552:
--
Summary: query on an empty parquet table in spark sql hive gets
IllegalArgumentException
Key: SPARK-4552
URL: https://issues.apache.org/jira/browse/SPARK-4552
Project: Spark
wangfei created SPARK-4553:
--
Summary: query on a parquet table with string fields in spark sql
hive gets a binary result
Key: SPARK-4553
URL: https://issues.apache.org/jira/browse/SPARK-4553
Project: Spark
wangfei created SPARK-4554:
--
Summary: Set fair scheduler pool for JDBC client session in hive 13
Key: SPARK-4554
URL: https://issues.apache.org/jira/browse/SPARK-4554
Project: Spark
Issue Type: Bug
wangfei created SPARK-4559:
--
Summary: Adding support for ucase and lcase
Key: SPARK-4559
URL: https://issues.apache.org/jira/browse/SPARK-4559
Project: Spark
Issue Type: Bug
Components:
wangfei created SPARK-4449:
--
Summary: specify port range in spark
Key: SPARK-4449
URL: https://issues.apache.org/jira/browse/SPARK-4449
Project: Spark
Issue Type: Bug
Components: Spark
wangfei created SPARK-4443:
--
Summary: Statistics bug for external table in spark sql hive
Key: SPARK-4443
URL: https://issues.apache.org/jira/browse/SPARK-4443
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-4443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4443:
---
Description: When a table is external, `totalSize` is always zero, which
will influence join
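For reference, Hive can populate file-level statistics for a table explicitly, which is the usual workaround when `totalSize` reads as zero (table name hypothetical):

```sql
-- NOSCAN gathers file-level stats such as totalSize without scanning rows:
ANALYZE TABLE ext_logs COMPUTE STATISTICS NOSCAN;
```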
wangfei created SPARK-4292:
--
Summary: incorrect result set in JDBC/ODBC
Key: SPARK-4292
URL: https://issues.apache.org/jira/browse/SPARK-4292
Project: Spark
Issue Type: Bug
Components:
[
https://issues.apache.org/jira/browse/SPARK-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4261:
---
Description:
Running with spark sql jdbc/odbc, the output will be
JackydeMacBook-Pro:spark1 jackylee$
wangfei created SPARK-4261:
--
Summary: make right version info for beeline
Key: SPARK-4261
URL: https://issues.apache.org/jira/browse/SPARK-4261
Project: Spark
Issue Type: Bug
Components:
[
https://issues.apache.org/jira/browse/SPARK-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4237:
---
Description:
Building spark with maven currently produces the Manifest File of guava;
we should generate the right Manifest
wangfei created SPARK-4225:
--
Summary: jdbc/odbc error when building spark with maven
Key: SPARK-4225
URL: https://issues.apache.org/jira/browse/SPARK-4225
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-4225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196693#comment-14196693
]
wangfei commented on SPARK-4225:
it seems there is some difference between using sbt and
[
https://issues.apache.org/jira/browse/SPARK-4225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14196693#comment-14196693
]
wangfei edited comment on SPARK-4225 at 11/4/14 7:46 PM:
-
wangfei created SPARK-4237:
--
Summary: add Manifest File for Maven building
Key: SPARK-4237
URL: https://issues.apache.org/jira/browse/SPARK-4237
Project: Spark
Issue Type: Bug
Components:
[
https://issues.apache.org/jira/browse/SPARK-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14197715#comment-14197715
]
wangfei commented on SPARK-4237:
The title is not correct; it should be "Generate right Manifest File for maven building".
[
https://issues.apache.org/jira/browse/SPARK-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4237:
---
Summary: Generate right Manifest File for maven building (was: add
Manifest File for Maven building)
wangfei created SPARK-4191:
--
Summary: move wrapperFor to HiveInspectors to reuse them
Key: SPARK-4191
URL: https://issues.apache.org/jira/browse/SPARK-4191
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4191:
---
Issue Type: Improvement (was: Bug)
move wrapperFor to HiveInspectors to reuse them
[
https://issues.apache.org/jira/browse/SPARK-3652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei resolved SPARK-3652.
Resolution: Fixed
upgrade spark sql hive version to 0.13.1
[
https://issues.apache.org/jira/browse/SPARK-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14192701#comment-14192701
]
wangfei commented on SPARK-3322:
Yes, closing this.
ConnectionManager logs an error
[
https://issues.apache.org/jira/browse/SPARK-2460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei closed SPARK-2460.
--
Resolution: Fixed
Optimize SparkContext.hadoopFile api
-
wangfei created SPARK-4177:
--
Summary: update build doc for JDBC/CLI already supporting hive 13
Key: SPARK-4177
URL: https://issues.apache.org/jira/browse/SPARK-4177
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178183#comment-14178183
]
wangfei commented on SPARK-4001:
Thanks Sean Owen for explaining! Frequent itemset
[
https://issues.apache.org/jira/browse/SPARK-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14178183#comment-14178183
]
wangfei edited comment on SPARK-4001 at 10/21/14 9:38 AM:
--
.
[
https://issues.apache.org/jira/browse/SPARK-4001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4001:
---
Comment: was deleted
(was: .)
Add Apriori algorithm to Spark MLlib
wangfei created SPARK-4041:
--
Summary: convert attribute names in table scan to lowercase when
comparing with relation attributes
Key: SPARK-4041
URL: https://issues.apache.org/jira/browse/SPARK-4041
Project:
wangfei created SPARK-4042:
--
Summary: append column ids and names before broadcast
Key: SPARK-4042
URL: https://issues.apache.org/jira/browse/SPARK-4042
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-4042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-4042:
---
Description: appended column ids and names will not be broadcast because we
append them after creating the table
[
https://issues.apache.org/jira/browse/SPARK-3935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3935:
---
Description:
There is an unused variable (count) in the saveAsHadoopDataset function in
PairRDDFunctions.scala.
[
https://issues.apache.org/jira/browse/SPARK-3826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3826:
---
Affects Version/s: (was: 1.1.1)
1.1.0
enable hive-thriftserver support hive-0.13.1
wangfei created SPARK-3899:
--
Summary: wrong links in streaming doc
Key: SPARK-3899
URL: https://issues.apache.org/jira/browse/SPARK-3899
Project: Spark
Issue Type: Bug
Components:
wangfei created SPARK-3809:
--
Summary: make HiveThriftServer2Suite work correctly
Key: SPARK-3809
URL: https://issues.apache.org/jira/browse/SPARK-3809
Project: Spark
Issue Type: Bug
wangfei created SPARK-3826:
--
Summary: enable hive-thriftserver support hive-0.13.1
Key: SPARK-3826
URL: https://issues.apache.org/jira/browse/SPARK-3826
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei closed SPARK-3793.
--
Resolution: Fixed
should fix it in #2241
use hiveconf when parse hive ql
---
wangfei created SPARK-3806:
--
Summary: minor bug and exception in CliSuite
Key: SPARK-3806
URL: https://issues.apache.org/jira/browse/SPARK-3806
Project: Spark
Issue Type: Bug
Components:
[
https://issues.apache.org/jira/browse/SPARK-3806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3806:
---
Summary: minor bug in CliSuite (was: minor bug and exception in CliSuite)
minor bug in CliSuite
wangfei created SPARK-3792:
--
Summary: enable JavaHiveQLSuite
Key: SPARK-3792
URL: https://issues.apache.org/jira/browse/SPARK-3792
Project: Spark
Issue Type: Improvement
Components: SQL
wangfei created SPARK-3793:
--
Summary: add para hiveconf when parse hive ql
Key: SPARK-3793
URL: https://issues.apache.org/jira/browse/SPARK-3793
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3793:
---
Summary: use hiveconf when parse hive ql (was: add para hiveconf when
parse hive ql)
[
https://issues.apache.org/jira/browse/SPARK-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3793:
---
Summary: add para hiveconf when parse hive ql (was: use hiveconf when
parse hive ql)
[
https://issues.apache.org/jira/browse/SPARK-3793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3793:
---
Summary: use hiveconf when parse hive ql (was: add para hiveconf when
parse hive ql)
wangfei created SPARK-3765:
--
Summary: add testing with sbt to doc
Key: SPARK-3765
URL: https://issues.apache.org/jira/browse/SPARK-3765
Project: Spark
Issue Type: Improvement
Affects Versions:
wangfei created SPARK-3766:
--
Summary: Snappy is also the default compression codec for
broadcast variables
Key: SPARK-3766
URL: https://issues.apache.org/jira/browse/SPARK-3766
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-3766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3766:
---
Component/s: Documentation
Snappy is also the default compression codec for broadcast variables
[
https://issues.apache.org/jira/browse/SPARK-3765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3765:
---
Component/s: Documentation
add testing with sbt to doc
---
Key:
wangfei created SPARK-3755:
--
Summary: Do not bind port 1 - 1024 to server in spark
Key: SPARK-3755
URL: https://issues.apache.org/jira/browse/SPARK-3755
Project: Spark
Issue Type: Bug
wangfei created SPARK-3756:
--
Summary: check exception is caused by an address-port collision
when binding properly
Key: SPARK-3756
URL: https://issues.apache.org/jira/browse/SPARK-3756
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-3756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3756:
---
Affects Version/s: 1.1.0
check exception is caused by an address-port collision when binding properly
[
https://issues.apache.org/jira/browse/SPARK-3756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3756:
---
Description: a tiny bug in method isBindCollision
Target Version/s: 1.2.0
[
https://issues.apache.org/jira/browse/SPARK-3756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
wangfei updated SPARK-3756:
---
Summary: check exception is caused by an address-port collision properly
(was: check exception is caused by an address-port collision when binding properly)