[
https://issues.apache.org/jira/browse/SPARK-47287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831625#comment-17831625
]
Juefei Yan commented on SPARK-47287:
I tried the code on the 3.4 branch and cannot reproduce this problem
[ https://issues.apache.org/jira/browse/SPARK-43251 ]
Liang Yan deleted comment on SPARK-43251:
---
was (Author: JIRAUSER299156):
I will work on this issue.
> Assign a name to the error class _LEGACY_ERROR_TEMP_2015
>
[
https://issues.apache.org/jira/browse/SPARK-43251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17720107#comment-17720107
]
Liang Yan commented on SPARK-43251:
---
I will work on this issue.
> Assign a name to the error class
[
https://issues.apache.org/jira/browse/SPARK-42844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17709235#comment-17709235
]
Liang Yan commented on SPARK-42844:
---
[~maxgekk], I have added PR.
> Assign a name to the error class
[
https://issues.apache.org/jira/browse/SPARK-42711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liang Yan resolved SPARK-42711.
---
Resolution: Not A Problem
The original code is just a copy of upstream, and the changes do not fix
[
https://issues.apache.org/jira/browse/SPARK-42711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liang Yan closed SPARK-42711.
-
> build/sbt usage error messages and shellcheck warn/error
>
[
https://issues.apache.org/jira/browse/SPARK-42711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liang Yan updated SPARK-42711:
--
Description:
The build/sbt tool's usage information has some missing content:
{code:java}
(base)
[
https://issues.apache.org/jira/browse/SPARK-42711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liang Yan updated SPARK-42711:
--
Summary: build/sbt usage error messages and shellcheck warn/error (was:
build/sbt usage error
[
https://issues.apache.org/jira/browse/SPARK-42711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697745#comment-17697745
]
Liang Yan commented on SPARK-42711:
---
I am preparing my PR for this issue. Please assign it to me.
>
Liang Yan created SPARK-42711:
-
Summary: build/sbt usage error messages about java-home
Key: SPARK-42711
URL: https://issues.apache.org/jira/browse/SPARK-42711
Project: Spark
Issue Type: Bug
Wei Yan created SPARK-42395:
---
Summary: The code logic of the configmap max size validation lacks
extra content
Key: SPARK-42395
URL: https://issues.apache.org/jira/browse/SPARK-42395
Project: Spark
Wei Yan created SPARK-42344:
---
Summary: The default size of the CONFIG_MAP_MAXSIZE should not be
greater than 1048576
Key: SPARK-42344
URL: https://issues.apache.org/jira/browse/SPARK-42344
Project: Spark
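The 1048576-byte ceiling referenced in SPARK-42344 matches the 1 MiB size limit Kubernetes enforces on ConfigMaps. A minimal sketch of the kind of validation the summary implies; the function and constant names below are illustrative, not Spark's actual internals:

```python
# Hypothetical sketch: reject a configured config-map max size that
# Kubernetes could never honor. Names here are illustrative only.

K8S_CONFIG_MAP_LIMIT = 1048576  # 1 MiB, the ConfigMap size limit in Kubernetes

def validate_config_map_max_size(max_size: int) -> int:
    """Validate a configured maximum against the Kubernetes ConfigMap limit."""
    if max_size <= 0:
        raise ValueError(f"max size must be positive, got {max_size}")
    if max_size > K8S_CONFIG_MAP_LIMIT:
        raise ValueError(
            f"configured max size {max_size} exceeds the Kubernetes "
            f"ConfigMap limit of {K8S_CONFIG_MAP_LIMIT} bytes"
        )
    return max_size

print(validate_config_map_max_size(1048576))  # exactly at the limit: accepted
try:
    validate_config_map_max_size(1572864)     # 1.5 MiB: rejected
except ValueError as e:
    print("rejected:", e)
```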
[
https://issues.apache.org/jira/browse/SPARK-24666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16957623#comment-16957623
]
carlos yan commented on SPARK-24666:
I also hit this issue, and my Spark version is 2.1.0. I used
[
https://issues.apache.org/jira/browse/SPARK-27894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jeffrey(Xilang) Yan updated SPARK-27894:
Description:
In PySpark Streaming, if checkpoint is enabled and there is a
Jeffrey(Xilang) Yan created SPARK-27894:
---
Summary: PySpark streaming transform RDD join does not work when
checkpoint enabled
Key: SPARK-27894
URL: https://issues.apache.org/jira/browse/SPARK-27894
[
https://issues.apache.org/jira/browse/SPARK-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16833632#comment-16833632
]
Jeffrey(Xilang) Yan commented on SPARK-5594:
There is a bug before 2.2.3/2.3.0.
If you met
[
https://issues.apache.org/jira/browse/SPARK-24374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16490307#comment-16490307
]
Wei Yan commented on SPARK-24374:
-
Thanks [~mengxr] for the initiative and the doc. cc [~leftnoteasy]
Yan created SPARK-21576:
---
Summary: Spark caching difference between 2.0.2 and 2.1.1
Key: SPARK-21576
URL: https://issues.apache.org/jira/browse/SPARK-21576
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-3165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935646#comment-15935646
]
Facai Yan edited comment on SPARK-3165 at 3/22/17 1:57 AM:
---
Do you mean that:
[
https://issues.apache.org/jira/browse/SPARK-3165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935646#comment-15935646
]
Facai Yan commented on SPARK-3165:
--
Do you mean that:
TreePoint.binnedFeatures is Array[int], which
[
https://issues.apache.org/jira/browse/SPARK-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904479#comment-15904479
]
Ji Yan commented on SPARK-19320:
I'm proposing to add a configuration parameter to guarantee a hard limit
[
https://issues.apache.org/jira/browse/SPARK-19320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888998#comment-15888998
]
Ji Yan commented on SPARK-19320:
[~tnachen] in this case, should we rename spark.mesos.gpus.max to
[
https://issues.apache.org/jira/browse/SPARK-19740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884487#comment-15884487
]
Ji Yan commented on SPARK-19740:
the problem is that when running Spark on Mesos, there is no way to run
[
https://issues.apache.org/jira/browse/SPARK-19740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15884426#comment-15884426
]
Ji Yan commented on SPARK-19740:
proposed change:
[
https://issues.apache.org/jira/browse/SPARK-19740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Ji Yan updated SPARK-19740:
---
Description:
When running Spark on Mesos with the Docker containerizer, the Spark executors are
always launched
Ji Yan created SPARK-19740:
--
Summary: Spark executor always runs as root when running on mesos
Key: SPARK-19740
URL: https://issues.apache.org/jira/browse/SPARK-19740
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-15777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15593229#comment-15593229
]
Yan commented on SPARK-15777:
-
One approach could be first tagging a subtree as specific to a data source,
[
https://issues.apache.org/jira/browse/SPARK-15777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15590055#comment-15590055
]
Yan commented on SPARK-15777:
-
There is a paragraph in the design doc about the ordering of rule application
[
https://issues.apache.org/jira/browse/SPARK-15777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15543329#comment-15543329
]
Yan commented on SPARK-15777:
-
1) Currently the rules are applied on a per-session basis. Right, ideally they
[
https://issues.apache.org/jira/browse/SPARK-15777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15541643#comment-15541643
]
Yan commented on SPARK-15777:
-
A design document is just attached above. We realize that this task requires
[
https://issues.apache.org/jira/browse/SPARK-15777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yan updated SPARK-15777:
Attachment: SparkFederationDesign.pdf
> Catalog federation
> --
>
> Key:
[
https://issues.apache.org/jira/browse/SPARK-17556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15519497#comment-15519497
]
Yan commented on SPARK-17556:
-
For 2), I think BitTorrent won't help in the case of all-to-all transfers,
[
https://issues.apache.org/jira/browse/SPARK-17556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15518598#comment-15518598
]
Yan commented on SPARK-17556:
-
A few comments of mine are as follows:
1) The "one-executor collection"
Yan created SPARK-17375:
---
Summary: Star Join Optimization
Key: SPARK-17375
URL: https://issues.apache.org/jira/browse/SPARK-17375
Project: Spark
Issue Type: Improvement
Components: SQL
[
https://issues.apache.org/jira/browse/SPARK-14521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15267811#comment-15267811
]
Yan commented on SPARK-14521:
-
Yes, we are serializing the fields that should not be serialized. Please check
[
https://issues.apache.org/jira/browse/SPARK-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15240616#comment-15240616
]
Yan commented on SPARK-14507:
-
In terms of Hive support vs Spark SQL support, the "external table" concept
[
https://issues.apache.org/jira/browse/SPARK-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15233231#comment-15233231
]
Yan commented on SPARK-11368:
-
The issue seems to be gone with the latest master code (for 2.0):
[
https://issues.apache.org/jira/browse/SPARK-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231655#comment-15231655
]
Yan commented on SPARK-14389:
-
Actually the current master branch does not have the issue, while 1.6.0 does.
[
https://issues.apache.org/jira/browse/SPARK-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15231013#comment-15231013
]
Yan commented on SPARK-14389:
-
[~Steve Johnston]
I guess the memory eater is probably not from the broadcast
[
https://issues.apache.org/jira/browse/SPARK-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15128829#comment-15128829
]
Yan commented on SPARK-12988:
-
My thinking is that projections should parse the column names; while the
[
https://issues.apache.org/jira/browse/SPARK-12988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15127883#comment-15127883
]
Yan commented on SPARK-12988:
-
[~marmbrus] For the same reason of "`a.c` is an invalid column name. toDF(...)
[
https://issues.apache.org/jira/browse/SPARK-12686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088343#comment-15088343
]
Yan commented on SPARK-12686:
-
Spark-12449 seems to be a super set of this Jira.
> Support group-by push
[
https://issues.apache.org/jira/browse/SPARK-12449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088334#comment-15088334
]
Yan commented on SPARK-12449:
-
Stephan, thanks for your explanations and questions. My answers are as
[
https://issues.apache.org/jira/browse/SPARK-12449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15087983#comment-15087983
]
Yan commented on SPARK-12449:
-
Stephan,
By "partial op" I mean, for instance, partial map-side aggregation.
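For readers outside the thread, partial map-side aggregation means each partition produces a partial result before the shuffle, and the partials are merged afterwards. A toy illustration in plain Python (not the CatalystSource interface under discussion):

```python
from collections import Counter
from functools import reduce

# Toy model of partial map-side aggregation: count words per partition
# first (the "partial op"), then merge the partial counts. This mirrors
# the idea in the comment above; it is not Spark code.

partitions = [
    ["a", "b", "a"],
    ["b", "c"],
    ["a", "c", "c"],
]

# Map side: aggregate within each partition.
partials = [Counter(part) for part in partitions]

# Reduce side: merge the partial results into the final aggregate.
merged = reduce(lambda x, y: x + y, partials)

print(dict(merged))  # total count per key across all partitions
```

Pushing the partial op to the data source means only the small per-partition aggregates cross the wire, not the raw rows.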
[
https://issues.apache.org/jira/browse/SPARK-12449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070088#comment-15070088
]
Yan commented on SPARK-12449:
-
To push down map-side (partial) ops, the Physical Plan would need to be checked, I
[
https://issues.apache.org/jira/browse/SPARK-12449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070135#comment-15070135
]
Yan commented on SPARK-12449:
-
Conceivably, if only the logical plan is used, the Spark SQL execution would
[
https://issues.apache.org/jira/browse/SPARK-12449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068447#comment-15068447
]
Yan commented on SPARK-12449:
-
A few thoughts on the capabilities of this "CatalystSource Interface":
1)
[
https://issues.apache.org/jira/browse/SPARK-7825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tang Yan updated SPARK-7825:
Affects Version/s: (was: 1.3.1)
(was: 1.2.2)
(was:
Tang Yan created SPARK-7825:
---
Summary: Poor performance in Cross Product due to no combine
operations for small files.
Key: SPARK-7825
URL: https://issues.apache.org/jira/browse/SPARK-7825
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Feixiang Yan updated SPARK-7270:
Description:
Create a Hive table with two partitions; the first type is bigint and the second
type
Feixiang Yan created SPARK-7270:
---
Summary: StringType dynamic partition cast to DecimalType in Spark
Sql Hive
Key: SPARK-7270
URL: https://issues.apache.org/jira/browse/SPARK-7270
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14378073#comment-14378073
]
Yan commented on SPARK-3306:
If by global singleton object, you meant it to be in the Executor
[
https://issues.apache.org/jira/browse/SPARK-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14377414#comment-14377414
]
Yan commented on SPARK-3306:
The external resource will primarily serve the purpose of reuse
Lu Yan created SPARK-5614:
-
Summary: Predicate pushdown through Generate
Key: SPARK-5614
URL: https://issues.apache.org/jira/browse/SPARK-5614
Project: Spark
Issue Type: Improvement
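The improvement named in SPARK-5614 (pushing a predicate below a Generate, i.e. an explode-like operator) can be shown with a toy example: when the filter reads only the parent row's columns, applying it before generating child rows yields the same result while producing fewer intermediate rows. A plain-Python sketch, not Catalyst code:

```python
# Toy model of pushing a filter below a Generate (explode) operator.
# Rows carry a scalar "id" and a list "items" to be exploded.

rows = [
    {"id": 1, "items": [10, 11]},
    {"id": 2, "items": [20]},
    {"id": 3, "items": [30, 31, 32]},
]

def explode(rows):
    """Generate one output row per element of each row's "items" list."""
    for row in rows:
        for item in row["items"]:
            yield {"id": row["id"], "item": item}

predicate = lambda row: row["id"] != 2  # reads only the parent column

# Naive plan: explode first, then filter the (larger) exploded output.
naive = [r for r in explode(rows) if predicate(r)]

# Pushed-down plan: filter parent rows first, then explode.
pushed = list(explode([r for r in rows if predicate(r)]))

assert naive == pushed  # same result, fewer rows generated in between
print(pushed)
```

The pushdown is only safe when the predicate does not reference the generated columns; a filter on `item` would have to stay above the Generate.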
[
https://issues.apache.org/jira/browse/SPARK-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yan updated SPARK-3880:
---
Attachment: SparkSQLOnHBase_v2.0.docx
HBase as data source to SparkSQL
[
https://issues.apache.org/jira/browse/SPARK-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yan updated SPARK-3880:
---
Attachment: (was: SparkSQLOnHBase_v2.docx)
HBase as data source to SparkSQL
[
https://issues.apache.org/jira/browse/SPARK-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yan updated SPARK-3880:
---
Attachment: SparkSQLOnHBase_v2.docx
Version 2
HBase as data source to SparkSQL
[
https://issues.apache.org/jira/browse/SPARK-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14263914#comment-14263914
]
Yan commented on SPARK-3306:
Yes, close/cleanup will be supported, plus a show/list
[
https://issues.apache.org/jira/browse/SPARK-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14167006#comment-14167006
]
Yan commented on SPARK-3880:
The new context is intended to be very lightweight. We noticed
Yan created SPARK-3880:
--
Summary: HBase as data source to SparkSQL
Key: SPARK-3880
URL: https://issues.apache.org/jira/browse/SPARK-3880
Project: Spark
Issue Type: New Feature
Reporter: Yan
[
https://issues.apache.org/jira/browse/SPARK-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yan updated SPARK-3880:
---
Component/s: SQL
Fix Version/s: (was: 1.3.0)
HBase as data source to SparkSQL
[
https://issues.apache.org/jira/browse/SPARK-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Yan updated SPARK-3880:
---
Attachment: HBaseOnSpark.docx
Design Document
HBase as data source to SparkSQL
Yan created SPARK-3306:
--
Summary: Addition of external resource dependency in executors
Key: SPARK-3306
URL: https://issues.apache.org/jira/browse/SPARK-3306
Project: Spark
Issue Type: New Feature