Weizhong created SPARK-19325:
Summary: Running query hangs up for 5 min
Key: SPARK-19325
URL: https://issues.apache.org/jira/browse/SPARK-19325
Project: Spark
Issue Type: Bug
Components: SQL
[
https://issues.apache.org/jira/browse/SPARK-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15803301#comment-15803301
]
Weizhong commented on SPARK-16180:
--
Hi, we also met this issue on Spark 1.6. From the executor log, we
[
https://issues.apache.org/jira/browse/SPARK-17929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-17929:
-
Summary: Deadlock when AM restarts and sends RemoveExecutor on reset (was:
Deadlock when AM restart send
Weizhong created SPARK-17929:
Summary: Deadlock when AM restart send RemoveExecutor
Key: SPARK-17929
URL: https://issues.apache.org/jira/browse/SPARK-17929
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong closed SPARK-9066.
---
Resolution: Fixed
> Improve cartesian performance
> --
>
> Key:
[
https://issues.apache.org/jira/browse/SPARK-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong closed SPARK-13768.
Resolution: Fixed
> Setting hive conf with --hiveconf fails when beeline connects to thriftserver
>
[
https://issues.apache.org/jira/browse/SPARK-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-17277:
-
Description:
Now we can't use "SET k=v" to set a Hive conf; for example, run the SQL below in
spark-sql
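For illustration, the kind of statement that fails is a plain SET of a Hive property at the SQL prompt (the property and value below are examples, not the one from the original report):

```sql
-- illustrative: attempt to override a Hive conf from the SQL prompt
SET hive.exec.max.dynamic.partitions = 2000;
```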
Weizhong created SPARK-17277:
Summary: Set hive conf failed
Key: SPARK-17277
URL: https://issues.apache.org/jira/browse/SPARK-17277
Project: Spark
Issue Type: Bug
Components: SQL
[
https://issues.apache.org/jira/browse/SPARK-16852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15404166#comment-15404166
]
Weizhong commented on SPARK-16852:
--
I ran a 2 TB TPC-DS workload, and sometimes it prints the stack below.
{noformat}
Weizhong created SPARK-16852:
Summary: RejectedExecutionException when exit at some times
Key: SPARK-16852
URL: https://issues.apache.org/jira/browse/SPARK-16852
Project: Spark
Issue Type: Bug
Weizhong created SPARK-15824:
Summary: Running 'with ... insert ... select' failed when using Spark
thriftserver
Key: SPARK-15824
URL: https://issues.apache.org/jira/browse/SPARK-15824
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-15776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-15776:
-
Description:
{code:sql}
CREATE TABLE cdr (
debet_dt int ,
srv_typ_cd string ,
Weizhong created SPARK-15776:
Summary: Type coercion incorrect
Key: SPARK-15776
URL: https://issues.apache.org/jira/browse/SPARK-15776
Project: Spark
Issue Type: Bug
Components: SQL
Weizhong created SPARK-15335:
Summary: In Spark 2.0 TRUNCATE TABLE is unsupported
Key: SPARK-15335
URL: https://issues.apache.org/jira/browse/SPARK-15335
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15275238#comment-15275238
]
Weizhong edited comment on SPARK-14261 at 5/10/16 6:52 AM:
---
I also face this
[
https://issues.apache.org/jira/browse/SPARK-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15275238#comment-15275238
]
Weizhong edited comment on SPARK-14261 at 5/7/16 1:32 PM:
--
I also face this
[
https://issues.apache.org/jira/browse/SPARK-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15275238#comment-15275238
]
Weizhong commented on SPARK-14261:
--
I also face this issue.
I found that every session adds one metastore
[
https://issues.apache.org/jira/browse/SPARK-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-13768:
-
Description:
1. Start thriftserver
2. ./bin/beeline -u '...' --hiveconf
[
https://issues.apache.org/jira/browse/SPARK-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong resolved SPARK-14527.
--
Resolution: Fixed
Fix Version/s: 1.6.1
> Job can't finish when restart all nodemanagers with
[
https://issues.apache.org/jira/browse/SPARK-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-14527:
-
Description:
1) Submit a wordcount app
2) Stop all nodemanagers when running ShuffleMapStage
3) After
Weizhong created SPARK-14527:
Summary: Job can't finish when restarting all nodemanagers when using
external shuffle services
Key: SPARK-14527
URL: https://issues.apache.org/jira/browse/SPARK-14527
Project:
[
https://issues.apache.org/jira/browse/SPARK-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-14527:
-
Summary: Job can't finish when restarting all nodemanagers while using external
shuffle services (was: Job
Weizhong created SPARK-14003:
Summary: Multi-session can not work while one session is running the
"INSERT ... SELECT" move-files step
Key: SPARK-14003
URL: https://issues.apache.org/jira/browse/SPARK-14003
[
https://issues.apache.org/jira/browse/SPARK-14003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-14003:
-
Summary: Multi-session can not work when one session is moving files for
"INSERT ... SELECT" clause
[
https://issues.apache.org/jira/browse/SPARK-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-13800:
-
Description:
{color:red}connect to Hive MetaStore service as we have set hive.metastore.uris
in
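For reference, hive.metastore.uris is the property that points clients at a remote metastore service; it is typically set in hive-site.xml with a fragment like this (host and port are illustrative):

```xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host:9083</value>
</property>
```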
Weizhong created SPARK-13800:
Summary: Hive conf will be modified when multiple beeline clients connect to
thriftserver
Key: SPARK-13800
URL: https://issues.apache.org/jira/browse/SPARK-13800
Project: Spark
Weizhong created SPARK-13768:
Summary: Setting hive conf with --hiveconf fails when beeline connects
to thriftserver
Key: SPARK-13768
URL: https://issues.apache.org/jira/browse/SPARK-13768
Project: Spark
Weizhong created SPARK-11948:
Summary: UDF does not work
Key: SPARK-11948
URL: https://issues.apache.org/jira/browse/SPARK-11948
Project: Spark
Issue Type: Bug
Components: SQL
Affects
[
https://issues.apache.org/jira/browse/SPARK-11948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-11948:
-
Description:
We created a function:
{noformat}
add jar /home/test/smartcare-udf-0.0.1-SNAPSHOT.jar;
[
https://issues.apache.org/jira/browse/SPARK-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14962140#comment-14962140
]
Weizhong commented on SPARK-11083:
--
Right, this PR can fix this issue, thank you!
> insert overwrite
[
https://issues.apache.org/jira/browse/SPARK-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14956453#comment-14956453
]
Weizhong commented on SPARK-11083:
--
I have retested on the latest master branch (ending at commit
Weizhong created SPARK-11083:
Summary: insert overwrite table failed when beeline reconnect
Key: SPARK-11083
URL: https://issues.apache.org/jira/browse/SPARK-11083
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-11083:
-
Description:
1. Start Thriftserver
2. Use beeline to connect to thriftserver, then execute "insert
[
https://issues.apache.org/jira/browse/SPARK-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14681525#comment-14681525
]
Weizhong commented on SPARK-8361:
-
On SparkSQLSessionManager only override the
[
https://issues.apache.org/jira/browse/SPARK-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14662707#comment-14662707
]
Weizhong commented on SPARK-9066:
-
Yes, the root reason is the same; it is caused by scan
Weizhong created SPARK-9522:
---
Summary: SparkSubmit process can not exit if the application is killed while
HiveThriftServer is starting
Key: SPARK-9522
URL: https://issues.apache.org/jira/browse/SPARK-9522
Project:
[
https://issues.apache.org/jira/browse/SPARK-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-9522:
Component/s: SQL
SparkSubmit process can not exit if the application is killed while HiveThriftServer
is starting
Weizhong created SPARK-9519:
---
Summary: Confirm sc is stopped successfully when the application is killed
Key: SPARK-9519
URL: https://issues.apache.org/jira/browse/SPARK-9519
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-9066:
Description:
Currently, for CartesianProduct, if the right plan's partition record number is
smaller than the left
Weizhong created SPARK-9066:
---
Summary: Improve cartesian performance
Key: SPARK-9066
URL: https://issues.apache.org/jira/browse/SPARK-9066
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-9066:
Description:
Currently, for CartesianProduct, if the right plan's partition number is smaller
than the left partition
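The idea in this description can be sketched outside Spark: when one side of a cartesian product is much smaller, materializing that side once and streaming the larger side avoids recomputing the small side per row. A minimal pure-Python sketch (the helper and inputs are illustrative, not Spark's CartesianProduct operator):

```python
def cartesian(left, right_small):
    """Nested-loop cartesian product.

    Materialize the (assumed) smaller relation once, then stream the
    larger relation a single time; each inner pass is a cheap in-memory
    loop instead of a recomputation of the smaller side.
    """
    cached = list(right_small)
    for l in left:
        for r in cached:
            yield (l, r)

pairs = list(cartesian([1, 2, 3], ["a", "b"]))
```

The same output is produced either way; caching the smaller side only changes how much work each pass costs.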
[
https://issues.apache.org/jira/browse/SPARK-8811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-8811:
Description:
Now we have a table which has some array<struct<...>> fields; we save the
table data as
Weizhong created SPARK-8811:
---
Summary: Read array struct data from parquet error
Key: SPARK-8811
URL: https://issues.apache.org/jira/browse/SPARK-8811
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14578825#comment-14578825
]
Weizhong commented on SPARK-8071:
-
Hello [~chenghao], it *only failed on the doctest*, the
[
https://issues.apache.org/jira/browse/SPARK-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-8162:
Description:
Running spark-shell on the latest master branch failed; details are:
{noformat}
Welcome to
Weizhong created SPARK-8162:
---
Summary: Running spark-shell causes a NullPointerException
Key: SPARK-8162
URL: https://issues.apache.org/jira/browse/SPARK-8162
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14575619#comment-14575619
]
Weizhong commented on SPARK-8071:
-
Hello [~chenghao], it failed on the PySpark unit
Weizhong created SPARK-8071:
---
Summary: Run PySpark dataframe.rollup/cube test failed
Key: SPARK-8071
URL: https://issues.apache.org/jira/browse/SPARK-8071
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-8071:
Description:
I ran the tests for Spark and they failed on PySpark; details are:
File
[
https://issues.apache.org/jira/browse/SPARK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-8071:
Environment:
OS: SUSE 11 SP3; JDK: 1.8.0_40; Python: 2.6.8; Hadoop: 2.7.0; Spark: master
branch
was:
[
https://issues.apache.org/jira/browse/SPARK-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14570317#comment-14570317
]
Weizhong commented on SPARK-7963:
-
Is it the same as SPARK-6583?
order by should support
Weizhong created SPARK-7595:
---
Summary: Window will cause resolve failed with self join
Key: SPARK-7595
URL: https://issues.apache.org/jira/browse/SPARK-7595
Project: Spark
Issue Type: Bug
Weizhong created SPARK-7526:
---
Summary: Specify ip of RBackend, MonitorServer and RRDD Socket
server
Key: SPARK-7526
URL: https://issues.apache.org/jira/browse/SPARK-7526
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-7479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14537503#comment-14537503
]
Weizhong commented on SPARK-7479:
-
OK. Thank you!
SparkR can not work
Weizhong created SPARK-7479:
---
Summary: SparkR can not work
Key: SPARK-7479
URL: https://issues.apache.org/jira/browse/SPARK-7479
Project: Spark
Issue Type: Bug
Components: SparkR
Weizhong created SPARK-7339:
---
Summary: PySpark shuffle spill memory is sometimes not correct
Key: SPARK-7339
URL: https://issues.apache.org/jira/browse/SPARK-7339
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-6869:
Description:
From SPARK-1920 and SPARK-1520 we know PySpark on Yarn can not work when the
assembly jar are
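The mechanism the related SPARK-6869 summary describes, handing executors a PYTHONPATH so they can import pyspark from the local file system, amounts to environment-dict plumbing. A minimal sketch (the helper name and paths are hypothetical, not the actual YARN code path):

```python
import os

def with_pythonpath(env, extra_paths):
    """Return a copy of `env` with `extra_paths` prepended to PYTHONPATH,
    so a child process (e.g. an executor) can import modules from them."""
    new_env = dict(env)
    existing = new_env.get("PYTHONPATH", "")
    parts = list(extra_paths) + ([existing] if existing else [])
    new_env["PYTHONPATH"] = os.pathsep.join(parts)
    return new_env

env = with_pythonpath({"PYTHONPATH": "/opt/base"}, ["/usr/lib/spark/python"])
```

Prepending (rather than appending) lets the locally installed pyspark shadow any stale copy already on the path.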
Weizhong created SPARK-6870:
---
Summary: Catch InterruptedException when the yarn application state
monitor thread is interrupted
Key: SPARK-6870
URL: https://issues.apache.org/jira/browse/SPARK-6870
Project:
Weizhong created SPARK-6869:
---
Summary: Pass PYTHONPATH to executor, so that executor can read
pyspark file from local file system on executor node
Key: SPARK-6869
URL: https://issues.apache.org/jira/browse/SPARK-6869
Weizhong created SPARK-6641:
---
Summary: Add config or control of accumulator on python
Key: SPARK-6641
URL: https://issues.apache.org/jira/browse/SPARK-6641
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-6641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-6641:
Description:
Now when we init a Python SparkContext, it will create a single Accumulator in
Java, and
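For context, a Spark accumulator is an add-only shared variable: tasks only call add, and only the driver reads the value. A minimal pure-Python illustration of that contract (names are illustrative, not PySpark's API):

```python
class Accumulator:
    """Add-only shared variable: workers add, only the driver reads."""

    def __init__(self, value=0):
        self._value = value

    def add(self, term):
        # the only operation a task may perform
        self._value += term

    @property
    def value(self):
        # read back on the driver side
        return self._value

acc = Accumulator()
for x in [1, 2, 3, 4]:  # imagine each add() running in a separate task
    acc.add(x)
```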
Weizhong created SPARK-6604:
---
Summary: Specify ip of python server socket
Key: SPARK-6604
URL: https://issues.apache.org/jira/browse/SPARK-6604
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14323531#comment-14323531
]
Weizhong commented on SPARK-5801:
-
This is because, in standalone mode, the worker will create temp
Weizhong created SPARK-5830:
---
Summary: Don't create unnecessary directory for local root dir
Key: SPARK-5830
URL: https://issues.apache.org/jira/browse/SPARK-5830
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-5663:
Description:
As we know, in yarn mode Client will create appStagingDir on file system, and
AppMaster will
Weizhong created SPARK-5663:
---
Summary: Delete appStagingDir on local file system
Key: SPARK-5663
URL: https://issues.apache.org/jira/browse/SPARK-5663
Project: Spark
Issue Type: Improvement
Weizhong created SPARK-5644:
---
Summary: Delete tmp dir when sc is stopped
Key: SPARK-5644
URL: https://issues.apache.org/jira/browse/SPARK-5644
Project: Spark
Issue Type: Improvement