Weizhong created SPARK-19325:
Summary: Running query hang-up 5min
Key: SPARK-19325
URL: https://issues.apache.org/jira/browse/SPARK-19325
Project: Spark
Issue Type: Bug
Components: SQL
[
https://issues.apache.org/jira/browse/SPARK-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15803301#comment-15803301
]
Weizhong commented on SPARK-16180:
--
Hi, we also hit this issue on Spark 1.6. From the e
[
https://issues.apache.org/jira/browse/SPARK-17929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-17929:
-
Summary: Deadlock when AM restart and send RemoveExecutor on reset (was:
Deadlock when AM restart send R
Weizhong created SPARK-17929:
Summary: Deadlock when AM restart send RemoveExecutor
Key: SPARK-17929
URL: https://issues.apache.org/jira/browse/SPARK-17929
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong closed SPARK-9066.
---
Resolution: Fixed
> Improve cartesian performance
> --
>
> Key: S
[
https://issues.apache.org/jira/browse/SPARK-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong closed SPARK-13768.
Resolution: Fixed
> Set hive conf failed use --hiveconf when beeline connect to thriftserver
>
[
https://issues.apache.org/jira/browse/SPARK-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-17277:
-
Description:
Now we can't use "SET k=v" to set a Hive conf; for example, run the SQL below in
spark-sql
{nofor
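For illustration only, the kind of statement being described (the exact SQL in the report is truncated; the property name and value here are hypothetical) might be:

{code:sql}
-- attempt to set a Hive conf from spark-sql; per this report, the setting does not take effect
SET hive.exec.dynamic.partition.mode=nonstrict;
{code}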
Weizhong created SPARK-17277:
Summary: Set hive conf failed
Key: SPARK-17277
URL: https://issues.apache.org/jira/browse/SPARK-17277
Project: Spark
Issue Type: Bug
Components: SQL
Af
[
https://issues.apache.org/jira/browse/SPARK-16852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15404166#comment-15404166
]
Weizhong commented on SPARK-16852:
--
I ran a 2 TB TPC-DS workload, and sometimes it prints this stack.
Weizhong created SPARK-16852:
Summary: RejectedExecutionException when exit at some times
Key: SPARK-16852
URL: https://issues.apache.org/jira/browse/SPARK-16852
Project: Spark
Issue Type: Bug
Weizhong created SPARK-15824:
Summary: Run 'with ... insert ... select' failed when use spark
thriftserver
Key: SPARK-15824
URL: https://issues.apache.org/jira/browse/SPARK-15824
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-15776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-15776:
-
Description:
{code:sql}
CREATE TABLE cdr (
debet_dt int,
srv_typ_cd string,
Weizhong created SPARK-15776:
Summary: Type coercion incorrect
Key: SPARK-15776
URL: https://issues.apache.org/jira/browse/SPARK-15776
Project: Spark
Issue Type: Bug
Components: SQL
Weizhong created SPARK-15335:
Summary: In Spark 2.0 TRUNCATE TABLE is unsupported
Key: SPARK-15335
URL: https://issues.apache.org/jira/browse/SPARK-15335
Project: Spark
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/SPARK-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15275238#comment-15275238
]
Weizhong edited comment on SPARK-14261 at 5/10/16 6:52 AM:
---
I a
[
https://issues.apache.org/jira/browse/SPARK-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15275238#comment-15275238
]
Weizhong edited comment on SPARK-14261 at 5/7/16 1:32 PM:
--
I als
[
https://issues.apache.org/jira/browse/SPARK-14261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15275238#comment-15275238
]
Weizhong commented on SPARK-14261:
--
I also face this issue.
I found every session will a
[
https://issues.apache.org/jira/browse/SPARK-13768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-13768:
-
Description:
1. Start thriftserver
2. ./bin/beeline -u '...' --hiveconf hive.exec.max.dynamic.partitions=
[
https://issues.apache.org/jira/browse/SPARK-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong resolved SPARK-14527.
--
Resolution: Fixed
Fix Version/s: 1.6.1
> Job can't finish when restart all nodemanages with usin
[
https://issues.apache.org/jira/browse/SPARK-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-14527:
-
Description:
1) Submit a wordcount app
2) Stop all nodemanagers while the ShuffleMapStage is running
3) After some
Weizhong created SPARK-14527:
Summary: Job can't finish when restart all nodemanage when using
external shuffle services
Key: SPARK-14527
URL: https://issues.apache.org/jira/browse/SPARK-14527
Project: Sp
[
https://issues.apache.org/jira/browse/SPARK-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-14527:
-
Summary: Job can't finish when restart all nodemanages with using external
shuffle services (was: Job ca
Weizhong created SPARK-14003:
Summary: Multi-session can not work when run one session is
running "INSERT ... SELECT" move files step
Key: SPARK-14003
URL: https://issues.apache.org/jira/browse/SPARK-14003
[
https://issues.apache.org/jira/browse/SPARK-14003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-14003:
-
Summary: Multi-session can not work when one session is moving files for
"INSERT ... SELECT" clause (was
[
https://issues.apache.org/jira/browse/SPARK-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-13800:
-
Description:
{color:red}connect to Hive MetaStore service as we have set hive.metastore.uris
in hive-sit
Weizhong created SPARK-13800:
Summary: Hive conf will be modified on multi-beeline connect to
thriftserver
Key: SPARK-13800
URL: https://issues.apache.org/jira/browse/SPARK-13800
Project: Spark
Weizhong created SPARK-13768:
Summary: Set hive conf failed use --hiveconf when beeline connect
to thriftserver
Key: SPARK-13768
URL: https://issues.apache.org/jira/browse/SPARK-13768
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-11948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-11948:
-
Description:
We create a function,
{noformat}
add jar /home/test/smartcare-udf-0.0.1-SNAPSHOT.jar;
create
Weizhong created SPARK-11948:
Summary: UDF not work
Key: SPARK-11948
URL: https://issues.apache.org/jira/browse/SPARK-11948
Project: Spark
Issue Type: Bug
Components: SQL
Affects Ve
[
https://issues.apache.org/jira/browse/SPARK-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962140#comment-14962140
]
Weizhong commented on SPARK-11083:
--
Right, this PR can fix this issue, thank you!
> ins
[
https://issues.apache.org/jira/browse/SPARK-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14956453#comment-14956453
]
Weizhong commented on SPARK-11083:
--
I have retested on the latest master branch (ending at commit
[
https://issues.apache.org/jira/browse/SPARK-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-11083:
-
Description:
1. Start Thriftserver
2. Use beeline connect to thriftserver, then execute "insert overwrite
Weizhong created SPARK-11083:
Summary: insert overwrite table failed when beeline reconnect
Key: SPARK-11083
URL: https://issues.apache.org/jira/browse/SPARK-11083
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681525#comment-14681525
]
Weizhong commented on SPARK-8361:
-
On SparkSQLSessionManager only override the closeSessio
[
https://issues.apache.org/jira/browse/SPARK-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14662707#comment-14662707
]
Weizhong commented on SPARK-9066:
-
Yes, the root reason is the same; it is caused by scanning HD
[
https://issues.apache.org/jira/browse/SPARK-9522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-9522:
Component/s: SQL
> SparkSubmit process can not exit if kill application when HiveThriftServer
> was startin
Weizhong created SPARK-9522:
---
Summary: SparkSubmit process can not exit if kill application when
HiveThriftServer was starting
Key: SPARK-9522
URL: https://issues.apache.org/jira/browse/SPARK-9522
Project:
Weizhong created SPARK-9519:
---
Summary: Confirm stop sc successfully when application was killed
Key: SPARK-9519
URL: https://issues.apache.org/jira/browse/SPARK-9519
Project: Spark
Issue Type: Impr
[
https://issues.apache.org/jira/browse/SPARK-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-9066:
Description:
Currently, for CartesianProduct, if the right plan's partition record count is
smaller than the left par
[
https://issues.apache.org/jira/browse/SPARK-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-9066:
Description:
Currently, for CartesianProduct, if the right plan's partition count is smaller
than the left partition
Weizhong created SPARK-9066:
---
Summary: Improve cartesian performance
Key: SPARK-9066
URL: https://issues.apache.org/jira/browse/SPARK-9066
Project: Spark
Issue Type: Improvement
Componen
[
https://issues.apache.org/jira/browse/SPARK-8811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-8811:
Description:
Now we have a table which has some array> fields, and we save the
table data as Parquet.
But we
Weizhong created SPARK-8811:
---
Summary: Read array struct data from parquet error
Key: SPARK-8811
URL: https://issues.apache.org/jira/browse/SPARK-8811
Project: Spark
Issue Type: Bug
Compo
[
https://issues.apache.org/jira/browse/SPARK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14578825#comment-14578825
]
Weizhong commented on SPARK-8071:
-
Hello [~chenghao], it *only failed on the doctest*, the
[
https://issues.apache.org/jira/browse/SPARK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-8071:
Description:
I ran the tests for Spark, and they failed on PySpark; details are:
{code}
File "/xxx/Spark/python/pyspar
[
https://issues.apache.org/jira/browse/SPARK-8162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-8162:
Description:
Running spark-shell on the latest master branch failed; details are:
{noformat}
Welcome to
Weizhong created SPARK-8162:
---
Summary: Run spark-shell cause NullPointerException
Key: SPARK-8162
URL: https://issues.apache.org/jira/browse/SPARK-8162
Project: Spark
Issue Type: Bug
Comp
[
https://issues.apache.org/jira/browse/SPARK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14576627#comment-14576627
]
Weizhong commented on SPARK-8071:
-
Hi [~davies], In JDK 1.7.0_76 also can reproduce again
[
https://issues.apache.org/jira/browse/SPARK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14575619#comment-14575619
]
Weizhong commented on SPARK-8071:
-
Hello [~chenghao], it failed on the PySpark unit test(f
[
https://issues.apache.org/jira/browse/SPARK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-8071:
Description:
I ran the tests for Spark, and they failed on PySpark; details are:
File "/xxx/Spark/python/pyspark/sql/
[
https://issues.apache.org/jira/browse/SPARK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-8071:
Environment:
OS: SUSE 11 SP3; JDK: 1.8.0_40; Python: 2.6.8; Hadoop: 2.7.0; Spark: master
branch
was:
OS
[
https://issues.apache.org/jira/browse/SPARK-8071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-8071:
Description:
I ran the tests for Spark, and they failed on PySpark; details are:
File "/xxx/Spark/python/pyspark/sql/d
Weizhong created SPARK-8071:
---
Summary: Run PySpark dataframe.rollup/cube test failed
Key: SPARK-8071
URL: https://issues.apache.org/jira/browse/SPARK-8071
Project: Spark
Issue Type: Bug
C
[
https://issues.apache.org/jira/browse/SPARK-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14570339#comment-14570339
]
Weizhong edited comment on SPARK-7917 at 6/3/15 6:20 AM:
-
On stand
[
https://issues.apache.org/jira/browse/SPARK-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14570339#comment-14570339
]
Weizhong commented on SPARK-7917:
-
In standalone mode, one tmp dir will be kept on each execu
[
https://issues.apache.org/jira/browse/SPARK-7963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14570317#comment-14570317
]
Weizhong commented on SPARK-7963:
-
Is it the same as SPARK-6583?
> order by should support ag
Weizhong created SPARK-7595:
---
Summary: Window will cause resolve failed with self join
Key: SPARK-7595
URL: https://issues.apache.org/jira/browse/SPARK-7595
Project: Spark
Issue Type: Bug
Weizhong created SPARK-7526:
---
Summary: Specify ip of RBackend, MonitorServer and RRDD Socket
server
Key: SPARK-7526
URL: https://issues.apache.org/jira/browse/SPARK-7526
Project: Spark
Issue Type:
[
https://issues.apache.org/jira/browse/SPARK-7479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537503#comment-14537503
]
Weizhong commented on SPARK-7479:
-
OK. Thank you!
> SparkR can not work
> ---
Weizhong created SPARK-7479:
---
Summary: SparkR can not work
Key: SPARK-7479
URL: https://issues.apache.org/jira/browse/SPARK-7479
Project: Spark
Issue Type: Bug
Components: SparkR
Weizhong created SPARK-7339:
---
Summary: PySpark shuffle spill memory sometimes are not correct
Key: SPARK-7339
URL: https://issues.apache.org/jira/browse/SPARK-7339
Project: Spark
Issue Type: Improv
[
https://issues.apache.org/jira/browse/SPARK-6869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-6869:
Description:
From SPARK-1920 and SPARK-1520 we know PySpark on YARN cannot work when the
assembly jar ar
Weizhong created SPARK-6870:
---
Summary: Catch InterruptedException when yarn application state
monitor thread been interrupted
Key: SPARK-6870
URL: https://issues.apache.org/jira/browse/SPARK-6870
Project: S
Weizhong created SPARK-6869:
---
Summary: Pass PYTHONPATH to executor, so that executor can read
pyspark file from local file system on executor node
Key: SPARK-6869
URL: https://issues.apache.org/jira/browse/SPARK-6869
[
https://issues.apache.org/jira/browse/SPARK-6641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-6641:
Description:
Now, when we initialize a Python SparkContext, it will create a single Accumulator in
Java and start
Weizhong created SPARK-6641:
---
Summary: Add config or control of accumulator on python
Key: SPARK-6641
URL: https://issues.apache.org/jira/browse/SPARK-6641
Project: Spark
Issue Type: Improvement
Weizhong created SPARK-6604:
---
Summary: Specify ip of python server socket
Key: SPARK-6604
URL: https://issues.apache.org/jira/browse/SPARK-6604
Project: Spark
Issue Type: Improvement
Comp
[
https://issues.apache.org/jira/browse/SPARK-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong closed SPARK-5663.
---
Resolution: Won't Fix
> Delete appStagingDir on local file system
> -
[
https://issues.apache.org/jira/browse/SPARK-5801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14323531#comment-14323531
]
Weizhong commented on SPARK-5801:
-
This is because in standalone, worker will create temp
Weizhong created SPARK-5830:
---
Summary: Don't create unnecessary directory for local root dir
Key: SPARK-5830
URL: https://issues.apache.org/jira/browse/SPARK-5830
Project: Spark
Issue Type: Improve
[
https://issues.apache.org/jira/browse/SPARK-5663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Weizhong updated SPARK-5663:
Description:
As we know, in yarn mode Client will create appStagingDir on file system, and
AppMaster will d
Weizhong created SPARK-5663:
---
Summary: Delete appStagingDir on local file system
Key: SPARK-5663
URL: https://issues.apache.org/jira/browse/SPARK-5663
Project: Spark
Issue Type: Improvement
Weizhong created SPARK-5644:
---
Summary: Delete tmp dir when sc is stop
Key: SPARK-5644
URL: https://issues.apache.org/jira/browse/SPARK-5644
Project: Spark
Issue Type: Improvement
Componen