[
https://issues.apache.org/jira/browse/SPARK-10506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14736089#comment-14736089
]
baishuo commented on SPARK-10506:
-
flattenGenerator.close() and generator.close() should
baishuo created SPARK-10506:
---
Summary: There exists some potential resource leak in
jsonExpressions.scala
Key: SPARK-10506
URL: https://issues.apache.org/jira/browse/SPARK-10506
Project: Spark
Iss
[
https://issues.apache.org/jira/browse/SPARK-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14597476#comment-14597476
]
baishuo commented on SPARK-8561:
we run the following code
hivecontext.sql("use dbname")
h
baishuo created SPARK-8561:
--
Summary: Drop table can only drop the tables under database
"default"
Key: SPARK-8561
URL: https://issues.apache.org/jira/browse/SPARK-8561
Project: Spark
Issue Type: B
[
https://issues.apache.org/jira/browse/SPARK-8156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14576597#comment-14576597
]
baishuo commented on SPARK-8156:
hiveContext.sql("""use testdb""")
val df = (1 to 3).map(i
[
https://issues.apache.org/jira/browse/SPARK-8156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14576597#comment-14576597
]
baishuo edited comment on SPARK-8156 at 6/8/15 4:57 AM:
the follow
baishuo created SPARK-8156:
--
Summary: DataFrame created by hiveContext cannot create a table in
any database except "default"
Key: SPARK-8156
URL: https://issues.apache.org/jira/browse/SPARK-8156
Project:
[
https://issues.apache.org/jira/browse/SPARK-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14566899#comment-14566899
]
baishuo commented on SPARK-7935:
close it
> sparkContext in SparkPlan is better to be d
[
https://issues.apache.org/jira/browse/SPARK-7935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
baishuo closed SPARK-7935.
--
Resolution: Not A Problem
> sparkContext in SparkPlan is better defined as a val
> -
baishuo created SPARK-7943:
--
Summary: saveAsTable in DataFrameWriter can only add tables to
database "default"
Key: SPARK-7943
URL: https://issues.apache.org/jira/browse/SPARK-7943
Project: Spark
Is
baishuo created SPARK-7935:
--
Summary: sparkContext in SparkPlan is better defined as a val
Key: SPARK-7935
URL: https://issues.apache.org/jira/browse/SPARK-7935
Project: Spark
Issue Type: Improv
[
https://issues.apache.org/jira/browse/SPARK-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549714#comment-14549714
]
baishuo edited comment on SPARK-6533 at 5/19/15 3:16 AM:
-
hi [~hu
[
https://issues.apache.org/jira/browse/SPARK-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549714#comment-14549714
]
baishuo commented on SPARK-6533:
hi [~huangjs], I use wildcard by " sqlContext.load("/..
[
https://issues.apache.org/jira/browse/SPARK-3904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14532192#comment-14532192
]
baishuo commented on SPARK-3904:
please reference https://github.com/apache/spark/pull/56
[
https://issues.apache.org/jira/browse/SPARK-7179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
baishuo updated SPARK-7179:
---
Priority: Minor (was: Major)
> Add pattern after "show tables" to filter desired table names
> -
[
https://issues.apache.org/jira/browse/SPARK-7179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14516273#comment-14516273
]
baishuo commented on SPARK-7179:
the semantic of "show tables" in hive is like "show table
baishuo created SPARK-7179:
--
Summary: Add pattern after "show tables" to filter desired table names
Key: SPARK-7179
URL: https://issues.apache.org/jira/browse/SPARK-7179
Project: Spark
Issue Type: New
[
https://issues.apache.org/jira/browse/SPARK-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14356756#comment-14356756
]
baishuo commented on SPARK-6067:
hi jason, I have committed a patch at https://github.com/apa
[
https://issues.apache.org/jira/browse/SPARK-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14348528#comment-14348528
]
baishuo commented on SPARK-6067:
Hi Jason, could you please give me more instruction on ho
[
https://issues.apache.org/jira/browse/SPARK-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14342642#comment-14342642
]
baishuo commented on SPARK-6067:
Hi Jason, can I get more information about the problem?
[
https://issues.apache.org/jira/browse/SPARK-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14264225#comment-14264225
]
baishuo commented on SPARK-5084:
the exception like this:
from org.apache.hadoop.hive.ql.e
baishuo created SPARK-5084:
--
Summary: when mysql is used as the metadata storage for spark-sql,
an Exception occurs when HiveQuerySuite is executed
Key: SPARK-5084
URL: https://issues.apache.org/jira/browse/SPARK-5084
baishuo created SPARK-4663:
--
Summary: close() function is not surrounded by finally in
ParquetTableOperations.scala
Key: SPARK-4663
URL: https://issues.apache.org/jira/browse/SPARK-4663
Project: Spark
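The issue above reports a close() call that is not protected by a finally block. A minimal Java sketch of the safe pattern (using a hypothetical StringWriter stand-in, not the actual ParquetTableOperations code):

```java
import java.io.IOException;
import java.io.StringWriter;

public class CloseInFinally {
    // Release the resource even if the write throws: without the
    // finally block, an exception would leak the writer.
    static String writeSafely(String payload) throws IOException {
        StringWriter writer = new StringWriter(); // stand-in for the real output stream
        try {
            writer.write(payload);
            return writer.toString();
        } finally {
            writer.close(); // always runs, on success or failure
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(writeSafely("ok"));
    }
}
```

Since Java 7 the same guarantee can be had with try-with-resources for any AutoCloseable.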
[
https://issues.apache.org/jira/browse/SPARK-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14178224#comment-14178224
]
baishuo commented on SPARK-4034:
After clicking maven->reimport for the spark project in idea,
baishuo created SPARK-4034:
--
Summary: get java.lang.NoClassDefFoundError:
com/google/common/util/concurrent/ThreadFactoryBuilder in idea
Key: SPARK-4034
URL: https://issues.apache.org/jira/browse/SPARK-4034
[
https://issues.apache.org/jira/browse/SPARK-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
baishuo resolved SPARK-3999.
Resolution: Not a Problem
just need to: right click the project and click maven->reimport
> meet wrong
[
https://issues.apache.org/jira/browse/SPARK-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14176050#comment-14176050
]
baishuo commented on SPARK-3999:
the error like:
wrong number of arguments for pattern akk
baishuo created SPARK-3999:
--
Summary: meet wrong number of arguments for pattern error when
compile spark in idea
Key: SPARK-3999
URL: https://issues.apache.org/jira/browse/SPARK-3999
Project: Spark
baishuo created SPARK-3241:
--
Summary: NumberFormat.getInstance() in SparkHiveHadoopWriter is
not threadsafe
Key: SPARK-3241
URL: https://issues.apache.org/jira/browse/SPARK-3241
Project: Spark
Issu
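The JDK documents NumberFormat as not thread-safe, which is the problem the issue above raises. One common fix, sketched here as an assumption about the intent (not the actual SparkHiveHadoopWriter patch), is to hold one instance per thread:

```java
import java.text.NumberFormat;

public class PerThreadFormat {
    // NumberFormat is not thread-safe, so instead of sharing a single
    // instance across threads, give each thread its own copy.
    private static final ThreadLocal<NumberFormat> FORMAT =
            ThreadLocal.withInitial(NumberFormat::getInstance);

    static String format(long n) {
        return FORMAT.get().format(n); // safe: only this thread touches the instance
    }

    public static void main(String[] args) {
        System.out.println(format(5));
    }
}
```

The alternative is to create a fresh NumberFormat per call, which trades allocation cost for simplicity.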
[
https://issues.apache.org/jira/browse/SPARK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14095300#comment-14095300
]
baishuo edited comment on SPARK-3007 at 8/13/14 9:10 AM:
-
after mo
[
https://issues.apache.org/jira/browse/SPARK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14095300#comment-14095300
]
baishuo edited comment on SPARK-3007 at 8/13/14 9:08 AM:
-
after mo
[
https://issues.apache.org/jira/browse/SPARK-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14095300#comment-14095300
]
baishuo commented on SPARK-3007:
after modifying the code, I can run the hiveql with dynamic
baishuo created SPARK-3007:
--
Summary: Add "Dynamic Partition" support to Spark Sql hive
Key: SPARK-3007
URL: https://issues.apache.org/jira/browse/SPARK-3007
Project: Spark
Issue Type: Improvement