Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/19819
I've seen your PR: https://github.com/apache/spark/pull/20997, a good
solution @gaborgsomogyi
---
To unsubscribe, e-mail
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/18756
Excuse me, has the concept of default values been introduced to the schema
in the master branch? @gatorsmile Thank you
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/20356
Thank you very much for your review. I have seen the discussion and your
PR and learned a lot. But I just want to solve the problem when executing
"insert into ... values ...", which does not involv
GitHub user lvdongr opened a pull request:
https://github.com/apache/spark/pull/20356
[SPARK-23185][SQL] Make the configuration "spark.default.parallelism" can
be changed on each SQL session to decrease empty files
## What changes were proposed in this pull request?
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/19819
Will the number of cached consumers for the same partition keep growing
when different tasks consume the same partition and there is no place to
remove them?
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/18987
ok. Thank you all the same for your review @srowen @jerryshao @ajbozarth .
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user lvdongr closed the pull request at:
https://github.com/apache/spark/pull/18987
---
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/18987
The log level setting is a very useful function. Our team is working on a
Spark application, and when we want to see the debug log, we have to
restart the application every time. So we develop
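To illustrate the idea, the feature boils down to the driver sending each executor a message that the executor applies to its logger without a restart. A minimal sketch, assuming hypothetical names (`UpdateLogLevel` and `handleUpdate` are illustrative, not Spark's actual API):

```scala
// Hypothetical message the driver could send to each executor.
case class UpdateLogLevel(level: String)

// Log levels accepted by log4j 1.x, which Spark shipped at the time.
val validLevels =
  Set("ALL", "TRACE", "DEBUG", "INFO", "WARN", "ERROR", "FATAL", "OFF")

// Validates the requested level; a real executor would then apply it,
// e.g. via log4j's LogManager.getRootLogger.setLevel(...).
def handleUpdate(msg: UpdateLogLevel): Boolean =
  validLevels.contains(msg.level.toUpperCase)
```

The point of routing the change through a message handler is that the running JVM updates its logger in place, so no restart is needed.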
GitHub user lvdongr opened a pull request:
https://github.com/apache/spark/pull/18987
[SPARK-21775][Core]Dynamic Log Level Settings for executors
## What changes were proposed in this pull request?
Sometimes we want to change the log level of an executor when our application
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/18756
OK, I will solve the remaining problems first and hold this PR. @gatorsmile
---
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/18756
You mean we can provide different default values for different types, like
int with 0 and string with ""? Or do we set the default values when
defining the table? @gatorsmile
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/18756
You can see this picture: my table has three columns, and I insert only
two columns, so the last column is null. @maropu @gatorsmile
![insertinto](https://user-images.githubusercontent.com
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/18756
The target of this PR is to support inserting into specified columns, so
that listing all columns is not needed, like insert into t(a, c) values (1, 0.8).
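A minimal sketch of the intended semantics (`fillRow` is a hypothetical helper, assuming unspecified columns default to NULL as described above):

```scala
// Hypothetical helper: build a full row from only the specified columns,
// filling every unspecified column with null.
def fillRow(schema: Seq[String], specified: Map[String, Any]): Seq[Any] =
  schema.map(col => specified.getOrElse(col, null))

// insert into t(a, c) values (1, 0.8) on a table with columns a, b, c:
val row = fillRow(Seq("a", "b", "c"), Map("a" -> 1, "c" -> 0.8))
// row == Seq(1, null, 0.8)
```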
---
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/18756
Thank you for review, I will finish the tests as soon as possible.
---
Github user lvdongr closed the pull request at:
https://github.com/apache/spark/pull/18753
---
GitHub user lvdongr opened a pull request:
https://github.com/apache/spark/pull/18756
[SPARK-21548][SQL] Support insert into serial columns of table
## What changes were proposed in this pull request?
When we use the 'insert into ...' statement we can only
Github user lvdongr closed the pull request at:
https://github.com/apache/spark/pull/18751
---
GitHub user lvdongr opened a pull request:
https://github.com/apache/spark/pull/18753
[SPARK-21548] [SQL] Support insert into serial columns of table
## What changes were proposed in this pull request?
When we use the 'insert into ...' statement we can only insert all
Github user lvdongr closed the pull request at:
https://github.com/apache/spark/pull/17203
---
GitHub user lvdongr opened a pull request:
https://github.com/apache/spark/pull/18751
[SPARK-21548][SQL]Support insert into serial columns of table
## What changes were proposed in this pull request?
When we use the 'insert into ...' statement we can only insert all
Github user lvdongr closed the pull request at:
https://github.com/apache/spark/pull/17620
---
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/17620
You can see the main method in Master.scala:

  def main(argStrings: Array[String]) {
    Utils.initDaemon(log)
    val conf = new SparkConf
    val args = new MasterArguments
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/17620
This happened when the previous master leader removed the dead worker and
cleared the worker's node in the persistence engine (we use ZooKeeper). But
before the worker node was removed from the ZooKeeper
Github user lvdongr commented on a diff in the pull request:
https://github.com/apache/spark/pull/17620#discussion_r111732189
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -561,6 +561,11 @@ private[deploy] class Master(
state
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/17620
Excuse me, can this issue be closed, or are there some other problems?
@jerryshao
---
Github user lvdongr commented on a diff in the pull request:
https://github.com/apache/spark/pull/17620#discussion_r111337583
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -539,7 +539,7 @@ private[deploy] class Master(
private def
Github user lvdongr commented on a diff in the pull request:
https://github.com/apache/spark/pull/17620#discussion_r111337249
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -539,7 +539,7 @@ private[deploy] class Master(
private def
GitHub user lvdongr opened a pull request:
https://github.com/apache/spark/pull/17620
[SPARK-20305][Spark Core] Master may keep in the state of "COMPLETING…
## What changes were proposed in this pull request?
Master may keep in the state of "COMPLETING_RECOVERY
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/17203
You can see this issue; it is a problem with the cached KafkaConsumer:
https://issues.apache.org/jira/browse/SPARK-19185, and a commenter there
suggested the same approach of not using the cached Kafka
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/17203
In our case, we deploy a streaming application whose data sources are 20
topics with 30 partitions each in a Kafka cluster (3 brokers). Then the
number of connections to Kafka is very large, up to a thousand
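The rough arithmetic behind that number (figures taken from this comment) can be sketched as:

```scala
// Numbers from the deployment described above.
val topics = 20
val partitionsPerTopic = 30
val topicPartitions = topics * partitionsPerTopic // 600 topic-partitions
// With one cached consumer held open per topic-partition (plus extra
// consumers when tasks for the same partition land on different executors
// over time), the open-connection count can climb well past
// topicPartitions, toward the ~1000 reported here.
```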
GitHub user lvdongr opened a pull request:
https://github.com/apache/spark/pull/17203
[SPARK-19863][DStream] Whether or not to use CachedKafkaConsumer needs to
be configurable when you use DirectKafkaInputDStream t
## What changes were proposed in this pull request?
Whether
Github user lvdongr closed the pull request at:
https://github.com/apache/spark/pull/16879
---
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/17010
Excuse me, may this issue be merged and closed?
---
Github user lvdongr commented on the issue:
https://github.com/apache/spark/pull/17010
Before Spark 1.4.x, the ThriftServer name was "SparkSQL:localhostname",
while https://issues.apache.org/jira/browse/SPARK-8650 changed that rule as
a side effect. Then the ThriftServer show
GitHub user lvdongr opened a pull request:
https://github.com/apache/spark/pull/17010
[SPARK-19673][SQL] ThriftServer default app name is changed wrong
## What changes were proposed in this pull request?
In spark 1.x ,the name of ThriftServer is SparkSQL:localHostN
GitHub user lvdongr opened a pull request:
https://github.com/apache/spark/pull/16879
[SPARK-19541][SQL] High Availability support for ThriftServer
JIRA Issue: https://issues.apache.org/jira/browse/SPARK-19541
## What changes were proposed in this pull request