This is a managed Hive table, so I would expect you can just run MSCK REPAIR on the
table to pick up the new partition. Of course you will need to change the schema to
reflect the new partition.
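For reference, a minimal sketch of what I mean, assuming a Hive-enabled SparkSession and a partitioned managed table named `my_table` (the table name is illustrative, not from your setup):

```scala
// Sketch only: `my_table` is a hypothetical partitioned managed Hive table.
// MSCK REPAIR scans the table location on HDFS and registers any partition
// directories the metastore does not yet know about.
spark.sql("MSCK REPAIR TABLE my_table")

// Invalidate Spark's cached metadata so the new partitions become visible.
spark.sql("REFRESH TABLE my_table")
```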
Kind Regards
From: "Triones,Deng(vip.com)"
<triones.d...@vipshop.com>
around it in that situation apart from
making sure all reducers write to different folders. In the past I partitioned
by executor id; I don't know if this is the best way, though.
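A rough sketch of that idea, keying each task's output folder on its partition id instead (the path and record handling are illustrative, not the original poster's code):

```scala
// Hedged sketch: every task writes under its own directory, so no two
// concurrent writers share a folder.
import org.apache.spark.TaskContext

rdd.foreachPartition { records =>
  val partitionId = TaskContext.getPartitionId()
  val dir = s"hdfs:///tmp/output/part=$partitionId" // hypothetical base path
  // ... open a writer under `dir` and write `records`; partition ids are
  // distinct within a stage, so the folders never collide.
}
```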
Kind Regards
From: "Triones,Deng(vip.com)"
<triones.d..
Hi dev and users
Now I am running Spark Streaming (Spark version 2.0.2) to write
files to HDFS. When my spark.streaming.concurrentJobs is more than one (like 20),
I hit the exception below.
We know that when the batch finishes, there will be a _SUCCESS file.
As I guess
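If the conflict really is concurrent jobs racing on the _SUCCESS marker (an assumption on my part, since the exception is not shown), one common workaround is to tell the Hadoop output committer not to write the marker at all:

```scala
// Assumes `sc` is the SparkContext; this is a standard Hadoop
// FileOutputCommitter setting, not a Spark-specific flag.
sc.hadoopConfiguration
  .set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
```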
Deng Gang [Technology Center] would like to recall the message "How to deal with string column data for spark mlib?".
Hi spark dev,
I am using Spark 2 to write ORC files to HDFS. I have one question
about the save mode.
My use case is this: when I write data into HDFS, if one task fails I
hope the file that the task created will be deleted so the retry task can
write all data, that is to
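For context, the save mode is set on the DataFrameWriter; a minimal sketch follows (`df` and the path are illustrative). Note that SaveMode governs what happens when the target path already exists as a whole, while cleanup of a failed task's partial output is normally the job of the output committer, not the save mode:

```scala
import org.apache.spark.sql.SaveMode

// Hypothetical DataFrame and path; Overwrite replaces existing output.
df.write.mode(SaveMode.Overwrite).orc("hdfs:///tmp/output/orc")
```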
`awaitTermination()` will throw an
exception. Then your app's main method will exit and trigger the shutdown hook and
call `jsc.stop()`.
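The flow described above can be sketched roughly like this (simplified and illustrative; Spark also installs a similar shutdown hook of its own):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("sketch") // illustrative config
val jsc  = new StreamingContext(conf, Seconds(10))

sys.addShutdownHook {
  // Runs when main exits, e.g. after awaitTermination rethrows a failure.
  jsc.stop(stopSparkContext = true, stopGracefully = true)
}

jsc.start()
jsc.awaitTermination() // throws if the streaming job fails; main then exits
```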
On Thu, Jan 14, 2016 at 10:20 PM, Triones,Deng(vip.com)
<triones.d...@vipshop.com> wrote:
Thanks for your reply.
spark streaming context trigger invoke stop why?
Could you show your code? Did you use `StreamingContext.awaitTermination`? If
so, it will return if any exception happens.
On Wed, Jan 13, 2016 at 11:47 PM, Triones,Deng(vip.com)
<triones.d...@vipshop.com> wrote:
false, stopGracefully = stopGracefully)
}
Regards,
Yogesh Mahajan,
SnappyData Inc, snappydata.io
On Thu, Jan 14, 2016 at 8:55 AM, Triones,Deng(vip.com)
<triones.d...@vipshop.com> wrote:
More info
I am using spark version 1.5.2
From: Triones,Deng(vip.com) [mailto:triones.d...@vipshop.com]
Sent: January 14, 2016 11:24
To: user
Subject: spark streaming context trigger invoke stop why?
Hi all
From the driver log I saw that a task failed 4 times in a stage, and the stage
was dropped when the input block was deleted before it could be used. After that
the StreamingContext invoked stop. Does anyone know what kind of Akka message
triggers the stop, or which code triggers the
Hi All
We run an application with version 1.4.1 in standalone mode. We saw two tasks in
one stage which run very slowly; they seem to hang. We know that the JobScheduler
has the ability to assign a straggler task to another node, but from what we saw
it does not reassign. So we want to know is there