Thank you, Takeshi!
Dongjoon Hyun wrote on Tue, Jan 8, 2019 at 10:13 PM:
> Great! Thank you, Takeshi! :D
>
> Bests,
> Dongjoon.
>
> On Tue, Jan 8, 2019 at 8:47 PM Takeshi Yamamuro wrote:
>
>> If there is no other volunteer for the release of 2.3.3, I'd like to.
>>
>> best,
>> takeshi
Great! Thank you, Takeshi! :D
Bests,
Dongjoon.
On Tue, Jan 8, 2019 at 8:47 PM Takeshi Yamamuro wrote:
> If there is no other volunteer for the release of 2.3.3, I'd like to.
>
> best,
> takeshi
>
> On Fri, Jan 4, 2019 at 11:49 AM Dongjoon Hyun wrote:
>
>> Thank you, Sean!
>>
>> Bests,
>> Dongjoon.
If there is no other volunteer for the release of 2.3.3, I'd like to.
best,
takeshi
On Fri, Jan 4, 2019 at 11:49 AM Dongjoon Hyun wrote:
> Thank you, Sean!
>
> Bests,
> Dongjoon.
>
>
> On Thu, Jan 3, 2019 at 2:50 PM Sean Owen wrote:
>
>> Yes, that one's not going to be back-ported to 2.3. I
Some more thoughts. If we support unlimited negative scale, why can't we
support unlimited positive scale? For example, 0.0001 can be decimal(1, 4)
instead of decimal(4, 4). I think we need more references here: how do other
databases deal with the decimal type and parse decimal literals?
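(For comparison, `java.math.BigDecimal` in the JDK already permits scale to exceed precision: parsing "0.0001" yields precision 1 and scale 4, i.e. exactly the decimal(1, 4) reading suggested above. A minimal check:)

```java
import java.math.BigDecimal;

// BigDecimal's model: precision counts the significant digits of the
// unscaled value, and scale may exceed precision for small fractions.
class DecimalScaleDemo {
    public static void main(String[] args) {
        BigDecimal d = new BigDecimal("0.0001"); // unscaled value 1, scale 4
        System.out.println(d.precision()); // prints 1
        System.out.println(d.scale());     // prints 4
    }
}
```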
+1
On Wed, Jan 9, 2019 at 3:37 AM DB Tsai wrote:
> +1
>
> Sincerely,
>
> DB Tsai
> --
> Web: https://www.dbtsai.com
> PGP Key ID: 0x5CED8B896A6BDFA0
>
> On Tue, Jan 8, 2019 at 11:14 AM Dongjoon Hyun wrote:
+1
Sincerely,
DB Tsai
--
Web: https://www.dbtsai.com
PGP Key ID: 0x5CED8B896A6BDFA0
On Tue, Jan 8, 2019 at 11:14 AM Dongjoon Hyun wrote:
>
> Please vote on releasing the following candidate as Apache Spark version
> 2.2.3.
Please vote on releasing the following candidate as Apache Spark version
2.2.3.
The vote is open until January 11, 11:30 AM (PST) and passes if a majority of
+1 PMC votes are cast, with a minimum of 3 +1 votes.
[ ] +1 Release this package as Apache Spark 2.2.3
[ ] -1 Do not release this package
Hi Miguel,
On Sun, Jan 6, 2019 at 11:35 AM Miguel F. S. Vasconcelos
<miguel.vasconce...@usp.br> wrote:
> When an action is performed on an RDD, Spark sends it as a job to the
> DAGScheduler. The DAGScheduler computes the execution DAG based on the
> RDD's lineage and splits the job into stages
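(As a rough illustration of the stage-splitting rule described above — these are not Spark's actual classes, all names here are made up — a lineage can be cut into stages at each wide/shuffle dependency:)

```java
// Toy model of stage splitting, assuming the simplified rule that the
// scheduler cuts the lineage at every wide (shuffle) dependency.
class StageSplitDemo {
    record Node(String name, boolean wideDepOnParent, Node parent) {}

    // Walk the lineage from the final RDD back to the source, starting a
    // new stage each time a wide dependency is crossed.
    static int countStages(Node last) {
        int stages = 1;
        for (Node n = last; n != null; n = n.parent()) {
            if (n.wideDepOnParent()) stages++;
        }
        return stages;
    }

    public static void main(String[] args) {
        Node source  = new Node("textFile", false, null);
        Node mapped  = new Node("map", false, source);         // narrow
        Node reduced = new Node("reduceByKey", true, mapped);  // wide: shuffle
        Node counted = new Node("map", false, reduced);        // narrow
        System.out.println(countStages(counted)); // prints 2
    }
}
```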
Hi,
> ORC has long had a timestamp format. If extra attributes are needed on a
> timestamp, as long as the default "no metadata" value isn't changed, then at
> the file level things should be OK.
>
> More problematic is: what would happen to an existing app reading in
> timestamps and ignoring
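(The point about keeping the default "no metadata" value unchanged can be sketched with a hypothetical key/value metadata map — this is not ORC's real API, just an illustration of the compatibility idea: old files carry no entry, so readers fall back to the existing behavior.)

```java
import java.util.Map;

// Sketch only: a hypothetical file-metadata map standing in for a file
// format's key/value metadata. Absence of the key is the "no metadata"
// default, so existing files and readers keep the old behavior.
class TimestampMetaDemo {
    static String timestampKind(Map<String, String> fileMeta) {
        // Old files carry no entry; treat that as the legacy semantics.
        return fileMeta.getOrDefault("timestamp.kind", "legacy");
    }

    public static void main(String[] args) {
        System.out.println(timestampKind(Map.of()));                        // prints legacy
        System.out.println(timestampKind(Map.of("timestamp.kind", "utc"))); // prints utc
    }
}
```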