Yeah, I tried that, but there is always an issue when I run dev/mima:
it always gives me some binary compatibility errors on the Java API part…
so I have to wait for Jenkins' results when fixing MiMa issues.
--
Nan Zhu
On Thursday, September 25, 2014 at 12:04 AM, Patrick Wendell wrote:
Have you considered running the MiMa checks locally? We prefer that people
not use Jenkins for very frequent checks, since it takes resources away
from other people trying to run tests.
On Wed, Sep 24, 2014 at 6:44 PM, Nan Zhu wrote:
> Hi, all
>
> It seems that, currently, Jenkins makes MIMA checking a
Maybe it's just the way SQL works.
The select part is executed after the where filter is applied, so you
cannot use an alias declared in the select part in the where clause.
Hive and Oracle behave the same as Spark SQL here.
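A quick way to see this, along with the usual workaround of wrapping the aliased projection in a subquery, can be sketched with Python's built-in sqlite3. The table name `src` and its columns follow the query in this thread; SQLite's `||` stands in for Hive's `concat`, and the sample rows are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (key TEXT, value TEXT)")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [("1", "1"), ("1", "2"), ("2", "1")])

# Standard workaround: compute the alias in an inner query. By the time
# the outer WHERE runs, "combined" is an ordinary column, so referring
# to it is legal.
rows = conn.execute("""
    SELECT key, value, combined
    FROM (SELECT key, value, key || value AS combined FROM src)
    WHERE combined LIKE '11%'
""").fetchall()
print(rows)  # [('1', '1', '11')]
```

Repeating the expression directly in the filter (`WHERE key || value LIKE '11%'`) also works, at the cost of duplicating it.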
2014-09-25 8:58 GMT+08:00 Du Li :
> Hi,
>
> The following query does not work in Shark n
Hi, all
It seems that, currently, Jenkins runs the MiMa check after all test cases have
finished. IIRC, during the first months after we introduced MiMa, we did the
MiMa check before running the test cases.
What's the motivation for adjusting this behaviour?
In my opinion, if you have some binary compati
Hi,
The following query does not work in Shark, nor in the new Spark SQLContext or
HiveContext:
SELECT key, value, concat(key, value) as combined from src where combined like
'11%';
The following syntax tweak works fine, although it is a bit ugly.
SELECT key, value, concat(key, value) as combined fr
I proposed a fix: https://github.com/apache/spark/pull/2524
Glad to receive feedback.
--
Nan Zhu
On Tuesday, September 23, 2014 at 9:06 PM, Sandy Ryza wrote:
> Filed https://issues.apache.org/jira/browse/SPARK-3642 for documenting these
> nuances.
>
> -Sandy
>
> On Mon, Sep 22, 2014
Hi Yi,
So I've been thinking about implementing windowing for some time and started
working on it in earnest yesterday. There is already a PR for ROLLUP and CUBE;
you may want to look at it and see if you can help the author out or provide
some test cases: https://github.com/apache/spark/pull
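For readers unfamiliar with the windowing being discussed, a minimal sketch of what a SQL window function does can be run with Python's built-in sqlite3 (window functions require SQLite >= 3.25; the table and column names here are invented for illustration, not taken from the PR):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (dept TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("a", 10), ("a", 20), ("b", 5)])

# A window function computes an aggregate per partition while still
# returning one row per input row, unlike a plain GROUP BY, which
# collapses each group to a single row.
rows = conn.execute("""
    SELECT dept, amount,
           SUM(amount) OVER (PARTITION BY dept) AS dept_total
    FROM sales
    ORDER BY dept, amount
""").fetchall()
print(rows)  # [('a', 10, 30), ('a', 20, 30), ('b', 5, 5)]
```

ROLLUP and CUBE, by contrast, extend GROUP BY with extra subtotal rows across grouping combinations; SQLite does not support them, so they are not shown here.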
So you have a single Kafka topic with a very high retention period (which
determines the storage capacity of a given Kafka topic), and you want to
process all the historical data first using Camus and then start the streaming
process?
The challenge is that Camus and Spark are two different consumers for Ka
I don’t think so. For example, we’ve already added extended syntax like CACHE
TABLE.
On Wed, Sep 24, 2014 at 3:27 PM, Yi Tian wrote:
> Hi Reynold!
>
> Will sparkSQL strictly obey the HQL syntax?
>
> For example, the cube function.
>
> In other words, the hiveContext of sparkSQL should only im
Hi Reynold!
Will sparkSQL strictly obey the HQL syntax?
For example, the cube function.
In other words, should the hiveContext of sparkSQL only implement a subset of
HQL features?
Best Regards,
Yi Tian
tianyi.asiai...@gmail.com
On Sep 23, 2014, at 15:49, Reynold Xin wrote:
>
> On T