Hi,
Does anyone know the EOL schedule for each Hive release? For
example, when will 3.1.0 reach end of life?
Best,
Guangming
Hi Jan,
Thanks for your feedback; very helpful!
You made a great point about the potential dependency conflict; we'll make sure
our libraries coexist well with shared Hive dependencies.
I am not sure SASL is necessarily superior to gRPC. It depends a lot on the
platform we run these services on. For exam
oat up the Hive codebase.
Would that work for you Jan?
Feng
On Tue, Aug 18, 2020 at 10:12 AM Jan Fili wrote:
> Would you mind shading the proto-library in hive-exec-core along the way?
>
> On Mon., Aug. 17, 2020 at 08:31, Feng Lu wrote:
> >
> > It has been a while since
It has been a while since we shared this design proposal; if there are no
further concerns, we'll go ahead with the implementation work soon.
Thank you!
On Fri, Jul 31, 2020 at 1:57 PM Feng Lu wrote:
> Hi all,
>
> Several of us from Google and Cloudera explored the possibili
Hi all,
Several of us from Google and Cloudera explored the possibility of adding
gRPC support in Apache Hive. The detailed design proposal can be found
here:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158869886.
*TL;DR:*
*Why?*
- modernize Hive Metastore's Thrift interface
Thank you Panos and Ashutosh.
On Fri, Jul 31, 2020 at 8:28 AM Ashutosh Chauhan wrote:
> Added you Feng Lu. Welcome to the project!
>
> On Thu, Jul 30, 2020 at 10:53 PM Feng Lu wrote:
>
>> Hi,
>>
>> Can someone please grant me write access to the Hive Wiki?
Hi,
Can someone please grant me write access to the Hive Wiki?
Thank you!
account_id: fenglu_g.
Feng
For someone like me who is new to the Hive community, is there a
(semi-)formal process for contributing a large-scale feature like
Hive-Iceberg integration?
For example, hive improvement proposal, community voting, development and
code review, release, etc.
Thank you and sorry for derailing this co
Thanks
--
Lu Li
The update did not succeed, and gave this error.
Has anyone seen a similar case before, or does anyone know anything about this?
On Thu, Oct 4, 2012 at 10:23 AM, Feng Lu wrote:
> Thanks for your reply, Edward.
> But for this case, the update did not succeed.
>
>
>
> On Thu, Oct 4, 2012 at 9:27 A
> On Wed, Oct 3, 2012 at 11:05 PM, Feng Lu
> wrote:
> > Hi,
> >
> > I was trying to do "arc diff --update ..." under Ubuntu and got this
> error:
> >
> >
> > PHP Fatal error: Class 'ArcanistDifferentialRevisionRef'
Hi,
I was trying to do "arc diff --update ..." under Ubuntu and got this error:
PHP Fatal error: Class 'ArcanistDifferentialRevisionRef' not found in
/home/feng/projects/hive/hive-trunk2/.arc_jira_lib/arcanist/ArcJIRAConfiguration.php
on line 0
Fatal error: Class 'ArcanistDifferentialRevisionR
rt data to external table right?
But you can put the data at the same place and read it through Hive (actually
there is no need to insert the data through the table).
On Wed, Mar 14, 2012 at 12:04 PM, Lu, Wei wrote:
Hi ,
Can we insert data to external hive tables?
1)Create an exter
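The suggestion above (placing the files directly at the table's location rather than using INSERT) can be sketched as follows; the table name, columns, and HDFS path here are hypothetical, not taken from the thread:

```sql
-- Minimal sketch, assuming a tab-delimited text file and a
-- hypothetical HDFS directory /user/hive/external/ext_tbl.
CREATE EXTERNAL TABLE ext_tbl (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/hive/external/ext_tbl';

-- Copy files into that directory out of band, e.g.:
--   hadoop fs -put data.tsv /user/hive/external/ext_tbl/
-- The rows are then immediately visible to queries:
SELECT * FROM ext_tbl LIMIT 10;
```

Because the table is external, dropping it later removes only the metadata, not the files.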
Hi,
Can we insert data into external Hive tables?
1) Create an external table:
create external table binary_tbl_local(byt TINYINT, bl boolean, it int, lng
BIGINT, flt float, dbl double, shrt SMALLINT, str string) row format serde
'org.apache.hadoop.hive.contrib.serde2.TypedBytesSerDe' stored a
Hi Viral and Bejoy.K.S,
I can see that both of you are suggesting Sqoop, and I will give it a try ☺.
Thanks,
Wei
From: Bejoy Ks [mailto:bejoy...@yahoo.com]
Sent: Saturday, March 10, 2012 2:11 AM
To: user@hive.apache.org; Lu, Wei
Subject: Re: Hive-645 is slow to insert query results to mysql
Hi Wei
Hi,
I recently tried the HIVE-645 feature to save query results directly to a MySQL
table. The feature can be found here:
https://issues.apache.org/jira/browse/HIVE-645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel.
The query I tried looks like this:
hive>CREATE TEMPORARY FUNCTIO
of
Step 1.
Regards
Bejoy.K.S
____
From: "Lu, Wei"
To: "user@hive.apache.org"
Sent: Wednesday, March 7, 2012 5:12 PM
Subject: Why are Move Operations after MapReduce sequential?
Hi,
For the query below, I find the five Move Operations (after MapReduce job) are
Hi,
For the query below, I find that the five Move Operations (after the MapReduce
job) are not executed in parallel.
from impressions2
insert OVERWRITE LOCAL DIRECTORY '/disk2/iis1' select * where
impressionid<'1239572996000'
insert OVERWRITE LOCAL DIRECTORY '/disk2/iis2' select * where
impressionid<'
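For reference, the multi-insert form used above, completed as a sketch. The second predicate is cut off in the archive, so the one shown here is only illustrative:

```sql
-- Multi-insert sketch: a single scan of impressions2 feeds both
-- outputs. Hive runs one MapReduce job, then issues a Move task per
-- INSERT to copy results into the local directories; it is these
-- final Move tasks that run one after another.
FROM impressions2
INSERT OVERWRITE LOCAL DIRECTORY '/disk2/iis1'
  SELECT * WHERE impressionid < '1239572996000'
INSERT OVERWRITE LOCAL DIRECTORY '/disk2/iis2'
  SELECT * WHERE impressionid >= '1239572996000';
```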
Sorry, query 1 should be:
create table tmp__imp as select requestbegintime, count(*) from impressions2
where requestbegintime<'1239572996000' group by requestbegintime;
-Original Message-
From: Lu, Wei [mailto:w...@microstrategy.com]
Sent: Wednesday, March 07, 2012 9:0
doesn't seem to be a group by clause there. Is that
the right query?
Mark
Mark Grover, Business Intelligence Analyst
OANDA Corporation
www: oanda.com www: fxtrade.com
"Best Trading Platform" - World Finance's Forex Awards 2009.
"The One to Watch" - Treasury Today
Hi,
I tried to do an aggregation based on table impressions2, and then needed to save
the results to two different local files (or tables).
I tried three methods; only the first one succeeded:
1) create a new table and store aggregation results to it, and then use
multi-insert to split the results t
processed in parallel.
On Sun, Mar 4, 2012 at 9:26 PM, Lu, Wei wrote:
> Hi,
>
>
>
> I need to load data directly from a ctrl-A delimited zipped file on the
> Linux box.
>
> Do I need to 1) un-zip the files and then load them into Hive tables, or 2) is
> there a d
Hi,
I need to load data directly from a ctrl-A delimited zipped file on the Linux
box.
Do I need to 1) un-zip the files and then load them into Hive tables, or 2) is
there a direct command that can load zipped data into a Hive table directly?
Thanks,
Wei
BTW, I am using Hive 0.7.
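On option 2: Hadoop's compression codecs let Hive read gzip-compressed text files directly, so no separate un-zip step should be needed. A sketch, with hypothetical paths and columns (ctrl-A, '\001', is Hive's default field delimiter):

```sql
-- Sketch only; the table layout and paths are made up for illustration.
CREATE TABLE imp_raw (id STRING, ts STRING, payload STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001';

-- LOAD DATA just moves the .gz file into the table's warehouse
-- directory; the compression codec decompresses it at read time.
LOAD DATA LOCAL INPATH '/data/impressions.gz' INTO TABLE imp_raw;

SELECT COUNT(*) FROM imp_raw;
```

One caveat worth knowing: gzip files are not splittable, so each .gz file is processed by a single mapper.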
From: Lu, Wei
Sent: Wednesday, January 11, 2012 3:13 PM
To: 'user@hive.apache.org'
Subject:
Hi,
I am using ThriftHive.Client to access a pretty large table.
SQL Statement:
select a11.asin asin,
max(a11.title) title,
a11
Hi,
I am using ThriftHive.Client to access a pretty large table.
SQL Statement:
select a11.asin asin,
max(a11.title) title,
a11.salesrank salesrank,
a11.category category,
avg(a11.avg_rating) WJXBFS1,
sum(a11.
Hi there,
The code highlighted below may have a problem: SERIALIZATION_FORMAT should be
FIELD_DELIM.
In file: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
public static SerDeParameters initSerdeParams(Configuration job,
    Properties tbl, String serdeName) throws SerDeException {