Hello Ravi,
When you wget this URL:
wget 'http://:9091/schema?name=ed=parquet&isMutated=true=ed=testing'
Do you get an .avsc file back?
Regards,
Jagat Singh
On Sat, 27 Jun 2020, 7:01 am ravi kanth, wrote:
> Just want to follow up on the below email.
>
> Thanks,
> Ravi
>
>
>
Hi,
Is it possible to do a bulk load using files into a Hive table backed by
transactions, instead of using UPDATE statements?
Thanks
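One common pattern (a sketch only; the table names, columns, and staging path below are illustrative, and it assumes a Hive version with ACID transactions enabled) is to lay an external staging table over the files and bulk-insert from it instead of issuing row-level updates:

```sql
-- Staging table over the raw files already sitting in HDFS.
CREATE EXTERNAL TABLE staging_events (id INT, payload STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/staging/events';

-- One bulk insert into the transactional (ACID) table.
INSERT INTO TABLE events_acid SELECT id, payload FROM staging_events;
```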
ther hand, there is an NPE, which might
> be the problem you're having. Have you tried your query with MapReduce?
>
> On Sun, Nov 1, 2015 at 5:32 PM, Jagat Singh <jagatsi...@gmail.com> wrote:
One interesting message here , *No plan file found: *
15/11/01 23:55:36 INFO exec.Utilities: No plan file found: hdfs://
Hi,
I am trying to run Hive on Spark on the HDP 2.3 virtual machine,
following the wiki:
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started
I have replaced all the occurrences of hdp.version with 2.3.0.0-2557.
I start Hive with the following:
set hive.execution.engine=spark;
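For reference, a minimal session setup per that wiki looks roughly like this (the property names are real Hive-on-Spark settings; the master and memory values are illustrative, not taken from the wiki verbatim):

```sql
set hive.execution.engine=spark;
set spark.master=yarn-client;
set spark.executor.memory=512m;
set spark.serializer=org.apache.spark.serializer.KryoSerializer;
```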
We are using Hive 0.14.
Our input file size is around 100 GB uncompressed.
We are inserting this data into a Hive table which is ORC based, with ZLIB
compression.
While inserting we are also using the following two parameters:
SET hive.exec.reducers.max=10;
SET mapred.reduce.tasks=5;
The output ORC file produced
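With those settings the insert writes at most 5 reducer output files; if small ORC files are the concern, Hive's merge settings (real options, the size value is illustrative) can compact them after the job:

```sql
SET hive.merge.mapfiles=true;            -- merge small files from map-only jobs
SET hive.merge.mapredfiles=true;         -- merge small files from map-reduce jobs
SET hive.merge.size.per.task=256000000;  -- target size per merged file
```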
Did you compute table column stats?
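If not, they can be computed like this (real Hive syntax; `myTable` is a placeholder):

```sql
ANALYZE TABLE myTable COMPUTE STATISTICS;
ANALYZE TABLE myTable COMPUTE STATISTICS FOR COLUMNS;
```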
On 30 May 2015 9:04 am, sreejesh s sreejesh...@yahoo.com wrote:
Hi,
I am new to Hive; please help me understand the benefit of the ORC file format
storing sum, min, and max values.
Whenever we try to find the sum of values in a particular column, it still
runs the
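For context, ORC's per-stripe min/max statistics are used to skip stripes during predicate evaluation, not to short-circuit a full-table aggregate; a sketch (the table and column names are hypothetical):

```sql
SET hive.optimize.index.filter=true;  -- enable ORC predicate pushdown
-- Stripes whose recorded min/max range excludes id = 100 are skipped
-- entirely, so only matching stripes are read and summed.
SELECT SUM(amount) FROM orders WHERE id = 100;
```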
Hi,
How do I monitor logs for Hive Tez jobs?
In the shell I can see the progress of the Hive job.
If I click the application master link on the RM I get the following error:
Thanks,
HTTP ERROR 500
Problem accessing /proxy/application_1431929898495_21650/. Reason:
Connection refused
Caused by:
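When the AM proxy page refuses connections, the aggregated logs can usually still be pulled from YARN directly (this assumes YARN log aggregation is enabled; the application id is the one from the error above):

```shell
yarn logs -applicationId application_1431929898495_21650
```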
Can you please share the command which you are trying to run?
Thanks
On Thu, Jul 10, 2014 at 10:32 AM, wenlong...@changhong.com
wenlong...@changhong.com wrote:
Hi guys,
Can anybody tell me why my Hive (0.12.0) cannot load data from HDFS
when the filename is the same?
But I can load
Hi,
Does the user running the query have rights to access the file?
Thanks
On Wed, Jul 2, 2014 at 1:51 PM, shouvanik.hal...@accenture.com wrote:
Hi,
I cannot add a jar to the Hive classpath.
Once I launch Hive, I type: ADD JAR hdfs://10.37.83.117
It's defined in build.properties.
You can try changing it there and rebuilding.
http://svn.apache.org/viewvc/hive/trunk/build.properties?revision=1521520&view=markup
On 21/09/2013 8:19 PM, wesley dias wesleyd...@outlook.com wrote:
Hello Everyone,
I am new to hive and I had a query related to building
Adding to Sanjay's reply:
The only thing left after Flume has added partitions is to tell the Hive
metastore to update the partition information, which you can do via the
ADD PARTITION command.
Then you can read the data via Hive straight away.
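For example (the table name, partition key, and location are illustrative):

```sql
ALTER TABLE flume_logs ADD IF NOT EXISTS
  PARTITION (dt='2013-09-14')
  LOCATION '/flume/logs/dt=2013-09-14';
```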
On Sat, Sep 14, 2013 at 10:00 AM, Sanjay Subramanian
Hi
You can use distributed cache and hive add file command
See here for example syntax
http://stackoverflow.com/questions/15429040/add-multiple-files-to-distributed-cache-in-hive
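A minimal sketch of the pattern (the file and script names are hypothetical):

```sql
-- ADD FILE ships the file to every task via the distributed cache;
-- the script can then open it by its bare name.
ADD FILE /tmp/lookup.txt;
ADD FILE /tmp/my_script.py;
SELECT TRANSFORM (col) USING 'python my_script.py' AS out FROM src;
```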
Regards,
Jagat
On Sat, Sep 14, 2013 at 9:57 AM, Stephen Boesch java...@gmail.com wrote:
We have a UDF that is
. We need to use Java APIs
2013/9/13 Jagat Singh jagatsi...@gmail.com
.
Thanks in advance for your help.
Regards,
Jagat Singh
On Fri, Mar 29, 2013 at 4:48 PM, Owen O'Malley omal...@apache.org wrote:
Actually, Hive already has the ability to have different schemas for
different partitions. (Although of course it would be nice to have the
alter table be more
Hello Nitin,
Thanks for sharing.
Do we have more details on
the versioned metadata feature of ORC? Is it like handling varying schemas
in Hive?
Regards,
Jagat Singh
On Fri, Mar 29, 2013 at 4:16 PM, Nitin Pawar nitinpawar...@gmail.com wrote:
Hi,
Here is a nice presentation from Owen
/property
Thanks,
Jagat Singh
On Mon, Mar 4, 2013 at 7:37 PM, Sai Sai saigr...@yahoo.in wrote:
When we run a query in Hive like:
Select * from myTable limit 10;
we get the results successfully, but the column names are not displayed.
Is it possible to display the column names also, so
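For reference, the CLI setting that controls this (a real Hive option):

```sql
SET hive.cli.print.header=true;
SELECT * FROM myTable LIMIT 10;  -- now prints the column names first
```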
Hi,
$ hive -e 'select * from myTable' > MyResultsFile.txt
Then you can use this file to import into Excel.
If you want to use Hue, it has functionality to export to Excel
directly.
Thanks,
Jagat Singh
On Mon, Mar 4, 2013 at 7:56 PM, Sai Sai saigr...@yahoo.in wrote:
Just wondering how
Hi,
There are many reporting tools which can read from Hive server.
All you need is to start Hive server and then point the tool to use it.
Pentaho, Talend, and iReport are a few.
Just search over here.
Thanks.
Jagat Singh
On Mon, Mar 4, 2013 at 7:58 PM, Sai Sai saigr...@yahoo.in wrote:
Just
.
--
*From:* Jagat Singh jagatsi...@gmail.com
*To:* user@hive.apache.org; Sai Sai saigr...@yahoo.in
*Sent:* Monday, 4 March 2013 1:01 AM
*Subject:* Re: hive light weight reporting tool
Hi,
There are many reporting tool which can read from Hive server.
All you
You might want to read this
https://cwiki.apache.org/Hive/languagemanual-auth.html
On Fri, Feb 22, 2013 at 9:44 PM, Sachin Sudarshana
sachin.sudarsh...@gmail.com wrote:
Hi,
I have just started learning about hive.
I have configured Hive to use mysql as the metastore instead of derby.
If
If all files are in the same partition then they satisfy the condition of
having the same value for the partition column.
You cannot do this with Hive, but you can have one intermediate table and then
move the required files using a glob pattern.
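A sketch of the file move (the paths and the glob pattern are hypothetical):

```shell
hadoop fs -mv '/staging/events/part-2013-01-*' '/warehouse/events/dt=2013-01-06/'
```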
---
Sent from Mobile , short and crisp.
On 07-Jan-2013 1:07 AM, Oded Poncz
,
Jagat Singh
On Thu, Dec 13, 2012 at 7:15 PM, Manish Malhotra
manish.hadoop.w...@gmail.com wrote:
Ideally, push the aggregated data to some RDBMS like MySQL and have a REST
API or some API to enable the UI to build reports or queries out of it.
If the use case is ad-hoc querying then once that query
Hi,
I had the same error a few days back.
The difficulty we have is to find which gz file is corrupt. It's not corrupt
as such, but somehow Hadoop says it is. If you made the file on Windows and
then transferred it to Hadoop, that can give this error. If you want to see which
file is corrupt, do select count
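A quick way to locate the offending file is to test each gzip stream locally before it lands in HDFS; a minimal sketch (the directory path is hypothetical):

```shell
# Sketch: print the name of every corrupt .gz file under a directory,
# using gzip -t, which verifies the stream without writing output.
check_gz() {
  for f in "$1"/*.gz; do
    [ -e "$f" ] || continue            # glob matched nothing
    gzip -t "$f" 2>/dev/null || echo "CORRUPT: $f"
  done
}

check_gz /data/incoming   # replace with your local staging directory
```

Any file that `gzip -t` rejects is likely the one Hadoop refuses; re-transfer it in binary mode.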
Hi Anurag,
How much space is there for the /user and /tmp directories on the client?
Did you check that part? Anything which might stop the move task from
finishing?
---
Sent from Mobile , short and crisp.
On 11-Aug-2012 1:37 PM, Anurag Tangri tangri.anu...@gmail.com wrote:
Hi,
We are facing this
From the code here
http://svn.apache.org/viewvc/hive/branches/branch-0.7/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFSum.java?view=markup
for float, double, and string the implementation points to the common function
GenericUDAFSumDouble()
if (parameters[0].getCategory() !=
In a similar use case which I worked on, the record timestamps were not
guaranteed to arrive in order. So we used Pig to do some processing
similar to what your custom code is doing, and after the records are in the
required order of timestamp we push them to Hive.
---
Sent from Mobile ,
Hi, can you do
#netstat -nl | grep 1
Hive is compatible with the 0.20.2 series, not with the 1.x series of Hadoop.
If you start the Hive server with Hadoop 0.20 it would work.
- Original Message -
From: ylyy-1985
Sent: 04/10/12 08:33 AM
To: user
Subject: cannot start the thrift server
hi