> For Hive's Text format it takes a few seconds, while Hive's ORC format
> takes a fraction of a second.
>
> Regards,
> Amey
>
--
Nitin Pawar
> I am running Hive 1.2.1
>
> regards
>
>
>
--
Nitin Pawar
nitinpathakala <nitinpathak...@gmail.com> wrote:
>
>> Hello,
>>
>> We have a requirement to load data from xml file to Hive tables.
>> The xml tags woud be the columns and values will be the data for those
>> columns.
>> Any pointers will be really helpful.
>>
>> Thanks,
>> Nitin
>>
>
>
--
Nitin Pawar
Surprisingly, when I did the following setting, it started working:
> set hive.auto.convert.join=true;
>
> Can you please help me understand what happened?
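What the setting does, in short: with auto conversion on, Hive replaces the
common (shuffle) join with a map-side join whenever the smaller input fits
under a size threshold, so the reduce-side join stage is skipped entirely. A
minimal sketch (the threshold property shown is standard Hive configuration;
the value is only illustrative):

```sql
SET hive.auto.convert.join=true;
-- a table smaller than this many bytes may be held in memory
-- and joined on the map side:
SET hive.mapjoin.smalltable.filesize=25000000;
```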
>
>
>
> Regards
> Sanjiv Singh
> Mob : +091 9990-447-339
>
> On Tue, Sep 22, 2015 at 11:41 AM, Nitin
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:570)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
>
>
> Regards
> Sanjiv Singh
> Mob : +091 9990-447-339
>
--
Nitin Pawar
>>> Can someone guide me on what this means?
>>>
>>>
>>> CREATE EXTERNAL TABLE IF NOT EXISTS test_table
>>> OK
>>> Time taken: 0.124 seconds
>>>
>>> MSCK REPAIR TABLE test_table
>>> OK
>>> Tables missing on filesystem: test_table
>>>
>>> Time taken: 0.691 seconds, Fetched: 1 row(s)
>>>
>>>
>>> Thanks,
>>> Ravi
>>>
>>>
>>
>
--
Nitin Pawar
Do you mean to say this data won't
be available to Hive until it is converted to Parquet and written to the Hive
location?
On Tue, Aug 25, 2015 at 11:53 AM, Nitin Pawar nitinpawar...@gmail.com
wrote:
Is it possible for you to write the data into a staging area, run a job
on that, and then convert it?
We will have JSON data every 15 minutes.
Can I provide multiple SerDes in Hive?
regards
Jeetendra
--
Nitin Pawar
events right?
On Tue, Aug 25, 2015 at 11:35 AM, Nitin Pawar nitinpawar...@gmail.com
wrote:
File format in Hive is a table-level property.
I am not sure why you would have data arrive at 15-minute intervals in your actual
table instead of a staging table where you do the conversion, or keep the raw file
any help guys ?
On Thu, Aug 13, 2015 at 2:52 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
Hi,
Right now Hive does not support an equality clause in sub-queries.
For example: select * from A where date = (select max(date) from B)
It does, though, support the IN clause:
select * from A where date
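A sketch of the rewrite this implies, using the same tables A and B as above:

```sql
-- Rejected by Hive at the time (scalar equality sub-query):
--   SELECT * FROM A WHERE date = (SELECT max(date) FROM B);
-- Accepted IN-based form:
SELECT * FROM A WHERE date IN (SELECT max(date) FROM B);
```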
Whenever you run queries where you don't
specify the partitions, Hive doesn't pre-compute which one to take,
so it will scan the whole table.
I would suggest computing the max date in a separate query.
On Thu, Aug 20, 2015 at 12:16 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
any
)
at
org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.makeValueWritable(ReduceSinkOperator.java:558)
at
org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.process(ReduceSinkOperator.java:383)
... 13 more
FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.mr.MapRedTask
Thanks,
Ravi
--
Nitin Pawar
into a table,
then this problem occurs.
Please find the answers inline.
Thanks,
Ravi
On Fri, Jul 31, 2015 at 12:34 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
Sorry, but I could not find the following info:
1) Are you using Tez as the execution engine? If yes, make sure it's not a
snapshot version.
Then why not just use the max function?
select max(a) from (select sum(a) as a, b from t group by b)n
On Fri, Jul 31, 2015 at 12:48 PM, Renuka Be renunalin...@gmail.com wrote:
Hi Nitin,
I am using hive query.
Regards,
Renuka N.
On Fri, Jul 31, 2015 at 2:42 AM, Nitin Pawar nitinpawar
When I call 'HiveResultSet.Max()' it throws an exception.
Error: At least one object must implement IComparable.
Is there any way to find Min, Max from the HiveResultSet?
Thanks,
Renuka N.
--
Nitin Pawar
Be renunalin...@gmail.com wrote:
I have used hive query to get column values that returns HiveResultSet. I
need to find Min and Max value in HiveResultSet in code level.
Is there any possibility? I am using C#.
-Renuka N
On Fri, Jul 31, 2015 at 3:29 AM, Nitin Pawar nitinpawar...@gmail.com
wrote
table any more; I just got the error line 1:1
character '' not supported here, whether on the Tez or MR engine.
How did you solve the problem in your case?
BR,
Patcharee
On 18. juli 2015 21:26, Nitin Pawar wrote:
Can you tell exactly what steps you did?
Also, did you try running the query
line 1:144 character '' not supported here
line 1:145 character '' not supported here
line 1:146 character '' not supported here
BR,
Patcharee
--
Nitin Pawar
, Jul 18, 2015 at 3:58 PM, patcharee patcharee.thong...@uni.no
wrote:
This select * from table limit 5; works, but not others. So?
Patcharee
On 18. juli 2015 12:08, Nitin Pawar wrote:
can you do select * from table limit 5;
On Sat, Jul 18, 2015 at 3:35 PM, patcharee patcharee.thong
The
problem happened just after I concatenated the table.
BR,
Patcharee
On 18/07/15 12:46, Nitin Pawar wrote:
select * without a where clause will work because it does not involve file
processing.
I suspect the problem is with the field delimiter, so I asked for records so
that we can see what the data in each looks like.
,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
--
Nitin Pawar
(Thread.java:745)
]
DAG failed due to vertex failure. failedVertices:1 killedVertices:0
FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.DDLTask
BR,
Patcharee
--
Nitin Pawar
following
result
[image: Inline image]
I am using Spark version 1.3.1 on Windows 8
Thanks in advance,
Vinod
--
Nitin Pawar
-- 1 root root 279781 Mar 31 20:26
/usr/hdp/current/hive-client/lib/commons-httpclient-3.0.1.jar
What am I missing ?
Thanks a lot for your help,
Kind regards,
Erwan
--
Nitin Pawar
on Tue Mar 31 16:26:33 EDT 2015
From source with checksum 1f34a1d4e566c3e801582862ed85ee93
Thanks for taking the time.
Kind regards,
Erwan
On Mon, Jun 29, 2015 at 3:44 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
By any chance, did you build Hive yourself?
On Mon, Jun 29, 2015 at 7:11 PM
...@gmail.com wrote:
Hi Nitin,
Digging up a bit I discovered that the error is probably on our end :
On Mon, Jun 29, 2015 at 3:54 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
I am using 2.2.4-2.2 but did not get any error.
Can you check which services are installed on the node where Hive
this scenario?
Regards
Ravisnkar
--
Nitin Pawar
Answering my own question:
either way, the file was available via the distributed cache.
It was a spelling mistake in my code; correcting it solved the
problem.
On Sun, May 17, 2015 at 2:46 AM, Nitin Pawar nitinpawar...@gmail.com
wrote:
Hi,
I am trying to access a lookup file from a udf
-7d8ed9a271a0_resources/tmp.txt
Question: how do I get the file at the same location every time (like option 1)?
Because with option 2 I keep getting the error that tmp.txt does not exist when
I initialize the UDF.
thanks
--
Nitin Pawar
clean up the Hue session from the server?
The query succeeds in the Hive command line.
On Fri, May 15, 2015 at 11:52 AM, Nitin Pawar nitinpawar...@gmail.com
wrote:
Is this happening for Hue?
If yes, maybe you can try cleaning up Hue sessions from the server. (This
may clean all users' active sessions from
on this,
Thanks you
Amit
--
Nitin Pawar
arising
from its use. If you have received this e-mail in error please notify
immediately the sender and delete the original email received, any
attachments and all copies from your system.
--
Nitin Pawar
of the author and do not
necessarily represent Jaywing. If you are not
the intended recipient, you must not forward or show this to anyone or
take any action based upon it.
Please contact the sender if you received this in error.
--
Nitin Pawar
with a wildcard character before
and after.
Looking forward to your response on this. How can it be handled/achieved
in Hive?
Regards
Sanjiv Singh
Mob : +091 9990-447-339
--
Nitin Pawar
about it.
Regards
Sanjiv Singh
Mob : +091 9990-447-339
On Fri, Mar 27, 2015 at 1:41 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
Hive does not manipulate data on its own; if your processing logic needs
the trimming of spaces then you can provide that in the query.
On Fri, Mar 27, 2015
!
--
Nitin Pawar
or directory
Command failed with exit code = 1
Query returned non-zero code: 1, cause: null
--
Nitin Pawar
, Nitin Pawar nitinpawar...@gmail.com wrote:
Just copy-pasting Jason's reply from the other thread:
If you have a recent version of Hive (0.13+), you could try registering
your UDF as a permanent UDF which was added in HIVE-6047:
1) Copy your JAR somewhere on HDFS, say
hdfs:///home/nirmal/udf/hiveUDF-1.0
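A sketch of the registration step that follows (Hive 0.13+; the function
name, class name, and JAR path below are placeholders, not the poster's
actual ones):

```sql
CREATE FUNCTION my_udf AS 'com.example.hive.MyUDF'
USING JAR 'hdfs:///path/to/hiveUDF.jar';
```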
Vishwa Vidyapeetham*
http://www.unmeshasreeveni.blogspot.in/
--
Nitin Pawar
What's your create table DDL?
On 24 Nov 2014 13:43, unmesha sreeveni unmeshab...@gmail.com wrote:
Hi
I am using Hive 0.14.0, which supports the UPDATE statement,
but I am getting an error when I run this command:
UPDATE Emp SET salary = 5 WHERE employeeid = 19;
FAILED: SemanticException
if you're able to dig it out! Thanks!
Best Regards
On Thu, Nov 6, 2014 at 7:48 AM, Jason Dere jd...@hortonworks.com wrote:
That would be great!
On Nov 5, 2014, at 10:49 PM, Nitin Pawar nitinpawar...@gmail.com wrote:
May be a JIRA ?
I remember having my own UDF for doing this. If possible I
shared this on github :
https://github.com/devopam/hadoopHA
apologies if there is any problem on github as I have limited familiarity
with it :(
regards
Devopam
On Wed, Nov 5, 2014 at 12:31 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
+1
If you can optionally add hadoop home
(now).
Based on all above, I don't see the reason the time gets shifted by one
hour, but I realise the issue might be down to the general problems in
Hive's implementation of timezones…
On Fri, Oct 31, 2014 at 12:26 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
In hive from_unixtime
with AcidOutputFormat? Can you send me
examples.
Thanks
Mahesh
On Tue, Nov 4, 2014 at 12:21 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
As the error says, your table's file format has to be an AcidOutputFormat, or the
table needs to be bucketed, to perform an update operation.
You may want to create a new
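A minimal sketch of a table that Hive 0.14 accepts UPDATE against (assumes
ACID support is enabled on the cluster; table and column names echo the Emp
example but are illustrative):

```sql
CREATE TABLE emp (employeeid INT, salary INT)
CLUSTERED BY (employeeid) INTO 4 BUCKETS
STORED AS ORC                               -- ORC supplies an AcidOutputFormat
TBLPROPERTIES ('transactional' = 'true');
```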
share your feedback/ fixes if you spot any.
--
Devopam Mittra
Life and Relations are not binary
--
Nitin Pawar
or is not bucketed.
When I update the table I get the above error.
Can you help me out, guys?
Thanks
Mahesh.S
--
Nitin Pawar
it to be 1970-01-01 01:00:00…?
--
Nitin Pawar
why
select from_unixtime(0) t0 FROM …
gives
1970-01-01 01:00:00
?
By all available definitions (epoch, from_unixtime etc..) I would expect
it to be 1970-01-01 00:00:00…?
--
Kind Regards
Maciek Kocon
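The behaviour asked about comes from from_unixtime() rendering the epoch in
the session's local time zone rather than UTC; a sketch, assuming a UTC+1
server zone:

```sql
SELECT from_unixtime(0);
-- 1970-01-01 01:00:00 when the server zone is UTC+1
SELECT to_utc_timestamp(from_unixtime(0), 'Europe/Berlin');
-- the same instant rendered as UTC: 1970-01-01 00:00:00
```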
--
Nitin Pawar
What's your table create DDL?
Is the data in a CSV-like format?
On 21 Oct 2014 00:26, Raj Hadoop hadoop...@yahoo.com wrote:
I am able to see the data in the table for all the columns when I issue
the following -
SELECT * FROM t1 WHERE dt1='2013-11-20'
But I am unable to see the column data
in advance.
Shiang
--
Shiang Luong
Software Engineer in Test | OpenX
888 East Walnut Street, 2nd Floor | Pasadena, CA 91101
o: +1 (626) 466-1141 x | m: +1 (626) 512-2165 | shiang.lu...@openx.com
OpenX ranked No. 7 in Forbes’ America’s Most Promising Companies
--
Nitin Pawar
I thought bucketing would
speed up the queries. What are my options?
Please let me know.
Regards,
Murali.
--
Nitin Pawar
.fk = T1.fk
AND T2.location != -1
ORDER BY T2.Record DESC
)
END
FROM #temp1 AS T1
Thank you for your help in advance!
--
Nitin Pawar
at 6:56 AM, Sreenath sreenaths1...@gmail.com wrote:
How about writing a Python UDF that takes input line by line,
saves the previous line's location, and replaces the current location with it
if the location turns out to be '-1'?
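A minimal sketch of the script Sreenath describes, as it might be used via
Hive's SELECT TRANSFORM (the tab-separated layout with location as the last
column is an assumption, not from the thread):

```python
def fill_locations(rows):
    """Carry the last valid location forward, replacing '-1' markers."""
    prev = None
    for row in rows:
        cols = row.rstrip("\n").split("\t")
        if cols[-1] == "-1":
            if prev is not None:
                cols[-1] = prev   # substitute the previous line's location
        else:
            prev = cols[-1]       # remember the last valid location
        yield "\t".join(cols)

# Under Hive the rows would arrive on stdin, e.g.:
#   SELECT TRANSFORM (pid, location)
#   USING 'python fill_locations.py' AS (pid, location) FROM t;
```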
On 15 September 2014 17:01, Nitin Pawar nitinpawar...@gmail.com wrote:
have
Thanks for correcting me Anusha,
Here are the links you gave me
https://cwiki.apache.org/confluence/display/Hive/HCatalog+Config+Properties
https://issues.apache.org/jira/secure/attachment/12622686/HIVE-6109.pdf
On Tue, Sep 9, 2014 at 5:16 PM, Nitin Pawar nitinpawar...@gmail.com wrote:
you
/invoice_details_hive_partitioned/INDIA/DELHI?
Thanks in Advance
--
Nitin Pawar
--
Nitin Pawar
Can you please specify what this means?
*From:* Nitin Pawar [mailto:nitinpawar...@gmail.com]
*Sent:* Thursday, September 04, 2014 4:00 PM
*To:* user@hive.apache.org
*Subject:* Re: Hive columns
If those are text files you can create the table with a single column and
then process them.
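A sketch of that approach (the table name and location are placeholders):

```sql
-- whole line as a single column; parse fields later with split()/regexps:
CREATE EXTERNAL TABLE raw_text (line STRING)
LOCATION '/data/incoming/';

SELECT split(line, ',')[0] AS first_field FROM raw_text;
```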
report using Hive, with data pulled
from MySQL using the prototype tool. My data will be around 2 GB/day.
*Regards Muthupandi.K*
[image: Picture (Device Independent Bitmap)]
--
Nitin Pawar
know about writing UDFs in Hive but have no idea about creating a user-defined
data type in Hive. Any idea and example on the same would be of
great help.
Thanks.
--
Nitin Pawar
table is partitioned by date. I want to run a
query every day to find the top 10 products in the last 15 days.
How do I pass a list of dates dynamically as arguments in a Hive query using
hiveconf?
--
Nitin Pawar
the dates within START DATE and END DATE, so that my
query looks something like this:
Select a, b, c from table_x where date in (${hiveconf:LIST_OF DATES})
On 24 August 2014 01:18, Nitin Pawar nitinpawar...@gmail.com wrote:
with your shell script calculate your start date and end date
hive
, 2014 at 12:57 AM, Nitin Pawar nitinpawar...@gmail.com
wrote:
I am not sure if you can pass an array from shell to Java; you may want
to write your own custom UDF for that.
If these are continuous dates, then you can use less-than / greater-than
comparisons.
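A sketch of the range form with two hiveconf variables instead of a list
(variable names are illustrative):

```sql
-- invoked as:
--   hive -hiveconf START_DT=2014-08-10 -hiveconf END_DT=2014-08-24 -f q.hql
SELECT a, b, c
FROM table_x
WHERE date >= '${hiveconf:START_DT}'
  AND date <= '${hiveconf:END_DT}';
```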
On Sun, Aug 24, 2014 at 12:39 PM
wrote:
Hi,
I am passing substitution variables using hiveconf in Hive,
but I couldn't execute simple queries when trying to pass more than
one parameter. It throws NoViableAltException - AtomExpression. Am I
missing something?
--
Nitin Pawar
On Tuesday 19 August 2014 02:33 PM, Nitin Pawar wrote:
can you give an example of your dataset?
On Tue, Aug 19, 2014 at 2:31 PM, Sushant Prusty sushan...@gmx.com wrote:
Please let me know how I can load a CSV file with embedded map and array
data into Hive.
Regards,
Sushant
--
Nitin Pawar
are you talking about the tables in a map-join being loaded into the
distributed cache?
On Wed, Aug 13, 2014 at 6:01 PM, harish tangella harish.tange...@gmail.com
wrote:
Hi all,
Requesting your help:
what are cache tables in Hive?
Regards
Harish
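To make that concrete, a sketch (table names are hypothetical): in a
map-join the small side is serialized and shipped to every mapper through the
distributed cache, so no reduce-side join runs.

```sql
SET hive.auto.convert.join=true;    -- let Hive pick the small side itself
-- or force it with the older hint syntax:
SELECT /*+ MAPJOIN(d) */ f.id, d.name
FROM fact f JOIN dim d ON (f.dim_id = d.id);
```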
--
Nitin Pawar
--
Nitin Pawar
then you
can use them as well
On Tue, Aug 12, 2014 at 5:58 PM, CHEBARO Abdallah
abdallah.cheb...@murex.com wrote:
Yes, I mean the data is on an HDFS-like filesystem.
*From:* Nitin Pawar [mailto:nitinpawar...@gmail.com]
*Sent:* Tuesday, August 12, 2014 3:26 PM
*To:* user@hive.apache.org
the Cloudera documentation
http://www.cloudera.com/content/cloudera-content/cloudera-docs/Impala/latest/Installing-and-Using-Impala/ciiu_perf_hdfs_caching.html
Thanks
uli
--
Nitin Pawar
is a partitioned table in hive, and
return the result to the shell.
How can I do this?
--
Nitin Pawar
--
Nitin Pawar
--
Nitin Pawar
divided into 5 columns (col1, col2, col3, col4,
col5).
So I can’t load directly col1, col3 and col5?
If I can’t do it directly, can you provide me with an alternate solution?
Thank you.
*From:* Nitin Pawar [mailto:nitinpawar...@gmail.com]
*Sent:* Wednesday, July 30, 2014 11:37 AM
Sorry, hit send too soon.
I meant: without creating intermediate tables, in Hive you can process the
file directly.
On Wed, Jul 30, 2014 at 3:06 PM, Nitin Pawar nitinpawar...@gmail.com
wrote:
With hive, without creating a table with full data, you can do
intermediate processing like select only
...@murex.com wrote:
“With hive, without creating a table with full data, you can do
intermediate processing like select only few columns and write into another
table”. How can I do this process?
Thank you a lot!
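A sketch of that flow for a 5-column file (all names are placeholders):

```sql
CREATE EXTERNAL TABLE raw5 (col1 STRING, col2 STRING, col3 STRING,
                            col4 STRING, col5 STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/raw5/';

-- persist only the columns of interest:
CREATE TABLE slim AS
SELECT col1, col3, col5 FROM raw5;
```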
*From:* Nitin Pawar [mailto:nitinpawar...@gmail.com]
*Sent:* Wednesday, July 30, 2014
--
Nitin Pawar
present in the row - which matches the
ArrayList size.
What could be going wrong here?
Thanks
Suma
--
Nitin Pawar
Do you want to know how a UDTF is initialized, or how to build a UDTF?
On Tue, Jul 29, 2014 at 1:30 AM, Doug Christie doug.chris...@sas.com
wrote:
Can anyone point me to the source code in hive where the calls to
initialize, process and forward in a UDTF are made? Thanks.
Doug
--
Nitin
You can try with a LIKE statement.
On 21 Jul 2014 19:32, fab wol darkwoll...@gmail.com wrote:
Hi everyone,
I have the following problem: I have a partitioned managed table (the partition
column is a string which represents a date, e.g. log-date=2014-07-15).
Unfortunately there is one partition in
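A sketch of the LIKE approach against that string partition column (the
column name is inferred from the example value; backticks because of the
hyphen):

```sql
-- match all July 2014 partitions by string prefix:
SELECT * FROM managed_tbl
WHERE `log-date` LIKE '2014-07-%';
```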
(partition by p_mfgr order by p_name)*?
Thanks,
Eric
--
Nitin Pawar
'siplogs_partitioned' 'PARTITION' 'str_date' in alter table partition
statement
Is the “ALTER TABLE” usage correct for renaming the partition column names?
Any pointer or help is appreciated.
Thanks,
Manish
--
Nitin Pawar
org.apache.hadoop.hive.metastore.HiveMetaStoreClient
This is related to hive metastore only.
Can anyone please help me out with this.
Thanks,
Rishabh
--
Nitin Pawar
Hi,
can someone add me to hive wiki editors?
My userid is : nitinpawar432
--
Nitin Pawar
jspilt
- I'm lost in too many documents, howtos, etc. and could use some
advice...
Thank you in advance!
Best, Chris
--
Nitin Pawar
#LanguageManualVariableSubstitution-SubstitutionDuringQueryConstruction
.
Please check my wording and let me know if revisions are needed.
-- Lefty
On Fri, Jun 20, 2014 at 5:17 AM, Nitin Pawar nitinpawar...@gmail.com
wrote:
Hive variables are not replaced in MapReduce jobs; they are substituted when the
query is constructed.
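A sketch of what substitution during query construction means (table,
column, and variable names are illustrative):

```sql
SET hivevar:cutoff=100;
SELECT * FROM t WHERE amount > ${hivevar:cutoff};
-- the text is expanded before compilation, so the statement the
-- MapReduce job is built from is: SELECT * FROM t WHERE amount > 100
```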
--
Nitin Pawar
)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
FAILED: ParseException line 1:19 mismatched input 'EOF' expecting FROM
near 'CURRENT_TIME' in from clause
--
Nitin Pawar
...@bateswhite.com
wrote:
hi all,
how do I write the following query to insert a note with a current system
timestamp?
I tried the following:
INSERT INTO TEST_LOG VALUES (unix_timestamp(),'THIS IS A TEST.');
thanks, Clay
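The parse errors above stem from Hive's VALUES clause accepting only
constants; a hedged workaround is INSERT ... SELECT (the one-row source
table here is a placeholder, and whether SELECT needs a FROM clause varies
by Hive version):

```sql
INSERT INTO TABLE TEST_LOG
SELECT unix_timestamp(), 'THIS IS A TEST.'
FROM dummy LIMIT 1;   -- dummy: any existing table with at least one row
```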
--
Nitin Pawar
--
Nitin Pawar
to do a
group by on subcatg,pid.
I am not able to find a solution to my problem in hive. Any help would be
much appreciated.
Regards
Mohit
--
Nitin Pawar
(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
FAILED: Execution Error, return code -101 from
org.apache.hadoop.hive.ql.exec.FunctionTask
Thanks,
Rishabh.
--
Nitin Pawar
(git link:
https://github.com/rathboma/hive-extension-examples/blob/master/src/main/java/com/matthewrathbone/example/SimpleUDFExample.java
).
On Wednesday, 9 April 2014 12:08 PM, Nitin Pawar
nitinpawar...@gmail.com wrote:
Can you put the first few lines of your code here, or upload the code?
The file is in the hiveudfs.jar file.
On Wednesday, 9 April 2014 12:20 PM, Nitin Pawar
nitinpawar...@gmail.com wrote:
in your code, and that code package is missing.
What you need to do is define a package, something like:
package org.apache.hadoop.hive.ql.udf;
then add your function definition.
keeping the
remaining rows intact in Hive table using Hive INSERT OVERWRITE. There is
no partition in the Hive table.
INSERT OVERWRITE TABLE tablename SELECT col1,col2,col3 from tabx where
col2='abc';
Does the above work ? Please advise.
--
Nitin Pawar
.
__
www.accenture.com
--
Nitin Pawar
My understanding is that Pig and Hive internally use Hadoop. So is
there a way I can just install a bare-minimum Hive or Pig and take advantage
of the already-installed Hadoop, or do I need to separately install and configure
complete Hive and Pig?
Thanks,
-Rahul Singh
--
Nitin Pawar
before the time runs out ...
Cheers
Wolli
--
Nitin Pawar