On Thu, Dec 1, 2016 at 3:36 AM, Furcy Pin wrote:
> Hi,
>
> you should replace
>
> WITH table AS (subquery)
> SELECT ...
> FROM table
>
> with
>
> SELECT ...
> FROM (
> subquery
> ) table
>
> Regards.
>
> On Thu, Dec 1, 2016 at 12:32 PM, Priyanka Raghuvanshi <
> priyan...@winjit.com>
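For instance, assuming a hypothetical table `logs` with a column `dt` (both names are illustrative, not from the thread), the suggested rewrite looks like:

```sql
-- Instead of the CTE form (not supported on older Hive versions):
--   WITH recent AS (SELECT * FROM logs WHERE dt = '2016-12-01')
--   SELECT count(*) FROM recent;
-- inline the subquery in the FROM clause:
SELECT count(*)
FROM (
  SELECT * FROM logs WHERE dt = '2016-12-01'
) recent;
```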
On Fri, Sep 23, 2016 at 1:24 PM, Manish R
wrote:
Yes Sekine, I am talking about AWS ELB logs in the Mumbai region. Let me try
implementing what Andres suggested; I am also on the verge of implementing
another solution. I will let you all know once any of the
solutions works.
On Sep 23, 2016 1:11 PM, "Sékine Coulibaly" wrote:
apache.org/confluence/display/Hive/
> LanguageManual+UDF#LanguageManualUDF-TypeConversionFunctions
>
> Use from_utc_timestamp to convert request_date to timestamp to timezone
> specified in Table 2 (join two tables using aid column)
>
> Regards,
>
> Andres Koitmäe
>
> On 22
Hi Guys,
There is a scenario here that I am trying to implement
I have a table say table1 which contains aid and request_date in ISO 8601
format. I have one more table say table2 which contains aid and timezone
details. Now I want to convert request_date from table1 to UTC and apply
the timezone
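Putting Andres's suggestion together with the scenario above, a minimal hedged sketch might be (assuming table1(aid, request_date) and table2(aid, timezone), with request_date already in a form from_utc_timestamp accepts):

```sql
SELECT t1.aid,
       from_utc_timestamp(t1.request_date, t2.timezone) AS local_ts
FROM table1 t1
JOIN table2 t2
  ON t1.aid = t2.aid;
```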
Thanks Nichole and Dudu for the reply. I got what I was looking for.
--Manish
On Thu, Sep 22, 2016 at 2:52 AM, Markovitz, Dudu
wrote:
> select to_date(ts),year(ts),month(ts),day(ts),hour(ts),minute(ts),second(ts)
> from (select from_unixtime (unix_timestamp
> ('2016-09-15T23
Guys,
I am trying to extract date, time, month, minute, etc. from the timestamp
format below but did not find any function for this. Can anyone help me
extract the details?
2016-09-15T23:45:22.943762Z
2016-09-15T23:45:22.948829Z
--Manish
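In the spirit of Dudu's (truncated) reply above, one hedged sketch for pulling the parts out of such a string is the following; the format pattern is an assumption, relying on the parser ignoring the trailing fractional seconds:

```sql
SELECT to_date(ts), year(ts), month(ts), day(ts),
       hour(ts), minute(ts), second(ts)
FROM (
  SELECT from_unixtime(unix_timestamp(
           '2016-09-15T23:45:22.943762Z',
           "yyyy-MM-dd'T'HH:mm:ss")) AS ts
) t;
```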
Thanks Dudu, both queries worked like a charm. I personally liked the
second query as it is quite easy to remember.
--Manish
On Tue, Sep 20, 2016 at 8:41 PM, Markovitz, Dudu
wrote:
> Or
>
>
>
> create view elb_raw_log_detailed
>
> as
>
> select request_date, elbn
, parse_url_tuple(url,
'QUERY:did') as did, protocol, useragent, ssl_cipher, ssl_protocol from
elblogz;
*FAILED: SemanticException [Error 10081]: UDTF's are not supported outside
the SELECT clause, nor nested in expressions*
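Since the UDTF cannot be nested in the select list like that, one hedged rework (assuming elblogz actually has the columns shown) is to move parse_url_tuple into a LATERAL VIEW inside the view definition:

```sql
CREATE VIEW elb_raw_log_detailed AS
SELECT l.request_date, l.elbn, b.did,
       l.protocol, l.useragent, l.ssl_cipher, l.ssl_protocol
FROM elblogz l
LATERAL VIEW parse_url_tuple(l.url, 'QUERY:did') b AS did;
```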
On Tue, Sep 20, 2016 at 3:56 PM, Manish Rangari <
linuxt
Yes, views look like the way to go.
On Tue, Sep 20, 2016 at 3:49 PM, Damien Carol
wrote:
> The royal way to do that is a view IMHO.
>
> 2016-09-20 12:14 GMT+02:00 Manish Rangari
> :
>
>> Thanks for the reply Damien. The suggestion you gave is really useful.
>> Currentl
rotocol from elblog;
On Tue, Sep 20, 2016 at 3:12 PM, Damien Carol
wrote:
> see the udf
> *parse_url_tuple*
> SELECT b.*
> FROM src LATERAL VIEW parse_url_tuple(fullurl, 'HOST', 'PATH', 'QUERY',
> 'QUERY:id') b as host, path, query, query_id LIMIT 1;
>
Guys,
I want to get the fields of the ELB logs. A sample ELB log is given below and I
am using the create table definition below. It is working fine and I am getting
what I wanted, but now I want the bold part as well, for example eid, tid,
aid. Can anyone help me match them as well?
NOTE: The posit
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
Regards,
Manish
On Tue, Jan 20, 2015 at 5:58 PM, Manish Malhotra <
manish.hadoop.w...@gmail.com> wrote:
> Hi All,
>
> I'm using Hive Thrift Server in Pro
dy faced the similar issue.
I found one related JIRA :https://issues.apache.org/jira/browse/HCATALOG-541
But that JIRA reports an OOM error in the Hive Thrift Server, whereas in my
case I did not see any OOM error.
Regards,
Manish
Full Exception Stack:
at
org.apache.thrift.pr
Hi All,
When we submit a Hive query, essentially an MR job runs. Is there a way to
access the JAR file that Hive creates for us?
thanks,
Manish
LED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask. str_date not found in table's partition
spec: {pcol1=str_hour, pcol2=str_date}
Am I missing something here?
Thanks,
Manish
From: D K [mailto:deepe...@gmail.com]
Sent: Wednesday, July 09, 2014 12:38 PM
To: user@hive
usage correct to rename the partitioned column names?
Any pointer or help is appreciated.
Thanks,
Manish
are running query using nohup hive -q or from multiple
machines or Oozie or Your custom code.
It boils down to this: your system/code is not submitting queries in sequence or
not waiting, and your cluster has enough resources to run MR jobs in parallel.
Regards,
Manish
On Sun, Apr 27, 2014 at 1:58 PM, Swagatik
Thanks Brad. It worked great. I am not sure what auth I am using on the Horton
image. I have just set it up and started executing Python programs via ssh
from the host.
Thanks,
Manish
On Sat, Apr 26, 2014 at 11:39 AM, Brad Ruderman wrote:
> I am guessing you are missing the plain kerb plu
ception: Could not start SASL:
Error in sasl_client_start (-4) SASL(-4): no mechanism available: No worthy
mechs found
I have checked and rechecked that SASL is installed.
Any ideas why this happens?
Thanks,
Manish
to run the current query.
or else exit the workflow...
I read about the conditional functions [1], but I am not sure how to apply
them to my case above.
Any ideas?
cheers /Manish
[1]
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-ConditionalFunctions
I am getting the below error when connecting to the Hive shell. I thought it
was because of a log directory issue, but even after fixing the directory
permissions the error still exists.
manish@localhost:/tmp/manish$ hive
Logging initialized using configuration in
file:/etc/hive/conf.dist/hive-log4j.properties
ail.com
To: user@hive.apache.org
Modify the query to:
select totalcount / sum(totalcount) from daily_count_per_kg_domain
where timestamp_dt = '20140219' group by timestamp_dt;
If you don't specify the where clause, you will get results for all
partitions.
On Tue, Feb 25, 2014 a
I have a partitioned table on timestamp_dt:
> desc daily_count_per_kg_domain;
OK
ddnamesyskg     string
totalcount      int
timestamp_dt    string
hive> select * from daily_count_per_kg_domain;
OK
sys_kg_band     224     20140219
sys_kg_event    3435    20140219
sys_kg_movies   44987   20140219
ounds like there is a limit (upper bound) on number of records for M-R jobs?
I am pretty new to grid technologies and haven't yet come across such limits.
/Manish
On 2/13/14, 9:20 AM, John Meagher wrote:
Try changing ArrayList to just List as the argument to the evaluate function.
+1 worked thanks!
ean evaluate(Text payload, String pType, String type) {
…
}
How can I pass Hive Array as an argument to a simple UDF?
/Manish
You,
Manish.
On Wednesday 11 December 2013 04:51 AM, Adam Kawa wrote:
$ hadoop version
Sent from Rocket Mail via Android
d types everything is same but here it says
> some difference. The same query works fine without having any partitions in
> all the three tables but getting error while executing with partitions.
>
>
> please help.
>
>
>
> Thanks
> Manickam P
>
--
Regards
Manish Dunani
Contact No: +91 9408329137
Skype id: manish.dunani
quot;;
load data local inpath '/path/to/ur/file' into table dynamictable;
create table part1(a1 int,a2 string) partitioned by(a3 string,a4 string);
insert overwrite table part1 partition(a3,a4) select a.a1,a.a2 from
dynamictable a;
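A hedged aside: on many Hive versions, a dynamic-partition insert like the one above also needs these session settings first (defaults vary by version):

```sql
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
```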
I hope it is helpful to you.
Regards
Manish
On F
DR.Country
> from EMP_MASTER MAST, EMP_ADDRESS ADDR
> where
> MAST.row_create_date = (select max(row_create_date) from EMP_MASTER where
> Emp_Id = MAST.Emp_Id)
> and ADDR.row_create_date = (select max(row_create_date) from EMP_ADDRESS
> where Emp_Id = ADDR.Emp_Id)
>
>
> regards,
> Rams
>
--
MANISH DUNANI
-THANX
+91 9426881954,+91 8460656443
manishd...@gmail.com
I think these directories belong to task tracker temporary storage. I am not
confident enough to say you should go ahead with your cleanup, so wait for a
similar or an expert's response.
Sent from HTC via Rocket! excuse typo.
When you are using the CLI library ... it internally uses ZK or the configured /
supported locking service, so no extra effort is required to do that.
There is, though, a patch for a HiveServer ZooKeeper leak (HIVE-3723), which
people are trying on 0.9 and 0.10.
Regards,
Manish
On Thu, Jan 10, 2013 at 11:23
If there is effort left over (I can see a few tasks are unassigned), can
anybody give a rough idea of how much time it would take to implement these
features, so that if possible I can confirm with my company about
contributing to them?
Thanks for your time and help !!
Regards,
Manish
I, does it report compilation problems synchronously, instead of scheduling a
query that is not correct?
Regards,
Manish
On Sun, Jan 6, 2013 at 10:45 AM, Manish Malhotra <
manish.hadoop.w...@gmail.com> wrote:
> Thanks Edward for explaining
> Im also very much interested in buil
Thanks Edward for explaining.
I'm also very much interested in building a robust tool for bringing Hive
further into the enterprise world, where any data analyst / ETL developer can
use it.
Regards,
Manish
On Sun, Jan 6, 2013 at 9:10 AM, Edward Capriolo wrote:
> The hive code is apache licensed.
merge this or add to Apache Hive codebase.
3. Does it have a synchronous mode of running queries, or only a scheduling /
async way?
Thanks for your reply and time,
Regards,
Manish
On Fri, Jan 4, 2013 at 9:34 PM, Qiang Wang wrote:
> Hi Manish:
>
> Glad to receive your email because we a
useful with different features.
Thanks for your time and help !!
Regards,
Manish
of
at least one level of aggregated / grouped data, to reduce the data to be
queried.
There are other differences between Hive and an RDBMS that are obvious and
well known.
So, please see the points above, take your decision, and design the
system.
Hopefully this will help.
Regards,
Manish
Looks like https://issues.apache.org/jira/browse/HCATALOG-541 is also
related, though that issue seems to occur when dealing with a large number of
partitions.
Regards,
Manish
On Wed, Dec 12, 2012 at 5:59 PM, Shreepadma Venugopalan <
shreepa...@cloudera.com> wrote:
>
>
>
> On Tue,
HDFS directly.
For this you can use WebHDFS, or build your own service that internally uses
the FileSystem API.
Regards,
Manish
On Wed, Dec 12, 2012 at 11:30 PM, Nitin Pawar wrote:
> Hive takes a longer time to respond to queries as the data gets larger.
>
> Best way to handle this is you process the dat
en you
can restore them from data by RECOVER PARTITIONS (if using Amazon EMR) or
an analog check command for a regular distro of Hadoop (I don't remember
what the name is).
MM: Don't want to go the EMR route; will check the Hadoop/Hive-based way of
doing it.
Cheers,
Manish
Sending again, as I got no response.
Can somebody from Hive dev group please review my approach and reply?
Cheers,
Manish
On Thu, Dec 6, 2012 at 11:17 PM, Manish Malhotra <
manish.hadoop.w...@gmail.com> wrote:
> Hi,
>
> I'm building / designing a back-up and restore to
ing features.
Thanks for your time and effort.
Regards,
Manish
http://hive.apache.org/docs/r0.7.1/api/org/apache/hadoop/hive/cli/CliDriver.html
(this is a link to a slightly older version)
Most probably you need to build a server in between your client and the Hive
MetaStore.
Please note this is my understanding; correct me if I missed
something.
Regards,
Manish
e for review but not
available as one that is ready to use. Or can I try out that patch
when using the secured one?
Again, thanks for your help!
Regards,
Manish
On Thu, Nov 8, 2012 at 9:39 AM, Ashutosh Chauhan wrote:
> Hi Manish,
>
> Your understanding is mostly correct, thoug
understanding is correct or not on the above 2 blogs / mail.
Is there any initial work done under HCatalog or Hive, which I can look
into and extend / patch.
Regards,
Manish
ld
is altogether different from the way you define it in the Hadoop ecosystem.
I would reiterate: first find out what data you need to move to the Hadoop
ecosystem, and for what reason. Believe me, MySQL can give you a far quicker
response. Don't use Hadoop to replace your MySQL data warehouse.
Thank You,
Thanks Bejoy. I have a zip file, so it makes sense to convert it to gzip.
Chuck, I got what you are trying to say: I need to process it outside
HDFS and bring the text file into HDFS.
On Sun, 2012-09-30 at 18:21 +0530, Bejoy KS wrote:
> Hi Manish
>
> Gzip works well if you
are not giving it a text file.
Chuck
________
From: Manish [manishbh...@rocketmail.com]
Sent: Sunday, September 30, 2012 7:38 AM
To: Savant, Keshav
Cc: user@hive.apache.org
Subject: RE: zip file or tar file cosumption
I am getting below error when loading zip file
I am getting below error when loading zip file
Driver returned: 9. Errors: Hive history
file=/tmp/hue/hive_job_log_hue_201209300434_1768401171.txt
Loading data to table default.pageview_zip
Failed with exception Error moving:
hdfs://localhost:54310/user/manish/input/zip/11sep12.zip into
Hi Sadu,
See my answer below.
Also, this will help you understand collections, Map, and Array in detail.
http://datumengineering.wordpress.com/2012/09/27/agility-in-hive-map-array-score-for-hive/
From: Sadananda Hegde [mailto:saduhe...@gmail.com]
Sent: Friday, September 28, 2012 10:31 AM
Thanks Savant. I believe this will hold good for .zip files also.
Thank You,
Manish.
From: Savant, Keshav [mailto:keshav.c.sav...@fisglobal.com]
Sent: Thursday, September 27, 2012 10:19 AM
To: user@hive.apache.org; manishbh...@rocketmail.com
Subject: RE: zip file or tar file cosumption
Manish
don't store your metadata
in Derby. We faced issues when Hive was storing metadata in Derby.
Thank You,
Manish.
From: Chalcy Raja [mailto:chalcy.r...@careerbuilder.com]
Sent: Wednesday, September 26, 2012 9:26 PM
To: user@hive.apache.org
Subject: hive server security/authentication
Using h
Hi Richin,
Thanks! Yes, this is what I wanted to understand: how to load a zip file into a
Hive table. Now I'll try this option.
Thank You,
Manish.
Sent from my BlackBerry, pls excuse typo
-Original Message-
From:
Date: Wed, 26 Sep 2012 14:51:39
To:
Reply-To: user@hive.apach
t to understand whether it would be possible to use zip/tar files
directly in Hive. All the files have a similar schema (structure). Say 50 *.txt
files are zipped into a single zip file: can we load data directly from this zip
file, or do we need to unzip first?
Thanks & Regards
Mani
As I mentioned, you can copy the jar to your hadoop cluster at /usr/lib/hive/lib
and then use it directly in HiveQL.
Thank You,
Manish.
-Original Message-
From: Manu A
Date: Wed, 26 Sep 2012 15:01:14
To:
Reply-To: user@hive.apache.org
Subject: Re: Custom
& Regards
Manish Bhoge | Technical Architect * Target DW/BI| * +919379850010 (M) Ext:
5691 VOIP: 22165 | * "Excellence is not a skill, It is an attitude."
MySite<http://mysites.target.com/personal/z063783>
reflection. That is something new for me. You
can give reflection a try.
Thank You,
Manish
From: Tamil A [mailto:4tamil...@gmail.com]
Sent: Tuesday, September 25, 2012 6:48 PM
To: user@hive.apache.org
Subject: Re: Custom MR scripts using java in Hive
Hi Manish,
Thanks for your help. I did the same
Manu,
If you have written a UDF in Java for Hive, then you need to copy your JAR into
the /usr/lib/hive/lib/ folder on your Hadoop cluster for Hive to use it.
Thank You,
Manish
From: Manu A [mailto:hadoophi...@gmail.com]
Sent: Tuesday, September 25, 2012 3:44 PM
To: user@hive.apache.org
Subject
Sarath,
Is this the external table where you ran the query? How did you load the
table? It looks like the error is about a file related to the table rather than
the CDH JAR.
Thank You,
Manish
From: Sarath [mailto:sarathchandra.jos...@algofusiontech.com]
Sent: Tuesday, September 25, 2012 3
Hey Bejoy! Thanks a ton.
Things are so easy in Hive :)
This is how my SQL looks after defining a Map (associative array).
select pv.c_14["+UserType"] from page_view_tmp_2 pv where
pv.c_14["+LastLogin"] IS NOT NULL
Thanks Again,
Manish.
On Fri, 2012-09-21 at 00:45 -
sure whether I should use both array and map together or not.
Let me try this out and I will let you know the result. Thanks for the
clarifications.
Thank You,
Manish
From: Bejoy KS [mailto:bejoy...@yahoo.com]
Sent: Friday, September 21, 2012 1:16 PM
To: Manish.Bhoge; user@hive.apache.org; user
Subject: Re
,
Manish.
-Original Message-
From: Bejoy KS [mailto:bejoy...@yahoo.com]
Sent: Friday, September 21, 2012 10:50 AM
To: user@hive.apache.org; user
Subject: Re: Map issue in Hive.
Hi Manish
Couple of things to keep in mind here
if you have a column data like this "key1:value1;key2:v
Hivers,
I have a web log which I need to load into a single table, but one column
has a complete string of important data. I want to extract the full
information from that one column and do further analysis.
The issue is that after giving ';' as a delimiter I was expecting a Map
for all occ
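For reference, a hedged sketch of a table definition that reads such a ';'- and ':'-delimited column into a MAP; the table and column names are borrowed from elsewhere in the thread, and the field delimiter is an assumption:

```sql
CREATE TABLE page_view_tmp_2 (
  c_14 MAP<STRING, STRING>
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  COLLECTION ITEMS TERMINATED BY ';'
  MAP KEYS TERMINATED BY ':';
```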
Thanks. Got the solution: I was copying my jar into the wrong folder. I should
have copied it into the /usr/lib/hive/lib folder.
Thank You,
Manish
On Thu, 2012-09-20 at 15:15 +, Connell, Chuck wrote:
> You might try adding “ --auxpath /path/to/jar/dir “ to the Hive
> command line.
>
>
&
ilerecordreader'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.IgnoreKeyTextOutputFormat';
Error :
FAILED: Error in metadata: Class not found:
com.tgam.hadoop.mapreduce.inputfilerecordreader
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask
Please help on this error.
Thank You,
Manish.
and run some data profiling to identify
how my final table should look, i.e., what the datatypes in the final
table would be. Does this make sense?
Thanks,
Manish
From: Robin Verlangen [mailto:ro...@us2.nl]
Sent: Monday, September 17, 2012 3:17 PM
To: user@hive.apache.org
Subject: Problem importing
Hi Suman,
I think you need another directory under hive named test. Copy the
data into s3://com.x/hive/test/
Thank You,
Manish.
On Fri, 2012-08-24 at 20:43 +, suman.adda...@sanofipasteur.com
wrote:
> Hi,
>
> I have setup a Hadoop cluster on Amazon EC2 with my data stored o
xact reason for
this error. In the past, though, the same error was occurring but the data was
getting copied.
manish@manish:~$ hadoop fs
-copyFromLocal /home/manish/airteldata/datadump.txt
hdfs://localhost:54310/usr/manish/input/data.csv
log4j:ERROR Could not find value for key log4j.appender.NullAppender
log
When i am creating table from comma delimited csv file then i am getting below
error:
Failed to open file '/user/manish/minfo.csv': Could not read block
Block(genStamp=1094, blockId=6807603852292275080L, numBytes=44429,
token='AA', startOffset=0, path=u'/user/
khoa,
When you run HiveQL with a filter condition it uses a reducer; otherwise it
just uses map tasks to select your data. There is an issue with your reducer.
-Original Message-
From: "Nguyen, Khoa"
Date: Tue, 27 Mar 2012 20:55:24
To: user@hive
Whenever you submit a SQL query, a job gets generated. You can open the
JobTracker at localhost:50030/jobtracker.jsp; it shows running jobs and the
other details.
Thanks,
Manish
Sent from my BlackBerry, pls excuse typo
-Original Message-
From: Felix.徐
Date: Tue, 20 Mar 2012
the apache repository for HBase and Hive interface to see how both
can talk together.
Thank You,
Manish
From: Shiv Sharma [mailto:aatman.eq.brah...@gmail.com]
Sent: Wednesday, February 22, 2012 11:06 PM
To: user@hive.apache.org
Subject: HIVE vs HBASE for Datawarehousing
4 Newbie questions:
1. A
dmin' user from
command line.
Thank You,
Manish
-Original Message-
From: alo alt [mailto:wget.n...@googlemail.com]
Sent: Friday, February 03, 2012 1:37 PM
To: user@hive.apache.org
Cc: d...@hive.apache.org; mgro...@oanda.com
Subject: Re: Not able to create table in hive
Importance: High
This is the WAR file with the jsp content for Hive Web
Interface
Thank You,
Manish
-Original Message-
From: Mark Grover [mailto:mgro...@oanda.com]
Sent: Thursday, February 02, 2012 8:15 PM
To: user@hive.apache.org
Subject: Re: Not able to create table in hive
Hi Manish,
Sounds like a
read-only database is not permitted to disable read-only mode on a connection.
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask
Any idea?
Thank You,
Manish