Hi Manju,
How are you storing your data in HBase? What is your rowkey?
Warm Regards,
Tariq
cloudfront.blogspot.com
On Fri, Mar 28, 2014 at 12:03 PM, Manju M manjumohapatra1...@gmail.com wrote:
How can I scan one day's data from an HBase table and insert that into an HBase
partition (date)?
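If the rowkey is prefixed with the date (an assumption; the exact layout depends on your schema), one day's rows can be selected in the HBase shell with start/stop rows, e.g.:

```
# Hypothetical rowkeys of the form '20140327_...'; adjust to your schema.
scan 'mytable', {STARTROW => '20140327', STOPROW => '20140328'}
```

The stop row is exclusive, so this covers exactly one day.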
Hi Chhaya,
I totally agree with AbdelRahman. HCatalog is the correct way to do that.
See https://cwiki.apache.org/confluence/display/Hive/HCatalog+LoadStore#HCatalogLoadStore-RunningPigwithHCatalog for more details.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Wed, Feb 26, 2014 at
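The HCatalog route suggested above can be sketched in Pig like this (database/table names and the partition key are placeholders; on older releases the classes live under org.apache.hcatalog.pig instead):

```
-- Run with: pig -useHCatalog script.pig
A = LOAD 'default.src_table' USING org.apache.hive.hcatalog.pig.HCatLoader();
STORE A INTO 'default.dst_table'
  USING org.apache.hive.hcatalog.pig.HCatStorer('ds=20140226');
```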
Could you please show us your query?
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Sep 12, 2013 at 1:49 AM, Siddharth Tiwari siddharth.tiw...@live.com
wrote:
Hi Team
I am getting the following error when I am trying to load a csv file into my hive
table:
FAILED: Parse Error: line 1:71
/confluence/display/Hive/GettingStarted
- Hive Tutorial --
https://cwiki.apache.org/confluence/display/Hive/Tutorial
– Lefty
On Wed, Jun 19, 2013 at 7:02 PM, Mohammad Tariq donta...@gmail.com
wrote:
Hello ma'am,
Hive queries are parsed
and jobconf.xml now will debug hadoop to get the rest of
info.
Thanks,
Warm regards,
Bharati
On Jul 17, 2013, at 12:08 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello ma'am,
Apologies first of all for responding so late. Stuck with some urgent
deliverables. Was out of touch for a while
No luck.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Tue, Jul 2, 2013 at 1:03 PM, Matouk IFTISSEN
matouk.iftis...@ysance.com wrote:
Yes, it is to create an external table that points to your data, with the
regexp passed to the SerDe.
good day
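A sketch of that external table with the contrib RegexSerDe (the jar path, columns, regex, and location are placeholders, not taken from the thread; note that backslashes must be doubled in Hive DDL):

```
ADD JAR /path/to/hive-contrib-0.9.0.jar;
CREATE EXTERNAL TABLE raw_logs (host STRING, ts STRING, request STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES ("input.regex" = "(\\S+) (\\S+) (.+)")
LOCATION '/user/hive/raw_logs';
```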
2013/7/2 Mohammad Tariq donta...@gmail.com
path_where_is_the_jar_in_hive_lib\hive-contrib-0.9.0.jar ;
Good luck
2013/7/1 Mohammad Tariq donta...@gmail.com
Hello list,
I would really appreciate it if someone could show me the correct
way of using RegexSerDe, as I'm having a hard time using it. I have
verified my regex through
Do you have permissions to write to this path? And make sure you are looking
into the local FS, as Stephen has specified.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Tue, Jul 2, 2013 at 5:25 AM, Stephen Sprague sprag...@gmail.com wrote:
you gotta admit that's kinda funny. Your stderr
Hello list,
I would really appreciate it if someone could show me the correct way
of using RegexSerDe, as I'm having a hard time using it. I have verified
my regex through http://www.regexplanet.com/advanced/java/index.html and
it's working fine there. But when I'm using the same pattern
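Since the Java regex dialect used by RegexSerDe (and tested by regexplanet) is close but not identical to Python's, a quick local sanity check can help isolate whether the regex or the DDL escaping is at fault (the pattern and sample line below are hypothetical):

```python
import re

# Hypothetical pattern and log line -- substitute your own. Remember that
# in Hive DDL every backslash must be doubled ("\\S+" rather than "\S+").
pattern = r"(\S+) (\S+) (.+)"
line = "host1 2013-07-02 GET /index.html HTTP/1.1"

match = re.match(pattern, line)
print(match.groups())  # ('host1', '2013-07-02', 'GET /index.html HTTP/1.1')
```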
try and add a semicolon to the end of the create table script ...
sanjay
From: Mohammad Tariq donta...@gmail.com
Reply-To: user@hive.apache.org user@hive.apache.org
Date: Thursday, June 20, 2013 12:52 PM
To: user user@hive.apache.org
Subject: Re: show table throwing strange error
Hello Manickam,
Please make sure your hadoop daemons are running.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 27, 2013 at 9:55 PM, Manickam P manicka...@outlook.com wrote:
Hi,
I tried to execute a query in Hive but I got the below exception. I don't
know the reason.
What's your query?
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 27, 2013 at 11:17 PM, Manickam P manicka...@outlook.com wrote:
Hi,
I checked all the nodes. All are up and running. Normal Hive queries like
show tables are working fine. Queries with map reduce are throwing
Yes, it'll. Have you changed all the env. vars, like HADOOP_HOME, HIVE_HOME,
etc. according to your new environment?
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 27, 2013 at 11:43 PM, Manickam P manicka...@outlook.com wrote:
Hi Tariq,
It is a very simple query like select * from
, Mohammad Tariq donta...@gmail.com wrote:
Hello list,
I have a hive(0.9.0) setup on my Ubuntu box running
hadoop-1.0.4. Everything was going smooth till now. But today when I issued
*show tables* I got some strange error on the CLI. Here is the error :
hive> show tables;
FAILED
Hello ma'am,
Hive queries are parsed using ANTLR http://www.antlr.org/ and
are converted into corresponding MR jobs (actually a lot of things happen
under the hood). I had answered a similar
query log under /tmp to see if it says
something in the log ?
Sent from my iPhone
On Jun 19, 2013, at 5:53 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello list,
I have a hive(0.9.0) setup on my Ubuntu box running hadoop-1.0.4.
Everything was going smooth till now. But today
formatted and connection strings correct ?
Sent from my iPhone
On Jun 19, 2013, at 6:30 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello Anurag,
Thank you for the quick response. The log file is full of such lines, along
with a trace that says it is some parsing-related issue
It actually seems to be ignoring hive-site.xml. No effect of the properties
set in hive-site.xml file.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 20, 2013 at 7:12 AM, Mohammad Tariq donta...@gmail.com wrote:
It looks OK to me,
<configuration>
<property>
?
Sent from my iPhone
On Jun 19, 2013, at 6:44 PM, Mohammad Tariq donta...@gmail.com wrote:
It actually seems to be ignoring hive-site.xml. No effect of the
properties set in hive-site.xml file.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 20, 2013 at 7:12 AM, Mohammad Tariq
That error is gone after I recreated hive-site.xml and restarted hive. But
now there seems to be some problem with metastore settings. It's not going
to mysql.
Anyways, thank you both for the help.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Thu, Jun 20, 2013 at 7:29 AM, Mohammad Tariq
Capriolo edlinuxg...@gmail.com wrote:
It is not a valid URL if it does not have a scheme and cannot be parsed.
SELECT if(column LIKE 'http%', column, concat('http://', column)) AS
column might do what you need.
On Mon, Jun 10, 2013 at 5:59 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello
Sorry for the typo in the 3rd answer. I meant, question 1 covers this.
Warm Regards,
Tariq
cloudfront.blogspot.com
On Mon, May 13, 2013 at 10:13 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello Nalin,
Please find my comments embedded below :
1. What tools are available to Query
the old data (confirmed or paid for
example), right?
--
Ibrahim
On Tue, Dec 25, 2012 at 8:59 AM, Mohammad Tariq donta...@gmail.com wrote:
Also, have a look at this :
http://www.catb.org/~esr/faqs/smart-questions.html
Best Regards,
Tariq
+91-9741563634
https://mtariq.jux.com
Hello Kshiva,
Hive is a batch processing system and not meant for real-time
stuff, as said by Nitin. Hive queries actually get converted into MapReduce
jobs under the hood and then give you the result. If you want to query your
data in real time, then give Impala a shot. It does the
Hello Ibrahim,
A quick question. Are you planning to replace your SQL DB with Hive? If
that is the case, I would not suggest doing that. Both are meant for
entirely different purposes. Hive is for batch processing and not for real-time
systems. So if your requirements involve real time
Thanks Mohammad. No, we do not have any plans to replace our RDBMS with
Hive. Hadoop/Hive will be used as a data warehouse for batch processing;
as I said, we want to use Hive for analytical queries.
--
Ibrahim
On Mon, Dec 24, 2012 at 4:19 PM, Mohammad Tariq donta...@gmail.com wrote:
Hello
to hadoop/hive, this is our
problem now.
--
Ibrahim
On Mon, Dec 24, 2012 at 4:35 PM, Mohammad Tariq donta...@gmail.com wrote:
Cool. Then go ahead :)
Just in case you need something in realtime, you can have a look at
Impala.(I know nobody likes to get preached, but just in case
, Mohammad Tariq donta...@gmail.com wrote:
You can use Apache Oozie to schedule your imports.
Alternatively, you can have an additional column in your SQL table, say
LastUpdatedTime or something. As soon as there is a change in this column
you can start the import from this point. This way you
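This LastUpdatedTime idea maps directly onto Sqoop's incremental import mode; a sketch (the JDBC URL, table name, and timestamp are placeholders):

```
sqoop import \
  --connect jdbc:mysql://dbhost/mydb \
  --table orders \
  --incremental lastmodified \
  --check-column LastUpdatedTime \
  --last-value "2012-12-24 00:00:00"
```

With a saved Sqoop job, the tool remembers the last value between runs; otherwise the new high-water mark is printed at the end of the import.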
, but Hive does not support update or deletion of
data, so when I import the data after specific last_update_time records,
Hive will append it, not replace it.
--
Ibrahim
On Mon, Dec 24, 2012 at 5:03 PM, Mohammad Tariq donta...@gmail.com wrote:
You can use Apache Oozie to schedule your imports
Have a look at Beeswax.
BTW, do you have access to Google at your station? Same question on the Pig
mailing list as well, that too twice.
Best Regards,
Tariq
+91-9741563634
https://mtariq.jux.com/
On Tue, Dec 25, 2012 at 11:20 AM, Kshiva Kps kshiva...@gmail.com wrote:
Hi,
Is there any Hive
Also, have a look at this :
http://www.catb.org/~esr/faqs/smart-questions.html
Best Regards,
Tariq
+91-9741563634
https://mtariq.jux.com/
On Tue, Dec 25, 2012 at 11:26 AM, Mohammad Tariq donta...@gmail.com wrote:
Have a look at Beeswax.
BTW, do you have access to Google at your station? Same
on the no. of InputSplits created by the InputFormat you are using to
process your data, and the no. of reducers depends on the no. of partitions
created after the map phase.
HTH
Regards,
Mohammad Tariq
On Thu, Dec 13, 2012 at 6:25 PM, imen Megdiche imen.megdi...@gmail.com wrote:
thank you
I said that because under the hood each query (Hive or Pig) gets converted
into a MapReduce job first, and gives you the result.
Regards,
Mohammad Tariq
On Thu, Dec 13, 2012 at 7:51 PM, imen Megdiche imen.megdi...@gmail.com wrote:
I don't understand what you mean by "Same holds good"
Hi Imen,
You can add the mapred.map.tasks property in your mapred-site.xml file.
But it is just a hint for the InputFormat. The no. of maps is
actually determined by the no. of InputSplits created by the InputFormat.
HTH
Regards,
Mohammad Tariq
On Wed, Dec 12, 2012 at 4:11 PM, imen
?
Regards,
Mohammad Tariq
On Wed, Dec 12, 2012 at 5:11 PM, imen Megdiche imen.megdi...@gmail.com wrote:
Thank you Mohammad, but the number of map tasks is still the same in the
execution. Do you know how to capture the time spent on execution?
2012/12/12 Mohammad Tariq donta...@gmail.com
Hi Imen
work on ubuntu
2012/12/12 Mohammad Tariq donta...@gmail.com
Hi Imen,
You can visit the MR web UI at JobTrackerHost:50030 and see all
the useful information like no. of mappers, no of reducers, time taken for
the execution etc.
One quick question for you, what is the size of your
Any luck with localhost:50030??
Regards,
Mohammad Tariq
On Wed, Dec 12, 2012 at 5:53 PM, imen Megdiche imen.megdi...@gmail.com wrote:
I run the job through the command line
2012/12/12 Mohammad Tariq donta...@gmail.com
You have to replace JobTrackerHost in JobTrackerHost:50030 with the actual
hostname of the machine running your JobTracker.
Can I have a look at your config files?
Regards,
Mohammad Tariq
On Wed, Dec 12, 2012 at 6:31 PM, imen Megdiche imen.megdi...@gmail.com wrote:
I ran start-all.sh and all daemons start without problems. But the
log of the tasktracker looks like this:
2012-12-12 13:53:45,495 INFO
Regards,
Mohammad Tariq
On Wed, Dec 12, 2012 at 6:46 PM, imen Megdiche imen.megdi...@gmail.com wrote:
For mapred-site.xml:
<configuration>
<property>
<name>mapred.map.tasks</name>
<value>6</value>
</property>
</configuration>
For core-site.xml:
<configuration>
<!-- <property>
<name>fs.default.name</name>
<value>hdfs
Hi Imen,
I am sorry, I didn't get the question. Are you asking about
creating a distributed cluster? Yeah, I have done that.
Regards,
Mohammad Tariq
On Wed, Dec 12, 2012 at 7:45 PM, imen Megdiche imen.megdi...@gmail.com wrote:
have you commented out the configuration
Hello Dlia,
You can visit these links to see how Hadoop can be configured on Mac.
http://blogs.msdn.com/b/brandonwerner/archive/2011/11/13/how-to-set-up-hadoop-on-os-x-lion-10-7.aspx
http://ragrawal.wordpress.com/2012/04/28/installing-hadoop-on-mac-osx-lion/
Regards,
Mohammad Tariq
me out.
Many thanks.
Regards,
Mohammad Tariq
On Fri, Nov 16, 2012 at 3:49 AM, Kanna Karanam kanna...@microsoft.com wrote:
Hi Dean, HDInsight enables the developers to deploy and run Hadoop on
Windows based personal computers. You can download and install it from
http://www.microsoft.com
Thank you so much for the link Alex.
Regards,
Mohammad Tariq
On Fri, Nov 16, 2012 at 2:19 PM, Alexander Alten-Lorenz wget.n...@gmail.com
wrote:
https://help.ubuntu.com/community/VMware/Player
- Alex
On Nov 16, 2012, at 9:47 AM, Mohammad Tariq donta...@gmail.com wrote:
Hello Carl
Hello Sandeep,
I would suggest you write a MapReduce job instead of a usual sequential
program to transform your files. It would be much faster. Then use Hive to
load the data.
Regards,
Mohammad Tariq
On Fri, Sep 7, 2012 at 8:11 PM, Sandeep Reddy P sandeepreddy.3...@gmail.com
wrote
I said this assuming that a Hadoop cluster is available since Sandeep is
planning to use Hive. If that is the case then MapReduce would be faster
for such large files.
Regards,
Mohammad Tariq
On Fri, Sep 7, 2012 at 8:27 PM, Connell, Chuck chuck.conn...@nuance.com wrote:
I cannot promise
Hello Yogesh,
Are any proxy settings involved?? Error code 407 indicates that
the client must first authenticate itself with the proxy in order to
proceed further. Just make sure everything is in place.
Regards,
Mohammad Tariq
On Tue, Jul 24, 2012 at 1:55 PM, yogesh.kuma
proxy settings? Please explain it.
If I do operations like
select * from dummysite;
or
select * from dummysite where id=10;
such commands show the proper result and don't throw any error.
Please suggest.
Regards
Yogesh Kumar
From: Mohammad Tariq
a file with name hive-site.xml and list down the properties you
want to override.
Regards,
Mohammad Tariq
On Thu, Jul 12, 2012 at 12:38 PM, yogesh.kuma...@wipro.com wrote:
Hi all,
I have downloaded hive-0.8.1 and even hive-0.9.0,
after extracting it I found that
hive-site.xml
Create your table using this command:
create table parthp (no INT, name STRING, result STRING, class INT)
row format delimited fields terminated by ',';
In Hive the delimiter is assumed to be ^A (Ctrl-A) by default.
Regards,
Mohammad Tariq
On Thu, Jul 12, 2012 at 3:46 PM
Try it out using the distcp command.
Regards,
Mohammad Tariq
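A sketch of that distcp invocation (the namenode address and paths are assumptions); since source and target are on the same cluster, a plain `hadoop fs -cp`, or `LOAD DATA INPATH` from Hive, would also work:

```
hadoop distcp hdfs://nn:8020/user/shaik/data \
              hdfs://nn:8020/user/hive/warehouse/mytable/
```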
On Wed, Jul 11, 2012 at 8:09 PM, shaik ahamed shaik5...@gmail.com wrote:
Hi All,
As I have data of 100 GB in HDFS, I want this 100 GB file to be
moved or copied to the Hive directory or path. How can I achieve
Hello shaik,
Were you able to fetch the data earlier? I mean, is it
happening for the first time, or were you not able to fetch the data
even once??
Regards,
Mohammad Tariq
On Thu, Jul 5, 2012 at 12:17 PM, shaik ahamed shaik5...@gmail.com wrote:
Hi All,
Im
Regards,
Mohammad Tariq
Hello Nitin, Bejoy,
Thanks a lot for the quick response. Could you please tell me
what is the default criterion of split creation?? How are the splits for a
Hive query created?? (Pardon my ignorance.)
Regards,
Mohammad Tariq
On Fri, Jun 29, 2012 at 12:22 AM, Bejoy KS bejoy
instead of taking each individual row, I can
use NLineInputFormat. Is it possible to do something similar at Hive's
level, or do I need to look into the source code??
Regards,
Mohammad Tariq
On Fri, Jun 29, 2012 at 12:37 AM, Bejoy KS bejoy...@yahoo.com wrote:
Hi Mohammed
Splits
Ok Bejoy. I'll proceed as directed by you and get back to you in case
of any difficulty. Thanks again for the help.
Regards,
Mohammad Tariq
On Fri, Jun 29, 2012 at 12:59 AM, Bejoy KS bejoy...@yahoo.com wrote:
Hi Mohammed
If it is to control the split size and there by the number of map
change the line 127.0.1.1 in your /etc/hosts file to
127.0.0.1. Also check if there is any problem with the configuration
properties.
Regards,
Mohammad Tariq
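For reference, Ubuntu ships with a `127.0.1.1 <hostname>` line; the corrected file would look roughly like this (the hostname is a placeholder):

```
127.0.0.1   localhost
127.0.0.1   myhostname
```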
On Wed, Jun 20, 2012 at 2:52 PM, soham sardar sohamsardart...@gmail.com wrote:
the thing is i have updated my JAVA_HOME in both
this is because you are not able to contact the namenode as it
is not running. Not only -ls; none of the HDFS commands will work in
such a situation. Was your NN stopped after getting started, or did it
not start even once?? Can you paste your logs here??
Regards,
Mohammad Tariq
On Wed
Hello Kanna,
If you are facing problem with your build, you can download
Hadoop and Hive directly from Apache and use them.
Regards,
Mohammad Tariq
On Fri, Jun 8, 2012 at 9:06 PM, Kanna Karanam kanna...@microsoft.com wrote:
Hi Gurus,
It would be a great help if anyone can help me
Kanna, you can visit the link given below..it talks about Hive Unit
Testing in detail.
http://dev.bizo.com/2011/04/hive-unit-testing.html
Regards,
Mohammad Tariq
On Sat, Jun 9, 2012 at 1:04 AM, Kanna Karanam kanna...@microsoft.com wrote:
Thanks Mohammad - I downloaded the Hadoop Hive
also visit this link on official wiki page -
https://cwiki.apache.org/Hive/developerguide.html#DeveloperGuide-Unittestsanddebugging
Regards,
Mohammad Tariq
On Sat, Jun 9, 2012 at 2:32 AM, Mohammad Tariq donta...@gmail.com wrote:
Kanna, you can visit the link given below..it talks about
a Hive or Pig job or a Mapreduce job, you can point your
browser to http://localhost:50030 to see the status and logs of your
job.
Regards,
Mohammad Tariq
On Wed, Jun 6, 2012 at 8:28 PM, Babak Bastan babak...@gmail.com wrote:
Thank you shashwat for the answer,
where should I type http
you can also use the jps command at your shell to see whether the Hadoop
processes are running or not.
Regards,
Mohammad Tariq
On Wed, Jun 6, 2012 at 11:12 PM, Mohammad Tariq donta...@gmail.com wrote:
Hi Babak,
You have to type it in your web browser. Hadoop provides us a web GUI
that not only
if you are getting only this, it means your hadoop is not
running..were you able to format hdfs properly???
Regards,
Mohammad Tariq
On Wed, Jun 6, 2012 at 11:17 PM, Babak Bastan babak...@gmail.com wrote:
Hi Mohammad, when I run jps in my shell I can see this result:
2213 Jps
On Wed, Jun 6
)
after this use jps to check if everything is alright or point your
browser to localhost:50070..if you further find any problem provide us
with the error logs..:)
Regards,
Mohammad Tariq
On Wed, Jun 6, 2012 at 11:22 PM, Babak Bastan babak...@gmail.com wrote:
were you able to format hdfs properly
in your configuration files.
(give full path of these directories, not just the name of the
directory)
After this follow the steps provided in the previous reply.
Regards,
Mohammad Tariq
On Wed, Jun 6, 2012 at 11:42 PM, Babak Bastan babak...@gmail.com wrote:
thank's Mohammad
also change the permissions of these directories to 777.
Regards,
Mohammad Tariq
On Wed, Jun 6, 2012 at 11:54 PM, Mohammad Tariq donta...@gmail.com wrote:
create a directory /home/username/hdfs (or at some place of your
choice)..inside this hdfs directory create three sub directories
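The directory layout being described can be created like this (the base path and the three sub-directory names follow the thread; adjust to taste):

```shell
# Create the hdfs working directory with name, data and tmp sub-directories.
BASE="${BASE:-$HOME/hdfs}"
mkdir -p "$BASE/name" "$BASE/data" "$BASE/tmp"
# The thread suggests opening up permissions for a quick single-node test setup.
chmod -R 777 "$BASE"
ls "$BASE"
```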
go to your HADOOP_HOME i.e your hadoop directory(that includes bin,
conf etc)..you can find logs directory there..
Regards,
Mohammad Tariq
On Thu, Jun 7, 2012 at 1:09 AM, Babak Bastan babak...@gmail.com wrote:
how can I get my logs, Mohammad?
On Wed, Jun 6, 2012 at 9:36 PM, Mohammad Tariq
check your /var/log/hadoop/...also when you do something wrong you
will find your terminal full of many error messages, you can use them
as well..and by the way, learning something new requires a great deal of
patience
Regards,
Mohammad Tariq
On Thu, Jun 7, 2012 at 1:25 AM, Babak Bastan babak
face any problem, i'll send you a
configured copy of hadoop.
Regards,
Mohammad Tariq
On Thu, Jun 7, 2012 at 1:45 AM, Babak Bastan babak...@gmail.com wrote:
I checked it but no hadoop folder :(
Yes, you are right. I'm a student and I want to make a very, very simple
program in Hive but until
Regards,
Mohammad Tariq
On Thu, Jun 7, 2012 at 1:52 AM, Babak Bastan babak...@gmail.com wrote:
by the way, you are a very nice man my friend. Thank you so much :)
What do you mean about this post on stackoverflow?
I am assuming that is your first installation of hadoop.
At the beginning please
ok..we'll give it a final shot..then i'll email configured hadoop to
your email address..delete the hdfs directory which contains tmp, data
and name..recreate it..format hdfs again and then start the processes.
Regards,
Mohammad Tariq
On Thu, Jun 7, 2012 at 2:22 AM, Babak Bastan babak
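The "final shot" above, as hadoop-1.x commands (paths assume the hdfs directory layout discussed earlier in the thread):

```
rm -rf ~/hdfs && mkdir -p ~/hdfs/name ~/hdfs/data ~/hdfs/tmp
bin/hadoop namenode -format
bin/start-all.sh
jps    # should now list NameNode, DataNode, JobTracker, TaskTracker, SecondaryNameNode
```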
add more properties
if you need detailed help you can also visit -
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
Regards,
Mohammad Tariq
On Thu, Jun 7, 2012 at 3:04 AM, Babak Bastan babak...@gmail.com wrote:
I try to install another one in blog
;
Regards,
Mohammad Tariq
On Wed, Jun 6, 2012 at 3:42 AM, Rafael Maffud Carlini
rmcarl...@gmail.com wrote:
Hello everyone, I am doing scientific research for my college, where
I conduct experiments involving Hive, and I wonder what is the easiest
way to install Hive.
I've tried
use this to set any variable permanently. Restart your system
in order to apply the changes.
Regards,
Mohammad Tariq
On Sat, Jun 2, 2012 at 2:59 PM, Nitin Pawar nitinpawar...@gmail.com wrote:
Use export command
On Jun 2, 2012 2:50 PM, Babak Bastan babak...@gmail.com wrote:
Hello experts
://permalink.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/13187
Regards,
Mohammad Tariq
2012/1/22 Dalia Sobhy dalia.mohso...@hotmail.com:
Hi all,
I know I have asked this question before.. But I want to be more
specific...
Whats the difference in performance between Hive, Hbase, Hive
;
Regards,
Mohammad Tariq
2012/1/19 Dalia Sobhy dalia.mohso...@hotmail.com:
Hey Vikas,
I want to develop a medical API ...
I want to ask whether Hive Hbase Integration performance is good or not,
because I found that Hive queries are faster according to some blogs..
Finally
Hi Bejoy,
Thank you so much for your help again..Your way of explaining
things is really great..And the query provided by you is working
absolutely fine.
Regards,
Mohammad Tariq
On Sun, Dec 18, 2011 at 12:35 AM, Bejoy Ks bejoy...@yahoo.com wrote:
Hi Tariq
From the stack trace, I
Hello list,
Could anyone tell me the basic (and must) requirements for
integrating Hive and Hbase??? I have followed the Hive HBase
Integration link on cwiki but I am not able to do it. Need some
urgent help. Many thanks in advance.
Regards,
Mohammad Tariq
from
org.apache.hadoop.hive.ql.exec.DDLTask
Need some help. Many thanks in advance.
Regards,
Mohammad Tariq
Thanks for the valuable info.
Regards,
Mohammad Tariq
On Thu, Dec 1, 2011 at 12:50 AM, shashwat shriparv
dwivedishash...@gmail.com wrote:
One thing I know is that the hive-hbase-handler jar file and hbase jar file
should be compatible. We need to compile it to make it support the latest