Hi,
I have three CentOS laptops at home and have decided to build a Hadoop
cluster on them. I have a switch and a router to set up.
I am planning to set up a DNS server for an FQDN. How do I go about it? Can
anyone share their experiences?
Regards,
Raj
We are facing the below-mentioned error when storing a dataset using
HCatStorer. Can someone please help us?
STORE F INTO 'default.CONTENT_SVC_USED' using
org.apache.hive.hcatalog.pig.HCatStorer();
ERROR hive.log - Got exception: java.net.URISyntaxException Malformed
escape pair at index 9:
Hi,
I set up a four-node VM cluster for Hadoop 2.2.0 using CentOS. On one machine
(named Monkey), when I start the NodeManager I get the following
error -
org.apache.hadoop.yarn.exceptions.YarnRuntimeException:
java.net.NoRouteToHostException: No Route to Host from
Hi all,
I am trying to find documentation relevant to 'rhadoop' on CDH4. If there is
anyone in the group who has experience with 'rhadoop', can you provide me some
details, like 1) the installation procedure of rhadoop on CDH 4.4?
regards,
raj
Hi,
I have a CDH4 cluster. How can one perform Hadoop administration without root
access? Basically, an account like 'john1' on the cluster wants to have access
to HDFS, MapReduce, etc.
Should 'john1' be included in the 'sudoers' file?
What instructions should I give the System Admin team to have 'john1'
Hello everyone,
The Hive Thrift Service was started successfully.
netstat -nl | grep 1
tcp 0 0 0.0.0.0:1 0.0.0.0:*
LISTEN
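A small aside on the check itself: the grep shown above is truncated in the
archive, and grepping for a bare "1" would match nearly every line of netstat
output. Assuming HiveServer1's default Thrift port of 10000 (an assumption,
since the port is cut off above), a tighter pattern looks like this,
demonstrated on a captured sample line so the sketch is self-contained:

```shell
# Sketch under assumptions: HiveServer1 listens on port 10000 by default.
# On a live host you would pipe real `netstat -nl` output instead of the echo.
echo 'tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN' | grep ':10000 '
```

Anchoring on the full port with the trailing space avoids false matches such
as port 10001 or unrelated addresses containing a "1".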
I am able to read tables from Hive through Tableau. When executing queries
through Tableau I am getting the
I am struggling with this one. Can anyone throw some pointers on how to
troubleshoot this issue, please?
On Thursday, March 20, 2014 3:09 PM, Raj Hadoop hadoop...@yahoo.com wrote:
Hello everyone,
The Hive Thrift Service was started successfully.
netstat -nl | grep 1
tcp 0
fails, those might give some clue.
Thanks,
Szehon
On Thu, Mar 20, 2014 at 12:29 PM, Raj Hadoop hadoop...@yahoo.com wrote:
I am struggling with this one. Can anyone throw some pointers on how to
troubleshoot this issue, please?
On Thursday, March 20, 2014 3:09 PM, Raj Hadoop hadoop...@yahoo.com
All,
I have a 3-node Hadoop cluster on CDH 4.4, and every few days, whenever I load
some data through Sqoop or query through Hive, I sometimes get the following
error -
Call From server 1 to server 2 failed on connection exception:
java.net.ConnectException: Connection refused
This has
should set the
number of map slots per node:
mapred.tasktracker.map.tasks.maximum=6
Regards,
Dieter
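Dieter's setting lives in mapred-site.xml on each TaskTracker node; a minimal
fragment using the value from his reply might look like:

```xml
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>6</value>
</property>
```

This is only a config sketch; 6 map slots per node is sensible only if the
node's cores and memory can support that many concurrent tasks.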
2014-02-24 11:08 GMT+01:00 Raj hadoop raj.had...@gmail.com:
Hi All
In our MapReduce code, when we give more than
10 input sequence files, we are facing the Java
All,
Is there any way, from the command prompt, to find which Hive version I am
using, and the Hadoop version too?
Thanks in advance.
Regards,
Raj
On Sun, Feb 9, 2014 at 8:32 AM, Raj Hadoop hadoop...@yahoo.com wrote:
All,
Is there any way, from the command prompt, to find which Hive version I am
using, and the Hadoop version too?
Thanks in advance.
Regards,
Raj
Thanks
On Sunday, February 9, 2014 11:56 AM, Ted Yu yuzhih...@gmail.com wrote:
For Hive, you can use:
bin/hive --version
Cheers
On Sun, Feb 9, 2014 at 8:48 AM, Raj Hadoop hadoop...@yahoo.com wrote:
Thanks Ted.
Also - I am looking to find the Hive version
On Sunday, February
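For later readers, both version checks raised in this thread are standard
commands, though the exact output format varies by distribution; a hedged
sketch:

```shell
hive --version    # prints the Hive release, as Ted notes above
hadoop version    # prints the Hadoop release and build details
```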
Hi,
My requirement is a typical data warehouse and ETL requirement. I need to
accomplish:
1) Daily insert of transaction records into a Hive table or an HDFS file. This
table or file is not big (approximately 10 records per day). I don't want to
partition the table / file.
I am reading
Hi,
Is there a Hadoop command to determine the replication factor of an HDFS file?
Please advise.
I know that fs -setrep only changes the replication factor.
Regards,
Raj
...@cloudera.com wrote:
Hi Raj,
The 2nd column of any hadoop fs -ls output and the %r option of
the hadoop fs -stat command both reveal the replication factor
of a given file.
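Putting that answer next to the original question, a sketch (the path is a
placeholder, not from the thread):

```shell
hadoop fs -ls /user/raj/file.txt           # 2nd column shows the replication factor
hadoop fs -stat %r /user/raj/file.txt      # prints just the replication factor
hadoop fs -setrep -w 2 /user/raj/file.txt  # changes it, as noted in the question
```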
On Sat, Feb 8, 2014 at 11:03 PM, Raj Hadoop hadoop...@yahoo.com wrote:
Hi,
Is there a hadoop command to determine
Hi Shalish -
This is really wonderful work. Let me go through it.
And one more thing - can we use this setup on a Mac computer? Is it
OS-dependent? Please advise.
Thanks,
Raj
On Tuesday, February 4, 2014 3:47 PM, VJ Shalish vjshal...@gmail.com wrote:
Hi All,
Based on my
Hi,
I am sending this to the three dist-lists of Hadoop, Hive and Sqoop as this
question is closely related to all the three areas.
I have this requirement.
I have a big table in Oracle (about 60 million rows - Primary Key Customer Id).
I want to bring this to HDFS and then create
a Hive
file format also, as that will affect the load and query time.
4. Think about compression as well before hand, as that will govern the data
split, and performance of your queries as well.
Regards,
Manish
Sent from my T-Mobile 4G LTE Device
Original message
From: Raj Hadoop
All,
I have a CentOS VM image and want to replicate it four times on my Mac
computer. How
can I set it up so that I can have 4 individual machines that can be used as
nodes
in my Hadoop cluster.
Please advise.
Thanks,
Raj
Hello All,
When we install Hadoop, when does the user group 'supergroup' get created?
What is the significance of this? Do we have any other groups apart from this
group?
Thanks,
Raj
Is this an actual group in Linux at the OS level, or just Hadoop-specific?
From: kun yan yankunhad...@gmail.com
To: user@hadoop.apache.org user@hadoop.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Wednesday, September 11, 2013 9:19 PM
Subject: Re
-chown
user1:user1 /user/user1. After this, the user should be able to run
jobs and manipulate files in their own directory.
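The full sequence the reply describes, run as the HDFS superuser (the user
name and path are illustrative):

```shell
hadoop fs -mkdir /user/user1               # create the user's HDFS home directory
hadoop fs -chown user1:user1 /user/user1   # hand ownership to the new user
```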
On Thu, Aug 29, 2013 at 10:21 AM, Hadoop Raj hadoop...@yahoo.com wrote:
Hi,
I have a hadoop learning environment on a pseudo distributed mode. It is
owned by the user
Hi,
I am trying to set up a multi-node Hadoop cluster. I am trying to understand
where Hadoop clients (Hive, Pig, Sqoop) would be installed in the Hadoop
cluster.
Say - I have three Linux machines-
Node 1- Master - (Name Node , Job Tracker and Secondary Name Node)
Node 2 - Slave
Hello all,
I am getting an error while using sqoop export (loading an HDFS file into
Oracle). I am not sure whether the issue is Sqoop- or Hadoop-related, so I am
sending it to both dist lists.
I am using -
sqoop export --connect jdbc:oracle:thin:@//dbserv:9876/OKI --table
RAJ.CUSTOMERS
Hello All,
We are planning to rent hardware for a Hadoop project in our company. Can
anyone suggest some good companies that offer this type of
service? Also, any suggestions or best practices to follow when we go for a
lease or rent option?
Regards,
Raj
hardware for lease
What about Amazon EC2 and Cloudera CDH4? You might want to research a bit more
about these.
Regards,
Pavan
On Aug 15, 2013 9:26 PM, Raj Hadoop hadoop...@yahoo.com wrote:
Hello All,
We are planning to rent hardware for a Hadoop project in our company. Can
anyone
on different devices. Directories that do not exist are
ignored.
On Tue, Jun 11, 2013 at 1:08 PM, Raj Hadoop hadoop...@yahoo.com wrote:
Hi Tariq,
What is the default value of dfs.data.dir? My hdfs-site.xml doesn't have this
value defined, so what is the default?
Thanks,
Raj
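For reference, in Hadoop 1.x the default of dfs.data.dir is
${hadoop.tmp.dir}/dfs/data. To point it elsewhere, a fragment like this goes
in hdfs-site.xml (the /SD1 path is taken from later in this thread and is only
an example):

```xml
<property>
  <name>dfs.data.dir</name>
  <value>/SD1/hadoop_data/dfs/data</value>
</property>
```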
From: Mohammad Tariq
: Mohammad Tariq donta...@gmail.com
To: user@hadoop.apache.org user@hadoop.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Friday, June 14, 2013 1:44 PM
Subject: Re: HDFS to a different location other than HADOOP HOME
Change the permissions of /SD1/hadoop_data to 755 and restart the process.
Warm
Hi -
I wanted to know about TeaLeaf WebLog files / database. Is the data from
TeaLeaf proprietary, or is it in a format readable by other tools? Can anyone
who has experience with this product advise?
Thanks,
Raj
Hi Tariq,
What is the default value of dfs.data.dir? My hdfs-site.xml doesn't have this
value defined, so what is the default?
Thanks,
Raj
From: Mohammad Tariq donta...@gmail.com
To: user@hadoop.apache.org user@hadoop.apache.org; Raj Hadoop
hadoop
Hi,
I just installed Apache Flume 1.3.1 and am trying to run a small example to
test it. Can anyone suggest how I can do this? I am going through the
documentation right now.
Thanks,
Raj
To: u...@hive.apache.org; Raj Hadoop hadoop...@yahoo.com
Sent: Friday, May 24, 2013 6:32 PM
Subject: Re: Apache Flume Properties File
so you spammed three big lists there, eh? with a general question for somebody
to serve up a solution on a silver platter for you -- all before you even read
Hi,
With all due respect to the senior members of this site, I wanted to first
congratulate Lokesh on his interest in Hadoop. I wonder how many fresh
graduates are interested in this technology - I guess not many. So we have to
welcome Lokesh to the Hadoop world.
I agree with the
Hi,
My Hive job logs are being written to the /tmp/hadoop directory. I want to
change this to a different location, i.e. a subdirectory somewhere under the
'hadoop' user's home directory.
How do I change it?
Thanks,
Ra
Hi,
I just finished setting up Apache Sqoop 1.4.3. I am trying to test a basic
sqoop import from Oracle.
sqoop import --connect jdbc:oracle:thin:@//intelli.dmn.com:1521/DBT --table
usr1.testonetwo --username usr123 --password passwd123
I am getting the error as
13/05/22 17:18:16 INFO
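The error output is truncated above, but a common cause with the Oracle
connector is letter case: Sqoop generally expects the schema and table names
in upper case. A hedged variant of the same command:

```shell
sqoop import \
  --connect jdbc:oracle:thin:@//intelli.dmn.com:1521/DBT \
  --table USR1.TESTONETWO \
  --username usr123 --password passwd123
```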
Hi,
I am configuring Hive. I have a question on the property
hive.metastore.warehouse.dir.
Should this point to a physical directory? I am guessing it is a logical
directory under Hadoop's fs.default.name. Please advise whether I need to
create any directory for the variable
; Raj Hadoop hadoop...@yahoo.com
Cc: User user@hadoop.apache.org
Sent: Tuesday, May 21, 2013 1:44 PM
Subject: Re: hive.metastore.warehouse.dir - Should it point to a physical
directory
The name is misleading; this is the directory within HDFS where Hive stores the
data, by default. (External
create the HDFS directory?
From: Sanjay Subramanian sanjay.subraman...@wizecommerce.com
To: u...@hive.apache.org u...@hive.apache.org; Raj Hadoop
hadoop...@yahoo.com; Dean Wampler deanwamp...@gmail.com
Cc: User user@hadoop.apache.org
Sent: Tuesday, May 21
Yes, that's what I meant: a local physical directory. Thanks.
From: bharath vissapragada bharathvissapragada1...@gmail.com
To: u...@hive.apache.org; Raj Hadoop hadoop...@yahoo.com
Cc: User user@hadoop.apache.org
Sent: Tuesday, May 21, 2013 1:59 PM
Subject: Re
So that means I need to create an HDFS directory (not an OS physical
directory) under Hadoop, which then needs to be used in the Hive config file
for this property. Right?
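Summarizing the thread: hive.metastore.warehouse.dir names a directory inside
HDFS (the stock default is /user/hive/warehouse), configured in hive-site.xml;
the directory itself is created with hadoop fs commands rather than an
OS-level mkdir:

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
```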
From: Dean Wampler deanwamp...@gmail.com
To: Raj Hadoop hadoop...@yahoo.com
Cc: Sanjay
Thanks Sanjay
From: Sanjay Subramanian sanjay.subraman...@wizecommerce.com
To: bharath vissapragada bharathvissapragada1...@gmail.com;
u...@hive.apache.org u...@hive.apache.org; Raj Hadoop hadoop...@yahoo.com
Cc: User user@hadoop.apache.org
Sent: Tuesday, May
I am trying to get Oracle scripts for Hive Metastore.
http://mail-archives.apache.org/mod_mbox/hive-commits/201204.mbox/%3c20120423201303.9742b2388...@eris.apache.org%3E
The scripts at the above link have a + at the beginning of each line. How am I
supposed to execute scripts like this?
I got it. This is the link.
http://svn.apache.org/viewvc/hive/trunk/metastore/scripts/upgrade/oracle/hive-schema-0.9.0.oracle.sql?revision=1329416&view=co&pathrev=1329416
From: Raj Hadoop hadoop...@yahoo.com
To: Hive u...@hive.apache.org; User user
a file in this directory called hive-schema-0.9.0.oracle.sql
Use this
sanjay
From: Raj Hadoop hadoop...@yahoo.com
Reply-To: user@hadoop.apache.org user@hadoop.apache.org, Raj Hadoop
hadoop...@yahoo.com
Date: Tuesday, May 21, 2013 12:08 PM
To: Hive u...@hive.apache.org, User user
I am setting up a metastore on Oracle for Hive. I executed the
hive-schema-0.9.0.oracle.sql script successfully too.
When I ran this
hive show tables;
I am getting the following error.
ORA-01950: no privileges on tablespace
What kind of Oracle privileges (quota-wise) are required for Hive?
Hi,
I have a basic question on HDFS. I was reading that HDFS doesn't work well for
low-latency data access; rather, it is designed for high throughput
of data. Can you please explain in simple words the difference between
low-latency data access and high throughput of data?
Thanks,
Raj
Hi Chris,
Thanks for the explanation.
Regards,
Raj
From: Chris Embree cemb...@gmail.com
To: user@hadoop.apache.org; Raj Hadoop hadoop...@yahoo.com
Sent: Monday, May 20, 2013 1:51 PM
Subject: Re: Low latency data access Vs High throughput of data
I'll
Hi,
I was not able to stop the Thrift Server after performing the following steps.
$ bin/hive --service hiveserver
Starting Hive Thrift Server
$ netstat -nl | grep 1
tcp 0 0 :::1 :::* LISTEN
I used the following to stop it, but it is not working.
hive --service hiveserver --action stop 1
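HiveServer1 has no stop action, so the command above cannot work; the usual
workaround is to kill the server process you started, which needs no sudo as
long as you own it. A sketch (the grep pattern is an assumption about how the
JVM shows up in the process list):

```shell
ps -ef | grep -i hiveserver | grep -v grep   # locate the Thrift server's PID
# then: kill <PID>   (PID left as a placeholder)
```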
Hi Sanjay,
I am using 0.9 version.
I do not have sudo access. Is there any other command to stop the service?
thanks,
raj
From: Sanjay Subramanian sanjay.subraman...@wizecommerce.com
To: u...@hive.apache.org u...@hive.apache.org; Raj Hadoop
hadoop
Hi,
I want to explore R for Hadoop. Where can I get the download? Any suggested
material on the website to explore? Please advise.
Thanks,
Raj
From: Amal G Jose amalg...@gmail.com
To: user@hadoop.apache.org
Sent: Saturday, May 18, 2013 11:47 PM
Subject:
look at the website you recommended. But a few working scripts from experts
like you would get me jump-started on this.
Thanks,
Raj
From: David Ritch david.ri...@gmail.com
To: user@hadoop.apache.org
Cc: Raj Hadoop hadoop...@yahoo.com
Sent: Sunday, May 19, 2013
Hi,
I wanted to know whether anyone has used Hive with an Oracle metastore. Can
you please share your experiences?
Thanks,
Raj
Hi,
I am planning to install Hive and want to set up the metastore on Oracle. What
is the procedure? Which JDBC driver do I need to use?
Thanks,
Raj
...@hadoop.apache.org user@hadoop.apache.org
Cc: Raj Hadoop hadoop...@yahoo.com
Sent: Thursday, May 16, 2013 11:34 AM
Subject: Re: Configuring SSH - is it required for a pseudo-distributed mode?
Actually, I should amend my statement -- SSH is required, but passwordless ssh
(i guess) you can live
From: Mohammad Tariq donta...@gmail.com
To: user@hadoop.apache.org user@hadoop.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Thursday, May 16, 2013 12:02 PM
Subject: Re: Configuring SSH - is it required for a pseudo-distributed mode?
Hello Raj,
ssh
I am getting the following error. Can anyone please advise?
$ hadoop fs -mkdir /projects/wordcount/input/
13/05/16 15:28:47 INFO ipc.Client: Retrying connect to server:
intelliserver/172.25.181.117:54310. Already tried 0 time(s); retry policy is
:45 PM
Subject: Re: hadoop fs -mkdir /projects/wordcount/input/ error
Did not notice you are trying to do an HDFS action... So just check that the
namenode master service is up and able to accept connections.
Sent from iPhone: Be the change
On May 16, 2013, at 12:32 PM, Raj Hadoop hadoop...@yahoo.com wrote:
I am
Hi,
I have installed Hadoop on a Linux server in pseudo-distributed mode. The
MapReduce word count example also ran successfully. But I was not able to
access the UI from the browser on my local Windows machine using
http://intelliserver:54310/ or http://intelliserver:54311/
$ cat
Thanks. It is working on 50070 and 50030.
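For anyone landing on this thread later: 54310 and 54311 are the RPC ports
from fs.default.name and mapred.job.tracker, which do not speak HTTP. The
Hadoop 1.x web UIs default to 50070 (NameNode) and 50030 (JobTracker); a quick
hedged check from any machine that can reach the server:

```shell
curl -s -o /dev/null -w '%{http_code}\n' http://intelliserver:50070/   # NameNode UI
curl -s -o /dev/null -w '%{http_code}\n' http://intelliserver:50030/   # JobTracker UI
```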
From: Sandy Ryza sandy.r...@cloudera.com
To: user@hadoop.apache.org; Raj Hadoop hadoop...@yahoo.com
Sent: Thursday, May 16, 2013 6:02 PM
Subject: Re: Installed Hadoop on Linux server - not able to see web UI
Hi Raj
Hi,
I am looking for suggestions from the hadoop and hive user community on the
following -
1) How good a choice is Oracle for the Hive Metastore?
In my organization, we only use Oracle database and so we wanted to know
whether there are any known issues with Oracle Hive Metastore.
I am thinking of installing both the CDH and Apache versions. So are you
saying that if I install CDH, I require root privileges?
From: Mohammad Tariq donta...@gmail.com
To: user@hadoop.apache.org user@hadoop.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Monday
between a person
who is installing Hadoop and the actual Unix admin. Please advise.
From: Nitin Pawar nitinpawar...@gmail.com
To: user@hadoop.apache.org; Raj Hadoop hadoop...@yahoo.com
Cc: Mohammad Tariq donta...@gmail.com
Sent: Monday, May 13, 2013 10:56 AM
Hi,
Can anyone suggest how to configure Eclipse on a Mac for Hadoop? Hadoop is
running in pseudo-distributed mode. Please provide any reference articles or
other best practices that need to be followed in this case.
Thanks,
Raj
Hi,
I have to propose some hardware requirements in my company for a Hadoop proof
of concept. I was reading Hadoop Operations and also looked at the Cloudera
website, but I just wanted to ask the group: what are the requirements if I
have to plan for a 5-node cluster? I don't know at this time,
Sent: Monday, April 29, 2013 2:49 PM
Subject: Re: Hardware Selection for Hadoop
2 x Quad cores Intel
2-3 TB x 6 SATA
64GB mem
2 NICs teaming
my 2 cents
On Apr 29, 2013, at 9:24 AM, Raj Hadoop hadoop...@yahoo.com
wrote:
Hi,
I have to propose some hardware requirements in my company
Sandeep,
Java is also free.
Thanks,
Raj
From: Sandeep Jain sandeep_jai...@infosys.com
To: user@hadoop.apache.org user@hadoop.apache.org
Sent: Wednesday, April 24, 2013 2:17 AM
Subject: Query on Cost estimates on Hadoop and Java
Dear Hadoopers,
As per
From: Mohammad Tariq donta...@gmail.com
To: user@hadoop.apache.org user@hadoop.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Saturday, April 20, 2013 6:30 PM
Subject: Re: Very basic question
Hello Raj,
Could you show me the lines where you have set the i/o paths?
Warm Regards
bin/hadoop dfs -mkdir input1
From: Mohammad Tariq donta...@gmail.com
To: user@hadoop.apache.org user@hadoop.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Saturday, April 20, 2013 7:22 PM
Subject: Re: Very basic question
OK... do you remember the command
-rwx-- 1 hadoop staff 66 Apr 20 18:05 run_raj2.sh
drwxr-xr-x 26 hadoop staff 884 Apr 20 18:05 logs
From: Mohammad Tariq donta...@gmail.com
To: user@hadoop.apache.org user@hadoop.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Saturday, April
/job_201304201653_0004_conf.xml
-rw-r--r-- 1 hadoop supergroup 60 2013-04-20 18:06
/user/hadoop/output1/part-r-0
From: Mohammad Tariq donta...@gmail.com
To: user@hadoop.apache.org user@hadoop.apache.org; Raj Hadoop
hadoop...@yahoo.com
Sent: Saturday
Hi,
I am new to Hadoop. I started reading the standard WordCount program, and I
have a basic question about Hadoop.
After the map-reduce is done, where is the output generated? Does the
reducer output sit on individual DataNodes? Please advise.
Thanks,
Raj
Hi,
Can you please suggest a good way to move 1 petabyte of data
from one cluster to another?
Thanks
Raj
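At that scale the standard answer is DistCp, which runs the copy as a
MapReduce job so the transfer is spread across the cluster. A sketch with
placeholder NameNode hosts and paths:

```shell
hadoop distcp \
  hdfs://nn-cluster1:8020/data/src \
  hdfs://nn-cluster2:8020/data/dest
```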