huong hoang minh wrote:
> I am researching Hadoop technology, and I don't know how to access
> and copy data from HDFS to the local machine in Java. Can you help me,
> step by step?
> Thank you very much.
> --
> Hoàng Minh Hương
Hi Huong,
See http://developer.yahoo.com/hadoop/tutorial/module2.html#programmatically
On Wed, Aug 15, 2012 at 10:24 AM, huong hoang minh wrote:
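For reference, a minimal sketch of such a copy using the FileSystem API (the
paths are placeholders, and the cluster settings are assumed to come from
core-site.xml on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsToLocal {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration(); // reads core-site.xml from the classpath
    FileSystem fs = FileSystem.get(conf);     // the filesystem named by fs.default.name
    // Copy an HDFS file down to the local filesystem (both paths are placeholders).
    fs.copyToLocalFile(new Path("/user/huong/input.txt"),
                       new Path("/tmp/input.txt"));
    fs.close();
  }
}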
You mean like this:
hadoop jar Rdg.jar my.hadoop.Rdg -libjars Rdg_lib/* tester rdg_output
Where Rdg_lib is a folder containing all required classes/jars stored on
HDFS.
We get this error though. Are we doing something wrong?
12/08/10 08:16:24 ERROR security.UserGroupInformation: PriviledgedActionException
Hi Bertrand,
The -libjars option works with the 'hadoop jar' command. Instead of
executing your runnable with the plain java 'jar' command, use 'hadoop jar'.
When you use hadoop jar you can ship the dependent jars/files etc. as follows:
1) include them in the /lib folder inside your jar
2) use -libjars / -files
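For illustration, the usual shape of that command (a sketch: -libjars expects a
comma-separated list of jar paths rather than a shell glob, and the dependency
jar names here are placeholders):

hadoop jar Rdg.jar my.hadoop.Rdg \
    -libjars Rdg_lib/dep1.jar,Rdg_lib/dep2.jar \
    tester rdg_output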
> > ... how to include an external jar file/class while running a jar file.
> >
> > $ mkdir Rdg_classes
> > $ javac -classpath ${HADOOP_HOME}/hadoop-${HADOOP_VERSION}-core.jar -d
> > Rdg_classes Rdg.java
> > $ jar -cvf Rdg.jar -C Rdg_classes/ .
> >
> > We have tried the following options ...
Hi all,
I'm using the textbook example (page 56) to move data from the local file
system to HDFS. But there is an error in the line

FileSystem fs = FileSystem.get(URI.create(dst), conf);

The error is "the method create(String) is undefined for the type URI". Please
help me with this issue.

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
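For reference, URI.create(String) is the static factory on java.net.URI, so
this compile error usually means URI resolves to some other class; a minimal
sketch of the page-56 style copy, assuming java.net.URI is the intended import
(source and destination arguments are placeholders):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class FileCopyToHdfs {
  public static void main(String[] args) throws Exception {
    String src = args[0]; // local file
    String dst = args[1]; // HDFS URI, e.g. hdfs://namenode/user/me/file
    InputStream in = new BufferedInputStream(new FileInputStream(src));
    Configuration conf = new Configuration();
    // Compiles once URI is java.net.URI, which defines create(String).
    FileSystem fs = FileSystem.get(URI.create(dst), conf);
    OutputStream out = fs.create(new Path(dst));
    IOUtils.copyBytes(in, out, 4096, true); // copy, then close both streams
  }
}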
Hi,
Sorry to bother you all; this is my first question here on the hadoop user
mailing list.
Can anyone help me with the memory configuration when the distributed cache is
very large and requires more memory? (2GB)
Also, in the case where the distributed cache is very large, how do we
normally handle this?
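Two knobs that usually come up here, as a sketch to verify against your version
(my assumption: local.cache.size caps the per-node distributed cache on disk,
10GB by default, while the task heap is set through mapred.child.java.opts):

<!-- mapred-site.xml, sketch only -->
<property>
  <name>local.cache.size</name>
  <value>10737418240</value> <!-- per-node cache limit in bytes (10GB default) -->
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value> <!-- enough heap if tasks load the cached data into memory -->
</property>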
Hi,
We are frequently observing the exception
java.io.IOException: DFSClient_attempt_201205232329_28133_r_02_0 could not
complete file
/output/tmp/test/_temporary/_attempt_201205232329_28133_r_02_0/part-r-2.
Giving up.
on our cluster. The exception occurs while writing a file. ...
>
> What is the minimum replication factor generally used in industry?
>
> Let me know if any further inputs are required.
>
> Thanks,
> -Akshay
Hi,
You can go through the code of this project
(https://github.com/zinnia-phatak-dev/Nectar) to understand how complex
algorithms are implemented using M/R.
On Fri, May 18, 2012 at 12:16 PM, Ravi Joshi wrote:
> I am writing my own map and reduce method for implementing the K-Means
> algorithm ...
Hi,
I want to implement a workflow within the mapper. I am sharing my
concept through an architecture diagram; please correct me if I am wrong,
and suggest a good approach.
Many thanks in advance.
Thanks
I am writing my own map and reduce methods for implementing the K-Means
algorithm in Hadoop-1.0.1, in Java. I have found some example links for the
K-Means algorithm on Hadoop blogs, but I don't want to copy their code; as a
learner I want to implement it myself. So I just need some ideas/clues ...
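Not code to copy, but the shape of the idea: each iteration maps every point to
the id of its nearest current centroid, and reduces by averaging the points
assigned to each centroid. A minimal sketch (class names and the
comma-separated point format are my own assumptions; loading the current
centroids in setup(), e.g. from the distributed cache, is omitted):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class KMeansIteration {

  // Parse a comma-separated line of coordinates into a point.
  static double[] parsePoint(String line) {
    String[] parts = line.split(",");
    double[] p = new double[parts.length];
    for (int i = 0; i < parts.length; i++) p[i] = Double.parseDouble(parts[i]);
    return p;
  }

  // Mapper: emit (id of the nearest current centroid, the point itself).
  public static class KMeansMapper
      extends Mapper<LongWritable, Text, IntWritable, Text> {
    private double[][] centroids; // to be loaded in setup(), omitted here

    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      double[] p = parsePoint(value.toString());
      int best = 0;
      double bestDist = Double.MAX_VALUE;
      for (int i = 0; i < centroids.length; i++) {
        double d = 0;
        for (int j = 0; j < p.length; j++) {
          double diff = p[j] - centroids[i][j];
          d += diff * diff; // squared Euclidean distance
        }
        if (d < bestDist) { bestDist = d; best = i; }
      }
      ctx.write(new IntWritable(best), value);
    }
  }

  // Reducer: the new centroid is the mean of all points assigned to it.
  public static class KMeansReducer
      extends Reducer<IntWritable, Text, IntWritable, Text> {
    @Override
    protected void reduce(IntWritable id, Iterable<Text> points, Context ctx)
        throws IOException, InterruptedException {
      double[] sum = null;
      long n = 0;
      for (Text t : points) {
        double[] p = parsePoint(t.toString());
        if (sum == null) sum = new double[p.length];
        for (int i = 0; i < p.length; i++) sum[i] += p[i];
        n++;
      }
      StringBuilder sb = new StringBuilder();
      for (int i = 0; i < sum.length; i++) {
        if (i > 0) sb.append(',');
        sb.append(sum[i] / n);
      }
      ctx.write(id, new Text(sb.toString())); // centroids for the next iteration
    }
  }
}

A driver would feed the reducer output back in as the next iteration's
centroids and stop when they no longer move (or after a fixed iteration count).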
Thanks folks for your help.
I tried to use Hive to analyze the apache log. It is fine if I just "select *
from apachelog", and I can get the results. But if I do anything like count or
group by, it just shows the "map = 0%, reduce = 0%" message again and again,
endlessly. I had to stop it.
Subject: Re: Help me with architecture of a somewhat non-trivial mapreduce
implementation
If the file is small enough you could read it into a Java object like a
list and write your own input format that takes a list object as its input
and then lets you specify the number of mappers.
On Ap...
Thanks! That helped!
-----Original Message-----
From: Michael Segel
Sent: Thursday, April 19, 2012 9:38 PM
To: common-user@hadoop.apache.org
Subject: Re: Help me with architecture of a somewhat non-trivial mapreduce
implementation
... and
stay idle, and I don't know how to deal with dynamically
increasing/decreasing cores.
Thx
- Sky
-----Original Message-----
From: Michael Segel
Sent: Thursday, April 19, 2012 8:49 PM
To: common-user@hadoop.apache.org
Subject: Re: Help me with architecture of a somewhat non-trivial mapreduce
implementation
> ... launcher, but not
> sure how to enable it for mappers and reducers.
>
> Thx
> - Akash
>
> -----Original Message-----
> From: Robert Evans
> Sent: Thursday, April 19, 2012 2:08 PM
> To: common-user@hadoop.apache.org
> Subject: Re: Help me with architecture of a somewhat non-trivial mapreduce
> implementation
Akash
-----Original Message-----
From: Robert Evans
Sent: Thursday, April 19, 2012 2:08 PM
To: common-user@hadoop.apache.org
Subject: Re: Help me with architecture of a somewhat non-trivial mapreduce
implementation
From what I can see your implementation seems OK, especially from a
performance ...
On 4/18/12 4:56 PM, "Sky USC" wrote:
Please help me architect the design of my first significant MR task beyond
"word count". My program works well, but I am trying to optimize performance to
maximize use of available computing resources. I have 3 questions at the
bottom.
Project description in an abstract sense: ...
Reply-To: common-user@hadoop.apache.org
Subject: Hive Thrift help
We need to connect to Hive from MicroStrategy reports, and that requires the
Hive Thrift server. But when I tried to start it, it just hangs as below.
# hive --service hiveserver
Starting Hive Thrift Server
Any ideas?
Thanks,
Michael
You can NOT connect to hive thrift to confirm its status; Thrift is
thrift, not http. But you are right to say HiveServer does not produce
any output by default.
If
netstat -nl | grep 10000
shows the port, it is up.
On Mon, Apr 16, 2012 at 5:18 PM, Rahul Jain wrote:
> I am assuming you read thru:
I am assuming you read thru:
https://cwiki.apache.org/Hive/hiveserver.html
The server comes up on port 10000 by default; did you verify that it is
actually listening on the port? You can also connect to the hive server using a
web browser to confirm its status.
-Rahul
On Mon, Apr 16, 2012 at 1:53 ...
http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/
> From: hellooperator
> To: core-u...@hadoop.apache.org
> Sent: Wednesday, April 11, 2012 11:15 AM
> Subject: Map Reduce Job Help
Hello,
I'm just starting out with Hadoop and writing some MapReduce jobs. I was
looking for help on writing an MR job in Python that allows me to take some
emails, put them into HDFS, and search on the text or attachments of
the emails.
Thank you!
Hi,
On top of the Hadoop Eclipse plugin, you also need a Hadoop
virtual machine, where Eclipse can run the code, and a local library of
Hadoop, which Eclipse will use to compile your code.
Below are the steps that I used to run a sample wordcount example on the
Hadoop VM.
1. Down...
Hi,
I am a newbie to Hadoop, and I am trying to configure the Eclipse plugin for
Hadoop, but its response is very awkward and it gives me an error.
Error: "Unable to login" when trying to connect to the Hadoop DFS using the
Hadoop plugin for Eclipse.
Using hadoop: hadoop-0.20.203.0
Eclipse plugin: hadoop ...
> > > java.net.ConnectException: Connection refused
> ... where the snapshot is stored.
>
> <property>
>   <name>hbase.zookeeper.property.tickTime</name>
>   <value>2000</value>
>   <description>Property from ZooKeeper's config zoo.cfg.
>   The port at which the clients will connect.</description>
> </property>
>
> Is this correct?
>
> Please let ...
When I access hbase and try to list the entries, this is what I see:

ERROR: org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able
to connect to ZooKeeper but the connection closes immediately. This could
be a sign that the server has too many connections (30 is the default).
Consider inspecting your ZK server logs for that error and then make sure
you are reusing HBaseConfiguration as often as you can. See HTable's javadoc
for more information.

Can someone please help me...
Thanks
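On the "reusing HBaseConfiguration" advice, a minimal sketch (table names are
placeholders; the point is one shared configuration so HTable instances share
the underlying ZooKeeper connection instead of opening a new one each time):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;

public class SharedConfExample {
  // One shared configuration for the whole application.
  private static final Configuration CONF = HBaseConfiguration.create();

  public static void main(String[] args) throws Exception {
    // Both tables reuse CONF, and with it the same ZooKeeper session.
    HTable users  = new HTable(CONF, "users");
    HTable events = new HTable(CONF, "events");
    // ... do work ...
    users.close();
    events.close();
  }
}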
I tried the patch MAPREDUCE-2457, but it didn't work for my hadoop 0.20.205.
Are you sure this patch will work for 0.20.205?
According to the description, the patch works for 0.21 and 0.22,
and it says that 0.20 supports group.name without this patch...
So does this patch also apply to 0...
The group.name scheduler support was introduced in
https://issues.apache.org/jira/browse/HADOOP-3892 but may have been
broken by the security changes present in 0.20.205. You'll need the
fix presented in https://issues.apache.org/jira/browse/MAPREDUCE-2457
to have group.name support.
On Thu, Mar ...
I am running the fair scheduler on hadoop 0.20.205.0:
http://hadoop.apache.org/common/docs/r0.20.205.0/fair_scheduler.html
The above page talks about the property
mapred.fairscheduler.poolnameproperty,
which I can set to group.name.
The default is user.name, and when a user submits a jo...
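For reference, the mapred-site.xml change under discussion would look like this
(illustration only; with this set, pool names come from the submitting user's
group rather than their user name):

<property>
  <name>mapred.fairscheduler.poolnameproperty</name>
  <value>group.name</value>
</property>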
... error.
Can anyone help me in debugging this issue?
Thanks,
Praveenesh
On Tue, Feb 28, 2012 at 1:12 PM, praveenesh kumar wrote:
> Hi all,
>
> I am trying to use the hadoop eclipse plugin on my Windows machine to connect
> to my remote hadoop cluster. I am currently using putty to log in ...
> > Does each reducer get a unique key after the shuffling?
> > I am using Hadoop 0.20.205.0 and below is the command that I am using to
> > run hadoop streaming. Are there more options that I should specify
> > for hadoop streaming to work properly if I am using a custom separator?
> >
> > hadoop jar
> > $HADOOP_PREFIX/contrib/streaming/hadoop-streaming-0.20.205.0.jar
> > -D stream.mapred.output.field.separator=*
> > -D mapred.reduce.tasks=2
> > -mapper ./map.py
> > -reducer ./reducer.py
> > -file ./map.py
> > -file ./reducer.py
> > -input /user/inputdata
> > -output /user/outputdata
> > -verbose
> >
> > Any help is much appreciated,
> > Thanks,
> > Austin
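For comparison, a variant of that command using the separator options as the
streaming documentation of this line of releases spells them (an assumption to
verify against your version: stream.map.output.field.separator sets the
separator, and stream.num.map.output.key.fields says how many fields form the
key):

hadoop jar $HADOOP_PREFIX/contrib/streaming/hadoop-streaming-0.20.205.0.jar \
    -D stream.map.output.field.separator=* \
    -D stream.num.map.output.key.fields=1 \
    -D mapred.reduce.tasks=2 \
    -mapper ./map.py -reducer ./reducer.py \
    -file ./map.py -file ./reducer.py \
    -input /user/inputdata -output /user/outputdata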
... and in this case, however, it
didn't give me any error.
But now if I try to put this jar file in the /plugins folder and use
"eclipse -clean", I am not able to see the Hadoop map-reduce perspective.
Can someone please help me debug where I am making a mistake?
Thanks,
Praveenesh
OK, can I build it with Ant?
On Mon, Feb 27, 2012 at 12:49 PM, alo alt wrote:
hive?
You are then on the wrong list; for hive related questions refer to:
u...@hive.apache.org
--
Alexander Lorenz
http://mapredit.blogspot.com
On Feb 27, 2012, at 8:14 AM, hadoop hive wrote:
hey Alex,
Can't I use hive for that???
On Mon, Feb 27, 2012 at 12:29 PM, alo alt wrote:
After you have installed snappy you have to configure the codec like the URL I
posted before, or you can reference them in your MR jobs. Be sure you have the
jars in your classpath.
For storing snappy compressed files in HDFS you should use Pig or Flume.
--
Alexander Lorenz
http://mapredit.blogspot.com
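The codec settings being referred to look roughly like this in mapred-site.xml
(a sketch using the pre-YARN property names; verify against the page linked
above):

<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>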
Thanks Alex,
I am using Apache hadoop; the steps I followed:
1. untar snappy
2. entry in mapred-site
This can be used like deflate only (i.e. only on overwriting a file).
On Mon, Feb 27, 2012 at 11:50 AM, alo alt wrote:
> Hi,
>
> https://ccp.cloudera.com/display/CDHDOC/Snappy+Installation#SnappyInstallation-UsingSnappyforMapReduceCompression
Hi,
https://ccp.cloudera.com/display/CDHDOC/Snappy+Installation#SnappyInstallation-UsingSnappyforMapReduceCompression
best,
Alex
--
Alexander Lorenz
http://mapredit.blogspot.com
On Feb 27, 2012, at 7:16 AM, hadoop hive wrote:
Hey folks,
I am using hadoop 0.20.2 + r911707. Please tell me the installation steps and
how to use snappy for compression and decompression.
Regards,
Vikas Srivastava
Kyongho,
Can you confirm the files are indeed in HDFS?
Ron
On Mon, Feb 13, 2012 at 3:42 PM, Kyong-Ho Min wrote:
> Dear users,
>
> I tried to set up Hadoop following the Single Node docs.
> I ran the Pseudo-Distributed Operation under Cygwin and Windows 7.
> ssh localhost
> bin/hadoop namenode -format
Dear Guruprasad,

It would be very helpful to provide details from your configuration
files as well as more details on your setup.
It seems that the connection from slave to master cannot be
established ("Connection reset by peer").
Do you use a virtual environment, physical master/slaves, or all on one
machine?
Please paste also the output of the "kingul2" namenode logs.

Regards,

Robin

On 02/08/12 13:06, Guruprasad B wrote:
Hi,
I am Guruprasad from Bangalore (India). I need help in setting up the hadoop
platform. I am very new to the Hadoop platform.
I am following the articles given below, and I was able to set up a
"Single-Node Cluster":
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-s...
Yes, I have set up SSH properly on my Mac. I used ssh-keygen to generate a
public and private key pair and was able to ssh to my machine using "ssh
localhost". When I start Hadoop I see that it establishes an ssh connection to
localhost.
> Subject: Re: Help with Hadoop Eclipse Plugin
> ... org/apache/commons/configuration/Configuration
>
> From: seventeen_reas...@hotmail.com
> To: common-user@hadoop.apache.org
> Subject: RE: Help with Hadoop Eclipse Plugin on Mac OS X Lion
> Date: Fri, 2 Dec 2011 20:51:02 -0800
What version of Hadoop are you running on OS X Lion, and are you running the
32-bit or 64-bit version of Eclipse?
> Subject: Re: Help with Hadoop Eclipse Plugin on Mac OS X Lion
> From: jign...@websoft.com
> Date: Fri, 2 Dec 2011 14:37:28 -0500
> To: common-user@hadoop.apache.org
... any problems, and I would like to use my new laptop running OS X Lion.

The plugin is helpful in that I can see hadoop output being dumped to the
eclipse console, and it used to integrate well with the Eclipse IDE, making my
development life a little easier.

Thank you for your time and help.
Sincerely,
Will Lieu
> Date: Fri, 2 Dec 2011 21:44:36 +0530
> Subject: Re: Help with Hadoop Eclipse Plugin on Mac OS X Lion
> From: prashant.ii...@gmail.com
> To: common-user@hadoop.apache.org
>
> Why do ...

From: seventeen_reas...@hotmail.com
To: common-user@hadoop.apache.org
Subject: Help with Hadoop Eclipse Plugin on Mac OS X Lion
Date: Fri, 2 Dec 2011 00:26:28 -0800

Hello,
I am having problems getting my hadoop eclipse plugin to work on Mac OS X
Lion.
I have tried the following combinations ...
... the plugin to work on Mac OS X Lion?
Thank you for your time and help, I greatly appreciate it!
Sincerely,
Will
Thank you for your help.
I can use the /sbin/hadoop-daemon.sh {start|stop} {service} script to start a
namenode, but I can't start a resourcemanager.
2011/11/30 Harsh J
> I simply use the /sbin/hadoop-daemon.sh {start|stop} {service} script
> to control daemons at my end.
>
> Does ...
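For what it's worth, in the 0.23 layout the YARN daemons have their own script;
a sketch, assuming a standard tarball install:

# HDFS daemons
sbin/hadoop-daemon.sh start namenode
# YARN daemons use yarn-daemon.sh rather than hadoop-daemon.sh
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh stop resourcemanager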
On 30/11/11 04:29, Nitin Khandelwal wrote:
Thanks,
I missed the "sbin" directory, was using the normal bin directory.
Thanks,
Nitin
On 30 November 2011 09:54, Harsh J wrote:
Like I wrote earlier, it's in the $HADOOP_HOME/sbin directory, not the
regular bin/ directory.
On Wed, Nov 30, 2011 at ...
From: cat fa
Sent: 2011-11-30 10:28
To: common-user
Subject: Re: Re: [help]how to stop HDFS
In fact it's me who should say sorry. I used ...
> ... was based on the assumption that you have done a tar extract install,
> where all three distributions have to be extracted and the variables then
> exported. Also I have no experience with rpm based installs, so no comments
> about what went wrong in your case.
> From the error I can say that it is not able to find the jars
> needed on the classpath, which is referred to by the scripts through
> HADOOP_COMMON_HOME. I would say check the access permissions, as in
> which user it was installed with and which user it is running with.
>
> On Tue, Nov 29, 2011 at 10:48 PM, cat fa wrote:
Thank you for your help, but I'm still a little confused.
Suppose I installed hadoop in /usr/bin/hadoop/. Should I
point HADOOP_COMMON_HOME to /usr/bin/hadoop? Where should I
point HADOOP_HDFS_HOME? Also to /usr/bin/hadoop/?
2011/11/30 Prashant Sharma
> I mean, you have to ex...
From: cat fa
Date: 2011-11-29 20:22
To: common-user
Subject: Re: [help]how to stop HDFS
Use $HADOOP_CONF or $HADOOP_CONF_DIR? I'm using hadoop 0.23.
You mean which class? The class of hadoop or of java?
2011/11/29 Prashant Sharma
> Try making $HADOOP_CONF point to the right classpath, including your
> configuration folder.
>
> On Tue, Nov 29, 2011 at 3:58 PM, cat fa wrote:
Try making $HADOOP_CONF point to the right classpath, including your
configuration folder.
On Tue, Nov 29, 2011 at 3:58 PM, cat fa wrote:
> I used the command:
>
> $HADOOP_PREFIX_HOME/bin/hdfs start namenode --config $HADOOP_CONF_DIR
>
> to start HDFS.
>
> This command is in the Hadoop documentation (here ...