On Fri, May 3, 2013 at 12:15 AM, mouna laroussi
wrote:
> Hi,
>
> I want to configure my Hadoop in the pseudo-distributed mode.
> When I arrive at the step to format the namenode, I find at the web page 50070:
> "There are no namenodes in the cluster."
> What should I do?
> Is there any path to change?
After formatting the NN, start the daemons using "bin/start-dfs.sh" and
"bin/start-mapred.sh". If it still doesn't work, show us the logs.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
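For reference, the usual first-start sequence on a Hadoop 1.x pseudo-distributed setup looks roughly like the following (a hedged sketch; HADOOP_HOME and the script names assume the standard 1.x tarball layout):

```shell
# Assumes a Hadoop 1.x tarball layout; adjust HADOOP_HOME to your install.
cd "$HADOOP_HOME"

# Format the NameNode ONCE before the first start; re-running this later
# wipes the HDFS metadata.
bin/hadoop namenode -format

# Start the HDFS daemons (NameNode, DataNode, SecondaryNameNode) ...
bin/start-dfs.sh
# ... and the MapReduce daemons (JobTracker, TaskTracker).
bin/start-mapred.sh

# Verify all five daemons are up before checking the web UI on port 50070.
jps
```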
On Fri, May 3, 2013 at 10:29 PM, Nitin Pawar wrote:
> once you format the namenode, it w
Once you format the namenode, it will need to be started again for normal use.
On Fri, May 3, 2013 at 12:45 PM, mouna laroussi wrote:
> Hi,
>
> I want to configure my Hadoop in the pseudo-distributed mode.
> When I arrive at the step to format the namenode, I find at the web page
> 50070
Hi,
I want to configure my Hadoop in the pseudo-distributed mode.
When I arrive at the step to format the namenode, I find at the web page 50070:
"There are no namenodes in the cluster."
What should I do?
Is there any path to change?
Thanks
--
LAROUSSI Mouna
Software Engineering student - INSAT
Does anyone have a similar experience? Any suggestion is welcome. I have been
stuck here for a week now...
Thanks,
Alice
On Sun, Apr 28, 2013 at 5:38 PM, Xun TANG wrote:
> According to this link
> http://wiki.apache.org/hadoop/HowToDebugMapReducePrograms
>
> I am trying to find out where the downlink.data file
go for ext3 or ext4
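Formatting and mounting the new drive might look like this (a sketch only; the device name /dev/vdb is from the question, while the mount point /data/dfs is a hypothetical choice):

```shell
# WARNING: mkfs destroys any data already on the device.
sudo mkfs.ext4 /dev/vdb

# Mount it at a directory the datanode can later use as a storage dir.
sudo mkdir -p /data/dfs
sudo mount /dev/vdb /data/dfs

# Persist the mount across reboots.
echo '/dev/vdb /data/dfs ext4 defaults 0 2' | sudo tee -a /etc/fstab
```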
On Fri, May 3, 2013 at 8:32 AM, Joarder KAMAL wrote:
> Hi,
>
> I have a running HDFS cluster (Hadoop/HBase) consisting of 4 nodes, and the
> initial hard disk (/dev/vda1) size is only 10G. Now I have a second hard
> drive /dev/vdb of 60GB size and want to add it into my existi
Thanks Yanbo. My doubt is clarified now.
On Fri, May 3, 2013 at 2:38 PM, Yanbo Liang wrote:
> Loading data to different partitions in parallel is OK, because it is
> equivalent to writing to different files on HDFS.
>
>
> 2013/5/3 selva
>
>> Hi All,
>>
>> I need to load a month worth of processed dat
Loading data to different partitions in parallel is OK, because it is
equivalent to writing to different files on HDFS.
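A minimal sketch of loading several daily partitions in parallel from the shell (the table name events, the partition column dt, and the staging paths are all hypothetical):

```shell
# Launch one LOAD per day in the background; each targets its own partition,
# so the writes go to different HDFS directories and do not conflict.
for day in 2013-04-01 2013-04-02 2013-04-03; do
  hive -e "LOAD DATA INPATH '/staging/$day' INTO TABLE events PARTITION (dt='$day')" &
done
wait  # block until every background load has finished
```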
2013/5/3 selva
> Hi All,
>
> I need to load a month worth of processed data into a hive table. Table
> have 10 partitions. Each day have many files to load and each file is
> taking two se
You probably need to be using a release that has
https://issues.apache.org/jira/browse/MAPREDUCE-3678 in it. It will
print the input split in the task logs, therefore letting you know
what it processed at all times (so long as the input split types, such
as file splits, have intelligible outputs f
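With that fix in place, each map attempt's syslog records the split it was given, so a grep over the TaskTracker's log directory can recover it (the log path assumes a Hadoop 1.x layout; the exact message text may vary by version):

```shell
# Each map attempt's syslog should contain a "Processing split: ..." line
# naming the file, offset, and length that attempt was assigned.
grep -r "Processing split" "$HADOOP_HOME"/logs/userlogs/job_*/attempt_*/syslog
```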
You can change the setting of dfs.data.dir in hdfs-site.xml if your version
is 1.x:

<property>
  <name>dfs.data.dir</name>
  <value>/usr/hadoop/tmp/dfs/data,/dev/vdb</value>
</property>
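After adding the new directory to dfs.data.dir on a node, the datanode has to be restarted to pick it up; on Hadoop 1.x that is roughly the following (a sketch, run from $HADOOP_HOME on the affected node):

```shell
# Restart only the local datanode; the rest of the cluster keeps running.
bin/hadoop-daemon.sh stop datanode
bin/hadoop-daemon.sh start datanode

# Confirm the added capacity is visible cluster-wide.
bin/hadoop dfsadmin -report
```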
2013/5/3 Joarder KAMAL
> Hi,
>
> I have a running HDFS cluster (Hadoop/HBase) consisting of 4 nodes, and the
> initial hard disk (/dev/vda1) si
Hi,
I have a 3-node cluster, with the JobTracker running on one machine and
TaskTrackers on the other two. Instead of using HDFS, I have written my own
FileSystem implementation. I am able to run a MapReduce job on this cluster, but
I am not able to make out from the logs or the TaskTracker UI which data sets