Try exporting HADOOP_HOME in the launching user's environment to the
right path, on every one of your slave nodes where there is no such
symlink. Then try starting everything again.
On Tue, Jan 31, 2012 at 2:57 AM, Mohamed Riadh Trad
wrote:
> Hi,
>
> I upgraded my cluster to Hadoop 1.0.0; however, hd
Thanks a lot for all the responses. :)
On Tue, Jan 31, 2012 at 5:48 AM, Matei Zaharia wrote:
> Spark (http://www.spark-project.org) aims to provide a higher-level
> programming interface as well as higher performance than Hadoop.
>
> Matei
>
> On Jan 30, 2012, at 2:24 PM, Ronald Petty wrote:
>
> R
This is a somewhat late announcement, but I thought it might be interesting to
people on this list. We're holding the first user meetup for Spark
(www.spark-project.org), the in-memory cluster computing framework that lets
you do interactive and iterative data mining on Hadoop data, in San Franc
Spark (http://www.spark-project.org) aims to provide a higher-level programming
interface as well as higher performance than Hadoop.
Matei
On Jan 30, 2012, at 2:24 PM, Ronald Petty wrote:
> R.V.,
>
> Are you looking for the platforms that do distributed computation or the
> larger ecosystems
Sector-Sphere
On Mon, Jan 30, 2012 at 4:24 PM, Ronald Petty wrote:
> R.V.,
>
> Are you looking for the platforms that do distributed computation or the
> larger ecosystems like programming APIs, etc.?
>
> Here are some platforms:
>
> C-Squared
> Globus
> Condor
>
> Here are some libraries:
>
>
R.V.,
Are you looking for the platforms that do distributed computation or the
larger ecosystems like programming APIs, etc.?
Here are some platforms:
C-Squared
Globus
Condor
Here are some libraries:
MPI/PVM
Kindest regards.
Ron
On Mon, Jan 30, 2012 at 5:47 AM, real great..
wrote:
> Thank
Luiz,
What is in this file: hdfs://10.22.1.2:54310/user/hadoop/out.ivory.small ?
Kindest regards.
Ron
On Mon, Jan 30, 2012 at 7:24 AM, Luiz Antonio Falaguasta Barbosa <
lafbarb...@gmail.com> wrote:
> Hi Ronald,
>
> I didn't try to run it locally. I used a cluster in the university where I
> study
Hi,
I upgraded my cluster to Hadoop 1.0.0; however, HDFS fails to start and I
get the following message:
###
starting namenode, logging to
/home/local/trad/hadoop/cluster/hadoop-1.0.0.dfs/bin/../logs/hadoop-trad-namenode-master_dfs.out
slave001: /home/local/t
Thanks. Are there any discussions/pages about this new architecture? What
is the difference?
Zhu, Guojun
Modeling Sr Graduate
571-3824370
guojun_...@freddiemac.com
Financial Engineering
Freddie Mac
Yang
01/30/2012 01:24 PM
org.apache.hadoop.mapreduce (with its lib.input subpackage) is the new
API; org.apache.hadoop.mapred is kept for legacy backward compatibility.
If you use .mapreduce, you should use the classes under mapreduce.lib.
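For illustration (this sketch is not from the original message), a minimal
custom InputFormat written against the new API could look like the
following; the class name is made up and it simply delegates to the stock
LineRecordReader:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Illustrative class; note that every import comes from
// org.apache.hadoop.mapreduce, not org.apache.hadoop.mapred.
public class MyLineInputFormat extends FileInputFormat<LongWritable, Text> {
  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context)
      throws IOException, InterruptedException {
    // Delegate to the stock line reader; a real custom format would
    // return its own RecordReader implementation here.
    return new LineRecordReader();
  }
}

Mixing the two packages in one job (for example, passing a
mapred.FileInputFormat to a mapreduce.Job) will not compile, which is
usually how the confusion shows up.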
On Mon, Jan 30, 2012 at 10:21 AM, GUOJUN Zhu wrote:
>
> Hi,
>
> I am learning Hadoop now. I am trying to write a customized inputformat.
> I found out that
Hi,
I am learning Hadoop now. I am trying to write a customized InputFormat.
I found out that there are two FileInputFormats, one in
org.apache.hadoop.mapred and one in org.apache.hadoop.mapreduce.lib.input. Both
look the same, and so do a few other classes, such as FileSplit. I am
confu
We would like to announce that YSmart Release 12.01 (effectively
version 0.1) is available. YSmart is software that translates an SQL
query into Hadoop Java programs. Compared to other existing
SQL-to-MapReduce translators, YSmart has the following advantages:
(1) High Performance: YSmart can dete
Hi Ronald,
I didn't try to run it locally. I used a cluster in the university where I
study.
The Eclipse console returns the following:
2012-01-28 10:34:54.450 java[22689:1903] Unable to load realm info from
SCDynamicStore
12/01/28 10:34:58 INFO mapred.FileInputFormat: Total input paths to p
How about GridGain? Not sure about its liveliness, though.
Regards,
Christoph
-----Original Message-----
From: real great.. [mailto:greatness.hardn...@gmail.com]
Sent: Monday, 30 January 2012 14:48
To: mapreduce-user@hadoop.apache.org; ashwanthku...@googlemail.com
Subject: Re: Other t
Thanks! Please do pour in replies, as I am trying to make a survey of
related technologies.
On Mon, Jan 30, 2012 at 7:14 PM, Ashwanth Kumar <
ashwanthku...@googlemail.com> wrote:
> If you're trying something like real-time processing, try Storm (
> https://github.com/nathanmarz/storm)
>
> - Ashwant
If you're trying something like real-time processing, try Storm (
https://github.com/nathanmarz/storm)
- Ashwanth
On Mon, Jan 30, 2012 at 7:11 PM, real great..
wrote:
> Hi,
> an off-topic question, but apart from Hadoop, which are the other
> distributed computing platforms?
>
> --
> Regards,
Hi,
an off-topic question, but apart from Hadoop, which are the other
distributed computing platforms?
--
Regards,
R.V.
Thanks a lot. I am looking at it.
Regards,
Oliaei.
On Mon, Jan 30, 2012 at 3:47 PM, Ioan Eugen Stan wrote:
> On 30.01.2012 14:10, Hamid Oliaei wrote:
>
> Hi,
>>
>> I want to send some data and messages to all nodes after I run an MR job,
>> then begin another job.
>> Is there any straight wa
On 30.01.2012 14:10, Hamid Oliaei wrote:
Hi,
I want to send some data and messages to all nodes after I run an MR job,
then begin another job.
Is there any straightforward way to broadcast data under the Hadoop framework?
See if DistributedCache [1] can do what you need. It makes application
specifi
Hi,
I want to send some data and messages to all nodes after I run an MR job,
then begin another job.
Is there any straightforward way to broadcast data under the Hadoop framework?
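To make the DistributedCache suggestion above concrete, here is a minimal
sketch; it is not from the original messages, it assumes the Hadoop 1.x
new API, and the file path and class names are placeholders:

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;

public class BroadcastSketch {

  public static class MyMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void setup(Context context) throws IOException {
      // Every task finds a local copy of the cached file on its own node.
      Path[] cached = DistributedCache.getLocalCacheFiles(context.getConfiguration());
      // ... load cached[0] into memory for use in map()
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "broadcast-sketch");
    job.setJarByClass(BroadcastSketch.class);
    job.setMapperClass(MyMapper.class);
    // Ship a small HDFS file (placeholder path) to every node before the tasks start.
    DistributedCache.addCacheFile(new URI("/user/hadoop/broadcast.dat"),
        job.getConfiguration());
    // ... set input/output paths and formats, then:
    job.waitForCompletion(true);
  }
}

The cached file is read-only on the workers, so this fits the "broadcast,
then begin another job" pattern: anything the next job needs can be
written to HDFS by the first job and added to the cache the same way.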
On 30.01.2012 07:47, aliyeh saeedi wrote:
I want to save them with my own names. How will the NameNode keep their names?
From: Joey Echeverria
To: mapreduce-user@hadoop.apache.org; aliyeh saeedi
Sent: Sunday, 29 January 2012, 17:10
Subject: Re: reducers o