Hi,
I'm trying to create a unique identifier for the node I am sitting
on, no matter whether it is a VM or bare metal. I've been scanning the API but
have not found anything that provides this.
Anyone have an idea?
Cheers and thanks,
Peter
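Not something the Hadoop API gives you directly, but one common trick is to derive an identifier from the primary NIC's MAC address, falling back to the hostname. A minimal sketch (the class name `NodeId` is mine, not anything in Hadoop):

```java
import java.net.InetAddress;
import java.net.NetworkInterface;

public class NodeId {
    // Derive a node identifier from the primary interface's MAC address,
    // falling back to the hostname when no hardware address is available
    // (e.g. on some VMs or loopback-only hosts).
    public static String nodeId() throws Exception {
        InetAddress local = InetAddress.getLocalHost();
        NetworkInterface nic = NetworkInterface.getByInetAddress(local);
        if (nic != null && nic.getHardwareAddress() != null) {
            StringBuilder sb = new StringBuilder();
            for (byte b : nic.getHardwareAddress()) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }
        return local.getHostName();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(nodeId());
    }
}
```

Note that a MAC address is not guaranteed stable on cloned VMs either, so whether this counts as "unique enough" depends on your deployment.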
As announced at ApacheCon US 09, the next Apache Hadoop Get Together
Berlin is scheduled for next Wednesday:
When: Wednesday December 16, 2009 at 5:00pm
Where: newthinking store, Tucholskystr. 48, Berlin
Talks scheduled so far:
Richard Hutton (nugg.ad): Moving from five days to one hour. -
samuellawrence wrote:
Hi,
I have to start the Hadoop environment using Java code (in-process). I would
like to use the APIs to start it.
Could anyone please give me a snippet or a link.
Hi
1. I've been starting/stopping Hadoop with SmartFrog, in-JVM. Email me
directly and I will point you at
Greets,
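Another route to an in-process HDFS from Java is the MiniDFSCluster class that Hadoop's own tests use; it ships in the hadoop-test jar, so this is a sketch rather than a drop-in (the constructor signature below matches the 0.20 line and has varied between releases):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Sketch: spin up a one-DataNode HDFS inside the current JVM.
// Requires the hadoop-core and hadoop-test jars on the classpath.
public class InProcessHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // args: conf, number of DataNodes, format the filesystem, rack names
        MiniDFSCluster cluster = new MiniDFSCluster(conf, 1, true, null);
        try {
            FileSystem fs = cluster.getFileSystem();
            System.out.println("HDFS up at " + fs.getUri());
        } finally {
            cluster.shutdown();
        }
    }
}
```

MiniDFSCluster is aimed at tests, not production, so treat it as a way to embed Hadoop for development rather than a supported deployment mode.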
Does anyone run Hadoop without SSH?
Windows/Vista has a lot of problems with Cygwin and SSHD. Unless the
phase of the moon is just right and you have a magic rabbit's foot, it
just doesn't work. I've spent much time trying to fix it just so I can
do some Hadoop development.
You don't
Greetings,
I would like to let everyone know that the next Hadoop DC User Group
Meetup is scheduled for Tuesday, December 15th, 2009 from 6:30 - 8:30 at UMD
campus. Please take a look at the agenda below for details. I hope to see
you there, please RSVP here:
You basically can't use the out-of-the-box start/stop scripts when you have
multiple DN or TT processes per node. You'll need to hack them to support
multiple confs.
On 12/6/09 10:28 PM, Yuzhe Tang tristar...@gmail.com wrote:
Hi, I am setting up hadoop clusters. How can I configure system
Hi Horson,
Quite unfortunately, there is no documentation available for the Pipes
API. It's not just that; the API itself is quite weak and unstable.
Only a few of the examples in the Pipes distribution work, and it appears
very few people use Hadoop Map/Reduce through the Pipes API. I have
myself been
Thanks. It helps.
-Gang
----- Original Message -----
From: Amogh Vasekar am...@yahoo-inc.com
To: common-user@hadoop.apache.org common-user@hadoop.apache.org
Sent: 2009/12/7 (Mon) 12:43:07 AM
Subject: Re: Re: return in map
Hi,
If the file doesn’t exist, Java will error out.
For partial skips,
try this
http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_%28Multi-Node_Cluster%29
On Sat, Dec 5, 2009 at 7:33 PM, Yuzhe Tang tristar...@gmail.com wrote:
Hi All,
I am trying to set up a Hadoop cluster. I have started HDFS on one machine
and MapReduce on the other.
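When the HDFS and MapReduce masters live on different machines, every node's configuration has to point at both. A sketch of the relevant 0.20-era hadoop-site.xml entries, with `namenode-host` and `jobtracker-host` as placeholder hostnames (the ports shown are conventional, not mandatory):

```xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode-host:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>jobtracker-host:9001</value>
</property>
```

If a TaskTracker or client has only one of these set, jobs will start but fail to find either the filesystem or the JobTracker, which matches the symptom of running the two daemons on separate machines with default configs.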
Hello. My name is Alex Levin and I am the COO of Brilig (www.brilig.com), a
startup in New York focused on the online advertising space. We are looking
to hire a Hadoop / Data Migration Specialist to play a crucial role in
converting new clients' data onto Brilig's service platforms. We are
Precisely. (Check for a covert 'tardis' for the acct.) HAL
--Original Message--
From: Habermaas, William
To: common-user@hadoop.apache.org
ReplyTo: common-user@hadoop.apache.org
Subject: Why DrWho
Sent: Dec 7, 2009 3:30 PM
I am running Hadoop-0.20.1 on a Solaris box with dfs.permissions
Two days ago I ran into the same problem: Java could not allocate enough
memory to launch the process.
After that I played with the -Xmx options for these variables in
hadoop-env.sh, and now they are:
export HADOOP_HEAPSIZE=1000
export HADOOP_NAMENODE_OPTS="-Xmx612m -Dcom.sun.management.jmxremote"
On 12/2/09 12:22 PM, Vasilis Liaskovitis vlias...@gmail.com wrote:
Hi,
I am using hadoop-0.20.1 to run terasort and randsort benchmarking
tests on a small 8-node Linux cluster. Most runs show rather low (~50%)
core utilization in the map and reduce phases, as well as
heavy I/O
Hello. My name is Alex Levin and I am the COO of Brilig, a technology
startup in New York focusing on the online advertising industry. We are
currently in need of a Hadoop developer for a key client services position
in our fast and exciting company. For more information, please see the full
On Solaris, you may also want to change:
export HADOOP_IDENT_STRING=`/usr/xpg4/bin/id -u -n`
On 12/7/09 4:42 PM, pavel kolodin pavelkolodinhad...@gmail.com wrote:
Two days ago I ran into the same problem: Java could not allocate enough
memory to launch the process.
After that I played with
On Dec 7, 2009, at 10:44 AM, Prakhar Sharma wrote:
Quite unfortunately, there is no documentation available for the Pipes
API. It's not just that; the API itself is quite weak and unstable.
*sigh* I agree that there should be more documentation. I'd love it if
someone could write some up and
On Dec 7, 2009, at 10:05 AM, horson wrote:
I want to write a file to HDFS, using Hadoop Pipes. Can anyone tell
me how
to do that?
You either use a Java OutputFormat, which is the easiest, or you use
libhdfs to write to HDFS from C++.
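For the Java OutputFormat route, what the OutputFormat ultimately does is write through the FileSystem API; a sketch of doing that directly (the path is illustrative, and this needs the hadoop-core jar plus a reachable cluster config):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: write a file into HDFS through the FileSystem API, which is
// what a Java OutputFormat does for you under the hood.
public class HdfsWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up hadoop-site.xml
        FileSystem fs = FileSystem.get(conf);
        Path out = new Path("/tmp/pipes-output.txt"); // illustrative path
        FSDataOutputStream stream = fs.create(out);
        try {
            stream.writeBytes("hello from pipes\n");
        } finally {
            stream.close();
        }
    }
}
```

From the C++ side of a Pipes task the equivalent is libhdfs, which wraps this same FileSystem API over JNI.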
I looked at the Hadoop Pipes source and it looked
Hi, guys,
First of all, I have added this section to hadoop-site.xml:
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
Secondly, I am running on the EC2 Hadoop clusters using Apache distribution,
and I have modified the
hadoop-ec2-init-remote.sh in the
Oops, 2048 instead of 1024 did it, even though on my machine 1024 was enough.
But that's not such a big puzzle.
On Mon, Dec 7, 2009 at 6:26 PM, Mark Kerzner markkerz...@gmail.com wrote:
Hi, guys,
First of all, I have added this section to hadoop-site.xml:
<property>
Hi,
I used to be able to create a Java project using the build.xml file that comes
with the Hadoop distribution. It seems that the 0.20 version of Hadoop uses some
Ivy-related machinery, and now when I tried to create a project using the Ant build
file, Eclipse gave me an error about a problem setting the
If it is Hadoop 0.20, the files to modify are core-site.xml, hdfs-site.xml and
mapred-site.xml, while the default configs are in
core-default.xml, hdfs-default.xml and mapred-default.xml.
Otherwise, are you saying that providing -D works with the same memory but not
via config?
If not, for
If not, for
Dear All,
Can anybody please let me know about some of the current features of
hadoop on which development work is going on / or planning to go in
future, like :
1. Record append
2. Snapshot
3. Erasure coding
Etc.
Thanks and Best Regards,
Krishna Kumar
On Mon, Dec 7, 2009 at 10:58 PM, Krishna Kumar krishna.ku...@nechclst.in wrote:
Dear All,
Can anybody please let me know about some of the current features of
hadoop on which development work is going on / or planning to go in
future, like :
1. Record append
Not implemented and
Hi,
I am facing some problems with using distributed cache archive with Pipes
job. In my configuration file I have the following two properties:
<property>
  <name>mapred.create.symlink</name>
  <value>yes</value>
</property>
<property>
  <name>mapred.cache.archives</name>
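For reference, a complete pair of these properties usually looks like the sketch below; the archive URI and the `mylib` symlink name are made-up placeholders. The `#name` suffix on the URI is what controls the symlink created in the task's working directory, which is why mapred.create.symlink has to be set to yes:

```xml
<property>
  <name>mapred.create.symlink</name>
  <value>yes</value>
</property>
<property>
  <name>mapred.cache.archives</name>
  <value>hdfs://namenode-host:9000/cache/mylib.zip#mylib</value>
</property>
```

A Pipes task can then refer to the unpacked archive via the `mylib` symlink relative to its working directory.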