TubeMogul is one of them.
On Thu, Feb 23, 2012 at 11:00 AM, shreya@cognizant.com wrote:
Hi,
Could someone provide some links on Clickstream and video Analysis in
Hadoop.
Thanks and Regards,
Shreya Pal
Also, you will not necessarily need vertically scaled systems to speed things up (it totally depends on your query). Give a thought to commodity hardware (much cheaper); with Hadoop being well suited to it, *I hope* your infrastructure can come out cheaper in terms of price-to-performance ratio.
Having said that,
I think you have misunderstood something. AFAIK these variables are set automatically when you run a script; the name is obscure for some strange reason ;).
The "Warning: $HADOOP_HOME is deprecated" message is always there, whether the variable is set or not. Why? Because of the
export HADOOP_HOME=${HADOOP_PREFIX}
in hadoop-config.sh. Does that make any difference?
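As a sketch of why the warning is unconditional (the prefix path below is a placeholder, not a real install location): in 1.x, hadoop-config.sh effectively does the following before the deprecation check ever runs, so HADOOP_HOME is always set:

```shell
# What hadoop-config.sh in Hadoop 1.x effectively does, unconditionally;
# the prefix value here stands in for your actual install root.
HADOOP_PREFIX=/usr/local/hadoop
export HADOOP_HOME=${HADOOP_PREFIX}
echo "$HADOOP_HOME"
```

Reportedly, in 1.x the warning can be silenced by exporting HADOOP_HOME_WARN_SUPPRESS=1 before invoking the scripts.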
Thanks,
Praveenesh
On Wed, Feb 1, 2012 at 6:04 PM, Prashant Sharma prashan...@imaginea.com
wrote:
I think you have misunderstood something. AFAIK or understand these
variables are set
Praveenesh,
Well, it gives you more convenience :). If you have worked with R, you might notice that with R you can write a mapper as an lapply (using rmr). They have already abstracted a lot of the stuff for you, so you have less control over things. But still, as far as convenience is concerned, it's damn convenient.
Edmon,
I made some effort but eventually got bored for lack of interest. I think I made some progress, and perhaps you can take it forward from there in MAPREDUCE-3131 https://issues.apache.org/jira/browse/MAPREDUCE-3131 . I am ready to help in case there is anything I can do. Also, it works perfectly for a
Why do you need a plugin at all?
You can do away with it by having a Maven project, i.e. a pom.xml with Hadoop set as one of the dependencies, and then using regular Maven commands to build etc. For example, mvn eclipse:eclipse would be an interesting command.
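As a rough sketch (the group/artifact names for your own project and the Hadoop version are illustrative; pick the release you actually run against), a minimal pom.xml along these lines is enough for mvn eclipse:eclipse to generate the Eclipse project files:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>hadoop-job</artifactId>
  <version>1.0-SNAPSHOT</version>
  <dependencies>
    <!-- classic mapred/mapreduce APIs; match the version to your cluster -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <version>0.20.203.0</version>
    </dependency>
  </dependencies>
</project>
```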
On Fri, Dec 2, 2011 at 1:59 PM, Will L
Nice to know, Will. As I said, you have the same luxury as long as you are running in stand-alone mode, which is ideal for development.
On Fri, Dec 2, 2011 at 10:02 PM, Will L seventeen_reas...@hotmail.comwrote:
I got the setup working under my laptop running OS X Snow Leopard without
Try making $HADOOP_CONF point to the right classpath, including your configuration folder.
On Tue, Nov 29, 2011 at 3:58 PM, cat fa boost.subscrib...@gmail.com wrote:
I used the command :
$HADOOP_PREFIX_HOME/bin/hdfs start namenode --config $HADOOP_CONF_DIR
to start HDFS.
This command is in
To: common-user
Subject: Re: [help]how to stop HDFS
use $HADOOP_CONF or $HADOOP_CONF_DIR ? I'm using hadoop 0.23.
You mean which class? The class of Hadoop or of Java?
2011/11/29 Prashant Sharma prashant.ii...@gmail.com
Try making $HADOOP_CONF point to right classpath
2011/11/30 Prashant Sharma prashant.ii...@gmail.com
I mean, you have to export the variables:
export HADOOP_CONF_DIR=/path/to/your/configdirectory
Also export HADOOP_HDFS_HOME and HADOOP_COMMON_HOME before you run your command. I suppose this should fix the problem.
-P
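A minimal sketch of that environment setup, assuming a 0.23-style split layout (all three paths below are placeholders; substitute your actual install locations):

```shell
# Point Hadoop at the config directory and the split common/hdfs installs.
# These paths are illustrative, not real install locations.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_COMMON_HOME=/opt/hadoop/common
export HADOOP_HDFS_HOME=/opt/hadoop/hdfs

# With the environment in place, the daemon can then be started, e.g.:
# $HADOOP_HDFS_HOME/bin/hdfs namenode --config $HADOOP_CONF_DIR
echo "$HADOOP_CONF_DIR"
```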
On Tue, Nov 29
Yes, the Pallet library. https://github.com/pallet/pallet-hadoop-example
On Wed, Nov 30, 2011 at 1:58 AM, Periya.Data periya.d...@gmail.com wrote:
Hi All,
I am just beginning to learn how to deploy a small cluster (a 3
node cluster) on EC2. After some quick Googling, I see the following
Can you check your userlogs/xyz_attempt_xyz.log and also the jobtracker and datanode logs?
-P
On Tue, Nov 29, 2011 at 4:17 AM, Nitika Gupta ngu...@rocketfuelinc.comwrote:
Hi All,
I am trying to run a mapreduce job to process the Amazon S3 logs.
However, the code hangs at INFO mapred.JobClient:
Please see my mail on common-dev.
Also, please don't send the same mail to all the mailing lists; be patient and wait for people to reply.
On Sat, Nov 26, 2011 at 6:35 PM, madhu_sushmi madhu_sus...@yahoo.comwrote:
Hi,
I need to implement distributed sorting using Hadoop. I am quite new to
Hadoop and I am
It won't be that easy, but it's possible to write.
I did something like this:
$HADOOP_HOME/bin/hadoop fs -rmr `$HADOOP_HOME/bin/hadoop fs -ls | grep '.*2011.11.1[1-8].*' | cut -f 19 -d \ `
Notice the space after -d \ (the delimiter is a single space character).
-P
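A sketch of that cut trick on a canned `hadoop fs -ls`-style line (the sample line and its spacing are made up; the field index depends on the exact column widths your release prints, so adjust -f accordingly):

```shell
# A sample listing line; the runs of spaces matter, because cut with -d ' '
# treats every single space as a delimiter and runs of spaces yield empty fields.
line='drwxr-xr-x   - user supergroup          0 2011-11-14 10:22 /logs/2011.11.14'

# In this sample, the path happens to land in field 19.
echo "$line" | cut -f 19 -d ' '
```

Unlike awk, cut does not collapse repeated delimiters, which is why the field number is so much larger than the visible column count.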
On Sat, Nov 26, 2011 at 8:46 PM, Uma Maheswara Rao G
mahesw...@huawei.comwrote:
Some code in Hadoop, as in? Well, you can read
http://svn.apache.org/repos/asf/hadoop/common/trunk/BUILDING.txt
Basically, to build the entire repo and make distributions:
mvn clean package -Pdist -Dtar -DskipTests
You will find all the jars/tars etc. there.
On Thu, Nov 17, 2011 at 3:57 PM, seven garfee
Jay,
And if you are willing to work on the trunk version, you might want to compile the documentation using mvn site:site and then follow the guide.
-P
On Fri, Nov 18, 2011 at 3:11 AM, GOEKE, MATTHEW (AG/1000)
matthew.go...@monsanto.com wrote:
Jay,
Did you download stable (0.20.203.X) or 0.23?
Richard and Ramon
Yes, I think there should be a way. As you can see, there is a class named JobClient in org.apache.hadoop.mapred which is basically what gets invoked from the command line; if you open the hadoop shell script, my point will be clearer.
Also, I suggest you take a look at Oozie; there, using Java APIs, you
Hi Mathias,
I wrote a small introduction, a quick ramp-up for starting out with Hadoop, while learning it at my institute.
http://functionalprograming.files.wordpress.com/2011/07/hadoop-2.pdf
thanks
-P
On Mon, Oct 31, 2011 at 6:44 PM, Mathias Herberts
mathias.herbe...@gmail.com wrote:
Hi,