1) Java manages memory differently from 'malloc'. Look up 'Java garbage
collection' to find out how it all works.
2) Is this a 32-bit kernel? Or a 32-bit Java? Those top out at about 2.1 GB of
address space. You need to run with a 64-bit kernel and a 64-bit JVM to get real
work done with Hadoop.
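A quick way to check which JVM you are on is to print its architecture properties; a minimal sketch (note that `sun.arch.data.model` is HotSpot-specific and may be absent on other JVMs, hence the `os.arch` fallback):

```java
// Prints the JVM's data model ("32" or "64" on HotSpot) and CPU arch.
// sun.arch.data.model is a HotSpot-specific property; os.arch is standard.
public class JvmBits {
    public static void main(String[] args) {
        String model = System.getProperty("sun.arch.data.model", "unknown");
        String arch = System.getProperty("os.arch");
        System.out.println("data model: " + model + ", os.arch: " + arch);
    }
}
```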
- Original Message -
This is a Hadoop benchmark suite. You can decide which benchmarks match your
needs.
https://github.com/intel-hadoop/hibench
(Haven't used it yet!)
- Original Message -
| From: Brian Bockelman bbock...@cse.unl.edu
| To: common-user@hadoop.apache.org
| Sent: Tuesday, October 23, 2012
Try Google. It's not the correct place to ask this question.
Regards,
Mohammad Tariq
On Fri, Oct 26, 2012 at 1:44 PM, suneel hadoop suneel.bigd...@gmail.comwrote:
Hi All,
Does anyone have interview questions on Hadoop+hbase+pig+hive..
Just wanted to see how the questions are..
We are using NFS for shared storage. Can we use the Linux nfslock service to
implement IO fencing?
2012/10/26 Steve Loughran ste...@hortonworks.com
On 25 October 2012 14:08, Todd Lipcon t...@cloudera.com wrote:
Hi Liu,
Locks are not sufficient, because there is no way to enforce a lock in a
Hive: Know SQL internals: how joins work, data structures and disk
algorithms, etc., and how those would be implemented in MapReduce. Know
what a projection, aggregation, etc. is.
Hadoop: Know how terasort works, know how word count works, and know
why Java serialization is non-ideal.
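The shape of word count mentioned above can be sketched in plain Java, with no Hadoop dependencies; the method names here are illustrative, and only the map/shuffle/reduce data flow mirrors the real job:

```java
import java.util.*;
import java.util.stream.*;

// Plain-Java sketch of the word-count data flow: a map phase emits
// (word, 1) pairs, then a shuffle groups by key and a reduce sums counts.
// No Hadoop APIs are used; this only illustrates the shape of the job.
public class WordCountSketch {
    // "Map": split a line into (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\W+"))
                .filter(w -> !w.isEmpty())
                .map(w -> Map.entry(w, 1))
                .collect(Collectors.toList());
    }

    // "Shuffle + reduce": group pairs by word and sum the counts.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        return pairs.stream().collect(Collectors.groupingBy(
                Map.Entry::getKey, Collectors.summingInt(Map.Entry::getValue)));
    }

    public static void main(String[] args) {
        List<String> lines = List.of("the quick brown fox", "the lazy dog");
        List<Map.Entry<String, Integer>> pairs = lines.stream()
                .flatMap(l -> map(l).stream()).collect(Collectors.toList());
        System.out.println(reduce(pairs));  // "the" counted across both lines
    }
}
```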
Hi All,
I am trying to run a Hadoop cluster but the TaskTracker is not running.
I have a cluster of two machines:
1st machine: NameNode + DataNode
2nd machine: DataNode.
here is the TaskTracker's Log file.
2012-10-26 15:45:35,405 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG:
Does anyone know how to unsubscribe from this group? Even Google search is
futile.
All methods to unsubscribe suggested earlier have failed.
NFS Locks typically last forever if you disconnect abruptly. So they are
not sufficient -- your standby wouldn't be able to take over without manual
intervention to remove the lock.
If you want to build an unreliable system that might corrupt your data, you
could set up 'shell(/bin/true)' as a
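For reference, the fencing methods under discussion are configured in hdfs-site.xml via `dfs.ha.fencing.methods`; a minimal sketch (the key path is a placeholder):

```xml
<!-- hdfs-site.xml sketch: sshfence kills the old active NameNode over
     SSH. Using shell(/bin/true) alone would be the unsafe setup warned
     about above. The private-key path is a placeholder. -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/hdfs/.ssh/id_rsa</value>
</property>
```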
Gents,
Need to share with you my embarrassment... Solved this issue. How?
Well, while following the installation instructions I thought I had installed all
the daemons, but after checking the init.d folder I could not find the
hadoop-hdfs-datanode script, so (thinking I accidentally deleted it) I
Gents,
1.
- do you put the master node's hostname under fs.default.name in core-site.xml on
the slave machines, or the slaves' hostnames?
- do you need to run sudo -u hdfs hadoop namenode -format and create /tmp and
/var folders on the HDFS of the slave machines that will be running only DN and
TT, or not?
On 26 October 2012 15:37, Todd Lipcon t...@cloudera.com wrote:
NFS Locks typically last forever if you disconnect abruptly. So they are
not sufficient -- your standby wouldn't be able to take over without manual
intervention to remove the lock.
+1. This is why you are told to mount your
On 25 October 2012 23:17, Daniel Käfer d.kae...@hs-furtwangen.de wrote:
On Thursday, 25.10.2012 at 22:10 +0100, Steve Loughran wrote:
Regarding storing DB data, HBase-on-HDFS is where people keep it; Pig
and Hive can work with that as well as rawer data kept in HDFS
directly
But is
On Fri, Oct 26, 2012 at 9:40 AM, Kartashov, Andy andy.kartas...@mpac.ca wrote:
Gents,
We're not all male here. :) I prefer "Hadoopers" or "hi all".
1.
- do you put Master's node hostname under fs.default.name in core-site.xml
on the slave machines or slaves' hostnames?
Master. I have a few
questions:
1) Have you set up passwordless ssh between both hosts for the user
who owns the Hadoop processes (or root)?
2) If the answer to question 1 is yes, how did you start the NN, JT, DN and TT?
3) If you started them one by one, there is no reason running a
command on one node will execute it on
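Concretely, the "Master" answer above corresponds to a core-site.xml fragment that is identical on every node, masters and slaves alike; a minimal sketch (hostname and port are placeholders):

```xml
<!-- core-site.xml on every node in the cluster: all of them point at
     the master's NameNode URI. Hostname and port are placeholders. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://master-node:8020</value>
</property>
```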
Thanks a lot for clearing this up.
On Thu, Oct 25, 2012 at 4:55 PM, Harsh J ha...@cloudera.com wrote:
Jaco,
There is no BackupNode in 1.x. The docs were a mistake, see
http://search-hadoop.com/m/RMLq71stu6W.
Also, we're removing BackupNode and CheckpointNode in future (trunk
for now) in
Hadoopers,
The problem was in EC2 security. While I could passwordlessly ssh into another
node and back, I could not telnet to it due to the EC2 firewall. I needed to open
ports for the NN and JT. :)
Now I can see 2 DNs when running hadoop fsck and can also -ls into the NN from
the slave. Sweet!!!
Is
On Fri, Oct 26, 2012 at 11:47 AM, Kartashov, Andy
andy.kartas...@mpac.ca wrote:
I successfully ran a job on a cluster on foo1 in pseudo-distributed mode and
am now trying a fully-distributed one.
a. I created another instance foo2 on EC2.
It seems like you're trying to use the start-dfs.sh
How can we manage cluster-wide atomic operations? Such as maintaining an
auto-increment counter.
Does Hadoop provide native support for these kinds of operations?
And in case the ultimate answer involves ZooKeeper, I'd love to work out doing
this in AWS/EMR.
This is better asked on the Zookeeper lists.
The first answer is that global atomic operations are generally a bad idea.
The second answer is that if you can batch these operations up, then you can
cut the evilness of global atomicity by a substantial factor.
Are you sure you need a global
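The batching suggestion above can be sketched in plain Java; here an AtomicLong stands in for the real shared counter (which in practice would live in ZooKeeper or a database), and the flush threshold is arbitrary:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of batching increments to a shared counter: each worker
// accumulates locally and only touches the global counter once per
// BATCH increments, cutting global contention by that factor.
public class BatchedCounter {
    static final int BATCH = 100;                // arbitrary flush threshold
    final AtomicLong global = new AtomicLong();  // stand-in for a ZK/DB counter
    long local = 0;                              // per-worker buffer

    void increment() {
        if (++local == BATCH) flush();
    }

    void flush() {                               // push buffered counts upstream
        global.addAndGet(local);
        local = 0;
    }
}
```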
Hi Andy,
you should definitely give Whirr a try for Hadoop on AWS. It solves
all those issues and works smoothly.
Thanks,
nitin
On Sat, Oct 27, 2012 at 1:25 AM, Kartashov, Andy andy.kartas...@mpac.ca wrote:
Hadoopers,
The problem was in EC2 security. While I could passwordlessly ssh into