Go through the steps mentioned in this doc; it will help you.
On Wed, Oct 10, 2012 at 4:15 PM, Sujit Dhamale sujitdhamal...@gmail.comwrote:
Hi,
Please help me out with this.
Is there any other way to run Hadoop on Windows?
On Tue, Oct 9, 2012 at 11:00 AM, Sujit Dhamale
Hey Alexey,
Have you noticed this right from the start? Also, what exactly
do you mean by "limited replication bandwidth between datanodes -
5Mb"? Are you talking about the dfs.balance.bandwidthPerSec property?
On Wed, Oct 10, 2012 at 10:53 AM, Alexey alexx...@gmail.com wrote:
Additional info:
Hello Harsh,
I noticed such issues from the start.
Yes, I mean the dfs.balance.bandwidthPerSec property; I set this property to
500.
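For reference, this property is a per-datanode throttle in bytes per second, so a very small value would slow balancing to a crawl. A minimal sketch of how it is set in hdfs-site.xml (the 1048576 value here is purely illustrative, meaning 1 MB/s):

```xml
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <!-- Bytes per second each datanode may use for balancing;
       1048576 = 1 MB/s (illustrative value, not a recommendation) -->
  <value>1048576</value>
</property>
```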
On 10/09/12 11:50 PM, Harsh J wrote:
Hey Alexey,
Have you noticed this right from the start itself? Also, what exactly
do you mean by Limited replication
Hi,
OK, can you detail the network infrastructure used here, and also
make sure your daemons are binding to the right interfaces
(use netstat to check, perhaps)? What transfer rate do you get for
simple file transfers (ftp, scp, etc.)?
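As a concrete sketch of the interface check (the port and the sample line are assumptions; 50010 is the default DataNode data-transfer port in Hadoop 1.x), the filter below is shown against a canned netstat-style line rather than a live system:

```shell
# Pick out the DataNode data-transfer port binding from netstat-style output.
# In a live check you would pipe `netstat -tln` into the same awk filter.
sample="tcp 0 0 127.0.0.1:50010 0.0.0.0:* LISTEN"
echo "$sample" | awk '$4 ~ /:50010$/ {print $4}'
# A loopback bind like 127.0.0.1:50010 would explain failed or slow
# inter-node replication; 0.0.0.0:50010 means the daemon listens on
# all interfaces.
```

For the raw transfer-rate baseline, a plain `scp` of a large file between two datanodes is enough to compare against what HDFS replication achieves.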
On Wed, Oct 10, 2012 at 12:24 PM, Alexey
Hi,
Please help me out with this.
Is there any other way to run Hadoop on Windows?
On Tue, Oct 9, 2012 at 11:00 AM, Sujit Dhamale sujitdhamal...@gmail.comwrote:
Hi,
I installed Hadoop on Windows with the help of Cygwin.
*The DataNode and TaskTracker are not starting.*
Can someone help me?
Hi All,
I am new to Hadoop, and I just want to understand in depth how files are
written to datanodes.
That is, the file is divided into blocks, and the blocks in turn are divided
into packets. I need a detailed doc about the packet movement, covering data
packets and acknowledgement packets.
Thanks & Regards,
Hi Murthy
Hadoop: The Definitive Guide by Tom White has the details on file write
anatomy.
Regards
Bejoy KS
Sent from handheld, please excuse typos.
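As a rough sketch of the block/packet arithmetic described in that chapter (the 64 MB block size and 64 KB client write packet size are the old Hadoop 1.x defaults, used here purely as assumptions):

```shell
# How a 200 MB file breaks down under assumed Hadoop 1.x defaults:
FILE=$((200 * 1024 * 1024))    # file size in bytes
BLOCK=$((64 * 1024 * 1024))    # dfs.block.size default, 64 MB
PACKET=$((64 * 1024))          # client write packet size, 64 KB

BLOCKS=$(( (FILE + BLOCK - 1) / BLOCK ))   # ceiling division
PACKETS=$(( BLOCK / PACKET ))              # packets per full block
echo "blocks=$BLOCKS packets_per_full_block=$PACKETS"
# prints: blocks=4 packets_per_full_block=1024
# Each packet travels down the datanode pipeline, and a matching
# acknowledgement travels back up before the packet leaves the
# client's ack queue.
```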
-Original Message-
From: murthy nvvs murthy_n1...@yahoo.com
Date: Wed, 10 Oct 2012 04:27:58
To:
Have you created a sub-directory under /user, i.e. /user/robing, for user
robing? Depending on your version of Hadoop, it is important to set up your
directory structure's users/groups properly.
Here is just an example:
drwxrwxrwt - hdfs supergroup 0 2012-04-19 15:14 /tmp
drwxr-xr-x - hdfs
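A sketch of the usual setup commands (the path and ownership values are assumptions for this example; they must be run as the HDFS superuser against a live cluster):

```shell
# Create the user's HDFS home directory and hand it over to that user
# (sketch; assumes Hadoop 1.x-era FsShell commands and superuser rights).
hadoop fs -mkdir /user/robing
hadoop fs -chown robing:robing /user/robing
hadoop fs -ls /user
```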
Personally, I prefer to create indexes in the reducer.
Also, you can avoid the copies if you use an advanced Hadoop-derived distro.
Email me off-list for details.
Sent from my iPhone
On Oct 9, 2012, at 7:47 PM, Mark Kerzner mark.kerz...@shmsoft.com wrote:
Hi,
if I create a Lucene index in
Hi Steve,
Thank you for sharing the JIRA information with me.
Easy setting and confirmation of the topology will be necessary.
I will use the JIRA for ideas about topology setting and confirmation.
Regards,
Shinichi
2012/10/10 Ted Dunning tdunn...@maprtech.com
On Tue, Oct 9, 2012 at 12:17 PM, Steve
There is no /hadoop1 directory. It is //hadoop1, which is the name of the
server running the NameNode daemon:
<value>hdfs://hadoop1/mapred</value>
Per offline conversations with Arpit, it appears this problem is related to the
fact that I am using the fair scheduler. The fair scheduler is
Robin
I will try to investigate the issue with the fair scheduler. Do let us know if
switching to the default or the capacity scheduler solves the issue.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Oct 10, 2012, at 9:32 AM, Goldstone, Robin J. goldsto...@llnl.gov wrote:
There is no
Hi,
Can you please attach the document to this mail? :)
Kind Regards
Sujit Dhamale
(+91 9970086652)
On Wed, Oct 10, 2012 at 4:30 PM, Visioner Sadak visioner.sa...@gmail.comwrote:
Go through the steps mentioned in this doc; it will help you.
On Wed, Oct 10, 2012 at 4:15 PM, Sujit Dhamale
Hi,
I'm storing sequence files in the distributed cache, which seem to be
stored somewhere under each node's /tmp .../local/archive/ ... path.
In my mapper code, I tried using SequenceFile.Reader with all possible
configurations (local, distributed); however, it can't find them. Are
sequence files
I believe not many folks have deployed it. Although some testing has been
done, it may not be stable. If you are really stuck on using that release,
you can check on the release notes and bugs filed for that version to see
if there are any potential show stoppers for you. As always, test as much
In the LucidWorks Big Data product, we handle this with a reducer that sends
documents to a SolrCloud cluster. This way the index files are not managed by
Hadoop.
- Original Message -
| From: Ted Dunning tdunn...@maprtech.com
| To: user@hadoop.apache.org
| Cc: Hadoop User
That is very interesting, Lance, thank you.
Mark
On Wed, Oct 10, 2012 at 9:15 PM, Lance Norskog goks...@gmail.com wrote:
In the LucidWorks Big Data product, we handle this with a reducer that
sends documents to a SolrCloud cluster. This way the index files are not
managed by Hadoop.
-
Hi Mark,
DistributedCache files, when accessed from a Task, exist on the local
file system. You should make sure the SequenceFile.Reader tries to
read them with a LocalFS rather than an HDFS instance.
On Thu, Oct 11, 2012 at 5:15 AM, Mark Olimpiati markq2...@gmail.com wrote:
Hi,
I'm storing sequence
Interestingly, a few MapR customers have gone the other way, deliberately
having the indexer put the Solr shards directly into MapR and letting it
distribute them. That has made index management a cinch.
Otherwise they do run into what Tim alludes to.
On Wed, Oct 10, 2012 at 7:27 PM, Tim Williams
How can you store Solr shards in hadoop? Is each data node running a Solr
server? If so - is the reducer doing a trick to write to a local fs?
Sent from my iPad
On Oct 11, 2012, at 12:04 AM, M. C. Srivas mcsri...@gmail.com wrote:
Interestingly, a few MapR customers have gone the other way,