I think your best bet might be to check out a particular release tag for the
0.22 release and build the docs there. Perhaps you might want to run 'ant
docs', or whatever the target used to be back then.
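Something along these lines should do it (a sketch: the Subversion tag name
and the docs target are assumptions and may differ for that release):

  svn checkout https://svn.apache.org/repos/asf/hadoop/common/tags/release-0.22.0/
  cd release-0.22.0
  ant docs   # or whatever the docs target was called in that era's build.xml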
Cos
On Mon, Jul 28, 2014 at 04:06PM, Jane Wayne wrote:
> where can i get the old hadoop docu
[Cc bigtop-dev@]
We have stack tests as part of the Bigtop project. We don't do fault injection
tests like you describe just yet, but that would be a great contribution to the
project.
Cos
On Wed, Oct 16, 2013 at 02:12PM, hdev ml wrote:
> Hi all,
>
> Are there automated tests available for testing san
There's also the BigTop distribution, which includes Hadoop:
https://incubator.apache.org/bigtop/
Also, it seems that a 1.0.3-based stack in binary form is available for
download from this site:
http://www.magnatempusgroup.net/ftphost/releases/latest/ubuntu/
Cos
On Tue, Jul 03, 2012 at 10:03PM,
Hi Vseslava.
This part of the ASF FAQ explains everything in this regard, I think:
https://www.apache.org/foundation/license-faq.html#Translation
In other words "Sure!" ;)
Cos
On Tue, Jun 19, 2012 at 04:34AM, vseslava.kavch...@gmail.com wrote:
> Hey there,
>
> I am a student at the Department o
TL;DR :/
Besides, this isn't a job list.
Cos
On Mon, Apr 09, 2012 at 10:59PM, Bing Li wrote:
> An internationally renowned large IT company (ranked in the top 3) is hiring
> Hadoop experts for its development center (Beijing) - not a headhunter post
>
> Job description:
> Hadoop system and platform development (architect, senior developer)
>
>
> Job requirements:
>
> 1. Experience designing and developing large-scale distributed systems (3+
> years of work experience, 5+ years for architects); large-scale hands-on
> Hadoop experience preferred
>
> 2. Strong programming and debugging experience (Java or C++/C), solid
> computer science fundamentals, and the ability to learn quickly
I suggest starting with the fault injection tests. They can be found under
src/test/aop/org/apache/hadoop
for HDFS in 0.22. HDFS has the best fault injection coverage. The tests exist
in a similar location in trunk, but they aren't hooked up to the Maven build
system yet.
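For example, something like this should kick them off (a sketch; the exact ant
targets may differ between branches):

  cd hdfs
  ant injectfaults                  # build the AspectJ-instrumented classes
  ant run-test-hdfs-fault-inject    # run the fault injection tests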
Cos
On Thu, Dec 2
Do you have some strict performance requirement or something? 'Cause 5 GB is
pretty much nothing, really. I'd say copyFromLocal will do just fine.
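E.g. (a sketch; the paths are hypothetical):

  bin/hadoop fs -copyFromLocal /local/path/data.txt /user/edmon/input/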
Cos
On Tue, Dec 20, 2011 at 10:32PM, Edmon Begoli wrote:
> Hi,
>
> We are going to be loading 4-5 GB text, delimited file from a RHEL file
> system int
On Wed, Dec 14, 2011 at 10:09AM, Scott Carey wrote:
>
> On 12/13/11 11:28 PM, "Konstantin Boudnik" wrote:
>
> >On Tue, Dec 13, 2011 at 11:00PM, M. C. Srivas wrote:
> >> Suresh,
> >>
> >> As of today, there is no option except to use
On Tue, Dec 13, 2011 at 11:00PM, M. C. Srivas wrote:
> Suresh,
>
> As of today, there is no option except to use NFS. And as you yourself
> mention, the first HA prototype when it comes out will require NFS.
Well, in the interest of full disclosure, NFS is just one of the options and
not the only
There's that great project called BigTop (in the Apache Incubator) which
provides for building the Hadoop stack.
Part of what it provides is a set of Puppet recipes which will allow you
to do exactly what you're looking for, with perhaps some minor corrections.
Seriously, look at Puppet - otherwise
I'd suggest you use the BigTop-produced bits (cross-posting to the bigtop-dev@
list), which also come with Puppet recipes allowing for fully automated
deployment and configuration. BigTop also uses the Jenkins EC2 plugin for the
deployment part and it seems to work really great!
Cos
On Tue, Nov 29, 2011 at 12:28PM, Per
We are expecting to release 0.22 very shortly. 0.22 is supposed to be
considered stable because it has been heavily tested at scale by the eBay team
(as far as I know). However, I will let 0.22's RM comment on that.
Cos
On Tue, Nov 22, 2011 at 12:05PM, Niranjan Balasubramanian wrote:
> Hello
>
> W
You must have been using some awkward version of Hadoop...
The issue has been fixed a number of times (see HDFS-1943 for example).
Cos
On Sun, Oct 16, 2011 at 12:21AM, Majid Azimi wrote:
> Hi guys,
>
> I'm realy new to hadoop. I have configured a single node hadoop cluster. but
> seems that my dat
Matt,
I'd like to reinforce the inquiry about posting (or blogging, perhaps ;) details
about HBase/0.20.205 coexistence. I am sure a lot of people will benefit from
this.
Thanks in advance,
Cos
On Tue, Oct 11, 2011 at 08:29PM, jigneshmpatel wrote:
> Matt,
> Thanks a lot. Just wanted to have some
I am sure if you ask on the provider-specific list you'll get a better answer
than from the common Hadoop list ;)
Cos
On Wed, Sep 14, 2011 at 09:48PM, Mark Kerzner wrote:
> Hi,
>
> I am using the latest Cloudera distribution, and with that I am able to use
> the latest Hadoop API, which I believe is 0
[addressing to common-users@]
this target is there to actually kick off test execution. Once the
instrumented cluster bits are deployed, you can start the system tests with the
command you've mentioned.
Basically this is exactly what the Wiki page is saying, I guess.
Cos
On Thu, Jun 30, 2011 at 05:2
tarted on slave3 regardless of what the configuration says: start-mapred.sh
isn't that smart and doesn't check your configs.
Cos
> Thanks for helping!
> Pony
>
> On 31/05/2011, at 18:12, Konstantin Boudnik wrote:
>
> > This seems to be your problem, really...
> > * mapr
This seems to be your problem, really...
  mapred.job.tracker
  slave2:9001
On Tue, May 31, 2011 at 06:07PM, Juan P. wrote:
> Hi Guys,
> I recently configured my cluster to have 2 VMs. I configured 1
> machine (slave3) to be the namenode and another to be the
> jobtracker (slave2). They both wor
On Thu, May 26, 2011 at 07:01PM, Xu, Richard wrote:
> 2011-05-26 12:30:29,175 INFO org.apache.hadoop.ipc.Server: IPC Server handler
> 4 on 9000, call addBlock(/tmp/hadoop-cfadm/mapred/system/jobtracker.info,
> DFSCl
> ient_2146408809) from 169.193.181.212:55334: error: java.io.IOException: File
Try the Cloudera-specific lists with your questions.
--
Take care,
Konstantin (Cos) Boudnik
2CAC 8312 4870 D885 8616 6115 220F 6980 1F27 E622
Disclaimer: Opinions expressed in this email are those of the author,
and do not necessarily represent the views of any company the author
might be affiliate
On Sun, May 22, 2011 at 15:30, Edward Capriolo wrote:
but for the
> reasons I outlined above I would not want to be associated with them at all.
"I give no damn about your opinion, but I will defend your right to
express it with my blood..."
That said, please express such opinions not in the had
You are, perhaps, aware that now your name will be associated with
WikiLeaks too because this mailing list is archived and publicly
searchable? I think you are a hero, man!
--
Take care,
Konstantin (Cos) Boudnik
2CAC 8312 4870 D885 8616 6115 220F 6980 1F27 E622
Disclaimer: Opinions expressed in
Also, it seems like Ganglia would be very well complemented by Nagios
to allow you to monitor an overall health of your cluster.
--
Take care,
Konstantin (Cos) Boudnik
2CAC 8312 4870 D885 8616 6115 220F 6980 1F27 E622
Disclaimer: Opinions expressed in this email are those of the author,
and do
[taking common-@ and hdfs-@ lists to Bcc:]
Please do not cross-post.
On Tue, May 3, 2011 at 03:26, baran cakici
wrote:
> Hi,
>
> I want to know the I/O performance of my Hadoop cluster. Because of that I ran
> test.jar; here are my results:
>
> - TestDFSIO - : write
> Date & time: Mon May 02 14:
Yes, this seems to be a dependency declaration bug. Not a big deal, but still.
Would you care to open a JIRA under https://issues.apache.org/jira/browse/HADOOP ?
Thanks,
Cos
On Fri, Apr 29, 2011 at 07:03, Juan P. wrote:
> I was putting together a maven project and imported hadoop-core as a
> depen
Seems like something is setting fs.default.name programmatically.
Another possibility is that $HADOOP_CONF_DIR isn't in the classpath in
the second case.
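A couple of quick checks (a sketch; paths are illustrative):

  echo $HADOOP_CONF_DIR
  # see which value is actually configured on disk
  grep -A1 'fs.default.name' $HADOOP_CONF_DIR/core-site.xml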
Hope it helps,
Cos
On Thu, Apr 14, 2011 at 20:24, Gang Luo wrote:
> Hi all,
>
> a tricky problem here. When we prepare an input path, it should
This
Apache Ant version 1.8.0 compiled on February 1 2010
should be just fine. I think you need to have something later than 1.7.2 or so
--
Take care,
Konstantin (Cos) Boudnik
On Sat, Mar 26, 2011 at 18:01, Daniel McEnnis wrote:
> Dear Hadoop,
>
> Which version of ant do I need to keep the
[Moving to common-user@, Bcc'ing general@]
If you know where you need to have your print statements, you can use
AspectJ to do runtime injection of the needed Java code into the desired
spots. You don't even need to touch the source code for that - just
instrument (weave) the jar file
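For example, with the AspectJ compiler (a sketch; the jar and aspect names are
hypothetical):

  ajc -inpath hadoop-core.jar -aspectpath my-logging-aspects.jar \
      -outjar hadoop-core-instrumented.jar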
--
Take care,
Kon
We have just pushed an update of the stack validation framework (the one
Roman and I presented at eBay a few weeks ago) which allows you
to formalize and simplify Hadoop testing. This is still at a pre-Beta
stage (e.g. no user docs are ready yet), but it is working and has a
lot of merit in it as of
Hi Hao.
Yes, you should be able to instrument any part of Hadoop, including the
MapReduce daemons. Good examples of how to inject faults into Hadoop
are the fault injection tests you can find in HDFS (under
src/test/aop/org/apache/hadoop). I believe MapReduce doesn't have any
fault injection tests yet for mo
v3 might be of
> more interest to Yahoo.
>
> I would appreciate if someone can comment more on this.
>
> Thanks,
> -Shrinivas
>
> On Fri, Feb 18, 2011 at 4:50 PM, Konstantin Boudnik wrote:
>>
>> On Fri, Feb 18, 2011 at 14:35, Ted Dunning wrote:
>> > I ju
Make sure the instances' ports aren't conflicting and all directories
(NN, JT, etc.) are unique. That should do it.
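For the second instance that means overriding at least something like the
following (a sketch; the property names are the pre-0.21 ones, the values are
illustrative):

  <property><name>fs.default.name</name><value>hdfs://master:9100</value></property>
  <property><name>dfs.name.dir</name><value>/data/instance2/dfs/name</value></property>
  <property><name>mapred.job.tracker</name><value>master:9101</value></property>
  <property><name>mapred.local.dir</name><value>/data/instance2/mapred/local</value></property>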
--
Take care,
Konstantin (Cos) Boudnik
On Mon, Feb 21, 2011 at 20:09, Gang Luo wrote:
> Hello folks,
> I am trying to run multiple hadoop instances on the same cluster. I find it
On Fri, Feb 18, 2011 at 14:35, Ted Dunning wrote:
> I just read the malstone report. They report times for a Java version that
> is many (5x) times slower than for a streaming implementation. That single
> fact indicates that the Java code is so appallingly bad that this is a very
> bad benchmar
'cause email is a soft real-time system.
A bank application would be a hard real-time system.
All the difference is in guarantees.
--
Take care,
Konstantin (Cos) Boudnik
On Thu, Feb 17, 2011 at 05:22, Michael Segel wrote:
>
> Uhm...
>
> 'Realtime' is relative.
>
> Facebook uses HBase for e-mai
Cross-posts are bad:
to: common-...@hadoop.apache.org,
cc: common-user@hadoop.apache.org,
Your urgency is understandable, but sending a question to different
(and wrong) lists won't help you.
First of all, this is an HDFS question.
Second of all, for CDH-related questions please use cdh-u...@cl
You might also want to check the append design doc published at HDFS-265
--
Take care,
Konstantin (Cos) Boudnik
On Thu, Feb 10, 2011 at 07:11, Gokulakannan M wrote:
> Hi All,
>
> I have run the hadoop 0.20 append branch . Can someone please clarify the
> following behavior?
>
> A writer writing
are trying to achieve in iTest I have mentioned
earlier.
Cos
> Thanks in Advance
>
> --
> Edson Ramiro Lucas Filho
> {skype, twitter, gtalk}: erlfilho
> http://www.inf.ufpr.br/erlf07/
>
>
> On Mon, Feb 7, 2011 at 10:29 PM, Konstantin Boudnik wrote:
>
>> On Mon, Feb
On Wed, Feb 9, 2011 at 02:37, Steve Loughran wrote:
> On 08/02/11 15:45, Oleg Ruchovets wrote:
...
>> 2) Currently adding additional machine to the grid we need manually
>> maintain all files and configurations.
>> Is it possible to auto-deploy hadoop servers without the need to
>> m
ested in your initial version, is there a link?
Not at the moment, but I will send it here as soon as an initial
version is pushed out.
>
> Thanks in advance
>
> --
> Edson Ramiro Lucas Filho
> {skype, twitter, gtalk}: erlfilho
> http://www.inf.ufpr.br/erlf07/
>
>
> On Fr
, integration or other tests on our MR
> jobs?
>
> Do you know another test tool or test framework for Hadoop?
>
> Thanks in Advance
>
> --
> Edson Ramiro Lucas Filho
> {skype, twitter, gtalk}: erlfilho
> http://www.inf.ufpr.br/erlf07/
>
>
> On Wed, Feb 2,
(Moving to common-user where this belongs)
Herriot is a system test framework which runs against a real physical
cluster deployed with a specially crafted build of Hadoop. That
instrumented build provides extra APIs not available in Hadoop
otherwise. These APIs are created to facilitate cluste
This has been discussed in great details here:
http://lmgtfy.com/?q=ssh_exchange_identification%3A+Connection+closed+by+remote+host
--
Take care,
Konstantin (Cos) Boudnik
On Mon, Jan 24, 2011 at 22:07, real great..
wrote:
> Hi,
> Am trying to install Hadoop on a linux cluster(Fedora 12).
Bcc'ing common-user@, adding the mapreduce-user@ list instead. You have a
better chance of getting your question answered if you send it to the
correct list.
For the answer see https://issues.apache.org/jira/browse/MAPREDUCE-2282
--
Take care,
Konstantin (Cos) Boudnik
On Fri, Jan 21, 2011 at 09:08, Ed
(Moving general@ to Bcc: list)
Bo, you can try to run TeraSort from the Hadoop examples: you'll see if the
cluster is up and running, and can compare its performance between upgrades, if
needed.
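For example (a sketch; the examples jar name varies by version, and 10M
100-byte rows come to roughly 1 GB):

  bin/hadoop jar hadoop-*examples*.jar teragen 10000000 /teragen-out
  bin/hadoop jar hadoop-*examples*.jar terasort /teragen-out /terasort-out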
Also, please don't use general@ for user questions: there's common-user@ list
exactly for these purposes.
Yeah, that's pretty crazy all right. In your case it looks like the 3
patches on top are the latest for the 0.20-append branch, the 0.21 branch,
and trunk (which is perhaps the 0.22 branch at the moment). It doesn't look
like you need to apply all of them - just try the latest for your
particular branch.
The me
There's a supported tool with all the bells and whistles:
http://www.cloudera.com/downloads/sqoop/
--
Take care,
Konstantin (Cos) Boudnik
On Sat, Jan 8, 2011 at 18:57, Sonal Goyal wrote:
> Hi Brian,
>
> You can check HIHO at https://github.com/sonalgoyal/hiho which can help you
> load data from
Another possibility to fix it is to install rng-tools, which will allow
you to increase the amount of entropy in your system.
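E.g. (a sketch for Debian/Ubuntu-style systems; package names vary by distro):

  sudo apt-get install rng-tools
  cat /proc/sys/kernel/random/entropy_avail   # check how much entropy is available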
--
Take care,
Konstantin (Cos) Boudnik
On Mon, Jan 3, 2011 at 16:48, Jon Lederman wrote:
> Thanks. Will try that. One final question, based on the jstack output I
>
The Java5 dependency is about to be removed from Hadoop. See HADOOP-7072. I
will try to commit it first thing next year. So, wait a couple of days
and you'll be all right.
Happy New Year everyone!
On Thu, Dec 30, 2010 at 22:08, Da Zheng wrote:
> Hello,
>
> I need to build hadoop in Linux as I need to m
Hi there.
What you are looking at is fault injection.
I am not sure which version of Hadoop you're looking at, but here's
what to take a look at in 0.21 and onward:
- the Herriot system testing framework (which does code instrumentation
to add special APIs) on real clusters. Here's some starting
poin
On Wed, Dec 15, 2010 at 09:35, Steve Loughran wrote:
> On 15/12/10 17:26, Konstantin Boudnik wrote:
>>
>> Hey, commit rights won't give you a nice looking certificate, would it? ;)
>>
>
> Depends on what hudson says about the quality of your patches. I mean, if
>
g
certificates rather than real creds such as apache commit rights.
> James
>
>
> On 2010-12-15, at 10:26 AM, Konstantin Boudnik wrote:
>
>> Hey, commit rights won't give you a nice looking certificate, would it? ;)
>>
>> On Wed, Dec 15, 2010 at 09:12,
Hey, commit rights won't give you a nice looking certificate, would it? ;)
On Wed, Dec 15, 2010 at 09:12, Steve Loughran wrote:
> On 09/12/10 03:40, Matthew John wrote:
>>
>> Hi all,.
>>
>> Is there any valid Hadoop Certification available ? Something which adds
>> credibility to your Hadoop expe
On Thu, Dec 9, 2010 at 19:55, Praveen Bathala wrote:
> I did this
> prav...@praveen-desktop:~/hadoop/hadoop-0.20.2$ bin/hadoop dfsadmin
> -safemode leave
> Safe mode is OFF
> prav...@praveen-desktop:~/hadoop/hadoop-0.20.2$ bin/hadoop dfsadmin
> -safemode get
> Safe mode is OFF
This is not a confi
> But I did create the directory path manually and granted it 755.
> Weird
> Richard.
>
> On Wed, Dec 8, 2010 at 6:51 PM, Konstantin Boudnik wrote:
>
>> it seems that you are looking at 2 different directories:
>>
>> first post: /your/path/t
it seems that you are looking at 2 different directories:
first post: /your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name/current
second: ls -l tmp/dir/hadoop-hadoop/dfs/hadoop
--
Take care,
Konstantin (Cos) Boudnik
On Wed, Dec 8, 2010 at 14:19, Richard Zhang wro
Feel free to update https://issues.apache.org/jira/browse/HDFS-1519 if
you find it suitable.
2010/12/7 Petrucci Andreas :
>
> thanks for the replies, this solved my problems
>
> http://mail-archives.apache.org/mod_mbox/hadoop-common-user/200909.mbox/%3c6f5c1d715b2da5498a628e6b9c124f040145221...@h
It seems that you're trying to run ant with Java 5. Make sure your
JAVA_HOME is set properly.
--
Take care,
Konstantin (Cos) Boudnik
2010/12/7 Petrucci Andreas :
>
> hello there, im trying to compile libhdfs in order but there are some
> problems. According to http://wiki.apache.org/hadoop
A line like this
log4j.logger.org.apache.hadoop=DEBUG
works for 0.20.* and for 0.21+. Therefore it should work for all others :)
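For reference, a fuller log4j.properties sketch (the appender name and layout
are illustrative):

  log4j.rootLogger=INFO, console
  log4j.logger.org.apache.hadoop=DEBUG
  log4j.appender.console=org.apache.log4j.ConsoleAppender
  log4j.appender.console.layout=org.apache.log4j.PatternLayout
  log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{2}: %m%n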
So, are you trying to see debug output from your program or from Hadoop?
--
Cos
On Tue, Nov 23, 2010 at 05:59PM, Tali K wrote:
>
>
>
>
> I am trying to debug my map/red
you have an obligation
> > to tell them when you're going to jerk the rug out from under their
> > feet.
> >
> > On Sat, Nov 13, 2010 at 3:27 PM, Konstantin Boudnik
> > wrote:
> > > It doesn't answer my question. I guess I will have to look for the an
8 packages are deprecated, they are separate from
> the 0.20 packages allowing
> 0.18 code to run on 0.20 systems - this is true of virtually all Java
> libraries
>
> On Sat, Nov 13, 2010 at 3:08 PM, Konstantin Boudnik wrote:
>
> > As much as I love ranting I can't hel
As much as I love ranting, I can't help but wonder if there were any promises
to make 0.21+ backward compatible with <0.20?
Just curious.
On Sat, Nov 13, 2010 at 02:50PM, Steve Lewis wrote:
> I have a long rant at http://lordjoesoftware.blogspot.com/ on this but
> the moral is that there seems
You can either:
- build it yourself (I wouldn't recommend it unless you know what you're doing)
- get a ready Apache release:
http://hadoop.apache.org/common/releases.html
http://hadoop.apache.org/hdfs/releases.html
http://hadoop.apache.org/mapreduce/releases.html
- get a Cloudera or Y! dis
0 | 0 || 13 | 0 |
> > -
> > [ivy:resolve] :: problems summary ::
> > [ivy:resolve] WARNINGS
> > [ivy:resolve] problem while downloading module descriptor:
> > http://rep
y thingy .. So is there any
> solution to this ?
>
> Thanks
>
>
> On Sun, Oct 31, 2010 at 3:42 AM, Konstantin Boudnik wrote:
> > I assume you're trying to build 0.20+. Later projects uses later version of
> > junit... Running the build...
> >
> > [i
I assume you're trying to build 0.20+. Later projects use a later version of
JUnit... Running the build...
[ivy:resolve] downloading
http://repo1.maven.org/maven2/junit/junit/3.8.1/junit-3.8.1.jar ...
[ivy:resolve]
..
I took a quick look to see whether a similar bug has been filed already and
couldn't find one. Do you mind opening a JIRA for this?
Thanks,
Cos
On Thu, Oct 14, 2010 at 02:42PM, Vitaliy Semochkin wrote:
> Hi,
>
> during map phase I recieved following expcetion
>
> java.lang.NullPointerException
>
You should have no space here "-D HADOOP_CLIENT_OPTS"
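In case it helps: HADOOP_CLIENT_OPTS is an environment variable rather than a
Hadoop property, so a sketch of setting it (on versions where bin/hadoop honors
it) would look like:

  export HADOOP_CLIENT_OPTS=-Xmx4000m   # extra JVM options for the hadoop client
  bin/hadoop jar Test.jar OOloadtest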
On Wed, Oct 13, 2010 at 04:21PM, Shi Yu wrote:
> Hi, thanks for the advice. I tried with your settings,
> $ bin/hadoop jar Test.jar OOloadtest -D HADOOP_CLIENT_OPTS=-Xmx4000m
>
> still no effect. Or this is a system variable? Should I export
To second your point ;-) Reminds me of the time when Sun Micro bought GridEngine
(a C app). A couple of other folks and I were developing a Distributed Task
Execution Framework (written in Java on top of JINI).
Every time a new version of, eh... Windows was coming around the corner, the
Grid people were screaming
I once experimented with a block size of 10 bytes (sic!). This was _very_ slow
on the NN side: writing 5 MB took 25 minutes or so :( No fun, to
say the least...
On Tue, May 18, 2010 at 10:56AM, Konstantin Shvachko wrote:
> You can also get some performance numbers and answers to the blo
In order to post any artifacts to the central repository one needs to have
special access rights. Those are available only to build masters, basically.
On Fri, Feb 26, 2010 at 11:01AM, Massoud Mazar wrote:
> Thanks Owen,
>
> Now hadoop-common builds, but if I use the same target (mvn-install) when
Oh, you might consider a colo, which would cost you 1/4 of EC2 :)
On Fri, Feb 05, 2010 at 10:59AM, Sirota, Peter wrote:
> Hi Justin,
>
> Have you guys considered running inside Amazon Elastic MapReduce? With this
> service you don't have to choose your hardware across all jobs but rather pick
> out of