[jira] [Resolved] (HADOOP-7431) Test DiskChecker's functionality in identifying bad directories (Part 2 of testing DiskChecker)

2011-07-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-7431.
-

Resolution: Not A Problem

See my earlier comment. This was already covered.

 Test DiskChecker's functionality in identifying bad directories (Part 2 of 
 testing DiskChecker)
 ---

 Key: HADOOP-7431
 URL: https://issues.apache.org/jira/browse/HADOOP-7431
 Project: Hadoop Common
  Issue Type: Test
  Components: test, util
Affects Versions: 0.23.0
Reporter: Harsh J
Assignee: Harsh J
  Labels: test
 Fix For: 0.23.0


 Add a test for the DiskChecker#checkDir method used in other projects (HDFS).
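
 For illustration only, here is a minimal sketch (not the committed test) of
 how the "bad directory" path of DiskChecker#checkDir could be exercised. The
 class and directory names are made up, and revoking write permission via
 File#setWritable has no effect when running as root:

 import java.io.File;
 import org.apache.hadoop.util.DiskChecker;
 import org.apache.hadoop.util.DiskChecker.DiskErrorException;

 public class DiskCheckerBadDirSketch {
   public static void main(String[] args) {
     File dir = new File(System.getProperty("java.io.tmpdir"), "bad-dir-sketch");
     dir.mkdirs();
     dir.setWritable(false);          // simulate a failing/read-only disk
     try {
       DiskChecker.checkDir(dir);     // should throw for an unwritable dir
       System.out.println("FAIL: bad directory was not detected");
     } catch (DiskErrorException expected) {
       System.out.println("OK: " + expected.getMessage());
     } finally {
       dir.setWritable(true);         // clean up
       dir.delete();
     }
   }
 }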

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




compilation of CDH3

2011-07-25 Thread Keren Ouaknine
Hello,

I am compiling CDH3 and expect it to finish within 5-10 minutes.
However, the compilation process gets stuck. It happens during Forrest
(probably documentation generation).

I killed the Forrest process (I don't need documentation at this stage) and
expected the compilation to continue, but it didn't help.
These are my flags:

ant -Dversion=0.20.ka0 -Dcompile.native=true -Dcompile.c++=true -Dlibhdfs=1
-Dlibrecordio=true clean api-report tar test test-c++-libhdfs

Thanks,
Keren

-- 
Keren Ouaknine
Cell: +972 54 2565404
Web: www.kereno.com


Re: compilation of CDH3

2011-07-25 Thread Steve Loughran

On 25/07/11 14:31, Keren Ouaknine wrote:

Hello,

I am compiling CDH3 and expect it to finish within 5-10 minutes.
However, the compilation process gets stuck. It happens during Forrest
(probably documentation generation).

I killed the Forrest process (I don't need documentation at this stage) and
expected the compilation to continue, but it didn't help.
These are my flags:

ant -Dversion=0.20.ka0 -Dcompile.native=true -Dcompile.c++=true -Dlibhdfs=1
-Dlibrecordio=true clean api-report tar test test-c++-libhdfs

Thanks,
Keren


I'd take this to the Cloudera forums.


Re: compilation of CDH3

2011-07-25 Thread Eli Collins
+cdh-user, -common-dev (bcc)

Perhaps you're using Forrest 0.9; the Hadoop docs don't build with
Forrest 0.9 (e.g. see HADOOP-7303). You need to use v0.8 (and can
explicitly set it via -Dforrest.home).

You can also use the binary target to build a tarball without docs.
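
For example (illustrative only; the Forrest path below is a placeholder for
wherever Forrest 0.8 is installed locally), the original invocation could be
adjusted to:

ant -Dforrest.home=/path/to/apache-forrest-0.8 -Dversion=0.20.ka0 \
  -Dcompile.native=true -Dcompile.c++=true -Dlibhdfs=1 \
  -Dlibrecordio=true clean api-report tar test test-c++-libhdfs

or, skipping the docs entirely:

ant -Dversion=0.20.ka0 -Dcompile.native=true -Dcompile.c++=true binary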

Thanks,
Eli

On Mon, Jul 25, 2011 at 6:31 AM, Keren Ouaknine ker...@gmail.com wrote:
 Hello,

 I am compiling CDH3 and expect it to finish within 5-10 minutes.
 However, the compilation process gets stuck. It happens during Forrest
 (probably documentation generation).

 I killed the Forrest process (I don't need documentation at this stage) and
 expected the compilation to continue, but it didn't help.
 These are my flags:

 ant -Dversion=0.20.ka0 -Dcompile.native=true -Dcompile.c++=true -Dlibhdfs=1
 -Dlibrecordio=true clean api-report tar test test-c++-libhdfs

 Thanks,
 Keren

 --
 Keren Ouaknine
 Cell: +972 54 2565404
 Web: www.kereno.com



[jira] [Created] (HADOOP-7484) Update HDFS dependency of Java for deb package

2011-07-25 Thread Eric Yang (JIRA)
Update HDFS dependency of Java for deb package
--

 Key: HADOOP-7484
 URL: https://issues.apache.org/jira/browse/HADOOP-7484
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 0.23.0
 Environment: Java 6, Ubuntu/Debian
Reporter: Eric Yang
 Fix For: 0.23.0


The Java dependency for the Debian package is specified as OpenJDK, but it
should depend on the Sun version of Java.
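
Illustratively (the actual package names used by the fix may differ), the
change amounts to swapping the dependency in the package's control file from
something like:

  Depends: openjdk-6-jre

to:

  Depends: sun-java6-jre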

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-7484) Update HDFS dependency of Java for deb package

2011-07-25 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang resolved HADOOP-7484.
---

Resolution: Duplicate

This is a duplicate of HDFS-2192.

 Update HDFS dependency of Java for deb package
 --

 Key: HADOOP-7484
 URL: https://issues.apache.org/jira/browse/HADOOP-7484
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 0.23.0
 Environment: Java 6, Ubuntu/Debian
Reporter: Eric Yang
 Fix For: 0.23.0


 The Java dependency for the Debian package is specified as OpenJDK, but it
 should depend on the Sun version of Java.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Interested in helping a research study on Eclipse?

2011-07-25 Thread Mohsen Vakilian
Hi,

We're happy to announce that CodingSpectator now supports Eclipse Indigo
(3.7) -- the latest version of Eclipse.

We are looking for more participants. Your participation in our study will
help to keep the Eclipse platform innovative. If you couldn't participate in
the study because you had to use Indigo, you can now install CodingSpectator.

If you are interested, please sign up at
http://codingspectator.cs.illinois.edu/ConsentForm to participate in our study.

We are thankful to those of you who have already participated in our study.

Regards,
Mohsen Vakilian

 Original message 
Date: Wed,  8 Jun 2011 23:30:05 -0500 (CDT)
From: Mohsen Vakilian mvaki...@illinois.edu  
Subject: Interested in helping a research study on Eclipse?  
To: common-dev@hadoop.apache.org
Cc: Ralph Johnson rjohn...@illinois.edu, Balaji Ambresh rajku...@illinois.edu,
Roshanak Zilouchian rzilo...@illinois.edu, Stas Negara snega...@illinois.edu,
Nicholas Chen nc...@illinois.edu

Hi

I'm Mohsen, a PhD student working with Prof. Ralph Johnson at the University
of Illinois at Urbana-Champaign (UIUC). Ralph is a co-author of the seminal
book on design patterns (GoF), and his research group has a history of
important contributions to IDEs.

Our team [1] is studying how developers interact with the Eclipse IDE for
evolving and maintaining their code. Since Hadoop comes with some Eclipse
support [2], I assume that Eclipse has some users in the Hadoop community.
Thus, we are extending this invitation to Hadoop developers and would greatly
appreciate and value your participation and help in our research study.

To participate you should be at least 18 years old and use Eclipse Helios
for Java development. As a participant, we ask that you complete a short
survey and install our Eclipse plug-in called CodingSpectator [3].

CodingSpectator monitors programming interactions non-intrusively in the
background and periodically uploads them to a secure server at UIUC. To get a
representative perspective of how you interact with Eclipse, we would
appreciate it if you could install CodingSpectator for two months. Rest
assured that we are taking the utmost measures to protect your privacy and
confidentiality.

If you are interested, you may sign up at
http://codingspectator.cs.illinois.edu/ConsentForm, which contains our consent
form with all the details and procedures of our research study.

Your participation will help us greatly as we try to better understand how
developers interact with their IDEs so we can propose improvements which fit
better with their mindsets.

Thanks in advance for your time! Please do not hesitate to contact me
(mvaki...@illinois.edu) if you have any questions or comments. More
information can also be found in our FAQ [4]. Feel free to forward this
invitation to anyone who might be interested in participating in this study.

--
Mohsen Vakilian
and the CodingSpectator team

[1] http://codingspectator.cs.illinois.edu/People
[2] http://wiki.apache.org/hadoop/EclipsePlugIn
[3] http://codingspectator.cs.illinois.edu
[4] http://codingspectator.cs.illinois.edu/FAQ


[VOTE] Release 0.20.204.0-rc0

2011-07-25 Thread Owen O'Malley
I've created a release candidate for 0.20.204.0 that I would like to release.

It is available at: http://people.apache.org/~omalley/hadoop-0.20.204.0-rc0/

0.20.204.0 has many fixes, including disk fail-in-place and the new RPM and
deb packages. Fail-in-place allows the DataNode and TaskTracker to continue
after a hard drive fails.

-- Owen