[Hadoop Wiki] Update of PoweredBy by Arnaud GUILLAUME

2011-06-21 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The PoweredBy page has been changed by Arnaud GUILLAUME:
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=315&rev2=316

* Hardware: 5 nodes
* We use Hadoop to process user resume data and run algorithms for our 
recommendation engine.
  
-  * [[http://www.obtenir-rio.info|Rio Orange]]
+  * [[http://www.portabilite-du-numero.com|Portabilité du numéro]]
* 50 node cluster in Colo.
* Also used as a proof of concept cluster for a cloud-based ERP system.
  


[Hadoop Wiki] Update of PoweredBy by Arnaud GUILLAUME

2011-06-21 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The PoweredBy page has been changed by Arnaud GUILLAUME:
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=316&rev2=317

* We use Hadoop for searching and analysis of millions of bookkeeping 
postings
* Also used as a proof of concept cluster for a cloud based ERP system
  
+  * [[http://www.portabilite-du-numero.com|Portabilité du numéro]]
+   * 50 node cluster in Colo.
+   * Also used as a proof of concept cluster for a cloud-based ERP system.
+ 
   * [[http://www.psgtech.edu/|PSG Tech, Coimbatore, India]]
* Multiple alignment of protein sequences helps to determine evolutionary 
linkages and to predict molecular structures. The dynamic nature of the 
algorithm, coupled with the data and compute parallelism of Hadoop data grids, 
improves the accuracy and speed of sequence alignment. Parallelism at the 
sequence and block level reduces the time complexity of MSA problems. The 
scalable nature of Hadoop makes it apt for solving large-scale alignment problems.
* Our cluster size varies from 5 to 10 nodes. Cluster nodes range from a 2950 
Quad Core rack server with 2x6 MB cache and 4 x 500 GB SATA hard drives to 
E7200 / E7400 processors with 4 GB RAM and 160 GB HDD.
@@ -443, +447 @@

   * [[http://resu.me/|Resu.me]]
* Hardware: 5 nodes
* We use Hadoop to process user resume data and run algorithms for our 
recommendation engine.
- 
-  * [[http://www.portabilite-du-numero.com|Portabilité du numéro]]
-   * 50 node cluster in Colo.
-   * Also used as a proof of concept cluster for a cloud-based ERP system.
  
  = S =
   * 
[[http://www.sara.nl/news/recent/20101103/Hadoop_proof-of-concept.html|SARA, 
Netherlands]]


[Hadoop Wiki] Update of GangliaMetrics by DominicDunlop

2011-06-21 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The GangliaMetrics page has been changed by DominicDunlop:
http://wiki.apache.org/hadoop/GangliaMetrics?action=diff&rev1=6&rev2=7

Comment:
Note that Ganglia 3.1 support is now in the main line. I regret I could not 
work out exactly when the patch was applied. Help, someone?

  
  Be aware that versions 0.18.1, 0.19.0, and prior need to be patched in order 
to get Ganglia working; refer to 
[[https://issues.apache.org/jira/browse/HADOOP-3422|JIRA issue 3422]].
  
- Additionally, the Ganglia protocol changed significantly between Ganglia 3.0 
and Ganglia 3.1 (i.e., Ganglia 3.1 is not compatible with Ganglia 3.0 clients). 
This meant Hadoop did not work with Ganglia 3.1; there is currently a patch 
available for this, 
[[https://issues.apache.org/jira/browse/HADOOP-4675|HADOOP-4675]].
+ Additionally, the Ganglia protocol changed significantly between Ganglia 3.0 
and Ganglia 3.1 (i.e., Ganglia 3.1 is not compatible with Ganglia 3.0 clients). 
This meant Hadoop did not work with Ganglia 3.1; there is a patch available 
for this, [[https://issues.apache.org/jira/browse/HADOOP-4675|HADOOP-4675]]. As 
of November 2010, this patch has been rolled into the mainline for 0.20.2 and 
later. To use the Ganglia 3.1 protocol in place of the 3.0 protocol, substitute 
{{{org.apache.hadoop.metrics.ganglia.GangliaContext31}}} for 
{{{org.apache.hadoop.metrics.ganglia.GangliaContext}}} in the 
hadoop-metrics.properties lines above.
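
 For reference, a minimal hadoop-metrics.properties sketch along these lines 
 (the dfs/mapred/jvm context names follow the usual metrics v1 layout; the 
 gmond host and port below are placeholders, not values taken from this page):

   # Send the dfs, mapred, and jvm metrics contexts to Ganglia over the 3.1
   # protocol. Use GangliaContext instead of GangliaContext31 if your gmond
   # still speaks the 3.0 protocol. Host and port are placeholders.
   dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
   dfs.period=10
   dfs.servers=gmond.example.com:8649

   mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
   mapred.period=10
   mapred.servers=gmond.example.com:8649

   jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
   jvm.period=10
   jvm.servers=gmond.example.com:8649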
  


[Hadoop Wiki] Update of LocalBadContent by ToddLipcon

2011-06-21 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The LocalBadContent page has been changed by ToddLipcon:
http://wiki.apache.org/hadoop/LocalBadContent?action=diff&rev1=20&rev2=21

Comment:
worldlingo isn't spam; they're a real user of HBase

  vergleich-riester-rente\.net
  whatisdetox\.com
  wm-u\.com
- worldlingo\.com
  wtcsites\.com
  x2\.top\.tc
  x25\.us\.to


[Hadoop Wiki] Update of Hive/HudsonBuild by JohnSichi

2011-06-21 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The Hive/HudsonBuild page has been changed by JohnSichi:
http://wiki.apache.org/hadoop/Hive/HudsonBuild?action=diff&rev1=6&rev2=7

- There are nightly Hive builds on Jenkins (formerly Hudson), building against 
different supported Hadoop versions (currently only 0.20).
+ There are nightly Hive builds on Jenkins (formerly Hudson), building against 
different supported Hadoop versions (currently only 0.20.1).
  
- http://builds.apache.org/hudson/job/Hive-trunk-h0.20
+ http://builds.apache.org/hudson/job/Hive-trunk-h0.21
  
  === Issues ===
   * Sometimes the temporary directories fill up, causing issues with the 
build. See this ticket for more information. 
https://issues.apache.org/jira/browse/HIVE-473


[Hadoop Wiki] Update of Hive/HudsonBuild by CarlSteinbach

2011-06-21 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The Hive/HudsonBuild page has been changed by CarlSteinbach:
http://wiki.apache.org/hadoop/Hive/HudsonBuild?action=diff&rev1=7&rev2=8

  There are nightly Hive builds on Jenkins (formerly Hudson), building against 
different supported Hadoop versions (currently only 0.20.1).
  
- http://builds.apache.org/hudson/job/Hive-trunk-h0.21
+ http://builds.apache.org/view/G-L/view/Hive/
  
  === Issues ===
   * Sometimes the temporary directories fill up, causing issues with the 
build. See this ticket for more information. 
https://issues.apache.org/jira/browse/HIVE-473


svn commit: r1138278 - in /hadoop/common/branches/MR-279/common: CHANGES.txt src/saveVersion.sh

2011-06-21 Thread llu
Author: llu
Date: Wed Jun 22 02:02:24 2011
New Revision: 1138278

URL: http://svn.apache.org/viewvc?rev=1138278&view=rev
Log:
HADOOP-7390. VersionInfo not generated properly in git after unsplit. (todd via 
atm)

Modified:
hadoop/common/branches/MR-279/common/CHANGES.txt
hadoop/common/branches/MR-279/common/src/saveVersion.sh

Modified: hadoop/common/branches/MR-279/common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/MR-279/common/CHANGES.txt?rev=1138278&r1=1138277&r2=1138278&view=diff
==
--- hadoop/common/branches/MR-279/common/CHANGES.txt (original)
+++ hadoop/common/branches/MR-279/common/CHANGES.txt Wed Jun 22 02:02:24 2011
@@ -142,6 +142,9 @@ Trunk (unreleased changes)
 HADOOP-7082. Configuration.writeXML should not hold lock while outputting.
 (todd)
 
+HADOOP-7390. VersionInfo not generated properly in git after unsplit. (todd
+via atm)
+
 Release 0.22.0 - Unreleased
 
   INCOMPATIBLE CHANGES

Modified: hadoop/common/branches/MR-279/common/src/saveVersion.sh
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/MR-279/common/src/saveVersion.sh?rev=1138278&r1=1138277&r2=1138278&view=diff
==
--- hadoop/common/branches/MR-279/common/src/saveVersion.sh (original)
+++ hadoop/common/branches/MR-279/common/src/saveVersion.sh Wed Jun 22 02:02:24 
2011
@@ -26,7 +26,7 @@ build_dir=$2
 user=`whoami`
 date=`date`
 cwd=`pwd`
-if [ -d .git ]; then
+if git rev-parse HEAD 2>/dev/null > /dev/null ; then
   revision=`git log -1 --pretty=format:%H`
   hostname=`hostname`
   branch=`git branch | sed -n -e 's/^* //p'`
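
For illustration, a minimal shell sketch (not part of the commit) of why the 
new test helps after the unsplit: presumably the common tree now sits inside a 
larger git checkout, so its own directory no longer contains .git, but git 
rev-parse still locates the enclosing repository. Paths and messages here are 
placeholders.

  # Old test: only true when the current directory is the repository root,
  # i.e. when a .git directory sits right here.
  [ -d .git ] && echo "old test: found .git in the current directory"

  # New test: true anywhere inside a git work tree with at least one commit,
  # because git walks up the directory hierarchy to find the repository.
  if git rev-parse HEAD > /dev/null 2>&1 ; then
    echo "new test: revision $(git log -1 --pretty=format:%H)"
  fi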