Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "PoweredBy" page has been changed by tufee:
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=340&rev2=341

   * Hardware: 15 nodes
   * We use Hadoop to process company and job data and run machine learning
algorithms for our recommendation engine.
  
+  * [[http://www.cdunow.de/|CDU now!]]
+   * We use Hadoop for our internal searching, filtering, and indexing.
+ 
   * [[http://www.charlestontraveler.com/|Charleston]]
   * Hardware: 15 nodes
   * We use Hadoop to process company and job data and run machine learning
algorithms for our recommendation engine.
- 
  
   * [[http://www.cloudspace.com/|Cloudspace]]
    * Used on client projects and internal log reporting/parsing systems 
designed to scale to infinity and beyond.
@@ -469, +471 @@

   * Hardware: 50 nodes (2×4-core CPUs, 4×2 TB disks, 16 GB RAM each)
   * We use Hadoop (Hive) to analyze logs and mine data for recommendations.
  
+  * [[http://www.reisevision.com/|reisevision]]
+   * We use Hadoop for our internal search.
+ 
   * [[http://code.google.com/p/redpoll/|Redpoll]]
   * Hardware: 35 nodes (2×4-core CPUs, 10 TB disk, 16 GB RAM each)
   * We intend to parallelize traditional classification and clustering
algorithms like Naive Bayes, K-Means, and EM so that they can handle
large-scale data sets.
@@ -559, +564 @@

  
   * [[http://www.tianya.cn/|Tianya]]
    * We use Hadoop for log analysis.
+ 
+  * [[http://www.tufee.de/|tufee]]
+   * We use Hadoop for searching and indexing.
  
   * [[http://www.twitter.com|Twitter]]
    * We use Hadoop to store and process tweets, log files, and many other 
types of data generated across Twitter. We use Cloudera's CDH2 distribution of 
Hadoop, and store all data as compressed LZO files.
