http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_pages/docs/0.13.0/quickstart.md
----------------------------------------------------------------------
diff --git a/website/_pages/docs/0.13.0/quickstart.md 
b/website/_pages/docs/0.13.0/quickstart.md
deleted file mode 100644
index fdb9ceb..0000000
--- a/website/_pages/docs/0.13.0/quickstart.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-layout: mahoutdoc
-title: Quickstart
-permalink: /docs/0.13.0/quickstart/
----
-# Mahout Quick Start 
-# TODO: Fill this in with the bare essential basics
-
-
-
-# Mahout MapReduce Overview
-
-## Getting Mahout
-
-#### Download the latest release
-
-Download the latest release 
[here](http://www.apache.org/dyn/closer.cgi/mahout/).
-
-Or check out the latest code from 
[here](http://mahout.apache.org/developers/version-control.html).
-
-#### Alternatively: Add Mahout 0.13.0 to a Maven project
-
-Mahout is also available via a [maven 
repository](http://mvnrepository.com/artifact/org.apache.mahout) under the 
group id *org.apache.mahout*.
-If you would like to import the latest release of Mahout into a Java project, 
add the following dependency to your *pom.xml*:
-
-    <dependency>
-        <groupId>org.apache.mahout</groupId>
-        <artifactId>mahout-mr</artifactId>
-        <version>0.13.0</version>
-    </dependency>
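If your project uses sbt rather than Maven, the equivalent dependency line (assuming the same Maven Central coordinates as the snippet above) would be:

```scala
// build.sbt -- same groupId/artifactId/version as the Maven snippet above
libraryDependencies += "org.apache.mahout" % "mahout-mr" % "0.13.0"
```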
- 
-
-## Features
-
-For a full list of Mahout's features see our [Features by 
Engine](http://mahout.apache.org/users/basics/algorithms.html) page.
-
-    
-## Using Mahout
-
-Mahout provides a number of examples and tutorials for users to quickly 
learn how to use its machine learning algorithms.
-
-#### Recommendations
-
-Check the [Recommender Quickstart](/users/recommender/quickstart.html) or the 
tutorial on [creating a user-based recommender in 5 
minutes](/users/recommender/userbased-5-minutes.html).
-
-If you are building a recommender system for the first time, please also refer 
to a list of [Dos and 
Don'ts](/users/recommender/recommender-first-timer-faq.html) that might be 
helpful.
-
-#### Clustering
-
-Check the [Synthetic 
data](/users/clustering/clustering-of-synthetic-control-data.html) example.
-
-#### Classification
-
-If you are interested in how to train a **Naive Bayes** model, look at the [20 
newsgroups](/users/classification/twenty-newsgroups.html) example.
-
-If you plan to build a **Hidden Markov Model** for speech recognition, the 
example [here](/users/classification/hidden-markov-models.html) might be 
instructive. 
-
-Or you could build a **Random Forest** model by following this [quick start 
page](/users/classification/partial-implementation.html).
-
-#### Working with Text 
-
-If you need to convert raw text into word vectors as input to clustering or 
classification algorithms, please refer to this page on [how to create vectors 
from text](/users/basics/creating-vectors-from-text.html).

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_pages/docs/0.13.0/tutorials/classify-a-doc-from-the-shell.md
----------------------------------------------------------------------
diff --git 
a/website/_pages/docs/0.13.0/tutorials/classify-a-doc-from-the-shell.md 
b/website/_pages/docs/0.13.0/tutorials/classify-a-doc-from-the-shell.md
deleted file mode 100644
index b565b61..0000000
--- a/website/_pages/docs/0.13.0/tutorials/classify-a-doc-from-the-shell.md
+++ /dev/null
@@ -1,257 +0,0 @@
----
-layout: mahoutdoc
-title: Text Classification Example
-permalink: /docs/0.13.0/tutorials/text-classification
----
-
-# Building a text classifier in Mahout's Spark Shell
-
-This tutorial will take you through the steps used to train a Multinomial 
Naive Bayes model and create a text classifier based on that model using the 
```mahout spark-shell```. 
-
-## Prerequisites
-This tutorial assumes that you have your Spark environment variables set for 
the ```mahout spark-shell```; see [Playing with Mahout's 
Shell](http://mahout.apache.org/users/sparkbindings/play-with-shell.html). We 
also assume that Mahout is running in cluster mode (i.e. with the 
```MAHOUT_LOCAL``` environment variable **unset**), as we'll be reading from and 
writing to HDFS.
-
-## Downloading and Vectorizing the Wikipedia dataset
-*As of Mahout v. 0.10.0, we are still reliant on the MapReduce versions of 
```mahout seqwiki``` and ```mahout seq2sparse``` to extract and vectorize our 
text.  A* [*Spark implementation of 
seq2sparse*](https://issues.apache.org/jira/browse/MAHOUT-1663) *is in the 
works for Mahout v. 0.11.* However, to download the Wikipedia dataset, extract 
the bodies of the documents, label each document, and vectorize the text 
into TF-IDF vectors, we can simply run the 
[wikipedia-classifier.sh](https://github.com/apache/mahout/blob/master/examples/bin/classify-wikipedia.sh) 
example.
-
-    Please select a number to choose the corresponding task to run
-    1. CBayes (may require increased heap space on yarn)
-    2. BinaryCBayes
-    3. clean -- cleans up the work area in /tmp/mahout-work-wiki
-    Enter your choice :
-
-Enter (2). This will download a large recent XML dump of the Wikipedia 
database into a ```/tmp/mahout-work-wiki``` directory, unzip it, and place it 
into HDFS.  It will run a [MapReduce job to parse the wikipedia 
set](http://mahout.apache.org/users/classification/wikipedia-classifier-example.html),
 extracting and labeling only pages with category tags for [United States] and 
[United Kingdom] (~11600 documents). It will then run ```mahout seq2sparse``` 
to convert the documents into TF-IDF vectors.  The script will also build and 
test a [Naive Bayes model using 
MapReduce](http://mahout.apache.org/users/classification/bayesian.html).  When 
it is completed, you should see a confusion matrix on your screen.  For this 
tutorial, we will ignore the MapReduce model, and build a new model using Spark 
based on the vectorized text output by ```seq2sparse```.
-
-## Getting Started
-
-Launch the ```mahout spark-shell```.  There is an example script: 
```spark-document-classifier.mscala``` (.mscala denotes a Mahout-Scala script 
which can be run similarly to an R script).  We will walk through this 
script in this tutorial, but if you want to simply run it, you can 
just issue the command: 
-
-    mahout> :load /path/to/mahout/examples/bin/spark-document-classifier.mscala
-
-For now, let's take the script apart piece by piece.  You can cut and paste the 
following code blocks into the ```mahout spark-shell```.
-
-## Imports
-
-Our Mahout Naive Bayes imports:
-
-    import org.apache.mahout.classifier.naivebayes._
-    import org.apache.mahout.classifier.stats._
-    import org.apache.mahout.nlp.tfidf._
-
-Hadoop imports needed to read our dictionary:
-
-    import org.apache.hadoop.io.Text
-    import org.apache.hadoop.io.IntWritable
-    import org.apache.hadoop.io.LongWritable
-
-## Read in our full set from HDFS as vectorized by seq2sparse in 
classify-wikipedia.sh
-
-    val pathToData = "/tmp/mahout-work-wiki/"
-    val fullData = drmDfsRead(pathToData + "wikipediaVecs/tfidf-vectors")
-
-## Extract the category of each observation and aggregate those observations 
by category
-
-    val (labelIndex, aggregatedObservations) = 
SparkNaiveBayes.extractLabelsAndAggregateObservations(
-                                                                 fullData)
-
-## Build a Multinomial Naive Bayes model and self-test on the training set
-
-    val model = SparkNaiveBayes.train(aggregatedObservations, labelIndex, 
false)
-    val resAnalyzer = SparkNaiveBayes.test(model, fullData, false)
-    println(resAnalyzer)
-    
-Printing the ```ResultAnalyzer``` will display the confusion matrix.
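For intuition, multinomial Naive Bayes scores a document by summing, per class, each term's frequency times that class's log weight for the term, and picking the class with the highest total. A toy sketch (the weights below are invented for illustration only; a real trained model supplies them):

```scala
// Toy multinomial Naive Bayes scoring, NOT Mahout's implementation:
// score(class) = sum over terms of termFrequency * perClassLogWeight(term)
def nbScore(termFreqs: Map[String, Int], logWeights: Map[String, Double]): Double =
  termFreqs.map { case (term, tf) => tf * logWeights.getOrElse(term, 0.0) }.sum

val doc = Map("football" -> 2, "london" -> 1)
val ukWeights = Map("football" -> -1.2, "london" -> -0.5) // hypothetical weights
val usWeights = Map("football" -> -1.3, "london" -> -4.0) // hypothetical weights
// The class with the higher (less negative) total log score wins.
val predicted = if (nbScore(doc, ukWeights) > nbScore(doc, usWeights)) "UK" else "US"
```

Mahout's classifier performs this kind of per-label scoring over the model's learned weight matrix.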
-
-## Read in the dictionary and document frequency count from HDFS
-    
-    val dictionary = sdc.sequenceFile(pathToData + 
"wikipediaVecs/dictionary.file-0",
-                                      classOf[Text],
-                                      classOf[IntWritable])
-    val documentFrequencyCount = sdc.sequenceFile(pathToData + 
"wikipediaVecs/df-count",
-                                                  classOf[IntWritable],
-                                                  classOf[LongWritable])
-
-    // setup the dictionary and document frequency count as maps
-    val dictionaryRDD = dictionary.map { 
-                                    case (wKey, wVal) => 
wKey.asInstanceOf[Text]
-                                                             .toString() -> 
wVal.get() 
-                                       }
-                                       
-    val documentFrequencyCountRDD = documentFrequencyCount.map {
-                                            case (wKey, wVal) => 
wKey.asInstanceOf[IntWritable]
-                                                                     .get() -> 
wVal.get() 
-                                                               }
-    
-    val dictionaryMap = dictionaryRDD.collect.map(x => x._1.toString -> 
x._2.toInt).toMap
-    val dfCountMap = documentFrequencyCountRDD.collect.map(x => x._1.toInt -> 
x._2.toLong).toMap
-
-## Define a function to tokenize and vectorize new text using our current 
dictionary
-
-For this simple example, our function ```vectorizeDocument(...)``` will 
tokenize a new document into unigrams using native Java String methods and 
vectorize using our dictionary and document frequencies. You could also use a 
[Lucene](https://lucene.apache.org/core/) analyzer for bigrams, trigrams, etc., 
and integrate Apache [Tika](https://tika.apache.org/) to extract text from 
different document types (PDF, PPT, XLS, etc.).  Here, however, we will keep it 
simple, stripping and tokenizing our text using regexes and native String 
methods.
-
-    def vectorizeDocument(document: String,
-                            dictionaryMap: Map[String,Int],
-                            dfMap: Map[Int,Long]): Vector = {
-        val wordCounts = document.replaceAll("[^\\p{L}\\p{Nd}]+", " ")
-                                    .toLowerCase
-                                    .split(" ")
-                                    .groupBy(identity)
-                                    .mapValues(_.length)         
-        val vec = new RandomAccessSparseVector(dictionaryMap.size)
-        val totalDFSize = dfMap(-1)
-        val docSize = wordCounts.size
-        for (word <- wordCounts) {
-            val term = word._1
-            if (dictionaryMap.contains(term)) {
-                val tfidf: TermWeight = new TFIDF()
-                val termFreq = word._2
-                val dictIndex = dictionaryMap(term)
-                val docFreq = dfMap(dictIndex)
-                val currentTfIdf = tfidf.calculate(termFreq,
-                                                   docFreq.toInt,
-                                                   docSize,
-                                                   totalDFSize.toInt)
-                vec.setQuick(dictIndex, currentTfIdf)
-            }
-        }
-        vec
-    }
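The `tfidf.calculate(...)` call above hides the weighting itself. As a point of reference, the textbook tf-idf weight looks like the following; this is a sketch only, and Mahout's `TFIDF` class may apply different smoothing and normalization:

```scala
// Textbook tf-idf: term frequency scaled by log of (total docs / docs containing term).
// Rare terms (low docFreq) get boosted; ubiquitous terms approach zero weight.
def tfIdf(termFreq: Int, docFreq: Int, numDocs: Int): Double = {
  require(docFreq > 0 && docFreq <= numDocs, "document frequency out of range")
  termFreq * math.log(numDocs.toDouble / docFreq)
}
```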
-
-## Set up our classifier
-
-    val labelMap = model.labelIndex
-    val numLabels = model.numLabels
-    val reverseLabelMap = labelMap.map(x => x._2 -> x._1)
-    
-    // instantiate the correct type of classifier
-    val classifier = model.isComplementary match {
-        case true => new ComplementaryNBClassifier(model)
-        case _ => new StandardNBClassifier(model)
-    }
-
-## Define an argmax function 
-
-The label with the highest score wins the classification for a given document.
-    
-    def argmax(v: Vector): (Int, Double) = {
-        var bestIdx: Int = Integer.MIN_VALUE
-        var bestScore: Double = Integer.MIN_VALUE.asInstanceOf[Int].toDouble
-        for(i <- 0 until v.size) {
-            if(v(i) > bestScore){
-                bestScore = v(i)
-                bestIdx = i
-            }
-        }
-        (bestIdx, bestScore)
-    }
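The explicit loop is needed because Mahout's `Vector` is not a Scala collection. For an ordinary `Seq[Double]`, the same argmax can be written with collection methods (a hypothetical helper, not part of the Mahout API):

```scala
// Index and value of the maximum entry; maxBy picks the pair with the highest score.
def argmaxSeq(scores: Seq[Double]): (Int, Double) =
  scores.zipWithIndex.map { case (score, idx) => (idx, score) }.maxBy(_._2)
```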
-
-## Define our TF(-IDF) vector classifier
-
-    def classifyDocument(clvec: Vector) : String = {
-        val cvec = classifier.classifyFull(clvec)
-        val (bestIdx, bestScore) = argmax(cvec)
-        reverseLabelMap(bestIdx)
-    }
-
-## Two sample news articles: United States Football and United Kingdom Football
-    
-    // A random United States football article
-    // 
http://www.reuters.com/article/2015/01/28/us-nfl-superbowl-security-idUSKBN0L12JR20150128
-    val UStextToClassify = new String("(Reuters) - Super Bowl security 
officials acknowledge" +
-        " the NFL championship game represents a high profile target on a 
world stage but are" +
-        " unaware of any specific credible threats against Sunday's showcase. 
In advance of" +
-        " one of the world's biggest single day sporting events, Homeland 
Security Secretary" +
-        " Jeh Johnson was in Glendale on Wednesday to review security 
preparations and tour" +
-        " University of Phoenix Stadium where the Seattle Seahawks and New 
England Patriots" +
-        " will battle. Deadly shootings in Paris and arrest of suspects in 
Belgium, Greece and" +
-        " Germany heightened fears of more attacks around the world and social 
media accounts" +
-        " linked to Middle East militant groups have carried a number of 
threats to attack" +
-        " high-profile U.S. events. There is no specific credible threat, said 
Johnson, who" + 
-        " has appointed a federal coordination team to work with local, state 
and federal" +
-        " agencies to ensure safety of fans, players and other workers 
associated with the" + 
-        " Super Bowl. I'm confident we will have a safe and secure and 
successful event." +
-        " Sunday's game has been given a Special Event Assessment Rating 
(SEAR) 1 rating, the" +
-        " same as in previous years, except for the year after the Sept. 11, 
2001 attacks, when" +
-        " a higher level was declared. But security will be tight and visible 
around Super" +
-        " Bowl-related events as well as during the game itself. All fans will 
pass through" +
-        " metal detectors and pat downs. Over 4,000 private security personnel 
will be deployed" +
-        " and the almost 3,000 member Phoenix police force will be on Super 
Bowl duty. Nuclear" +
-        " device sniffing teams will be deployed and a network of Bio-Watch 
detectors will be" +
-        " set up to provide a warning in the event of a biological attack. The 
Department of" +
-        " Homeland Security (DHS) said in a press release it had held special 
cyber-security" +
-        " and anti-sniper training sessions. A U.S. official said the 
Transportation Security" +
-        " Administration, which is responsible for screening airline 
passengers, will add" +
-        " screeners and checkpoint lanes at airports. Federal air marshals, 
behavior detection" +
-        " officers and dog teams will help to secure transportation systems in 
the area. We" +
-        " will be ramping it (security) up on Sunday, there is no doubt about 
that, said Federal"+
-        " Coordinator Matthew Allen, the DHS point of contact for planning and 
support. I have" +
-        " every confidence the public safety agencies that represented in the 
planning process" +
-        " are going to have their best and brightest out there this weekend 
and we will have" +
-        " a very safe Super Bowl.")
-    
-    // A random United Kingdom football article
-    // 
http://www.reuters.com/article/2015/01/26/manchester-united-swissquote-idUSL6N0V52RZ20150126
-    val UKtextToClassify = new String("(Reuters) - Manchester United have 
signed a sponsorship" +
-        " deal with online financial trading company Swissquote, expanding the 
commercial" +
-        " partnerships that have helped to make the English club one of the 
richest teams in" +
-        " world soccer. United did not give a value for the deal, the club's 
first in the sector," +
-        " but said on Monday it was a multi-year agreement. The Premier League 
club, 20 times" +
-        " English champions, claim to have 659 million followers around the 
globe, making the" +
-        " United name attractive to major brands like Chevrolet cars and 
sportswear group Adidas." +
-        " Swissquote said the global deal would allow it to use United's 
popularity in Asia to" +
-        " help it meet its targets for expansion in China. Among benefits from 
the deal," +
-        " Swissquote's clients will have a chance to meet United players and 
get behind the scenes" +
-        " at the Old Trafford stadium. Swissquote is a Geneva-based online 
trading company that" +
-        " allows retail investors to buy and sell foreign exchange, equities, 
bonds and other asset" +
-        " classes. Like other retail FX brokers, Swissquote was left nursing 
losses on the Swiss" +
-        " franc after Switzerland's central bank stunned markets this month by 
abandoning its cap" +
-        " on the currency. The fallout from the abrupt move put rival and West 
Ham United shirt" +
-        " sponsor Alpari UK into administration. Swissquote itself was forced 
to book a 25 million" +
-        " Swiss francs ($28 million) provision for its clients who were left 
out of pocket" +
-        " following the franc's surge. United's ability to grow revenues off 
the pitch has made" +
-        " them the second richest club in the world behind Spain's Real 
Madrid, despite a" +
-        " downturn in their playing fortunes. United Managing Director Richard 
Arnold said" +
-        " there was still lots of scope for United to develop sponsorships in 
other areas of" +
-        " business. The last quoted statistics that we had showed that of the 
top 25 sponsorship" +
-        " categories, we were only active in 15 of those, Arnold told Reuters. 
I think there is a" +
-        " huge potential still for the club, and the other thing we have seen 
is there is very" +
-        " significant growth even within categories. United have endured a 
tricky transition" +
-        " following the retirement of manager Alex Ferguson in 2013, finishing 
seventh in the" +
-        " Premier League last season and missing out on a place in the 
lucrative Champions League." +
-        " ($1 = 0.8910 Swiss francs) (Writing by Neil Maidment, additional 
reporting by Jemima" + 
-        " Kelly; editing by Keith Weir)")
-
-## Vectorize and classify our documents
-
-    val usVec = vectorizeDocument(UStextToClassify, dictionaryMap, dfCountMap)
-    val ukVec = vectorizeDocument(UKtextToClassify, dictionaryMap, dfCountMap)
-    
-    println("Classifying the news article about superbowl security (united 
states)")
-    classifyDocument(usVec)
-    
-    println("Classifying the news article about Manchester United (united 
kingdom)")
-    classifyDocument(ukVec)
-
-## Tie everything together in a new method to classify text 
-    
-    def classifyText(txt: String): String = {
-        val v = vectorizeDocument(txt, dictionaryMap, dfCountMap)
-        classifyDocument(v)
-    }
-
-## Now we can simply call our classifyText(...) method on any String
-
-    classifyText("Hello world from Queens")
-    classifyText("Hello world from London")
-    
-## Model persistence
-
-You can save the model to HDFS:
-
-    model.dfsWrite("/path/to/model")
-    
-And retrieve it with:
-
-    val model = NBModel.dfsRead("/path/to/model")
-
-The trained model can now be embedded in an external application.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_pages/docs/0.13.0/tutorials/how-to-build-an-app.md
----------------------------------------------------------------------
diff --git a/website/_pages/docs/0.13.0/tutorials/how-to-build-an-app.md 
b/website/_pages/docs/0.13.0/tutorials/how-to-build-an-app.md
deleted file mode 100644
index 9cf624b..0000000
--- a/website/_pages/docs/0.13.0/tutorials/how-to-build-an-app.md
+++ /dev/null
@@ -1,255 +0,0 @@
----
-layout: mahoutdoc
-title: Mahout Samsara In Core
-permalink: /docs/0.13.0/tutorials/build-app
----
-# How to create an app using Mahout
-
-This is an example of how to create a simple app using Mahout as a Library. 
The source is available on Github in the [3-input-cooc 
project](https://github.com/pferrel/3-input-cooc) with more explanation about 
what it does (has to do with collaborative filtering). For this tutorial we'll 
concentrate on the app rather than the data science.
-
-The app reads in three user-item interaction types and creates indicators for 
them using cooccurrence and cross-cooccurrence. The indicators will be written 
to text files in a format ready for indexing by the search engine of a 
search-engine-based recommender.
-
-## Setup
-In order to build and run the CooccurrenceDriver you need to install the 
following:
-
-* Install the Java 7 JDK from Oracle. Mac users look here: [Java SE 
Development Kit 
7u72](http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html).
-* Install sbt (simple build tool) 0.13.x for 
[Mac](http://www.scala-sbt.org/release/tutorial/Installing-sbt-on-Mac.html), 
[Linux](http://www.scala-sbt.org/release/tutorial/Installing-sbt-on-Linux.html) 
or [manual 
installation](http://www.scala-sbt.org/release/tutorial/Manual-Installation.html).
-* Install [Spark 
1.1.1](https://spark.apache.org/docs/1.1.1/spark-standalone.html). Don't forget 
to set up SPARK_HOME.
-* Install [Mahout 0.10.0](http://mahout.apache.org/general/downloads.html). 
Don't forget to set up MAHOUT_HOME and MAHOUT_LOCAL.
-
-Why install if you are only using them as a library? Certain binaries and 
scripts are required by the libraries to get information about the environment, 
such as discovering where jars are located.
-
-Spark requires a set of jars on the classpath for the client side part of an 
app and another set of jars must be passed to the Spark Context for running 
distributed code. The example should discover all the necessary classes 
automatically.
-
-## Application
-Using Mahout as a library in an application will require a little Scala code. 
Scala has an ```App``` trait, so we'll create an object that inherits from ```App```:
-
-
-    object CooccurrenceDriver extends App {
-    }
-    
-
-This will look a little different from Java since ```App``` does delayed 
initialization: the object's body is executed when the app is launched, playing 
the role of the main method you would write in Java.
-
-Before we can execute something on Spark we'll need to create a context. We 
could use raw Spark calls here, but default values are set up for a Mahout 
context by using the Mahout helper function:
-
-    implicit val mc = mahoutSparkContext(masterUrl = "local", 
-      appName = "CooccurrenceDriver")
-    
-We need to read in three files containing different interaction types. The 
files will each be read into a Mahout IndexedDataset. This allows us to 
preserve application-specific user and item IDs throughout the calculations.
-
-For example, here is data/purchase.csv:
-
-    u1,iphone
-    u1,ipad
-    u2,nexus
-    u2,galaxy
-    u3,surface
-    u4,iphone
-    u4,galaxy
-
-Mahout has a helper function that reads text-delimited files, 
```SparkEngine.indexedDatasetDFSReadElements```. The function reads single-element 
tuples (user-id,item-id) in a distributed way to create the IndexedDataset. 
Distributed Row Matrices (DRM) and Vectors are important data types supplied by 
Mahout, and IndexedDataset is like a very lightweight DataFrame in R; it wraps a 
DRM with HashBiMaps for row and column IDs. 
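To see what those row/column dictionaries do, here is a plain-Scala stand-in for the bidirectional ID mapping; this is an illustration only, Mahout actually uses Guava's `HashBiMap`:

```scala
// Assign each distinct external string ID a dense integer index, keeping
// both directions of the mapping, as IndexedDataset's dictionaries do.
class IdDictionary {
  private var forward = Map.empty[String, Int]
  private var reverse = Map.empty[Int, String]

  // Return the existing index for id, or assign the next dense index.
  def indexOf(id: String): Int = forward.getOrElse(id, {
    val idx = forward.size
    forward += id -> idx
    reverse += idx -> id
    idx
  })

  def idOf(index: Int): String = reverse(index)
  def size: Int = forward.size
}
```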
-
-One important thing to note about this example is that we read in all datasets 
before we adjust the number of rows in them to match the total number of users 
in the data. This is so the math works out [(A'A, A'B, 
A'C)](http://mahout.apache.org/users/algorithms/intro-cooccurrence-spark.html): 
even if some users took one action but not another, there must be the same 
number of rows in all matrices.
-
-    /**
-     * Read files of element tuples and create IndexedDatasets one per action. 
These 
-     * share a userID BiMap but have their own itemID BiMaps
-     */
-    def readActions(actionInput: Array[(String, String)]): Array[(String, 
IndexedDataset)] = {
-      var actions = Array[(String, IndexedDataset)]()
-
-      val userDictionary: BiMap[String, Int] = HashBiMap.create()
-
-      // The first action named in the sequence is the "primary" action and 
-      // begins to fill up the user dictionary
-      for (actionDescription <- actionInput) { // grab the path to actions
-        val action: IndexedDataset = SparkEngine.indexedDatasetDFSReadElements(
-          actionDescription._2,
-          schema = DefaultIndexedDatasetElementReadSchema,
-          existingRowIDs = userDictionary)
-        userDictionary.putAll(action.rowIDs)
-        // put the name in the tuple with the indexedDataset
-        actions = actions :+ (actionDescription._1, action) 
-      }
-
-      // After all actions are read in, the userDictionary will contain every 
user seen, 
-      // even if they did not take every action. Now we adjust the row 
rank of 
-      // all IndexedDatasets to have this number of rows.
-      // Note: this is very important or the cooccurrence calc may fail
-      val numUsers = userDictionary.size() // one more than the cardinality
-
-      val resizedNameActionPairs = actions.map { a =>
-        // resize the matrix, in effect, by adding empty rows
-        val resizedMatrix = a._2.create(a._2.matrix, userDictionary, 
a._2.columnIDs).newRowCardinality(numUsers)
-        (a._1, resizedMatrix) // return the Tuple of (name, IndexedDataset)
-      }
-      resizedNameActionPairs // return the array of Tuples
-    }
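The resizing at the end matters because A'B is only defined when A and B have the same number of rows. A toy illustration of padding per-action matrices to a common row count (plain Scala, not Mahout's `newRowCardinality`):

```scala
// Pad a "matrix" (rows of Doubles) with zero rows until it has one row per
// known user, so all per-action matrices end up with the same row count.
def padRows(matrix: Vector[Vector[Double]],
            numUsers: Int,
            numCols: Int): Vector[Vector[Double]] =
  matrix ++ Vector.fill(numUsers - matrix.length)(Vector.fill(numCols)(0.0))
```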
-
-
-Now that we have the data read in we can perform the cooccurrence calculation.
-
-    // actions.map creates an array of just the IndexedDatasets
-    val indicatorMatrices = SimilarityAnalysis.cooccurrencesIDSs(
-      actions.map(a => a._2)) 
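Conceptually, cross-cooccurrence counts, per user, how often an item from the primary action appears together with an item from another action. A minimal sketch of the raw counting step (Mahout's `SimilarityAnalysis` additionally filters these counts with a log-likelihood ratio test, which is omitted here):

```scala
// Raw cross-cooccurrence: for each user, pair every item from the primary
// action with every item from the secondary action, then count the pairs.
def crossCooccurrence(primary: Map[String, Set[String]],
                      secondary: Map[String, Set[String]]): Map[(String, String), Int] = {
  val pairs = for {
    (user, items) <- primary.toSeq
    a <- items
    b <- secondary.getOrElse(user, Set.empty[String])
  } yield (a, b)
  pairs.groupBy(identity).map { case (pair, occurrences) => pair -> occurrences.size }
}
```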
-
-All we need to do now is write the indicators.
-
-    // zip a pair of arrays into an array of pairs, reattaching the action 
names
-    val indicatorDescriptions = actions.map(a => a._1).zip(indicatorMatrices)
-    writeIndicators(indicatorDescriptions)
-
-
-The ```writeIndicators``` method uses the default write function 
```dfsWrite```.
-
-    /**
-     * Write indicatorMatrices to the output dir in the default format
-     * for indexing by a search engine.
-     */
-    def writeIndicators( indicators: Array[(String, IndexedDataset)]) = {
-      for (indicator <- indicators ) {
-        // create a name based on the type of indicator
-        val indicatorDir = OutputPath + indicator._1
-        indicator._2.dfsWrite(
-          indicatorDir,
-          // Schema tells the writer to omit LLR strengths 
-          // and format for search engine indexing
-          IndexedDatasetWriteBooleanSchema) 
-      }
-    }
- 
-
-See the GitHub project for the full source. Now we create a build.sbt to build 
the example. 
-
-    name := "cooccurrence-driver"
-
-    organization := "com.finderbots"
-
-    version := "0.1"
-
-    scalaVersion := "2.10.4"
-
-    val sparkVersion = "1.1.1"
-
-    libraryDependencies ++= Seq(
-      "log4j" % "log4j" % "1.2.17",
-      // Mahout's Spark code
-      "commons-io" % "commons-io" % "2.4",
-      "org.apache.mahout" % "mahout-math-scala_2.10" % "0.10.0",
-      "org.apache.mahout" % "mahout-spark_2.10" % "0.10.0",
-      "org.apache.mahout" % "mahout-math" % "0.10.0",
-      "org.apache.mahout" % "mahout-hdfs" % "0.10.0",
-      // Google collections, AKA Guava
-      "com.google.guava" % "guava" % "16.0")
-
-    resolvers += "typesafe repo" at 
"http://repo.typesafe.com/typesafe/releases/"
-
-    resolvers += Resolver.mavenLocal
-
-    packSettings
-
-    packMain := Map(
-      "cooc" -> "CooccurrenceDriver")
-
-
-## Build
-Build the example from the project's root folder:
-
-    $ sbt pack
-
-This will automatically set up some launcher scripts for the driver. To run, 
execute:
-
-    $ target/pack/bin/cooc
-    
-The driver will execute in Spark standalone mode and put the data in 
/path/to/3-input-cooc/data/indicators/*indicator-type*
-
-## Using a Debugger
-To build and run this example in a debugger like IntelliJ IDEA, install IDEA 
from the IntelliJ site and add the Scala plugin.
-
-Open IDEA and go to the menu File->New->Project from existing 
sources->SBT->/path/to/3-input-cooc. This will create an IDEA project from 
```build.sbt``` in the root directory.
-
-At this point you may create a "Debug Configuration" to run. In the menu 
choose Run->Edit Configurations. Under "Default" choose "Application". In the 
dialog, hit the ellipsis button "..." to the right of "Environment Variables" and 
fill in your versions of JAVA_HOME, SPARK_HOME, and MAHOUT_HOME. In the 
configuration editor, under "Use classpath from", choose the root-3-input-cooc 
module. 
-
-![image](http://mahout.apache.org/images/debug-config.png)
-
-Now choose "Application" in the left pane and hit the plus sign "+". Give the 
config a name and hit the ellipsis button to the right of the "Main class" field 
as shown.
-
-![image](http://mahout.apache.org/images/debug-config-2.png)
-
-
-After setting breakpoints you are now ready to debug the configuration. Go to 
the Run->Debug... menu and pick your configuration. This will execute using a 
local standalone instance of Spark.
-
-## The Mahout Shell
-
-For small script-like apps you may wish to use the Mahout shell. It is a Scala 
REPL-style interactive shell built on the Spark shell with Mahout-Samsara 
extensions.
-
-To turn CooccurrenceDriver.scala into a script, make the following changes:
-
-* You won't need the context, since it is created when the shell is launched; 
comment that line out.
-* Replace the logger.info lines with println.
-* Remove the package info, since it's not needed. This produces the file 
```path/to/3-input-cooc/bin/CooccurrenceDriver.mscala```. 
-
-Note the extension ```.mscala```, indicating that we are using Mahout's Scala 
extensions for math, otherwise known as 
[Mahout-Samsara](http://mahout.apache.org/users/environment/out-of-core-reference.html).
-
-To run the code, make sure the output directory does not already exist:
-
-    $ rm -r /path/to/3-input-cooc/data/indicators
-    
-Launch the Mahout + Spark shell:
-
-    $ mahout spark-shell
-    
-You'll see the Mahout splash:
-
-    MAHOUT_LOCAL is set, so we don't add HADOOP_CONF_DIR to classpath.
-
-                         _                 _
-             _ __ ___   __ _| |__   ___  _   _| |_
-            | '_ ` _ \ / _` | '_ \ / _ \| | | | __|
-            | | | | | | (_| | | | | (_) | |_| | |_
-            |_| |_| |_|\__,_|_| |_|\___/ \__,_|\__|  version 0.10.0
-
-      
-    Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 
1.7.0_72)
-    Type in expressions to have them evaluated.
-    Type :help for more information.
-    15/04/26 09:30:48 WARN NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
-    Created spark context..
-    Mahout distributed context is available as "implicit val sdc".
-    mahout> 
-
-To load the driver type:
-
-    mahout> :load /path/to/3-input-cooc/bin/CooccurrenceDriver.mscala
-    Loading ./bin/CooccurrenceDriver.mscala...
-    import com.google.common.collect.{HashBiMap, BiMap}
-    import org.apache.log4j.Logger
-    import org.apache.mahout.math.cf.SimilarityAnalysis
-    import org.apache.mahout.math.indexeddataset._
-    import org.apache.mahout.sparkbindings._
-    import scala.collection.immutable.HashMap
-    defined module CooccurrenceDriver
-    mahout> 
-
-To run the driver type:
-
-    mahout> CooccurrenceDriver.main(args = Array(""))
-    
-You'll get some stats printed:
-
-    Total number of users for all actions = 5
-    purchase indicator matrix:
-      Number of rows for matrix = 4
-      Number of columns for matrix = 5
-      Number of rows after resize = 5
-    view indicator matrix:
-      Number of rows for matrix = 4
-      Number of columns for matrix = 5
-      Number of rows after resize = 5
-    category indicator matrix:
-      Number of rows for matrix = 5
-      Number of columns for matrix = 7
-      Number of rows after resize = 5
-    
-If you look in ```/path/to/3-input-cooc/data/indicators``` you should find folders containing the indicator matrices.

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_pages/docs/0.13.0/tutorials/play-with-shell.md
----------------------------------------------------------------------
diff --git a/website/_pages/docs/0.13.0/tutorials/play-with-shell.md 
b/website/_pages/docs/0.13.0/tutorials/play-with-shell.md
deleted file mode 100644
index 0c88839..0000000
--- a/website/_pages/docs/0.13.0/tutorials/play-with-shell.md
+++ /dev/null
@@ -1,198 +0,0 @@
----
-layout: mahoutdoc
-title: Mahout Samsara In Core
-permalink: /docs/0.13.0/tutorials/samsara-spark-shell
----
-# Playing with Mahout's Spark Shell 
-
-This tutorial will show you how to play with Mahout's scala DSL for linear 
algebra and its Spark shell. **Please keep in mind that this code is still in a 
very early experimental stage**.
-
-_(Edited for 0.10.2)_
-
-## Intro
-
-We'll use an excerpt of a publicly available [dataset about cereals](http://lib.stat.cmu.edu/DASL/Datafiles/Cereals.html). The dataset lists the protein, fat, carbohydrate and sugars (in milligrams) contained in a set of cereals, as well as a customer rating for each cereal. Our aim for this example is to fit a linear model which infers the customer rating from the ingredients.
-
-
-Name                    | protein | fat | carbo | sugars | rating
-:-----------------------|:--------|:----|:------|:-------|:---------
-Apple Cinnamon Cheerios | 2       | 2   | 10.5  | 10     | 29.509541
-Cap'n'Crunch            | 1       | 2   | 12    | 12     | 18.042851
-Cocoa Puffs             | 1       | 1   | 12    | 13     | 22.736446
-Froot Loops             | 2       | 1   | 11    | 13     | 32.207582
-Honey Graham Ohs        | 1       | 2   | 12    | 11     | 21.871292
-Wheaties Honey Gold     | 2       | 1   | 16    | 8      | 36.187559
-Cheerios                | 6       | 2   | 17    | 1      | 50.764999
-Clusters                | 3       | 2   | 13    | 7      | 40.400208
-Great Grains Pecan      | 3       | 3   | 13    | 4      | 45.811716
-
-
-## Installing Mahout & Spark on your local machine
-
-We describe how to do a quick toy setup of Spark & Mahout on your local 
machine, so that you can run this example and play with the shell. 
-
- 1. Download [Apache Spark 1.6.2](http://d3kbcqa49mib13.cloudfront.net/spark-1.6.2-bin-hadoop2.6.tgz) and unpack the archive file
- 1. If you chose a source distribution instead of the prebuilt binary, change to the directory where you unpacked Spark and type ```sbt/sbt assembly``` to build it
- 1. Create a directory for Mahout somewhere on your machine, change to it and check out the master branch of Apache Mahout from GitHub: ```git clone https://github.com/apache/mahout mahout```
- 1. Change to the ```mahout``` directory and build Mahout using ```mvn -DskipTests clean install```
- 
-## Starting Mahout's Spark shell
-
- 1. Go to the directory where you unpacked Spark and type ```sbin/start-all.sh``` to start Spark locally
- 1. Open a browser, point it to 
[http://localhost:8080/](http://localhost:8080/) to check whether Spark 
successfully started. Copy the url of the spark master at the top of the page 
(it starts with **spark://**)
- 1. Define the following environment variables: <pre class="codehilite">export 
MAHOUT_HOME=[directory into which you checked out Mahout]
-export SPARK_HOME=[directory where you unpacked Spark]
-export MASTER=[url of the Spark master]
-</pre>
- 1. Finally, change to the directory where you unpacked Mahout and type ```bin/mahout spark-shell```.
-You should see the shell starting and get the prompt ```mahout> ```. Check the
-[FAQ](http://mahout.apache.org/users/sparkbindings/faq.html) for further troubleshooting.
-
-## Implementation
-
-We'll use the shell to interactively play with the data and incrementally 
implement a simple [linear 
regression](https://en.wikipedia.org/wiki/Linear_regression) algorithm. Let's 
first load the dataset. Usually, we wouldn't need Mahout unless we processed a 
large dataset stored in a distributed filesystem. But for the sake of this 
example, we'll use our tiny toy dataset and "pretend" it was too big to fit 
onto a single machine.
-
-*Note: You can incrementally follow the example by copy-and-pasting the code 
into your running Mahout shell.*
-
-Mahout's linear algebra DSL has an abstraction called *DistributedRowMatrix 
(DRM)* which models a matrix that is partitioned by rows and stored in the 
memory of a cluster of machines. We use ```dense()``` to create a dense 
in-memory matrix from our toy dataset and use ```drmParallelize``` to load it 
into the cluster, "mimicking" a large, partitioned dataset.
-
-<div class="codehilite"><pre>
-val drmData = drmParallelize(dense(
-  (2, 2, 10.5, 10, 29.509541),  // Apple Cinnamon Cheerios
-  (1, 2, 12,   12, 18.042851),  // Cap'n'Crunch
-  (1, 1, 12,   13, 22.736446),  // Cocoa Puffs
-  (2, 1, 11,   13, 32.207582),  // Froot Loops
-  (1, 2, 12,   11, 21.871292),  // Honey Graham Ohs
-  (2, 1, 16,   8,  36.187559),  // Wheaties Honey Gold
-  (6, 2, 17,   1,  50.764999),  // Cheerios
-  (3, 2, 13,   7,  40.400208),  // Clusters
-  (3, 3, 13,   4,  45.811716)), // Great Grains Pecan
-  numPartitions = 2);
-</pre></div>
-
-Have a look at this matrix. The first four columns represent the ingredients 
-(our features) and the last column (the rating) is the target variable for 
-our regression. [Linear 
regression](https://en.wikipedia.org/wiki/Linear_regression) 
-assumes that the **target variable** `\(\mathbf{y}\)` is generated by the 
-linear combination of **the feature matrix** `\(\mathbf{X}\)` with the 
-**parameter vector** `\(\boldsymbol{\beta}\)` plus the
- **noise** `\(\boldsymbol{\varepsilon}\)`, summarized in the formula 
-`\(\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon}\)`. 
-Our goal is to find an estimate of the parameter vector 
-`\(\boldsymbol{\beta}\)` that explains the data very well.
-
-As a first step, we extract `\(\mathbf{X}\)` and `\(\mathbf{y}\)` from our data matrix. We get *X* by slicing: we take all rows (denoted by ```::```) and the first four columns, which contain the ingredients in milligrams. Note that the result is again a DRM. The shell will not execute this code yet; it records the history of operations and defers execution until we actually access a result. **Mahout's DSL automatically optimizes and parallelizes all operations on DRMs and runs them on Apache Spark.**
-
-<div class="codehilite"><pre>
-val drmX = drmData(::, 0 until 4)
-</pre></div>
-
-Next, we extract the target variable vector *y*, the fifth column of the data 
matrix. We assume this one fits into our driver machine, so we fetch it into 
memory using ```collect```:
-
-<div class="codehilite"><pre>
-val y = drmData.collect(::, 4)
-</pre></div>
-
-Now we are ready to think about a mathematical way to estimate the parameter 
vector *β*. A simple textbook approach is [ordinary least squares 
(OLS)](https://en.wikipedia.org/wiki/Ordinary_least_squares), which minimizes 
the sum of residual squares between the true target variable and the prediction 
of the target variable. In OLS, there is even a closed form expression for 
estimating `\(\boldsymbol{\beta}\)` as 
-`\(\left(\mathbf{X}^{\top}\mathbf{X}\right)^{-1}\mathbf{X}^{\top}\mathbf{y}\)`.
-
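As a cross-check of that closed-form expression, here is a quick sketch in plain NumPy on the toy cereal data from the table above. This is only an illustration of the math, not Mahout's API; the Mahout shell itself uses the Scala DSL shown in this tutorial.

```python
import numpy as np

# Toy cereal dataset from the table above: protein, fat, carbo, sugars, rating
data = np.array([
    [2, 2, 10.5, 10, 29.509541],
    [1, 2, 12,   12, 18.042851],
    [1, 1, 12,   13, 22.736446],
    [2, 1, 11,   13, 32.207582],
    [1, 2, 12,   11, 21.871292],
    [2, 1, 16,    8, 36.187559],
    [6, 2, 17,    1, 50.764999],
    [3, 2, 13,    7, 40.400208],
    [3, 3, 13,    4, 45.811716],
])
X, y = data[:, :4], data[:, 4]

# Closed-form OLS estimate: beta = (X^T X)^{-1} X^T y,
# computed by solving the normal equations rather than inverting X^T X.
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)
```

The resulting `beta` has one weight per ingredient column, matching what the distributed computation below produces on the same data.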
-The first thing we compute for this is `\(\mathbf{X}^{\top}\mathbf{X}\)`. The code for doing this in Mahout's scala DSL maps directly to the mathematical formula: the operation ```.t()``` transposes a matrix and, analogous to R, ```%*%``` denotes matrix multiplication.
-
-<div class="codehilite"><pre>
-val drmXtX = drmX.t %*% drmX
-</pre></div>
-
-The same is true for computing `\(\mathbf{X}^{\top}\mathbf{y}\)`. We can simply type the math as scala expressions into the shell. Here, *X* lives in the cluster, while *y* is in the memory of the driver, and the result is again a DRM.
-<div class="codehilite"><pre>
-val drmXty = drmX.t %*% y
-</pre></div>
-
-We're nearly done. The next step is to fetch `\(\mathbf{X}^{\top}\mathbf{X}\)` and
-`\(\mathbf{X}^{\top}\mathbf{y}\)` into the memory of our driver machine (we are targeting
-feature matrices that are tall and skinny,
-so we can assume that `\(\mathbf{X}^{\top}\mathbf{X}\)` is small enough
-to fit in). Then, we provide them to an in-memory solver (Mahout provides
-an analog to R's ```solve()``` for that) which computes ```beta```, our
-OLS estimate of the parameter vector `\(\boldsymbol{\beta}\)`.
-
-<div class="codehilite"><pre>
-val XtX = drmXtX.collect
-val Xty = drmXty.collect(::, 0)
-
-val beta = solve(XtX, Xty)
-</pre></div>
-
-That's it! We have implemented a distributed linear regression algorithm
-on Apache Spark. I hope you agree that we didn't have to worry a lot about
-parallelization and distributed systems. The goal of Mahout's linear algebra
-DSL is to abstract away the ugliness of programming a distributed system
-as much as possible, while still retaining decent performance and
-scalability.
-
-We can now check how well our model fits its training data.
-First, we multiply the feature matrix `\(\mathbf{X}\)` by our estimate of
-`\(\boldsymbol{\beta}\)`. Then, we look at the difference (via L2-norm) between
-the target variable `\(\mathbf{y}\)` and the fitted target variable:
-
-<div class="codehilite"><pre>
-val yFitted = (drmX %*% beta).collect(::, 0)
-(y - yFitted).norm(2)
-</pre></div>
-
-We hope we have shown that Mahout's shell allows people to interactively and incrementally write algorithms. We entered a lot of individual commands, one by one, until we got the desired results. We can now refactor a little by wrapping our statements into easy-to-use functions. Function definitions follow standard scala syntax.
-
-We put all the commands for ordinary least squares into a function ```ols```. 
-
-<div class="codehilite"><pre>
-def ols(drmX: DrmLike[Int], y: Vector) = 
-  solve(drmX.t %*% drmX, drmX.t %*% y)(::, 0)
-
-</pre></div>
-
-Note that the DSL inserts an implicit `collect` if coercion rules require an in-core
-argument. Hence, we can simply skip explicit `collect`s.
-
-Next, we define a function ```goodnessOfFit``` that tells how well a model 
fits the target variable:
-
-<div class="codehilite"><pre>
-def goodnessOfFit(drmX: DrmLike[Int], beta: Vector, y: Vector) = {
-  val fittedY = (drmX %*% beta).collect(::, 0)
-  (y - fittedY).norm(2)
-}
-</pre></div>
-
-So far we have left out an important aspect of a standard linear regression
-model: usually there is a constant bias term added to the model. Without
-it, our model is forced through the origin and we only learn the slope of
-the fit. An easy way to add such a bias term to our model is to add a
-column of ones to the feature matrix `\(\mathbf{X}\)`.
-The corresponding weight in the parameter vector will then be the bias term.
-
-Here is how we add a bias column:
-
-<div class="codehilite"><pre>
-val drmXwithBiasColumn = drmX cbind 1
-</pre></div>
-
-Now we can give the newly created DRM ```drmXwithBiasColumn``` to our model 
fitting method ```ols``` and see how well the resulting model fits the training 
data with ```goodnessOfFit```. You should see a large improvement in the result.
-
-<div class="codehilite"><pre>
-val betaWithBiasTerm = ols(drmXwithBiasColumn, y)
-goodnessOfFit(drmXwithBiasColumn, betaWithBiasTerm, y)
-</pre></div>
-
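The effect of the bias column can be sketched numerically as well. The following plain-NumPy illustration (again an assumption of this sketch, not Mahout's API) mirrors `ols`, `goodnessOfFit` and `cbind 1` on the cereal data from the table above, and shows that the residual norm can only shrink when the intercept is learned:

```python
import numpy as np

# Toy cereal dataset from the table above: protein, fat, carbo, sugars, rating
data = np.array([
    [2, 2, 10.5, 10, 29.509541], [1, 2, 12, 12, 18.042851],
    [1, 1, 12,   13, 22.736446], [2, 1, 11, 13, 32.207582],
    [1, 2, 12,   11, 21.871292], [2, 1, 16,  8, 36.187559],
    [6, 2, 17,    1, 50.764999], [3, 2, 13,  7, 40.400208],
    [3, 3, 13,    4, 45.811716],
])
X, y = data[:, :4], data[:, 4]

def ols(X, y):
    # Closed-form OLS estimate (X^T X)^{-1} X^T y via the normal equations
    return np.linalg.solve(X.T @ X, X.T @ y)

def goodness_of_fit(X, beta, y):
    # L2 norm of the residual between y and the fitted values
    return np.linalg.norm(y - X @ beta)

# Without the bias column, the model is forced through the origin...
fit_no_bias = goodness_of_fit(X, ols(X, y), y)

# ...with a column of ones appended (the analog of `drmX cbind 1`),
# the intercept is learned too and the residual norm improves.
X_bias = np.hstack([X, np.ones((X.shape[0], 1))])
fit_with_bias = goodness_of_fit(X_bias, ols(X_bias, y), y)

print(fit_no_bias, fit_with_bias)
```

Appending a column enlarges the column space the least-squares fit can draw on, so the residual norm with the bias column is never larger than without it.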
-As a further optimization, we can make use of the DSL's caching functionality. We use ```drmXwithBiasColumn``` repeatedly as input to computations, so it might be beneficial to cache it in memory. This is achieved by calling ```checkpoint()```. In the end, we remove it from the cache with ```uncache()```:
-
-<div class="codehilite"><pre>
-val cachedDrmX = drmXwithBiasColumn.checkpoint()
-
-val betaWithBiasTerm = ols(cachedDrmX, y)
-val goodness = goodnessOfFit(cachedDrmX, betaWithBiasTerm, y)
-
-cachedDrmX.uncache()
-
-goodness
-</pre></div>
-
-
-Liked what you saw? Check out Mahout's overview of the [Scala and Spark bindings](https://mahout.apache.org/users/sparkbindings/home.html).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_pages/dustin.html
----------------------------------------------------------------------
diff --git a/website/_pages/dustin.html b/website/_pages/dustin.html
deleted file mode 100644
index de57b01..0000000
--- a/website/_pages/dustin.html
+++ /dev/null
@@ -1,20 +0,0 @@
----
-layout: mahout
-title: Dustins Test
-permalink: /dustin/
----
-
-It doesn't matter what comes, fresh goes better in life, with Mentos fresh and 
full of Life! Nothing gets to you, stayin' fresh, stayin' cool, with Mentos 
fresh and full of life! Fresh goes better! Mentos freshness! Fresh goes better 
with Mentos, fresh and full of life! Mentos! The Freshmaker!
-
-We got a right to pick a little fight, Bonanza! If anyone fights anyone of us, 
he's gotta fight with me! We're not a one to saddle up and run, Bonanza! Anyone 
of us who starts a little fuss knows he can count on me! One for four, four for 
one, this we guarantee. We got a right to pick a little fight, Bonanza! If 
anyone fights anyone of us he's gotta fight with me!
-
-                <div class="col-md-12">
-                    {% for post in paginator.posts %}
-                        {% include tile.html %}
-                    {% endfor %}
-                    
-
-                    {% include pagination.html %}
-                </div>
-
-

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_pages/how-to-contribute.md
----------------------------------------------------------------------
diff --git a/website/_pages/how-to-contribute.md 
b/website/_pages/how-to-contribute.md
deleted file mode 100644
index cfedf8d..0000000
--- a/website/_pages/how-to-contribute.md
+++ /dev/null
@@ -1,153 +0,0 @@
----
-layout: mahout
-title: How To Contribute
-permalink: /How-To-Contribute/
----
-
-# How to contribute
-
-*Contributing to an Apache project* is about more than just writing code --
-it's about doing what you can to make the project better.  There are lots
-of ways to contribute!
-
-<a name="HowToContribute-BeInvolved"></a>
-## Get Involved
-
-Discussions at Apache happen on the mailing list. To get involved, you should 
join the [Mahout mailing lists](/general/mailing-lists,-irc-and-archives.html). 
 In particular:
-
-* The **user list** (to help others)
-* The **development list** (to join discussions of changes)  -- This is the 
best place
-to understand where the project is headed.
-* The **commit list** (to see changes as they are made)
-
-Please keep discussions about Mahout on list so that everyone benefits. 
-Emailing individual committers with questions about specific Mahout issues
-is discouraged.  See 
[http://people.apache.org/~hossman/#private_q](http://people.apache.org/~hossman/#private_q)
-.  Apache  has a number of [email tips for contributors][1] as well.
-
-<a name="HowToContribute-WhattoWorkOn?"></a>
-## What to Work On?
-
-What do you like to work on?  There are a ton of things in Mahout that we
-would love to have contributions for: documentation, performance improvements, 
better tests, etc.
-The best place to start is by looking into our [issue 
tracker](https://issues.apache.org/jira/browse/MAHOUT) and
-seeing what bugs have been reported and seeing if any look like you could
-take them on.  Small, well written, well tested patches are a great way to
-get your feet wet.  It could be something as simple as fixing a typo.  The
-more important piece is you are showing you understand the necessary steps
-for making changes to the code.  Mahout is a pretty big beast at this
-point, so changes, especially from non-committers, need to be evolutionary
-not revolutionary since it is often very difficult to evaluate the merits
-of a very large patch. Think small, at least to start!
-
-Beyond JIRA, hang out on the dev@ mailing list. That's where we discuss
-what we are working on in the internals and where you can get a sense of
-where people are working.
-
-Also, documentation is a great way to familiarize yourself with the code
-and is always a welcome addition to the codebase and this website. Feel free 
-to contribute texts and tutorials! Committers will make sure they are added 
-to this website, and we have a [guide for making website updates][2].
-We also have a [wide variety of books and slides][3] for learning more about 
-machine learning algorithms. 
-
-If you are interested in working towards being a committer, [general 
guidelines are available online](/developers/how-to-become-a-committer.html).
-
-<a name="HowToContribute-ContributingCode(Features,BigFixes,Tests,etc...)"></a>
-## Contributing Code (Features, Bug Fixes, Tests, etc...)
-
-This section identifies the ''optimal'' steps a community member can take to
-submit changes or additions to the Mahout code base. These can be new
-features, bug fixes, optimizations of existing features, or tests of
-existing code to prove it works as advertised (and to make it more robust
-against possible future changes).
-
-Please note that these are the "optimal" steps, and community members that
-don't have the time or resources to do everything outlined below
-should not be discouraged from submitting their ideas "as is", per "Yonik
-Seeley's (Solr committer) Law of Patches":
-
-*A half-baked patch in Jira, with no documentation, no tests and no backwards 
compatibility is better than no patch at all.*
-
-Just because you may not have the time to write unit tests, clean up
-backwards compatibility issues, or add documentation doesn't mean other
-people don't. Putting your patch out there allows other people to try it
-and possibly improve it.
-
-<a name="HowToContribute-Gettingthesourcecode"></a>
-## Getting the source code
-
-First of all, you need to get the [Mahout source 
code](/developers/version-control.html). Most development is done on the 
"trunk".  Mahout mirrors its codebase on 
[GitHub](https://github.com/apache/mahout). The first step to making a 
contribution is to fork Mahout's master branch to your GitHub repository.  
-
-
-<a name="HowToContribute-MakingChanges"></a>
-## Making Changes
-
-Before you start, you should send a message to the [Mahout developer mailing 
list](/general/mailing-lists,-irc-and-archives.html)
-(note: you have to subscribe before you can post), or file a ticket in  our 
[issue tracker](/developers/issue-tracker.html).
-Describe your proposed changes and check that they fit in with what others are 
doing and have planned for the project.  Be patient, it may take folks a while 
to understand your requirements.
-
- 1. Create a JIRA issue (if one does not already exist or you haven't already created one)
- 2. Pull the code from your GitHub repository
- 3. Ensure that you are working with the latest code from the [apache/mahout](https://github.com/apache/mahout) master branch.
- 4. Modify the source code and add some (very) nice features.
-     - Be sure to adhere to the following points:
-         - All public classes and methods should have informative Javadoc comments.
-         - Code should be formatted according to standard [Java coding conventions](http://www.oracle.com/technetwork/java/codeconventions-150003.pdf), with two exceptions:
-             - indent two spaces per level, not four.
-             - lines can be 120 characters, not 80.
-         - Contributions should pass existing unit tests.
-         - New unit tests should be provided to demonstrate bugs and fixes.
- 5. Commit the changes to your local repository.
- 6. Push the code back up to your GitHub repository.
- 7. Create a [Pull Request](https://help.github.com/articles/creating-a-pull-request) to the apache/mahout repository on GitHub.
-     - Include the corresponding JIRA issue number and description in the title of the pull request:
-        - e.g. MAHOUT-xxxx: < JIRA-Issue-Description >
- 8. Committers and other members of the Mahout community can then comment on the Pull Request. Be sure to watch for comments, respond, and make any necessary changes.
-
-Please be patient. Committers are busy people too. If no one responds to your
-Pull Request after a few days, please send friendly reminders to the mailing
-list. Please incorporate others' suggestions into your changes if you think
-they're reasonable. Finally, remember that even changes that are not committed
-are useful to the community.
-
-<a name="HowToContribute-UnitTests"></a>
-#### Unit Tests
-
-Please make sure that all unit tests succeed before creating your Pull Request.
-
-Run *mvn clean test*. If you see *BUILD SUCCESSFUL* after the tests have finished, all is ok; if you see *BUILD FAILED*,
-please carefully read the error messages and check your code.
-
-#### Do's and Don'ts
-
-Please do not:
-
-* reformat code unrelated to the bug being fixed: formatting changes should
-be done in separate issues.
-* comment out code that is now obsolete: just remove it.
-* insert comments around each change, marking the change: folks can use
-subversion to figure out what's changed and by whom.
-* make things public which are not required by end users.
-
-Please do:
-
-* try to adhere to the coding style of files you edit;
-* comment code whose function or rationale is not obvious;
-* update documentation (e.g., ''package.html'' files, the website, etc.)
-
-
-<a name="HowToContribute-Review/ImproveExistingPatches"></a>
-## Review/Improve Existing Pull Requests
-
-If there's a JIRA issue that already has a Pull Request with changes that you 
think are really good, and works well for you -- please add a comment saying 
so.   If there's room
-for improvement (more tests, better javadocs, etc...) then make the changes on 
your GitHub branch and add a comment about them.        If a lot of people 
review a Pull Request and give it a
-thumbs up, that's a good sign for committers when deciding if it's worth 
spending time to review it -- and if other people have already put in
-effort to improve the docs/tests for an issue, that helps even more.
-
-For more information see [Handling GitHub 
PRs](http://mahout.apache.org/developers/github.html).
-
-
-  [1]: http://www.apache.org/dev/contrib-email-tips
-  [2]: http://mahout.apache.org/developers/how-to-update-the-website.html
-  [3]: http://mahout.apache.org/general/books-tutorials-and-talks.html
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_pages/how-to-contribute.mdtext
----------------------------------------------------------------------
diff --git a/website/_pages/how-to-contribute.mdtext 
b/website/_pages/how-to-contribute.mdtext
deleted file mode 100644
index 37a6cbf..0000000
--- a/website/_pages/how-to-contribute.mdtext
+++ /dev/null
@@ -1,149 +0,0 @@
-Title: How To Contribute
-
-# How to contribute
-
-*Contributing to an Apache project* is about more than just writing code --
-it's about doing what you can to make the project better.  There are lots
-of ways to contribute!
-
-<a name="HowToContribute-BeInvolved"></a>
-## Get Involved
-
-Discussions at Apache happen on the mailing list. To get involved, you should 
join the [Mahout mailing lists](/general/mailing-lists,-irc-and-archives.html). 
 In particular:
-
-* The **user list** (to help others)
-* The **development list** (to join discussions of changes)  -- This is the 
best place
-to understand where the project is headed.
-* The **commit list** (to see changes as they are made)
-
-Please keep discussions about Mahout on list so that everyone benefits. 
-Emailing individual committers with questions about specific Mahout issues
-is discouraged.  See 
[http://people.apache.org/~hossman/#private_q](http://people.apache.org/~hossman/#private_q)
-.  Apache  has a number of [email tips for contributors][1] as well.
-
-<a name="HowToContribute-WhattoWorkOn?"></a>
-## What to Work On?
-
-What do you like to work on?  There are a ton of things in Mahout that we
-would love to have contributions for: documentation, performance improvements, 
better tests, etc.
-The best place to start is by looking into our [issue 
tracker](https://issues.apache.org/jira/browse/MAHOUT) and
-seeing what bugs have been reported and seeing if any look like you could
-take them on.  Small, well written, well tested patches are a great way to
-get your feet wet.  It could be something as simple as fixing a typo.  The
-more important piece is you are showing you understand the necessary steps
-for making changes to the code.  Mahout is a pretty big beast at this
-point, so changes, especially from non-committers, need to be evolutionary
-not revolutionary since it is often very difficult to evaluate the merits
-of a very large patch. Think small, at least to start!
-
-Beyond JIRA, hang out on the dev@ mailing list. That's where we discuss
-what we are working on in the internals and where you can get a sense of
-where people are working.
-
-Also, documentation is a great way to familiarize yourself with the code
-and is always a welcome addition to the codebase and this website. Feel free 
-to contribute texts and tutorials! Committers will make sure they are added 
-to this website, and we have a [guide for making website updates][2].
-We also have a [wide variety of books and slides][3] for learning more about 
-machine learning algorithms. 
-
-If you are interested in working towards being a committer, [general 
guidelines are available online](/developers/how-to-become-a-committer.html).
-
-<a name="HowToContribute-ContributingCode(Features,BigFixes,Tests,etc...)"></a>
-## Contributing Code (Features, Bug Fixes, Tests, etc...)
-
-This section identifies the ''optimal'' steps a community member can take to
-submit changes or additions to the Mahout code base. These can be new
-features, bug fixes, optimizations of existing features, or tests of
-existing code to prove it works as advertised (and to make it more robust
-against possible future changes).
-
-Please note that these are the "optimal" steps, and community members that
-don't have the time or resources to do everything outlined below
-should not be discouraged from submitting their ideas "as is", per "Yonik
-Seeley's (Solr committer) Law of Patches":
-
-*A half-baked patch in Jira, with no documentation, no tests and no backwards 
compatibility is better than no patch at all.*
-
-Just because you may not have the time to write unit tests, clean up
-backwards compatibility issues, or add documentation doesn't mean other
-people don't. Putting your patch out there allows other people to try it
-and possibly improve it.
-
-<a name="HowToContribute-Gettingthesourcecode"></a>
-## Getting the source code
-
-First of all, you need to get the [Mahout source 
code](/developers/version-control.html). Most development is done on the 
"trunk".  Mahout mirrors its codebase on 
[GitHub](https://github.com/apache/mahout). The first step to making a 
contribution is to fork Mahout's master branch to your GitHub repository.  
-
-
-<a name="HowToContribute-MakingChanges"></a>
-## Making Changes
-
-Before you start, you should send a message to the [Mahout developer mailing 
list](/general/mailing-lists,-irc-and-archives.html)
-(note: you have to subscribe before you can post), or file a ticket in  our 
[issue tracker](/developers/issue-tracker.html).
-Describe your proposed changes and check that they fit in with what others are 
doing and have planned for the project.  Be patient, it may take folks a while 
to understand your requirements.
-
- 1. Create a JIRA issue (if one does not already exist or you haven't already created one)
- 2. Pull the code from your GitHub repository
- 3. Ensure that you are working with the latest code from the [apache/mahout](https://github.com/apache/mahout) master branch.
- 4. Modify the source code and add some (very) nice features.
-     - Be sure to adhere to the following points:
-         - All public classes and methods should have informative Javadoc comments.
-         - Code should be formatted according to standard [Java coding conventions](http://www.oracle.com/technetwork/java/codeconventions-150003.pdf), with two exceptions:
-             - indent two spaces per level, not four.
-             - lines can be 120 characters, not 80.
-         - Contributions should pass existing unit tests.
-         - New unit tests should be provided to demonstrate bugs and fixes.
- 5. Commit the changes to your local repository.
- 6. Push the code back up to your GitHub repository.
- 7. Create a [Pull Request](https://help.github.com/articles/creating-a-pull-request) to the apache/mahout repository on GitHub.
-     - Include the corresponding JIRA issue number and description in the title of the pull request:
-        - e.g. MAHOUT-xxxx: < JIRA-Issue-Description >
- 8. Committers and other members of the Mahout community can then comment on the Pull Request. Be sure to watch for comments, respond, and make any necessary changes.
-
-Please be patient. Committers are busy people too. If no one responds to your
-Pull Request after a few days, please send friendly reminders to the mailing
-list. Please incorporate others' suggestions into your changes if you think
-they're reasonable. Finally, remember that even changes that are not committed
-are useful to the community.
-
-<a name="HowToContribute-UnitTests"></a>
-#### Unit Tests
-
-Please make sure that all unit tests succeed before creating your Pull Request.
-
-Run *mvn clean test*. If you see *BUILD SUCCESSFUL* after the tests have finished, all is ok; if you see *BUILD FAILED*,
-please carefully read the error messages and check your code.
-
-#### Do's and Don'ts
-
-Please do not:
-
-* reformat code unrelated to the bug being fixed: formatting changes should
-be done in separate issues.
-* comment out code that is now obsolete: just remove it.
-* insert comments around each change, marking the change: folks can use
-subversion to figure out what's changed and by whom.
-* make things public which are not required by end users.
-
-Please do:
-
-* try to adhere to the coding style of files you edit;
-* comment code whose function or rationale is not obvious;
-* update documentation (e.g., ''package.html'' files, the website, etc.)
-
-
-<a name="HowToContribute-Review/ImproveExistingPatches"></a>
-## Review/Improve Existing Pull Requests
-
-If there's a JIRA issue that already has a Pull Request with changes that you 
think are really good, and works well for you -- please add a comment saying 
so.   If there's room
-for improvement (more tests, better javadocs, etc...) then make the changes on 
your GitHub branch and add a comment about them.        If a lot of people 
review a Pull Request and give it a
-thumbs up, that's a good sign for committers when deciding if it's worth 
spending time to review it -- and if other people have already put in
-effort to improve the docs/tests for an issue, that helps even more.
-
-For more information see [Handling GitHub 
PRs](http://mahout.apache.org/developers/github.html).
-
-
-  [1]: http://www.apache.org/dev/contrib-email-tips
-  [2]: http://mahout.apache.org/developers/how-to-update-the-website.html
-  [3]: http://mahout.apache.org/general/books-tutorials-and-talks.html
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_pages/mailing-lists.md
----------------------------------------------------------------------
diff --git a/website/_pages/mailing-lists.md b/website/_pages/mailing-lists.md
index 3569ceb..c86fab6 100644
--- a/website/_pages/mailing-lists.md
+++ b/website/_pages/mailing-lists.md
@@ -1,5 +1,5 @@
 ---
-layout: mahout
+layout: default
 title: Mailing Lists, IRC and Archives
 permalink: /mailing-lists/
 ---

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_pages/reference.md
----------------------------------------------------------------------
diff --git a/website/_pages/reference.md b/website/_pages/reference.md
index a44bf15..f9a75d1 100644
--- a/website/_pages/reference.md
+++ b/website/_pages/reference.md
@@ -1,8 +1,11 @@
 ---
-layout: mahout
+layout: default
 title: Reference Reading
 permalink: /reference/
 ---
+
+**note** tg: was this already in teh website and just lost or did dustin add it?
+
 # Reference Reading
 
 Here we provide references to books and courses about data analysis in general, which might also be helpful in the context of Mahout.

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_pages/version-control.mdtext
----------------------------------------------------------------------
diff --git a/website/_pages/version-control.mdtext b/website/_pages/version-control.mdtext
deleted file mode 100644
index 2ffe215..0000000
--- a/website/_pages/version-control.mdtext
+++ /dev/null
@@ -1,33 +0,0 @@
-Title: Version Control
-
-# Version control access
-
-The Mahout source is mirrored in the **[Apache Mahout GitHub](https://github.com/apache/mahout)** repository.
-  
-<a name="VersionControl-WebAccess(read-only)"></a>
-## Web Access (read-only)
-
-The source code can be browsed via the Web at [https://github.com/apache/mahout](https://github.com/apache/mahout).
-
-<a name="VersionControl-AnonymousAccess(read-only)"></a>
-## Anonymous Access (read-only)
-
-The Git URL for anonymous users is [https://github.com/apache/mahout.git](https://github.com/apache/mahout.git).
-
-<a name="VersionControl-CommitterAccess(read-write)"></a>
-## Committer Access (read-write)
-
-The Git URL for committers is [https://git-wip-us.apache.org/repos/asf/mahout.git](https://git-wip-us.apache.org/repos/asf/mahout.git).
-
-## Mahout Website 
-The Mahout website resides in the [Apache SVN repository](https://svn.apache.org/viewvc/mahout/site).
-
-The SVN URL for the Mahout site is: [https://svn.apache.org/repos/asf/mahout/site](https://svn.apache.org/repos/asf/mahout/site).
-
-The Mahout website can be edited via the [ASF CMS Editor](http://www.apache.org/dev/cms.html) or by checking out the source locally from SVN.  A handy tool for publising the website locally while editing is available [here](https://gist.github.com/tuxdna/11223434).
-
-
-<a name="VersionControl-Issues"></a>
-## Issues
-
-All bugs, improvements, [pull requests](http://mahout.apache.org/developers/github.html), etc. should be logged in our [issue tracker](https://mahout.apache.org/developers/issue-tracker.html).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_sass/_bootstrap.scss
----------------------------------------------------------------------
diff --git a/website/_sass/_bootstrap.scss b/website/_sass/_bootstrap.scss
deleted file mode 100755
index 598b007..0000000
--- a/website/_sass/_bootstrap.scss
+++ /dev/null
@@ -1,56 +0,0 @@
-/*!
- * Bootstrap v3.3.5 (http://getbootstrap.com)
- * Copyright 2011-2015 Twitter, Inc.
- * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
- */
-
-// Core variables and mixins
-@import "bootstrap/variables";
-@import "bootstrap/mixins";
-
-// Reset and dependencies
-@import "bootstrap/normalize";
-@import "bootstrap/print";
-@import "bootstrap/glyphicons";
-
-// Core CSS
-@import "bootstrap/scaffolding";
-@import "bootstrap/type";
-@import "bootstrap/code";
-@import "bootstrap/grid";
-@import "bootstrap/tables";
-@import "bootstrap/forms";
-@import "bootstrap/buttons";
-
-// Components
-@import "bootstrap/component-animations";
-@import "bootstrap/dropdowns";
-@import "bootstrap/button-groups";
-@import "bootstrap/input-groups";
-@import "bootstrap/navs";
-@import "bootstrap/navbar";
-@import "bootstrap/breadcrumbs";
-@import "bootstrap/pagination";
-@import "bootstrap/pager";
-@import "bootstrap/labels";
-@import "bootstrap/badges";
-@import "bootstrap/jumbotron";
-@import "bootstrap/thumbnails";
-@import "bootstrap/alerts";
-@import "bootstrap/progress-bars";
-@import "bootstrap/media";
-@import "bootstrap/list-group";
-@import "bootstrap/panels";
-@import "bootstrap/responsive-embed";
-@import "bootstrap/wells";
-@import "bootstrap/close";
-
-// Components w/ JavaScript
-@import "bootstrap/modals";
-@import "bootstrap/tooltip";
-@import "bootstrap/popovers";
-@import "bootstrap/carousel";
-
-// Utility classes
-@import "bootstrap/utilities";
-@import "bootstrap/responsive-utilities";

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_sass/_syntax-highlighting.scss
----------------------------------------------------------------------
diff --git a/website/_sass/_syntax-highlighting.scss b/website/_sass/_syntax-highlighting.scss
deleted file mode 100644
index 96d98d0..0000000
--- a/website/_sass/_syntax-highlighting.scss
+++ /dev/null
@@ -1,67 +0,0 @@
-/**
- * Syntax highlighting styles
- */
-.highlight {
-    background: #fff;
-    // @extend %vertical-rhythm;
-
-    .c     { color: #998; font-style: italic } // Comment
-    .err   { color: #a61717; background-color: #e3d2d2 } // Error
-    .k     { font-weight: bold } // Keyword
-    .o     { font-weight: bold } // Operator
-    .cm    { color: #998; font-style: italic } // Comment.Multiline
-    .cp    { color: #999; font-weight: bold } // Comment.Preproc
-    .c1    { color: #998; font-style: italic } // Comment.Single
-    .cs    { color: #999; font-weight: bold; font-style: italic } // Comment.Special
-    .gd    { color: #000; background-color: #fdd } // Generic.Deleted
-    .gd .x { color: #000; background-color: #faa } // Generic.Deleted.Specific
-    .ge    { font-style: italic } // Generic.Emph
-    .gr    { color: #a00 } // Generic.Error
-    .gh    { color: #999 } // Generic.Heading
-    .gi    { color: #000; background-color: #dfd } // Generic.Inserted
-    .gi .x { color: #000; background-color: #afa } // Generic.Inserted.Specific
-    .go    { color: #888 } // Generic.Output
-    .gp    { color: #555 } // Generic.Prompt
-    .gs    { font-weight: bold } // Generic.Strong
-    .gu    { color: #aaa } // Generic.Subheading
-    .gt    { color: #a00 } // Generic.Traceback
-    .kc    { font-weight: bold } // Keyword.Constant
-    .kd    { font-weight: bold } // Keyword.Declaration
-    .kp    { font-weight: bold } // Keyword.Pseudo
-    .kr    { font-weight: bold } // Keyword.Reserved
-    .kt    { color: #458; font-weight: bold } // Keyword.Type
-    .m     { color: #099 } // Literal.Number
-    .s     { color: #d14 } // Literal.String
-    .na    { color: #008080 } // Name.Attribute
-    .nb    { color: #0086B3 } // Name.Builtin
-    .nc    { color: #458; font-weight: bold } // Name.Class
-    .no    { color: #008080 } // Name.Constant
-    .ni    { color: #800080 } // Name.Entity
-    .ne    { color: #900; font-weight: bold } // Name.Exception
-    .nf    { color: #900; font-weight: bold } // Name.Function
-    .nn    { color: #555 } // Name.Namespace
-    .nt    { color: #000080 } // Name.Tag
-    .nv    { color: #008080 } // Name.Variable
-    .ow    { font-weight: bold } // Operator.Word
-    .w     { color: #bbb } // Text.Whitespace
-    .mf    { color: #099 } // Literal.Number.Float
-    .mh    { color: #099 } // Literal.Number.Hex
-    .mi    { color: #099 } // Literal.Number.Integer
-    .mo    { color: #099 } // Literal.Number.Oct
-    .sb    { color: #d14 } // Literal.String.Backtick
-    .sc    { color: #d14 } // Literal.String.Char
-    .sd    { color: #d14 } // Literal.String.Doc
-    .s2    { color: #d14 } // Literal.String.Double
-    .se    { color: #d14 } // Literal.String.Escape
-    .sh    { color: #d14 } // Literal.String.Heredoc
-    .si    { color: #d14 } // Literal.String.Interpol
-    .sx    { color: #d14 } // Literal.String.Other
-    .sr    { color: #009926 } // Literal.String.Regex
-    .s1    { color: #d14 } // Literal.String.Single
-    .ss    { color: #990073 } // Literal.String.Symbol
-    .bp    { color: #999 } // Name.Builtin.Pseudo
-    .vc    { color: #008080 } // Name.Variable.Class
-    .vg    { color: #008080 } // Name.Variable.Global
-    .vi    { color: #008080 } // Name.Variable.Instance
-    .il    { color: #099 } // Literal.Number.Integer.Long
-}

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_sass/bootstrap/_alerts.scss
----------------------------------------------------------------------
diff --git a/website/_sass/bootstrap/_alerts.scss b/website/_sass/bootstrap/_alerts.scss
deleted file mode 100755
index 7d1e1fd..0000000
--- a/website/_sass/bootstrap/_alerts.scss
+++ /dev/null
@@ -1,73 +0,0 @@
-//
-// Alerts
-// --------------------------------------------------
-
-
-// Base styles
-// -------------------------
-
-.alert {
-  padding: $alert-padding;
-  margin-bottom: $line-height-computed;
-  border: 1px solid transparent;
-  border-radius: $alert-border-radius;
-
-  // Headings for larger alerts
-  h4 {
-    margin-top: 0;
-    // Specified for the h4 to prevent conflicts of changing $headings-color
-    color: inherit;
-  }
-
-  // Provide class for links that match alerts
-  .alert-link {
-    font-weight: $alert-link-font-weight;
-  }
-
-  // Improve alignment and spacing of inner content
-  > p,
-  > ul {
-    margin-bottom: 0;
-  }
-
-  > p + p {
-    margin-top: 5px;
-  }
-}
-
-// Dismissible alerts
-//
-// Expand the right padding and account for the close button's positioning.
-
-.alert-dismissable, // The misspelled .alert-dismissable was deprecated in 3.2.0.
-.alert-dismissible {
-  padding-right: ($alert-padding + 20);
-
-  // Adjust close link position
-  .close {
-    position: relative;
-    top: -2px;
-    right: -21px;
-    color: inherit;
-  }
-}
-
-// Alternate styles
-//
-// Generate contextual modifier classes for colorizing the alert.
-
-.alert-success {
-  @include alert-variant($alert-success-bg, $alert-success-border, $alert-success-text);
-}
-
-.alert-info {
-  @include alert-variant($alert-info-bg, $alert-info-border, $alert-info-text);
-}
-
-.alert-warning {
-  @include alert-variant($alert-warning-bg, $alert-warning-border, $alert-warning-text);
-}
-
-.alert-danger {
-  @include alert-variant($alert-danger-bg, $alert-danger-border, $alert-danger-text);
-}

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_sass/bootstrap/_badges.scss
----------------------------------------------------------------------
diff --git a/website/_sass/bootstrap/_badges.scss b/website/_sass/bootstrap/_badges.scss
deleted file mode 100755
index 70002e0..0000000
--- a/website/_sass/bootstrap/_badges.scss
+++ /dev/null
@@ -1,68 +0,0 @@
-//
-// Badges
-// --------------------------------------------------
-
-
-// Base class
-.badge {
-  display: inline-block;
-  min-width: 10px;
-  padding: 3px 7px;
-  font-size: $font-size-small;
-  font-weight: $badge-font-weight;
-  color: $badge-color;
-  line-height: $badge-line-height;
-  vertical-align: middle;
-  white-space: nowrap;
-  text-align: center;
-  background-color: $badge-bg;
-  border-radius: $badge-border-radius;
-
-  // Empty badges collapse automatically (not available in IE8)
-  &:empty {
-    display: none;
-  }
-
-  // Quick fix for badges in buttons
-  .btn & {
-    position: relative;
-    top: -1px;
-  }
-
-  .btn-xs &,
-  .btn-group-xs > .btn & {
-    top: 0;
-    padding: 1px 5px;
-  }
-
-  // [converter] extracted a& to a.badge
-
-  // Account for badges in navs
-  .list-group-item.active > &,
-  .nav-pills > .active > a > & {
-    color: $badge-active-color;
-    background-color: $badge-active-bg;
-  }
-
-  .list-group-item > & {
-    float: right;
-  }
-
-  .list-group-item > & + & {
-    margin-right: 5px;
-  }
-
-  .nav-pills > li > a > & {
-    margin-left: 3px;
-  }
-}
-
-// Hover state, but only for links
-a.badge {
-  &:hover,
-  &:focus {
-    color: $badge-link-hover-color;
-    text-decoration: none;
-    cursor: pointer;
-  }
-}

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_sass/bootstrap/_breadcrumbs.scss
----------------------------------------------------------------------
diff --git a/website/_sass/bootstrap/_breadcrumbs.scss b/website/_sass/bootstrap/_breadcrumbs.scss
deleted file mode 100755
index b61f0c7..0000000
--- a/website/_sass/bootstrap/_breadcrumbs.scss
+++ /dev/null
@@ -1,28 +0,0 @@
-//
-// Breadcrumbs
-// --------------------------------------------------
-
-
-.breadcrumb {
-  padding: $breadcrumb-padding-vertical $breadcrumb-padding-horizontal;
-  margin-bottom: $line-height-computed;
-  list-style: none;
-  background-color: $breadcrumb-bg;
-  border-radius: $border-radius-base;
-
-  > li {
-    display: inline-block;
-
-    + li:before {
-      // [converter] Workaround for https://github.com/sass/libsass/issues/1115
-      $nbsp: "\00a0";
-      content: "#{$breadcrumb-separator}#{$nbsp}"; // Unicode space added since inline-block means non-collapsing white-space
-      padding: 0 5px;
-      color: $breadcrumb-color;
-    }
-  }
-
-  > .active {
-    color: $breadcrumb-active-color;
-  }
-}

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_sass/bootstrap/_button-groups.scss
----------------------------------------------------------------------
diff --git a/website/_sass/bootstrap/_button-groups.scss b/website/_sass/bootstrap/_button-groups.scss
deleted file mode 100755
index 43d235c..0000000
--- a/website/_sass/bootstrap/_button-groups.scss
+++ /dev/null
@@ -1,244 +0,0 @@
-//
-// Button groups
-// --------------------------------------------------
-
-// Make the div behave like a button
-.btn-group,
-.btn-group-vertical {
-  position: relative;
-  display: inline-block;
-  vertical-align: middle; // match .btn alignment given font-size hack above
-  > .btn {
-    position: relative;
-    float: left;
-    // Bring the "active" button to the front
-    &:hover,
-    &:focus,
-    &:active,
-    &.active {
-      z-index: 2;
-    }
-  }
-}
-
-// Prevent double borders when buttons are next to each other
-.btn-group {
-  .btn + .btn,
-  .btn + .btn-group,
-  .btn-group + .btn,
-  .btn-group + .btn-group {
-    margin-left: -1px;
-  }
-}
-
-// Optional: Group multiple button groups together for a toolbar
-.btn-toolbar {
-  margin-left: -5px; // Offset the first child's margin
-  @include clearfix;
-
-  .btn,
-  .btn-group,
-  .input-group {
-    float: left;
-  }
-  > .btn,
-  > .btn-group,
-  > .input-group {
-    margin-left: 5px;
-  }
-}
-
-.btn-group > .btn:not(:first-child):not(:last-child):not(.dropdown-toggle) {
-  border-radius: 0;
-}
-
-// Set corners individual because sometimes a single button can be in a .btn-group and we need :first-child and :last-child to both match
-.btn-group > .btn:first-child {
-  margin-left: 0;
-  &:not(:last-child):not(.dropdown-toggle) {
-    @include border-right-radius(0);
-  }
-}
-// Need .dropdown-toggle since :last-child doesn't apply given a .dropdown-menu immediately after it
-.btn-group > .btn:last-child:not(:first-child),
-.btn-group > .dropdown-toggle:not(:first-child) {
-  @include border-left-radius(0);
-}
-
-// Custom edits for including btn-groups within btn-groups (useful for including dropdown buttons within a btn-group)
-.btn-group > .btn-group {
-  float: left;
-}
-.btn-group > .btn-group:not(:first-child):not(:last-child) > .btn {
-  border-radius: 0;
-}
-.btn-group > .btn-group:first-child:not(:last-child) {
-  > .btn:last-child,
-  > .dropdown-toggle {
-    @include border-right-radius(0);
-  }
-}
-.btn-group > .btn-group:last-child:not(:first-child) > .btn:first-child {
-  @include border-left-radius(0);
-}
-
-// On active and open, don't show outline
-.btn-group .dropdown-toggle:active,
-.btn-group.open .dropdown-toggle {
-  outline: 0;
-}
-
-
-// Sizing
-//
-// Remix the default button sizing classes into new ones for easier manipulation.
-
-.btn-group-xs > .btn { @extend .btn-xs; }
-.btn-group-sm > .btn { @extend .btn-sm; }
-.btn-group-lg > .btn { @extend .btn-lg; }
-
-
-// Split button dropdowns
-// ----------------------
-
-// Give the line between buttons some depth
-.btn-group > .btn + .dropdown-toggle {
-  padding-left: 8px;
-  padding-right: 8px;
-}
-.btn-group > .btn-lg + .dropdown-toggle {
-  padding-left: 12px;
-  padding-right: 12px;
-}
-
-// The clickable button for toggling the menu
-// Remove the gradient and set the same inset shadow as the :active state
-.btn-group.open .dropdown-toggle {
-  @include box-shadow(inset 0 3px 5px rgba(0,0,0,.125));
-
-  // Show no shadow for `.btn-link` since it has no other button styles.
-  &.btn-link {
-    @include box-shadow(none);
-  }
-}
-
-
-// Reposition the caret
-.btn .caret {
-  margin-left: 0;
-}
-// Carets in other button sizes
-.btn-lg .caret {
-  border-width: $caret-width-large $caret-width-large 0;
-  border-bottom-width: 0;
-}
-// Upside down carets for .dropup
-.dropup .btn-lg .caret {
-  border-width: 0 $caret-width-large $caret-width-large;
-}
-
-
-// Vertical button groups
-// ----------------------
-
-.btn-group-vertical {
-  > .btn,
-  > .btn-group,
-  > .btn-group > .btn {
-    display: block;
-    float: none;
-    width: 100%;
-    max-width: 100%;
-  }
-
-  // Clear floats so dropdown menus can be properly placed
-  > .btn-group {
-    @include clearfix;
-    > .btn {
-      float: none;
-    }
-  }
-
-  > .btn + .btn,
-  > .btn + .btn-group,
-  > .btn-group + .btn,
-  > .btn-group + .btn-group {
-    margin-top: -1px;
-    margin-left: 0;
-  }
-}
-
-.btn-group-vertical > .btn {
-  &:not(:first-child):not(:last-child) {
-    border-radius: 0;
-  }
-  &:first-child:not(:last-child) {
-    border-top-right-radius: $btn-border-radius-base;
-    @include border-bottom-radius(0);
-  }
-  &:last-child:not(:first-child) {
-    border-bottom-left-radius: $btn-border-radius-base;
-    @include border-top-radius(0);
-  }
-}
-.btn-group-vertical > .btn-group:not(:first-child):not(:last-child) > .btn {
-  border-radius: 0;
-}
-.btn-group-vertical > .btn-group:first-child:not(:last-child) {
-  > .btn:last-child,
-  > .dropdown-toggle {
-    @include border-bottom-radius(0);
-  }
-}
-.btn-group-vertical > .btn-group:last-child:not(:first-child) > .btn:first-child {
-  @include border-top-radius(0);
-}
-
-
-// Justified button groups
-// ----------------------
-
-.btn-group-justified {
-  display: table;
-  width: 100%;
-  table-layout: fixed;
-  border-collapse: separate;
-  > .btn,
-  > .btn-group {
-    float: none;
-    display: table-cell;
-    width: 1%;
-  }
-  > .btn-group .btn {
-    width: 100%;
-  }
-
-  > .btn-group .dropdown-menu {
-    left: auto;
-  }
-}
-
-
-// Checkbox and radio options
-//
-// In order to support the browser's form validation feedback, powered by the
-// `required` attribute, we have to "hide" the inputs via `clip`. We cannot use
-// `display: none;` or `visibility: hidden;` as that also hides the popover.
-// Simply visually hiding the inputs via `opacity` would leave them clickable in
-// certain cases which is prevented by using `clip` and `pointer-events`.
-// This way, we ensure a DOM element is visible to position the popover from.
-//
-// See https://github.com/twbs/bootstrap/pull/12794 and
-// https://github.com/twbs/bootstrap/pull/14559 for more information.
-
-[data-toggle="buttons"] {
-  > .btn,
-  > .btn-group > .btn {
-    input[type="radio"],
-    input[type="checkbox"] {
-      position: absolute;
-      clip: rect(0,0,0,0);
-      pointer-events: none;
-    }
-  }
-}

http://git-wip-us.apache.org/repos/asf/mahout/blob/a60c79e7/website/_sass/bootstrap/_buttons.scss
----------------------------------------------------------------------
diff --git a/website/_sass/bootstrap/_buttons.scss b/website/_sass/bootstrap/_buttons.scss
deleted file mode 100755
index 6452b70..0000000
--- a/website/_sass/bootstrap/_buttons.scss
+++ /dev/null
@@ -1,168 +0,0 @@
-//
-// Buttons
-// --------------------------------------------------
-
-
-// Base styles
-// --------------------------------------------------
-
-.btn {
-  display: inline-block;
-  margin-bottom: 0; // For input.btn
-  font-weight: $btn-font-weight;
-  text-align: center;
-  vertical-align: middle;
-  touch-action: manipulation;
-  cursor: pointer;
-  background-image: none; // Reset unusual Firefox-on-Android default style; see https://github.com/necolas/normalize.css/issues/214
-  border: 1px solid transparent;
-  white-space: nowrap;
-  @include button-size($padding-base-vertical, $padding-base-horizontal, $font-size-base, $line-height-base, $btn-border-radius-base);
-  @include user-select(none);
-
-  &,
-  &:active,
-  &.active {
-    &:focus,
-    &.focus {
-      @include tab-focus;
-    }
-  }
-
-  &:hover,
-  &:focus,
-  &.focus {
-    color: $btn-default-color;
-    text-decoration: none;
-  }
-
-  &:active,
-  &.active {
-    outline: 0;
-    background-image: none;
-    @include box-shadow(inset 0 3px 5px rgba(0,0,0,.125));
-  }
-
-  &.disabled,
-  &[disabled],
-  fieldset[disabled] & {
-    cursor: $cursor-disabled;
-    @include opacity(.65);
-    @include box-shadow(none);
-  }
-
-  // [converter] extracted a& to a.btn
-}
-
-a.btn {
-  &.disabled,
-  fieldset[disabled] & {
-    pointer-events: none; // Future-proof disabling of clicks on `<a>` elements
-  }
-}
-
-
-// Alternate buttons
-// --------------------------------------------------
-
-.btn-default {
-  @include button-variant($btn-default-color, $btn-default-bg, $btn-default-border);
-}
-.btn-primary {
-  @include button-variant($btn-primary-color, $btn-primary-bg, $btn-primary-border);
-}
-// Success appears as green
-.btn-success {
-  @include button-variant($btn-success-color, $btn-success-bg, $btn-success-border);
-}
-// Info appears as blue-green
-.btn-info {
-  @include button-variant($btn-info-color, $btn-info-bg, $btn-info-border);
-}
-// Warning appears as orange
-.btn-warning {
-  @include button-variant($btn-warning-color, $btn-warning-bg, $btn-warning-border);
-}
-// Danger and error appear as red
-.btn-danger {
-  @include button-variant($btn-danger-color, $btn-danger-bg, $btn-danger-border);
-}
-
-
-// Link buttons
-// -------------------------
-
-// Make a button look and behave like a link
-.btn-link {
-  color: $link-color;
-  font-weight: normal;
-  border-radius: 0;
-
-  &,
-  &:active,
-  &.active,
-  &[disabled],
-  fieldset[disabled] & {
-    background-color: transparent;
-    @include box-shadow(none);
-  }
-  &,
-  &:hover,
-  &:focus,
-  &:active {
-    border-color: transparent;
-  }
-  &:hover,
-  &:focus {
-    color: $link-hover-color;
-    text-decoration: $link-hover-decoration;
-    background-color: transparent;
-  }
-  &[disabled],
-  fieldset[disabled] & {
-    &:hover,
-    &:focus {
-      color: $btn-link-disabled-color;
-      text-decoration: none;
-    }
-  }
-}
-
-
-// Button Sizes
-// --------------------------------------------------
-
-.btn-lg {
-  // line-height: ensure even-numbered height of button next to large input
-  @include button-size($padding-large-vertical, $padding-large-horizontal, $font-size-large, $line-height-large, $btn-border-radius-large);
-}
-.btn-sm {
-  // line-height: ensure proper height of button next to small input
-  @include button-size($padding-small-vertical, $padding-small-horizontal, $font-size-small, $line-height-small, $btn-border-radius-small);
-}
-.btn-xs {
-  @include button-size($padding-xs-vertical, $padding-xs-horizontal, $font-size-small, $line-height-small, $btn-border-radius-small);
-}
-
-
-// Block button
-// --------------------------------------------------
-
-.btn-block {
-  display: block;
-  width: 100%;
-}
-
-// Vertically space out multiple block buttons
-.btn-block + .btn-block {
-  margin-top: 5px;
-}
-
-// Specificity overrides
-input[type="submit"],
-input[type="reset"],
-input[type="button"] {
-  &.btn-block {
-    width: 100%;
-  }
-}
