http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/disk_free_space_monitoring.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/disk_free_space_monitoring.html.md.erb 
b/geode-docs/managing/disk_storage/disk_free_space_monitoring.html.md.erb
deleted file mode 100644
index 837ac25..0000000
--- a/geode-docs/managing/disk_storage/disk_free_space_monitoring.html.md.erb
+++ /dev/null
@@ -1,57 +0,0 @@
----
-title:  Configuring Disk Free Space Monitoring
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-To modify `disk-usage-warning-percentage` and `disk-usage-critical-percentage` 
thresholds, specify the parameters when executing the `gfsh create disk-store` 
command.
-
-``` pre
-gfsh>create disk-store --name=serverOverflow --dir=c:\overflow_data#20480 \
---compaction-threshold=40 --auto-compact=false --allow-force-compaction=true \
---max-oplog-size=512 --queue-size=10000 --time-interval=15 --write-buffer-size=65536 \
---disk-usage-warning-percentage=80 --disk-usage-critical-percentage=98
-```
-
-By default, disk usage above 80% triggers a warning message. Disk usage above 
99% generates an error and shuts down the member cache that accesses that disk 
store. To disable disk store monitoring, set the parameters to 0.
-
-To view the current threshold values set for an existing disk store, use the gfsh `describe disk-store` command:
-
-``` pre
-gfsh>describe disk-store --member=server1 --name=DiskStore1
-```
-
-You can also use the following `DiskStoreMXBean` methods to configure and obtain these thresholds programmatically, as sketched in the example after this list.
-
--   `getDiskUsageCriticalPercentage`
--   `getDiskUsageWarningPercentage`
--   `setDiskUsageCriticalPercentage`
--   `setDiskUsageWarningPercentage`
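-
-The following is a minimal sketch of reading and adjusting the thresholds through JMX. It assumes a member that hosts a disk store named `DiskStore1` (a placeholder name); `getLocalDiskStoreMBean` returns `null` if no such store is hosted locally.
-
-``` pre
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.CacheFactory;
-import org.apache.geode.management.DiskStoreMXBean;
-import org.apache.geode.management.ManagementService;
-
-public class DiskUsageThresholds {
-  public static void main(String[] args) {
-    Cache cache = new CacheFactory().create();
-
-    // Look up the local MBean for the (hypothetical) disk store named DiskStore1.
-    ManagementService service = ManagementService.getManagementService(cache);
-    DiskStoreMXBean diskStoreBean = service.getLocalDiskStoreMBean("DiskStore1");
-
-    // Read the current thresholds.
-    System.out.println("warning=" + diskStoreBean.getDiskUsageWarningPercentage()
-        + " critical=" + diskStoreBean.getDiskUsageCriticalPercentage());
-
-    // Adjust the thresholds at runtime.
-    diskStoreBean.setDiskUsageWarningPercentage(80.0f);
-    diskStoreBean.setDiskUsageCriticalPercentage(98.0f);
-
-    cache.close();
-  }
-}
-```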
-
-You can obtain statistics on disk space usage and the performance of disk 
space monitoring by accessing the following statistics:
-
--   `diskSpace`
--   `maximumSpace`
--   `volumeSize`
--   `volumeFreeSpace`
--   `volumeFreeSpaceChecks`
--   `volumeFreeSpaceTime`
-
-See [Disk Space Usage 
(DiskDirStatistics)](../../reference/statistics/statistics_list.html#section_6C2BECC63A83456190B029DEDB8F4BE3).
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/disk_store_configuration_params.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/disk_store_configuration_params.html.md.erb 
b/geode-docs/managing/disk_storage/disk_store_configuration_params.html.md.erb
deleted file mode 100644
index 939028e..0000000
--- 
a/geode-docs/managing/disk_storage/disk_store_configuration_params.html.md.erb
+++ /dev/null
@@ -1,123 +0,0 @@
----
-title:  Disk Store Configuration Parameters
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-You define your disk stores by using the `gfsh create disk-store` command or 
in `<disk-store>` subelements of your cache declaration in `cache.xml`. All 
disk stores are available for use by all of your regions and queues.
-
-These `<disk-store>` attributes and subelements have corresponding `gfsh 
create disk-store` command-line parameters as well as getter and setter methods 
in the `org.apache.geode.cache.DiskStoreFactory` and 
`org.apache.geode.cache.DiskStore` APIs.
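-
-As a point of comparison with the table that follows, here is a minimal sketch of setting the same kinds of attributes through `DiskStoreFactory`. The store name, directory, and values are illustrative only, and the directory must already exist.
-
-``` pre
-import java.io.File;
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.CacheFactory;
-import org.apache.geode.cache.DiskStore;
-import org.apache.geode.cache.DiskStoreFactory;
-
-public class CreateDiskStoreExample {
-  public static void main(String[] args) {
-    Cache cache = new CacheFactory().create();
-
-    DiskStoreFactory factory = cache.createDiskStoreFactory();
-    factory.setAutoCompact(false);               // auto-compact
-    factory.setAllowForceCompaction(true);       // allow-force-compaction
-    factory.setCompactionThreshold(40);          // compaction-threshold, percent
-    factory.setMaxOplogSize(512);                // max-oplog-size, megabytes
-    factory.setQueueSize(10000);                 // queue-size, operations
-    factory.setTimeInterval(15);                 // time-interval, milliseconds
-    factory.setWriteBufferSize(65536);           // write-buffer-size, bytes
-    factory.setDiskUsageWarningPercentage(80);   // disk-usage-warning-percentage
-    factory.setDiskUsageCriticalPercentage(98);  // disk-usage-critical-percentage
-    factory.setDiskDirs(new File[] { new File("overflow_data") }); // must already exist
-
-    DiskStore store = factory.create("serverOverflow");
-    System.out.println("Created disk store " + store.getName());
-
-    cache.close();
-  }
-}
-```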
-
-## <a 
id="disk_store_configuration_params__section_77273B9B5EA54227A2D25682BD77BAC3" 
class="no-quick-link"></a>Disk Store Configuration Attributes and Elements
-
-<table>
-<colgroup>
-<col width="33%" />
-<col width="33%" />
-<col width="34%" />
-</colgroup>
-<thead>
-<tr class="header">
-<th>disk-store attribute</th>
-<th>Description</th>
-<th>Default</th>
-</tr>
-</thead>
-<tbody>
-<tr class="odd">
-<td><code class="ph codeph">name</code></td>
-<td>String used to identify this disk store. All regions and queues select 
their disk store by specifying this name.</td>
-<td>DEFAULT</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">allow-force-compaction</code></td>
-<td>Boolean indicating whether to allow manual compaction through the API or 
command-line tools.</td>
-<td>false</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">auto-compact</code></td>
-<td>Boolean indicating whether to automatically compact a file when it reaches 
the <code class="ph codeph">compaction-threshold</code>.</td>
-<td>true</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">compaction-threshold</code></td>
-<td>Percentage of garbage allowed in the file before it is eligible for 
compaction. Garbage is created by entry destroys, entry updates, and region 
destroys and creates. Surpassing this percentage does not make compaction 
occur—it makes the file eligible to be compacted when a compaction is 
done.</td>
-<td>50</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">disk-usage-critical-percentage</code></td>
-<td>Disk usage above this threshold generates an error message and shuts down 
the member's cache. For example, if the threshold is set to 99%, then falling 
under 10 GB of free disk space on a 1 TB drive generates the error and shuts 
down the cache.
-<p>Set to &quot;0&quot; (zero) to disable.</p></td>
-<td>99</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">disk-usage-warning-percentage</code></td>
-<td>Disk usage above this threshold generates a warning message. For example, 
if the threshold is set to 90%, then on a 1 TB drive falling under 100 GB of 
free disk space generates the warning.
-<p>Set to &quot;0&quot; (zero) to disable.</p></td>
-<td>90</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">max-oplog-size</code></td>
-<td>The largest size, in megabytes, that an operation log is allowed to reach before Geode automatically rolls to a new log file. This limit applies to the combined size of the oplog's crf and drf files.</td>
-<td>1024</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">queue-size</code></td>
-<td>For asynchronous queueing. The maximum number of operations to allow into 
the write queue before automatically flushing the queue. Operations that would 
add entries to the queue block until the queue is flushed. A value of zero 
implies no size limit. Reaching this limit or the time-interval limit will 
cause the queue to flush.</td>
-<td>0</td>
-</tr>
-<tr class="odd">
-<td><code class="ph codeph">time-interval</code></td>
-<td>For asynchronous queueing. The number of milliseconds that can elapse 
before data is flushed to disk. Reaching this limit or the queue-size limit 
causes the queue to flush.</td>
-<td>1000</td>
-</tr>
-<tr class="even">
-<td><code class="ph codeph">write-buffer-size</code></td>
-<td>Size of the buffer, in bytes, used to write to disk.</td>
-<td>32768</td>
-</tr>
-</tbody>
-</table>
-
-| `disk-store` subelement | Description                                                                              | Default                |
-|-------------------------|------------------------------------------------------------------------------------------|------------------------|
-| `<disk-dirs>`           | Defines the system directories where the disk store is written and their maximum sizes. | `.` with no size limit |
-
-## <a 
id="disk_store_configuration_params__section_366001C72D674AF69B2CED91BFA73A9B" 
class="no-quick-link"></a>disk-dirs Element
-
-The `<disk-dirs>` element defines the host system directories to use for the 
disk store. It contains one or more single `<disk-dir>` elements with the 
following contents:
-
--   The directory specification, provided as the text of the `disk-dir` 
element.
--   An optional `dir-size` attribute specifying the maximum amount of space, 
in megabytes, to use for the disk store in the directory. By default, there is 
no limit. The space used is calculated as the combined sizes of all oplog files.
-
-You can specify any number of `disk-dir` subelements to the `disk-dirs` 
element. The data is spread evenly among the active disk files in the 
directories, keeping within any limits you set.
-
-Example:
-
-``` pre
-<disk-dirs>
-    <disk-dir>/host1/users/gf/memberA_DStore</disk-dir>
-    <disk-dir>/host2/users/gf/memberA_DStore</disk-dir> 
-    <disk-dir dir-size="20480">/host3/users/gf/memberA_DStore</disk-dir> 
-</disk-dirs>
-```
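-
-The same directory layout can be defined programmatically. The following sketch mirrors the `cache.xml` example above; the paths are placeholders, and the directories must already exist, as the note below explains.
-
-``` pre
-import java.io.File;
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.CacheFactory;
-import org.apache.geode.cache.DiskStoreFactory;
-
-public class DiskDirsExample {
-  public static void main(String[] args) {
-    Cache cache = new CacheFactory().create();
-
-    // One entry per <disk-dir>; sizes are in megabytes.
-    File[] dirs = {
-        new File("/host1/users/gf/memberA_DStore"),
-        new File("/host2/users/gf/memberA_DStore"),
-        new File("/host3/users/gf/memberA_DStore")
-    };
-    int[] sizesMB = {
-        DiskStoreFactory.DEFAULT_DISK_DIR_SIZE,  // no explicit limit
-        DiskStoreFactory.DEFAULT_DISK_DIR_SIZE,  // no explicit limit
-        20480                                    // dir-size="20480"
-    };
-
-    DiskStoreFactory factory = cache.createDiskStoreFactory();
-    factory.setDiskDirsAndSizes(dirs, sizesMB);
-    factory.create("memberA_DStore");
-
-    cache.close();
-  }
-}
-```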
-
-**Note:**
-The directories must exist when the disk store is created or the system throws 
an exception. Geode does not create directories.
-
-Use different disk-dir specifications for different disk stores. You cannot 
use the same directory for the same named disk store in two different members.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/file_names_and_extensions.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/file_names_and_extensions.html.md.erb 
b/geode-docs/managing/disk_storage/file_names_and_extensions.html.md.erb
deleted file mode 100644
index 727f23b..0000000
--- a/geode-docs/managing/disk_storage/file_names_and_extensions.html.md.erb
+++ /dev/null
@@ -1,96 +0,0 @@
----
-title:  Disk Store File Names and Extensions
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Disk store files include store management files, access control files, and the 
operation log, or oplog, files, consisting of one file for deletions and 
another for all other operations.
-
-<a 
id="file_names_and_extensions__section_AE90870A7BDB425B93111D1A6E166874"></a>
-The next tables describe file names and extensions; they are followed by 
example disk store files.
-
-## <a id="file_names_and_extensions__section_C99ABFDB1AEA4FE4B38F5D4F1D612F71" 
class="no-quick-link"></a>File Names
-
-File names have three parts:
-
-**First Part of File Name: Usage Identifier**
-
-| Values   | Used for                                                                | Examples                                   |
-|----------|-------------------------------------------------------------------------|--------------------------------------------|
-| OVERFLOW | Oplog data from overflow regions and queues only.                       | OVERFLOWoverflowDS1\_1.crf                 |
-| BACKUP   | Oplog data from persistent and persistent+overflow regions and queues.  | BACKUPoverflowDS1.if, BACKUPDEFAULT.if     |
-| DRLK\_IF | Access control - locking the disk store.                                | DRLK\_IFoverflowDS1.lk, DRLK\_IFDEFAULT.lk |
-
-**Second Part of File Name: Disk Store Name**
-
-| Values                  | Used for                                                                                                                  | Examples                                                                             |
-|-------------------------|---------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------|
-| &lt;disk store name&gt; | Non-default disk stores.                                                                                                    | name="overflowDS1" DRLK\_IFoverflowDS1.lk, name="persistDS1" BACKUPpersistDS1\_1.crf |
-| DEFAULT                 | Default disk store name, used when persistence or overflow are specified on a region or queue but no disk store is named.  | DRLK\_IFDEFAULT.lk, BACKUPDEFAULT\_1.crf                                             |
-
-**Third Part of File Name: oplog Sequence Number**
-
-| Values                            | Used for                                         | Examples                                                                     |
-|-----------------------------------|--------------------------------------------------|------------------------------------------------------------------------------|
-| Sequence number in the format \_n | Oplog data files only. Numbering starts with 1.  | OVERFLOWoverflowDS1\_1.crf, BACKUPpersistDS1\_2.crf, BACKUPpersistDS1\_3.crf |
-
-## <a id="file_names_and_extensions__section_4FC89D10D6304088882B2E278A889A9B" 
class="no-quick-link"></a>File Extensions
-
-| File extension | Used for                                         | Notes                                                                                                  |
-|----------------|--------------------------------------------------|--------------------------------------------------------------------------------------------------------|
-| if             | Disk store metadata                              | Stored in the first disk-dir listed for the store. Negligible size - not considered in size control.  |
-| lk             | Disk store access control                        | Stored in the first disk-dir listed for the store. Negligible size - not considered in size control.  |
-| crf            | Oplog: create, update, and invalidate operations | Pre-allocated 90% of the total max-oplog-size at creation.                                             |
-| drf            | Oplog: delete operations                         | Pre-allocated 10% of the total max-oplog-size at creation.                                             |
-| krf            | Oplog: key and crf offset information            | Created after the oplog has reached the max-oplog-size. Used to improve performance at startup.       |
-
-Example files for disk stores persistDS1 and overflowDS1:
-
-``` pre
-bash-2.05$ ls -tlra persistData1/
-total 8
--rw-rw-r--   1 person users        188 Mar  4 06:17 BACKUPpersistDS1.if
-drwxrwxr-x   2 person users        512 Mar  4 06:17 .
--rw-rw-r--   1 person users          0 Mar  4 06:18 BACKUPpersistDS1_1.drf
--rw-rw-r--   1 person users         38 Mar  4 06:18 BACKUPpersistDS1_1.crf
-drwxrwxr-x   8 person users        512 Mar  4 06:20 ..
-bash-2.05$
- 
-bash-2.05$ ls -ltra overflowData1/
-total 1028
-drwxrwxr-x   8 person users        512 Mar  4 06:20 ..
--rw-rw-r--   1 person users          0 Mar  4 06:21 DRLK_IFoverflowDS1.lk
--rw-rw-r--   1 person users          0 Mar  4 06:21 BACKUPoverflowDS1.if
--rw-rw-r--   1 person users 1073741824 Mar  4 06:21 OVERFLOWoverflowDS1_1.crf
-drwxrwxr-x   2 person users        512 Mar  4 06:21 .
-```
-
-Example default disk store files for a persistent region:
-
-``` pre
-bash-2.05$ ls -tlra
-total 106
-drwxrwxr-x   8 person users       1024 Mar  8 14:51 ..
--rw-rw-r--   1 person users       1010 Mar  8 15:01 defTest.xml
-drwxrwxr-x   2 person users        512 Mar  8 15:01 backupDirectory
--rw-rw-r--   1 person users          0 Mar  8 15:01 DRLK_IFDEFAULT.lk
--rw-rw-r--   1 person users  107374183 Mar  8 15:01 BACKUPDEFAULT_1.drf
--rw-rw-r--   1 person users  966367641 Mar  8 15:01 BACKUPDEFAULT_1.crf
--rw-rw-r--   1 person users        172 Mar  8 15:01 BACKUPDEFAULT.if
-drwxrwxr-x   3 person users        512 Mar  8 15:01 .           
-```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/handling_missing_disk_stores.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/handling_missing_disk_stores.html.md.erb 
b/geode-docs/managing/disk_storage/handling_missing_disk_stores.html.md.erb
deleted file mode 100644
index 959ae51..0000000
--- a/geode-docs/managing/disk_storage/handling_missing_disk_stores.html.md.erb
+++ /dev/null
@@ -1,72 +0,0 @@
----
-title:  Handling Missing Disk Stores
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-<a 
id="handling_missing_disk_stores__section_9345819FC27E41FB94F5E54979B7C506"></a>
-This section applies to disk stores that hold the latest copy of your data for 
at least one region.
-
-## <a 
id="handling_missing_disk_stores__section_9E8FBB7935F34239AD5E65A3E857EEAA" 
class="no-quick-link"></a>Show Missing Disk Stores
-
-Using `gfsh`, the `show missing-disk-stores` command lists all disk stores with the most recent data that are being waited on by other members.
-
-For replicated regions, this command only lists missing members that are 
preventing other members from starting up. For partitioned regions, this 
command also lists any offline data stores, even when other data stores for the 
region are online, because their offline status may be causing 
`PartitionOfflineExceptions` in cache operations or preventing the system from 
satisfying redundancy.
-
-Example:
-
-``` pre
-gfsh>show missing-disk-stores
-          Disk Store ID              |   Host    |               Directory
-------------------------------------- | --------- | -------------------------------------
-60399215-532b-406f-b81f-9b5bd8d1b55a | excalibur | /usr/local/gemfire/deploy/disk_store1
-```
-
-**Note:**
-You must be connected to a JMX Manager in `gfsh` to run this command.
-
-**Note:**
-The disk store directories listed for missing disk stores may not be the 
directories you have currently configured for the member. The list is retrieved 
from the other running members—the ones who are reporting the missing member. 
They have information from the last time the missing disk store was online. If 
you move your files and change the member’s configuration, these directory 
locations will be stale.
-
-Disk stores usually go missing because their member fails to start. The member 
can fail to start for a number of reasons, including:
-
--   Disk store file corruption. You can check on this by validating the disk 
store.
--   Incorrect distributed system configuration for the member
--   Network partitioning
--   Drive failure
-
-## <a 
id="handling_missing_disk_stores__section_FDF161F935054AB190D9DB0D7930CEAA" 
class="no-quick-link"></a>Revoke Missing Disk Stores
-
-This section applies to disk stores for which both of the following are true:
-
--   Disk stores that have the most recent copy of data for one or more regions 
or region buckets.
--   Disk stores that are unrecoverable, such as when you have deleted them, or 
their files are corrupted or on a disk that has had a catastrophic failure.
-
-When you cannot bring the latest persisted copy online, use the revoke command 
to tell the other members to stop waiting for it. Once the store is revoked, 
the system finds the remaining most recent copy of data and uses that.
-
-**Note:**
-Once revoked, a disk store cannot be reintroduced into the system.
-
-Use the gfsh `show missing-disk-stores` command to identify the disk store you need to revoke. The `revoke missing-disk-store` command takes as input the disk store ID listed by that command.
-
-Example:
-
-``` pre
-gfsh>revoke missing-disk-store --id=60399215-532b-406f-b81f-9b5bd8d1b55a
-Missing disk store successfully revoked
-```
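-
-If you manage the cluster through JMX rather than `gfsh`, the same operations are exposed on `DistributedSystemMXBean`. The following is a sketch only; it assumes it runs on the member acting as JMX Manager, and the disk store ID is simply the one from the example above.
-
-``` pre
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.CacheFactory;
-import org.apache.geode.management.DistributedSystemMXBean;
-import org.apache.geode.management.ManagementService;
-import org.apache.geode.management.PersistentMemberDetails;
-
-public class RevokeMissingDiskStore {
-  public static void main(String[] args) {
-    Cache cache = new CacheFactory().create();
-    DistributedSystemMXBean dsBean =
-        ManagementService.getManagementService(cache).getDistributedSystemMXBean();
-
-    // List the disk stores that other members are waiting on.
-    PersistentMemberDetails[] missing = dsBean.listMissingDiskStores();
-    if (missing != null) {
-      for (PersistentMemberDetails details : missing) {
-        System.out.println(details.getDiskStoreId() + " on " + details.getHost()
-            + " at " + details.getDirectory());
-      }
-    }
-
-    // Revoke by disk store ID.
-    boolean revoked = dsBean.revokeMissingDiskStores("60399215-532b-406f-b81f-9b5bd8d1b55a");
-    System.out.println("Revoked: " + revoked);
-
-    cache.close();
-  }
-}
-```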

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/how_disk_stores_work.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/disk_storage/how_disk_stores_work.html.md.erb 
b/geode-docs/managing/disk_storage/how_disk_stores_work.html.md.erb
deleted file mode 100644
index ee75b98..0000000
--- a/geode-docs/managing/disk_storage/how_disk_stores_work.html.md.erb
+++ /dev/null
@@ -1,60 +0,0 @@
----
-title:  How Disk Stores Work
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Overflow and persistence use disk stores individually or together to store 
data.
-
-<a id="how_disk_stores_work__section_1A93EFBE3E514918833592C17CFC4C40"></a>
-Disk storage is available for these items:
-
--   **Regions**. Persist and/or overflow data from regions.
--   **Server’s client subscription queues**. Overflow the messaging queues 
to control memory use.
--   **Gateway sender queues**. Persist these for high availability. These 
queues always overflow.
--   **PDX serialization metadata**. Persist metadata about objects you 
serialize using Geode PDX serialization.
-
-Each member has its own set of disk stores, and they are completely separate 
from the disk stores of any other member. For each disk store, define where and 
how the data is stored to disk. You can store data from multiple regions and 
queues in a single disk store.
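-
-As an illustration of sharing one store, the following sketch creates a single disk store and points both a persistent region and an overflow region at it. The store, directory, and region names are placeholders, and the directory must already exist.
-
-``` pre
-import java.io.File;
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.CacheFactory;
-import org.apache.geode.cache.Region;
-import org.apache.geode.cache.RegionShortcut;
-
-public class SharedDiskStoreExample {
-  public static void main(String[] args) {
-    Cache cache = new CacheFactory().create();
-
-    // One disk store, written to a single existing directory.
-    cache.createDiskStoreFactory()
-        .setDiskDirs(new File[] { new File("memberA_store") })
-        .create("storeA");
-
-    // Persistent region: entries survive member restarts.
-    Region<String, String> people = cache
-        .<String, String>createRegionFactory(RegionShortcut.REPLICATE_PERSISTENT)
-        .setDiskStoreName("storeA")
-        .create("people");
-
-    // Overflow region: evicted entries spill to the same disk store.
-    Region<String, String> sessions = cache
-        .<String, String>createRegionFactory(RegionShortcut.REPLICATE_OVERFLOW)
-        .setDiskStoreName("storeA")
-        .create("sessions");
-
-    people.put("k1", "v1");
-    sessions.put("k2", "v2");
-    cache.close();
-  }
-}
-```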
-
-This figure shows a member with disk stores D through R defined. The member 
has two persistent regions using disk store D and an overflow region and an 
overflow queue using disk store R.
-
-<img src="../../images/diskStores-1.gif" 
id="how_disk_stores_work__image_CB7972998C4A40B2A02550B97A723536" class="image" 
/>
-
-## <a id="how_disk_stores_work__section_433EEEA1560D40DD9842200181EB1D0A" 
class="no-quick-link"></a>What Geode Writes to the Disk Store
-
-This list describes the items that Geode writes to the disk store:
-
--   The members that host the store, along with status information, such as which members are online or offline, and associated time stamps.
--   A disk store identifier.
--   Which regions are in the disk store, specified by region name.
--   Colocated regions that the regions in the disk store are dependent upon.
--   A set of files that specify all keys for the regions, as well as all 
operations on the regions. Given both keys and operations, a region can be 
recreated when a member is restarted.
-
-Geode does not write indexes to disk.
-
-## <a id="how_disk_stores_work__section_C1A047CD5518499D94A0E9A0328F6DB8" 
class="no-quick-link"></a>Disk Store State
-
-The files for a disk store are used by Geode as a group. Treat them as a 
single entity. If you copy them, copy them all together. Do not change the file 
names.
-
-Disk store access and management differs according to whether the member is 
online or offline.
-
-While a member is running, its disk stores are online. When the member exits 
and is not running, its disk stores are offline.
-
--   Online, a disk store is owned and managed by its member process. To run 
operations on an online disk store, use API calls in the member process, or use 
the `gfsh` command-line interface.
--   Offline, the disk store is just a collection of files in the host file 
system. The files are accessible based on file system permissions. You can copy 
the files for backup or to move the member’s disk store location. You can 
also run some maintenance operations, such as file compaction and validation, 
by using the `gfsh` command-line interface. When offline, the disk store's 
information is unavailable to the distributed system. For partitioned regions, 
region data is split between multiple members, and therefore the start up of a 
member is dependent on and must wait for all members to be online. An attempt 
to access an entry that is stored on disk by an offline member results in a 
`PartitionOfflineException`.
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/keeping_offline_disk_store_in_sync.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/keeping_offline_disk_store_in_sync.html.md.erb
 
b/geode-docs/managing/disk_storage/keeping_offline_disk_store_in_sync.html.md.erb
deleted file mode 100644
index 0da24a8..0000000
--- 
a/geode-docs/managing/disk_storage/keeping_offline_disk_store_in_sync.html.md.erb
+++ /dev/null
@@ -1,65 +0,0 @@
----
-title:  Keeping a Disk Store Synchronized with the Cache
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-<a 
id="syncing_offline_disk_store__section_7D01550D750E48289EFBA9BBDB5A334E"></a>
-You can take several actions to optimize disk store use and data loading at 
startup.
-
-## <a 
id="syncing_offline_disk_store__section_7B95B20F07BD40699CDB7F3D6A93B905" 
class="no-quick-link"></a>Change Region Configuration
-
-When your disk store is offline, you can keep the configuration for its 
regions up-to-date with your `cache.xml` and API settings. The disk store 
retains region capacity and load settings, including entry map settings 
(initial capacity, concurrency level, load factor), LRU eviction settings, and 
the statistics enabled boolean. If the configurations do not match at startup, 
the `cache.xml` and API override any disk store settings and the disk store is 
automatically updated to match. So you do not need to modify your disk store to 
keep your cache configuration and disk store synchronized, but you will save 
startup time and memory if you do.
-
-For example, to change the initial capacity of the disk store:
-
-``` pre
-gfsh>alter disk-store --name=myDiskStoreName --region=partitioned_region 
---disk-dirs=/firstDiskStoreDir,/secondDiskStoreDir,/thirdDiskStoreDir 
---initialCapacity=20
-```
-
-To list all modifiable settings and their current values for a region, run the 
command with no actions specified:
-
-``` pre
-gfsh>alter disk-store --name=myDiskStoreName --region=partitioned_region
---disk-dirs=/firstDiskStoreDir,/secondDiskStoreDir,/thirdDiskStoreDir  
-```
-
-## <a 
id="syncing_offline_disk_store__section_0CA17ED106394686A1A5B30601758DA6" 
class="no-quick-link"></a>Take a Region Out of Your Cache Configuration and 
Disk Store
-
-You might remove a region from your application if you decide to rename it or 
to split its data into two entirely different regions. Any significant data 
restructuring can cause you to retire some data regions.
-
-This applies to the removal of regions while the disk store is offline. 
Regions you destroy through API calls or by `gfsh` are automatically removed 
from the disk store of online members.
-
-In your application development, when you discontinue use of a persistent 
region, remove the region from the member’s disk store as well.
-
-**Note:**
-Perform the following operations with caution. You are permanently removing 
data.
-
-You can remove the region from the disk store in one of two ways:
-
--   Delete the entire set of disk store files. Your member will initialize 
with an empty set of files the next time you start it. Exercise caution 
when removing the files from the file system, as more than one region can be 
specified to use the same disk store directories.
--   Selectively remove the discontinued region from the disk store with a 
command such as:
-
-    ``` pre
-    gfsh>alter disk-store --name=myDiskStoreName --region=partitioned_region
-    --disk-dirs=/firstDiskStoreDir,/secondDiskStoreDir,/thirdDiskStoreDir 
--remove
-    ```
-
-To guard against unintended data loss, Geode maintains the region in the disk 
store until you manually remove it. Regions in the disk stores that are not 
associated with any region in your application are still loaded into temporary 
regions in memory and kept there for the life of the member. The system has no 
way of detecting whether the cache region will be created by your API at some 
point, so it keeps the temporary region loaded and available.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/managing_disk_buffer_flushes.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/managing_disk_buffer_flushes.html.md.erb 
b/geode-docs/managing/disk_storage/managing_disk_buffer_flushes.html.md.erb
deleted file mode 100644
index 7238843..0000000
--- a/geode-docs/managing/disk_storage/managing_disk_buffer_flushes.html.md.erb
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title:  Altering When Buffers Are Flushed to Disk
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-You can configure Geode to write immediately to disk and you may be able to 
modify your operating system behavior to perform buffer flushes more frequently.
-
-Typically, Geode writes disk data into the operating system's disk buffers and 
the operating system periodically flushes the buffers to disk. Increasing the 
frequency of writes to disk decreases the likelihood of data loss from 
application or machine crashes, but it impacts performance. Your other option, 
which may give you better performance, is to use Geode's in-memory data 
backups. Do this by storing your data in multiple replicated regions or in 
partitioned regions that are configured with redundant copies. See [Region 
Types](../../developing/region_options/region_types.html#region_types).
-
-## <a id="disk_buffer_flushes__section_448348BD28B14F478D81CC2EDC6C7049" 
class="no-quick-link"></a>Modifying Disk Flushes for the Operating System
-
-You may be able to change the operating system settings for periodic flushes. 
You may also be able to perform explicit disk flushes from your application 
code. For information on these options, see your operating system's 
documentation. For example, in Linux you can change the disk flush interval by 
modifying the setting `/proc/sys/vm/dirty_expire_centisecs`. It defaults to 30 seconds. To alter this setting, see the Linux documentation for `dirty_expire_centisecs`.
-
-## <a id="disk_buffer_flushes__section_D1068505581A43EE8395DBE97297C60F" 
class="no-quick-link"></a>Modifying Geode to Flush Buffers on Disk Writes
-
-You can have Geode flush the disk buffers on every disk write. Do this by 
setting the system property `gemfire.syncWrites` to true at the command line 
when you start your Geode member. You can only modify this setting when you 
start a member. When this is set, Geode uses a Java `RandomAccessFile` with the 
flags "rwd", which causes every file update to be written synchronously to the 
storage device. This only guarantees your data if your disk stores are on a 
local device. See the Java documentation for `java.io.RandomAccessFile`.
-
-To modify the setting for a Geode application, add this to the java command 
line when you start the member:
-
-``` pre
--Dgemfire.syncWrites=true
-```
-
-To modify the setting for a cache server, use this syntax:
-
-``` pre
-gfsh>start server --name=... --J=-Dgemfire.syncWrites=true
-```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/managing_disk_stores.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/disk_storage/managing_disk_stores.html.md.erb 
b/geode-docs/managing/disk_storage/managing_disk_stores.html.md.erb
deleted file mode 100644
index 5262be1..0000000
--- a/geode-docs/managing/disk_storage/managing_disk_stores.html.md.erb
+++ /dev/null
@@ -1,42 +0,0 @@
----
-title:  Disk Store Management
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-The `gfsh` command-line tool has a number of options for examining and 
managing your disk stores. The `gfsh` tool, the `cache.xml` file and the 
DiskStore APIs are your management tools for online and offline disk stores.
-
-See [Disk Store 
Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA)
 for a list of available commands.
-
--   **[Disk Store Management Commands and 
Operations](../../managing/disk_storage/managing_disk_stores_cmds.html)**
-
--   **[Validating a Disk 
Store](../../managing/disk_storage/validating_disk_store.html)**
-
--   **[Running Compaction on Disk Store Log 
Files](../../managing/disk_storage/compacting_disk_stores.html)**
-
--   **[Keeping a Disk Store Synchronized with the 
Cache](../../managing/disk_storage/keeping_offline_disk_store_in_sync.html)**
-
--   **[Configuring Disk Free Space 
Monitoring](../../managing/disk_storage/disk_free_space_monitoring.html)**
-
--   **[Handling Missing Disk 
Stores](../../managing/disk_storage/handling_missing_disk_stores.html)**
-
--   **[Altering When Buffers Are Flushed to 
Disk](../../managing/disk_storage/managing_disk_buffer_flushes.html)**
-
-    You can configure Geode to write immediately to disk and you may be able 
to modify your operating system behavior to perform buffer flushes more 
frequently.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/managing_disk_stores_cmds.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/managing_disk_stores_cmds.html.md.erb 
b/geode-docs/managing/disk_storage/managing_disk_stores_cmds.html.md.erb
deleted file mode 100644
index 1578e51..0000000
--- a/geode-docs/managing/disk_storage/managing_disk_stores_cmds.html.md.erb
+++ /dev/null
@@ -1,62 +0,0 @@
----
-title:  Disk Store Management Commands and Operations
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-<a 
id="concept_8E6C4AD311674880941DA0F224A7BF39__section_4AFD4B9EECDA448BA5235FB1C32A48F1"></a>
-You can manage your disk stores using the gfsh command-line tool. For more 
information on `gfsh` commands, see [gfsh (Geode 
SHell)](../../tools_modules/gfsh/chapter_overview.html) and [Disk Store 
Commands](../../tools_modules/gfsh/quick_ref_commands_by_area.html#topic_1ACC91B493EE446E89EC7DBFBBAE00EA).
-
-**Note:**
-Each of these commands operates either on the online disk stores or offline 
disk stores, but not both.
-
-| gfsh Command                  | Online or Offline Command | See . . .                                                                                                                    |
-|-------------------------------|---------------------------|------------------------------------------------------------------------------------------------------------------------------|
-| `alter disk-store`            | Off                       | [Keeping a Disk Store Synchronized with the Cache](keeping_offline_disk_store_in_sync.html#syncing_offline_disk_store)        |
-| `compact disk-store`          | On                        | [Running Compaction on Disk Store Log Files](compacting_disk_stores.html#compacting_disk_stores)                              |
-| `backup disk-store`           | On                        | [Creating Backups for System Recovery and Operational Management](backup_restore_disk_store.html#backup_restore_disk_store)   |
-| `compact offline-disk-store`  | Off                       | [Running Compaction on Disk Store Log Files](compacting_disk_stores.html#compacting_disk_stores)                              |
-| `export offline-disk-store`   | Off                       | [Creating Backups for System Recovery and Operational Management](backup_restore_disk_store.html#backup_restore_disk_store)   |
-| `revoke missing-disk-store`   | On                        | [Handling Missing Disk Stores](handling_missing_disk_stores.html#handling_missing_disk_stores)                                |
-| `show missing-disk-stores`    | On                        | [Handling Missing Disk Stores](handling_missing_disk_stores.html#handling_missing_disk_stores)                                |
-| `shutdown`                    | On                        | [Start Up and Shut Down with Disk Stores](starting_system_with_disk_stores.html)                                              |
-| `validate offline disk-store` | Off                       | [Validating a Disk Store](validating_disk_store.html#validating_disk_store)                                                   |
-
-For complete command syntax of any gfsh command, run `help <command>` at the gfsh command line.
-
-## <a 
id="concept_8E6C4AD311674880941DA0F224A7BF39__section_885D2FD6C4D94664BE1DEE032153B819"
 class="no-quick-link"></a>Online Disk Store Operations
-
-For online operations, `gfsh` must be connected to the distributed system through a JMX Manager; it sends the operation requests to the members that host the disk stores. These commands will not run on offline disk stores.
-
-## <a 
id="concept_8E6C4AD311674880941DA0F224A7BF39__section_5B001E58091D4CC1B83CFF9B895C7DA2"
 class="no-quick-link"></a>Offline Disk Store Operations
-
-For offline operations, `gfsh` runs the command against the specified disk 
store and its specified directories. You must specify all directories for the 
disk store. For example:
-
-``` pre
-gfsh>compact offline-disk-store --name=mydiskstore --disk-dirs=MyDirs 
-```
-
-Offline operations will not run on online disk stores. The tool locks the disk 
store while it is running, so the member cannot start in the middle of an 
operation.
-
-If you try to run an offline command for an online disk store, you get a 
message like this:
-
-``` pre
-gfsh>compact offline-disk-store --name=DEFAULT --disk-dirs=s1
-This disk store is in use by another process. "compact disk-store" can 
-be used to compact a disk store that is currently in use.
-```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/operation_logs.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/disk_storage/operation_logs.html.md.erb 
b/geode-docs/managing/disk_storage/operation_logs.html.md.erb
deleted file mode 100644
index b8d4211..0000000
--- a/geode-docs/managing/disk_storage/operation_logs.html.md.erb
+++ /dev/null
@@ -1,69 +0,0 @@
----
-title:  Disk Store Operation Logs
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-At creation, each operation log is initialized at the disk store's 
`max-oplog-size`, with the size divided between the `crf` and `drf` files. When 
the oplog is closed, Apache Geode shrinks the files to the space used in each 
file.
-
-<a id="operation_logs__section_C0B1391492394A908577C29772902A42"></a>
-After the oplog is closed, Geode also attempts to create a `krf` file, which 
contains the key names as well as the offset for the value within the `crf` 
file. Although this file is not required for startup, if it is available, it 
will improve startup performance by allowing Geode to load the entry values in 
the background after the entry keys are loaded.
-
-When an operation log is full, Geode automatically closes it and creates a new 
log with the next sequence number. This is called *oplog rolling*. You can also 
request an oplog rolling through the API call `DiskStore.forceRoll`. You may 
want to do this immediately before compacting your disk stores, so the latest 
oplog is available for compaction.
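-
-A small sketch of requesting a roll, followed by an immediate manual compaction, through the API. It assumes the cache already defines a disk store named `serverOverflow` with `allow-force-compaction` set to true.
-
-``` pre
-import org.apache.geode.cache.Cache;
-import org.apache.geode.cache.CacheFactory;
-import org.apache.geode.cache.DiskStore;
-
-public class RollThenCompact {
-  public static void main(String[] args) {
-    Cache cache = new CacheFactory().create();
-    DiskStore store = cache.findDiskStore("serverOverflow");
-
-    // Close the current oplog and start a new one, so the latest data
-    // becomes eligible for the compaction that follows.
-    store.forceRoll();
-
-    // Manual compaction requires allow-force-compaction=true on the store.
-    boolean compacted = store.forceCompaction();
-    System.out.println("Compaction ran: " + compacted);
-
-    cache.close();
-  }
-}
-```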
-
-**Note:**
-Log compaction can change the names of the disk store files. File number 
sequencing is usually altered, with some existing logs removed or replaced by 
newer logs with higher numbering. Geode always starts a new log at a number 
higher than any existing number.
-
-This example listing shows the logs in a system with only one disk directory 
specified for the store. The first log (`BACKUPCacheOverflow_1.crf` and 
`BACKUPCacheOverflow_1.drf`) has been closed and the system is writing to the 
second log.
-
-``` pre
-bash-2.05$ ls -tlra 
-total 55180
-drwxrwxr-x   7 person users        512 Mar 22 13:56 ..
--rw-rw-r--   1 person users          0 Mar 22 13:57 BACKUPCacheOverflow_2.drf
--rw-rw-r--   1 person users     426549 Mar 22 13:57 BACKUPCacheOverflow_2.crf
--rw-rw-r--   1 person users          0 Mar 22 13:57 BACKUPCacheOverflow_1.drf
--rw-rw-r--   1 person users     936558 Mar 22 13:57 BACKUPCacheOverflow_1.crf
--rw-rw-r--   1 person users       1924 Mar 22 13:57 BACKUPCacheOverflow.if
-drwxrwxr-x   2 person users       2560 Mar 22 13:57 .
-```
-
-The system rotates through all available disk directories to write its logs. 
The next log is always started in a directory that has not reached its 
configured capacity, if one exists.
-
-## <a id="operation_logs__section_8431984F4E6644D79292850CCA60E6E3" 
class="no-quick-link"></a>When Disk Store Oplogs Reach the Configured Disk 
Capacity
-
-If no directory exists that is within its capacity limits, how Geode handles 
this depends on whether automatic compaction is enabled.
-
--   If auto-compaction is enabled, Geode creates a new oplog in one of the 
directories, going over the limit, and logs a warning that reports:
-
-    ``` pre
-    Even though the configured directory size limit has been exceeded a 
-    new oplog will be created. The current limit is of XXX. The current 
-    space used in the directory is YYY.
-    ```
-
-    **Note:**
-    When auto-compaction is enabled, `dir-size` does not limit how much disk 
space is used. Geode will perform auto-compaction, which should free space, but 
the system may go over the configured disk limits.
-
--   If auto-compaction is disabled, Geode does not create a new oplog, 
operations in the regions attached to the disk store block, and Geode logs this 
error:
-
-    ``` pre
-    Disk is full and rolling is disabled. No space can be created
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/optimize_availability_and_performance.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/optimize_availability_and_performance.html.md.erb
 
b/geode-docs/managing/disk_storage/optimize_availability_and_performance.html.md.erb
deleted file mode 100644
index 5443d93..0000000
--- 
a/geode-docs/managing/disk_storage/optimize_availability_and_performance.html.md.erb
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title:  Optimizing a System with Disk Stores
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-Optimize availability and performance by following the guidelines in this 
section.
-
-1.  Apache Geode recommends the use of `ext4` filesystems when operating on 
Linux or Solaris platforms. The `ext4` filesystem supports preallocation, which 
benefits disk startup performance. If you are using `ext3` filesystems in 
latency-sensitive environments with high write throughput, you can improve disk 
startup performance by setting the `maxOplogSize` (see the 
`DiskStoreFactory.setMaxOplogSize`) to a value lower than the default 1 GB and 
by disabling preallocation by specifying the system property 
`gemfire.preAllocateDisk=false` upon Geode process startup.
-2.  When you start your system, start all the members that have persistent 
regions at roughly the same time. Create and use startup scripts for 
consistency and completeness.
-3.  Shut down your system using the gfsh `shutdown` command. This is an 
ordered shutdown that positions your disk stores for a faster startup.
-4.  Configure critical usage thresholds (`disk-usage-warning-percentage` and 
`disk-usage-critical-percentage`) for the disk. By default, these are set to 
80% for warning and 99% for errors that will shut down the cache.
-5.  Decide on a file compaction policy and, if needed, develop procedures to 
monitor your files and execute regular compaction.
-6.  Decide on a backup strategy for your disk stores and follow it. You can back up a running system by using the `backup disk-store` command.
-7.  If you remove any persistent region or change its configuration while your 
disk store is offline, consider synchronizing the regions in your disk stores.
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/overview_using_disk_stores.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/overview_using_disk_stores.html.md.erb 
b/geode-docs/managing/disk_storage/overview_using_disk_stores.html.md.erb
deleted file mode 100644
index 74c1b96..0000000
--- a/geode-docs/managing/disk_storage/overview_using_disk_stores.html.md.erb
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title:  Configuring Disk Stores
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-In addition to the disk stores you specify, Apache Geode has a default disk 
store that it uses when disk use is configured with no disk store name 
specified. You can modify default disk store behavior.
-
--   **[Designing and Configuring Disk 
Stores](../../managing/disk_storage/using_disk_stores.html)**
-
-    You define disk stores in your cache, then you assign them to your regions 
and queues by setting the `disk-store-name` attribute in your region and queue 
configurations.
-
--   **[Disk Store Configuration 
Parameters](../../managing/disk_storage/disk_store_configuration_params.html)**
-
-    You define your disk stores by using the `gfsh create disk-store` command 
or in `<disk-store>` subelements of your cache declaration in `cache.xml`. All 
disk stores are available for use by all of your regions and queues.
-
--   **[Modifying the Default Disk 
Store](../../managing/disk_storage/using_the_default_disk_store.html)**
-
-    You can modify the behavior of the default disk store by specifying the 
attributes you want for the disk store named "DEFAULT".
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/starting_system_with_disk_stores.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/starting_system_with_disk_stores.html.md.erb 
b/geode-docs/managing/disk_storage/starting_system_with_disk_stores.html.md.erb
deleted file mode 100644
index d4a8cbc..0000000
--- 
a/geode-docs/managing/disk_storage/starting_system_with_disk_stores.html.md.erb
+++ /dev/null
@@ -1,128 +0,0 @@
----
-title:  Start Up and Shut Down with Disk Stores
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-This section describes what happens during startup and shutdown and provides 
procedures for those operations.
-
-## Start Up
-
-When you start a member with a persistent region, the data is retrieved from 
disk stores to recreate the member’s persistent region. If the member does not 
hold all of the most recent data for the region, other members have that data, 
and region creation blocks while it waits for those other members. A 
partitioned region with colocated entries also blocks on start up, waiting for 
the entries of the colocated region to become available. A persistent gateway 
sender is treated the same as a colocated region, so it can also block region 
creation.
-
-With a log level of info or below, the system provides messaging about the 
wait. Here, the disk store for server2 has the most recent data for the region, 
and server1 is waiting for server2.
-
-``` pre
-Region /people has potentially stale data.
-It is waiting for another member to recover the latest data.
-My persistent id:
-
-  DiskStore ID: 6893751ee74d4fbd-b4780d844e6d5ce7
-  Name: server1
-  Location: /192.0.2.0:/home/dsmith/server1/.
-
-Members with potentially new data:
-[
-  DiskStore ID: 160d415538c44ab0-9f7d97bae0a2f8de
-  Name: server2
-  Location: /192.0.2.0:/home/dsmith/server2/.
-]
-Use the "gfsh show missing-disk-stores" command to see all disk stores
-that are being waited on by other members.
-```
-
-When the most recent data is available, the system updates the region, logs a 
message, and continues the startup.
-
-``` pre
-[info 2010/04/09 10:52:13.010 PDT CacheRunner <main> tid=0x1]    
-   Done waiting for the remote data to be available.
-```
-
-If the member's disk store has data for a region that is never created, the 
data remains in the disk store.
-
-Each member’s persistent regions load and go online as quickly as possible, 
not waiting unnecessarily for other members to complete. For performance 
reasons, these actions occur asynchronously:
-
--   Once at least one copy of every bucket has been recovered from disk, the 
region is available. Secondary buckets load asynchronously.
--   Entry keys are loaded from the key file in the disk store before entry 
values are considered. Once all keys are loaded, Geode loads the entry values 
asynchronously. If a value is requested before it has been loaded, the value is 
fetched from the disk store immediately.
-
-## <a 
id="starting_system_with_disk_stores__section_D0A7403707B847749A22BF9221A2C823" 
class="no-quick-link"></a>Start Up Procedure
-
-To start a system with disk stores:
-
-1.  **Start all members with persisted data first and at the same time**. 
Exactly how you do this depends on your members. Make sure to start members 
that host colocated regions, as well as persistent gateway senders.
-
-    While they are initializing their regions, the members determine which 
have the most recent region data, and initialize their regions with the most 
recent data.
-
-    For replicated regions, where you define persistence only in some of the 
region's host members, start the persistent replicate members prior to the 
non-persistent replicate members to make sure the data is recovered from disk.
-
-    This is an example bash script for starting members in parallel. The 
script waits for the startup to finish. It exits with an error status if one of 
the jobs fails.
-
-    ``` pre
-    #!/bin/bash
-    # Start the servers in parallel, in the background.
-    ssh servera "cd /my/directory; gfsh start server --name=servera" &
-    ssh serverb "cd /my/directory; gfsh start server --name=serverb" &
-
-    # Wait for each background job and capture the first failing exit status.
-    STATUS=0;
-    for job in `jobs -p`
-    do
-        echo $job
-        wait $job;
-        JOB_STATUS=$?;
-        test $STATUS -eq 0 && STATUS=$JOB_STATUS;
-    done
-    exit $STATUS;
-    ```
-
-2.  **Respond to blocked members**. When a member blocks waiting for more 
recent data from another member, the member waits indefinitely rather than 
coming online with stale data. Check for missing disk stores with the 
`gfsh show missing-disk-stores` command, as shown in the example below. See 
[Handling Missing Disk 
Stores](handling_missing_disk_stores.html#handling_missing_disk_stores).
-    -   If no disk stores are missing, the cache initialization must be slow 
for some other reason. Check the information on member hangs in [Diagnosing 
System 
Problems](../troubleshooting/diagnosing_system_probs.html#diagnosing_system_probs).
-    -   If disk stores are missing that you think should be there:
-        -   Make sure you have started the member. Check the logs for any 
failure messages. See 
[Logging](../logging/logging.html#concept_30DB86B12B454E168B80BB5A71268865).
-        -   Make sure your disk store files are accessible. If you have moved 
your member or disk store files, you must update your disk store configuration 
to match.
-    -   If disk stores are missing that you know are lost, because you have 
deleted them or their files are otherwise unavailable, revoke them so the 
startup can continue.
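-
-    For example, the check and, if necessary, the revoke might look like the 
following sketch (the disk store ID is a placeholder; use an ID reported as 
missing by the `show` command):
-
-    ``` pre
-    gfsh>show missing-disk-stores
-    gfsh>revoke missing-disk-store --id=60399215-532b-406f-b81f-9b5bd8d1b55a
-    ```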
-
-## <a 
id="starting_system_with_disk_stores__section_5E32F488EB5D4E74AAB6BF394E4329D6" 
class="no-quick-link"></a>Example Startup to Illustrate Ordering
-
-The following describes the two possible orderings for starting up a 
replicated persistent region after a shutdown. Assume that Member A (MA) exits 
first, leaving persisted data on disk for RegionP. Member B (MB) continues to 
run operations on RegionP, which update its disk store and leave the disk store 
for MA in a stale condition. MB then exits, leaving the most up-to-date data on 
disk for RegionP.
-
--   Restart order 1
-    1.  MB is started first. MB identifies that it has the most recent disk 
data for RegionP and initializes the region from disk. MB does not block.
-    2.  MA is started, recovers its data from disk, and updates region data as 
needed from the data in MB.
--   Restart order 2
-    1.  MA is started first. MA identifies that it does not have the most 
recent disk data and blocks, waiting for MB to start before recreating RegionP 
in MA.
-    2.  MB is started. MB identifies that it has the most recent disk data for 
RegionP and initializes the region from disk.
-    3.  MA recovers its RegionP data from disk and updates region data as 
needed from the data in MB.
-
-## Shutdown
-
-If more than one member hosts a persistent region or queue, the order in which 
the various members shut down may be significant upon restart of the system. 
The last member to exit the system or shut down has the most up-to-date data on 
disk. Each member knows which other system members were online at the time of 
exit or shutdown. This permits a member to acquire the most recent data upon 
subsequent start up.
-
-For a replicated region with persistence, the last member to exit has the most 
recent data.
-
-For a partitioned region, every member persists its own buckets. A shutdown 
using `gfsh shutdown` synchronizes the disk stores before exiting, so all 
disk stores hold the most recent data. Without an orderly shutdown, some disk 
stores may have more recent bucket data than others.
-
-The best way to shut down a system is to invoke the `gfsh shutdown` command 
with all members running. All online data stores will be synchronized before 
shutting down, so all hold the most recent data copy. To shut down all members 
other than locators:
-
-``` pre
-gfsh>shutdown
-```
-
-To shut down all members, including locators:
-
-``` pre
-gfsh>shutdown --include-locators=true
-```

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/using_disk_stores.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/disk_storage/using_disk_stores.html.md.erb 
b/geode-docs/managing/disk_storage/using_disk_stores.html.md.erb
deleted file mode 100644
index 4835533..0000000
--- a/geode-docs/managing/disk_storage/using_disk_stores.html.md.erb
+++ /dev/null
@@ -1,216 +0,0 @@
----
-title:  Designing and Configuring Disk Stores
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-You define disk stores in your cache, then you assign them to your regions and 
queues by setting the `disk-store-name` attribute in your region and queue 
configurations.
-
-**Note:**
-Besides the disk stores you specify, Apache Geode has a default disk store 
that it uses when disk use is configured with no disk store name specified. By 
default, this disk store is saved to the application’s working directory. You 
can change its behavior, as indicated in [Create and Configure Your Disk 
Stores](using_disk_stores.html#defining_disk_stores__section_37BC5A4D84B34DB49E489DD4141A4884)
 and [Modifying the Default Disk 
Store](using_the_default_disk_store.html#using_the_default_disk_store).
-
--   [Design Your Disk 
Stores](using_disk_stores.html#defining_disk_stores__section_0CD724A12EE4418587046AAD9EEC59C5)
--   [Create and Configure Your Disk 
Stores](using_disk_stores.html#defining_disk_stores__section_37BC5A4D84B34DB49E489DD4141A4884)
--   [Configuring Regions, Queues, and PDX Serialization to Use the Disk 
Stores](using_disk_stores.html#defining_disk_stores__section_AFB254CA9C5A494A8E335352A6849C16)
--   [Configuring Disk Stores on Gateway 
Senders](using_disk_stores.html#defining_disk_stores__config-disk-store-gateway)
-
-## <a id="defining_disk_stores__section_0CD724A12EE4418587046AAD9EEC59C5" 
class="no-quick-link"></a>Design Your Disk Stores
-
-Before you begin, you should understand Geode [Basic Configuration and 
Programming](../../basic_config/book_intro.html).
-
-1.  Work with your system designers and developers to plan for anticipated 
disk storage requirements in your testing and production caching systems. Take 
into account space and functional requirements.
-    -   For efficiency, keep data that is only overflowed in different disk 
stores from data that is persisted, or persisted and overflowed. Regions can be 
overflowed, persisted, or both. Server subscription queues are only overflowed.
-    -   When calculating your disk requirements, figure in your data 
modification patterns and your compaction strategy. Geode creates each oplog 
file at the max-oplog-size, which defaults to 1 GB. Obsolete operations are 
only removed from the oplogs during compaction, so you need enough space to 
store all operations that are done between compactions. For regions where you 
are doing a mix of updates and deletes, if you use automatic compaction, a good 
upper bound for the required disk space is
-
-        ``` pre
-        (1 / (1 - (compaction_threshold/100)) ) * data size
-        ```
-
-        where data size is the total size of all the data you store in the 
disk store. So, for the default compaction-threshold of 50, the disk space is 
roughly twice your data size. Note that the compaction thread could lag behind 
other operations, causing disk use to rise above the threshold temporarily. If 
you disable automatic compaction, the amount of disk required depends on how 
many obsolete operations accumulate between manual compactions.
-
-2.  Work with your host system administrators to determine where to place your 
disk store directories, based on your anticipated disk storage requirements and 
the available disks on your host systems.
-    -   Make sure the new storage does not interfere with other processes that 
use disk on your systems. If possible, store your files on disks that are not 
used by other processes, including virtual memory or swap space. If you have 
multiple disks available, place one directory on each disk for the best 
performance.
-    -   Use different directories for different members. You can use any 
number of directories for a single disk store.
-
-## <a id="defining_disk_stores__section_37BC5A4D84B34DB49E489DD4141A4884" 
class="no-quick-link"></a>Create and Configure Your Disk Stores
-
-1.  In the locations you have chosen, create all directories you will specify 
for your disk stores to use. Geode throws an exception if the specified 
directories are not available when a disk store is created. You do not need to 
populate these directories with anything.
-2.  Open a `gfsh` prompt and connect to the distributed system.
-3.  At the `gfsh` prompt, create and configure a disk store:
-    -  Specify the name (`--name`) of the disk-store.
-
-        -   Choose disk store names that reflect how the stores should be used 
and that work for your operating systems. Disk store names are used in the disk 
file names:
-
-            -   Use disk store names that satisfy the file naming requirements 
for your operating system. For example, if you store your data to disk on a 
Windows system, your disk store names cannot contain any of these reserved 
characters: &lt; &gt; : " / \\ | ? \*.
-
-            -   Do not use very long disk store names. The full file names 
must fit within your operating system limits. On Linux, for example, the 
standard limitation is 255 characters.
-
-        ``` pre
-        gfsh>create disk-store --name=serverOverflow 
--dir=c:\overflow_data#20480 
-        ```
-    -  Configure the directory locations (`--dir`) and the maximum space to 
use for the store (specified after the disk directory name by \# and the 
maximum number in megabytes).
-
-        ``` pre
-        gfsh>create disk-store --name=serverOverflow 
--dir=c:\overflow_data#20480
-        ```
-    -  Optionally, you can configure the store’s file compaction behavior. 
In conjunction with this, plan and program for any manual compaction.  Example:
-
-        ``` pre
-        gfsh>create disk-store --name=serverOverflow 
--dir=c:\overflow_data#20480 \
-        --compaction-threshold=40 --auto-compact=false 
--allow-force-compaction=true
-        ```
-    -  If needed, configure the maximum size (in MB) of a single oplog. When 
the current files reach this size, the system rolls forward to a new file. You 
get better performance with relatively small maximum file sizes.  Example:
-
-        ``` pre
-        gfsh>create disk-store --name=serverOverflow 
--dir=c:\overflow_data#20480 \
-        --compaction-threshold=40 --auto-compact=false 
--allow-force-compaction=true \
-        --max-oplog-size=512
-        ```
-    -  If needed, modify queue management parameters for asynchronous queueing 
to the disk store. You can configure any region for synchronous or asynchronous 
queueing (region attribute `disk-synchronous`). Server queues and gateway 
sender queues always operate synchronously. When either the `queue-size` 
(number of operations) or `time-interval` (milliseconds) is reached, enqueued 
data is flushed to disk. You can also synchronously flush unwritten data to 
disk through the `DiskStore` `flushToDisk` method.  Example:
-
-        ``` pre
-        gfsh>create disk-store --name=serverOverflow 
--dir=c:\overflow_data#20480 \
-        --compaction-threshold=40 --auto-compact=false 
--allow-force-compaction=true \
-        --max-oplog-size=512 --queue-size=10000 --time-interval=15
-        ```
-    -  If needed, modify the size (specified in bytes) of the buffer used for 
writing to disk.  Example:
-
-        ``` pre
-        gfsh>create disk-store --name=serverOverflow 
--dir=c:\overflow_data#20480 \
-        --compaction-threshold=40 --auto-compact=false 
--allow-force-compaction=true \
-        --max-oplog-size=512 --queue-size=10000 --time-interval=15 
--write-buffer-size=65536
-        ```
-    -  If needed, modify the `disk-usage-warning-percentage` and 
`disk-usage-critical-percentage` thresholds that determine the percentage 
(default: 90%) of disk usage that will trigger a warning and the percentage 
(default: 99%) of disk usage that will generate an error and shut down the 
member cache.  Example:
-
-        ``` pre
-        gfsh>create disk-store --name=serverOverflow 
--dir=c:\overflow_data#20480 \
-        --compaction-threshold=40 --auto-compact=false 
--allow-force-compaction=true \
-        --max-oplog-size=512 --queue-size=10000 --time-interval=15 
--write-buffer-size=65536 \
-        --disk-usage-warning-percentage=80 --disk-usage-critical-percentage=98
-        ```
-
-The following is the complete disk store cache.xml configuration example:
-
-``` pre
-<disk-store name="serverOverflow" compaction-threshold="40"
-            auto-compact="false" allow-force-compaction="true"
-            max-oplog-size="512" queue-size="10000"
-            time-interval="15" write-buffer-size="65536"
-            disk-usage-warning-percentage="80"
-            disk-usage-critical-percentage="98">
-   <disk-dirs>
-      <disk-dir>c:\overflow_data</disk-dir>
-      <disk-dir dir-size="20480">d:\overflow_data</disk-dir>
-   </disk-dirs>
-</disk-store>
-```
-
-**Note:**
-As an alternative to defining cache.xml on every server in the cluster, if 
you have the cluster configuration service enabled, then when you create a disk 
store in `gfsh`, you can share the disk store's configuration with the rest of 
the cluster. See [Overview of the Cluster Configuration 
Service](../../configuring/cluster_config/gfsh_persist.html).
-
-## Modifying Disk Stores
-
-You can modify an offline disk store by using the [alter 
disk-store](../../tools_modules/gfsh/command-pages/alter.html#topic_99BCAD98BDB5470189662D2F308B68EB)
 command. If you are modifying the default disk store configuration, use 
"DEFAULT" as the disk-store name.
-
-## <a id="defining_disk_stores__section_AFB254CA9C5A494A8E335352A6849C16" 
class="no-quick-link"></a>Configuring Regions, Queues, and PDX Serialization to 
Use the Disk Stores
-
-The following examples show how to use disk stores that you have already 
created and named for regions, queues, and PDX serialization.
-
-Example of using a disk store for region persistence and overflow:
-
--   gfsh:
-
-    ``` pre
-    gfsh>create region --name=regionName --type=PARTITION_PERSISTENT_OVERFLOW \
-    --disk-store=serverPersistOverflow
-    ```
-
--   cache.xml
-
-    ``` pre
-    <region refid="PARTITION_PERSISTENT_OVERFLOW" 
disk-store-name="persistOverflow1"/>
-    ```
-
-Example of using a named disk store for server subscription queue overflow 
(cache.xml):
-
-``` pre
-<cache-server port="40404">
-   <client-subscription 
-      eviction-policy="entry" 
-      capacity="10000"
-      disk-store-name="queueOverflow2"/>
-</cache-server>
-```
-
-Example of using a named disk store for PDX serialization metadata (cache.xml):
-
-``` pre
-<pdx read-serialized="true" 
-    persistent="true" 
-    disk-store-name="SerializationDiskStore">
-</pdx>
-```
-
-## <a id="defining_disk_stores__config-disk-store-gateway" 
class="no-quick-link"></a>Configuring Disk Stores on Gateway Senders
-
-Gateway sender queues are always overflowed and may be persisted. Assign them 
to overflow disk stores if you do not persist them, and to persistence disk 
stores if you do.
-
-Example of using a named disk store for gateway sender queue persistence:
-
--   gfsh:
-
-    ``` pre
-    gfsh>create gateway-sender --id=persistedSender1 
--remote-distributed-system-id=1 \
-    --enable-persistence=true --disk-store-name=diskStoreA 
--maximum-queue-memory=100  
-    ```
-
--   cache.xml:
-
-    ``` pre
-    <cache>
-      <gateway-sender id="persistedsender1" parallel="true" 
-       remote-distributed-system-id="1"
-       enable-persistence="true"
-       disk-store-name="diskStoreA"
-       maximum-queue-memory="100"/> 
-       ... 
-    </cache>
-    ```
-
-Examples of using the default disk store for gateway sender queue persistence 
and overflow:
-
--   gfsh:
-
-    ``` pre
-    gfsh>create gateway-sender --id=persistedSender1 
--remote-distributed-system-id=1 \
-    --enable-persistence=true --maximum-queue-memory=100 
-    ```
-
--   cache.xml:
-
-    ``` pre
-    <cache>
-      <gateway-sender id="persistedsender1" parallel="true" 
-       remote-distributed-system-id="1"
-       enable-persistence="true"
-       maximum-queue-memory="100"/> 
-       ... 
-    </cache>
-    ```
-
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/using_the_default_disk_store.html.md.erb
----------------------------------------------------------------------
diff --git 
a/geode-docs/managing/disk_storage/using_the_default_disk_store.html.md.erb 
b/geode-docs/managing/disk_storage/using_the_default_disk_store.html.md.erb
deleted file mode 100644
index 2618290..0000000
--- a/geode-docs/managing/disk_storage/using_the_default_disk_store.html.md.erb
+++ /dev/null
@@ -1,70 +0,0 @@
----
-title:  Modifying the Default Disk Store
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-You can modify the behavior of the default disk store by specifying the 
attributes you want for the disk store named "DEFAULT".
-
-<a 
id="using_the_default_disk_store__section_7D6E1A05D28840AC8606EF0D88E9B373"></a>
-Whenever you use disk stores without specifying the disk store to use, Geode 
uses the disk store named "DEFAULT".
-
-For example, these region and queue configurations specify persistence and/or 
overflow, but do not specify the disk-store-name. Because no disk store is 
specified, these use the disk store named "DEFAULT".
-
-Examples of using the default disk store for region persistence and overflow:
-
--   gfsh:
-
-    ``` pre
-    gfsh>create region --name=regionName --type=PARTITION_PERSISTENT_OVERFLOW
-    ```
-
--   cache.xml
-
-    ``` pre
-    <region refid="PARTITION_PERSISTENT_OVERFLOW"/>
-    ```
-
-Example of using the default disk store for server subscription queue overflow 
(cache.xml):
-
-``` pre
-<cache-server port="40404">
-    <client-subscription eviction-policy="entry" capacity="10000"/>
-</cache-server>
-```
-
-## <a 
id="using_the_default_disk_store__section_671AED6EAFEE485D837411DEBE0C6BC6" 
class="no-quick-link"></a>Change the Behavior of the Default Disk Store
-
-Geode initializes the default disk store with the default disk store 
configuration settings. You can modify the behavior of the default disk store 
by specifying the attributes you want for the disk store named "DEFAULT". The 
only thing you can’t change about the default disk store is the name.
-
-The following example changes the default disk store to allow manual 
compaction and to use multiple, non-default directories:
-
-cache.xml:
-
-``` pre
-<disk-store name="DEFAULT" allow-force-compaction="true">
-     <disk-dirs>
-        <disk-dir>/export/thor/customerData</disk-dir>
-        <disk-dir>/export/odin/customerData</disk-dir>
-        <disk-dir>/export/embla/customerData</disk-dir>
-     </disk-dirs>
-</disk-store>
-```
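-
-To verify the resulting settings on a running member, you can describe the 
default disk store; the member name below is hypothetical:
-
-``` pre
-gfsh>describe disk-store --member=server1 --name=DEFAULT
-```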
-
-<a 
id="using_the_default_disk_store__section_C61BA9AD9A6442DA934C2B20C75E0996"></a>
-

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/managing/disk_storage/validating_disk_store.html.md.erb
----------------------------------------------------------------------
diff --git a/geode-docs/managing/disk_storage/validating_disk_store.html.md.erb 
b/geode-docs/managing/disk_storage/validating_disk_store.html.md.erb
deleted file mode 100644
index c47c515..0000000
--- a/geode-docs/managing/disk_storage/validating_disk_store.html.md.erb
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title:  Validating a Disk Store
----
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one or more
-contributor license agreements.  See the NOTICE file distributed with
-this work for additional information regarding copyright ownership.
-The ASF licenses this file to You under the Apache License, Version 2.0
-(the "License"); you may not use this file except in compliance with
-the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
--->
-
-<a id="validating_disk_store__section_1782CD93DB6040A2BF52014A6600EA44"></a>
-The `validate offline-disk-store` command verifies the health of your offline 
disk store and gives you information about the regions in it, the total 
entries, and the number of records that would be removed if you compacted the 
store.
-
-Use this command at these times:
-
--   Before compacting an offline disk store to help decide whether it’s 
worth doing.
--   Before restoring or modifying a disk store.
--   Any time you want to be sure the disk store is in good shape.
-
-Example:
-
-``` pre
-gfsh>validate offline-disk-store --name=ds1 --disk-dirs=hostB/bupDirectory
-```
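-
-If validation shows that a compaction would remove a significant number of 
records, a follow-up offline compaction might look like the following sketch, 
reusing the name and directory from the example above:
-
-``` pre
-gfsh>compact offline-disk-store --name=ds1 --disk-dirs=hostB/bupDirectory
-```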
-
-
