http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/configuring/running/firewalls_ports.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/configuring/running/firewalls_ports.html.md.erb b/geode-docs/configuring/running/firewalls_ports.html.md.erb deleted file mode 100644 index 11e4554..0000000 --- a/geode-docs/configuring/running/firewalls_ports.html.md.erb +++ /dev/null @@ -1,246 +0,0 @@ ---- -title: Firewalls and Ports ---- - -<!-- -Licensed to the Apache Software Foundation (ASF) under one or more -contributor license agreements. See the NOTICE file distributed with -this work for additional information regarding copyright ownership. -The ASF licenses this file to You under the Apache License, Version 2.0 -(the "License"); you may not use this file except in compliance with -the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. ---> - -Make sure your port settings are configured correctly for firewalls. - -<a id="concept_5ED182BDBFFA4FAB89E3B81366EBC58E__section_F9C1D7419F954DC1A305C34714C8615C"></a> -There are several different port settings that need to be considered when using firewalls: - -- Port that the cache server listens on. This is configurable using the `cache-server` element in cache.xml, on the CacheServer class in Java APIs, and as a command line option to the `gfsh start server` command. - - By default, if not otherwise specified, Geode clients and servers discover each other on a pre-defined port (**40404**) on the localhost. - -- Locator port. Geode clients can use the locator to automatically discover cache servers. The locator port is configurable as a command-line option to the `gfsh start locator` command. Locators are used in the peer-to-peer cache deployments to discover other processes. They can be used by clients to locate servers as an alternative to configuring clients with a collection of server addresses and ports. - - By default, if not otherwise specified, Geode locators use the default multicast port **10334**. - -- Since locators start up the distributed system, locators must also have their ephemeral port range and TCP port accessible to other members through the firewall. -- For clients, you configure the client to connect to servers using the client's pool configuration. The client's pool configuration has two options: you can create a pool with either a list of server elements or a list of locator elements. For each element, you specify the host and port. The ports specified must be made accessible through your firewall. - -## **Limiting Ephemeral Ports for Peer-to-Peer Membership** - -By default, Geode assigns *ephemeral* ports, that is, temporary ports assigned from a designated range, which can encompass a large number of possible ports. When a firewall is present, the ephemeral port range usually must be limited to a much smaller number, for example six. If you are configuring P2P communications through a firewall, you must also set the TCP port for each process and ensure that UDP traffic is allowed through the firewall. 
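For example, a peer member's `gemfire.properties` might pin the TCP port and narrow the ephemeral range to a small block that the firewall opens. This is only a sketch; the host name and port numbers are placeholders to be replaced with values that match your firewall rules:

``` pre
# gemfire.properties -- example values only
locators=host1[10334]
mcast-port=0
# fixed TCP port for cache communications
tcp-port=11111
# small ephemeral range for unicast UDP messaging and TCP failure detection
membership-port-range=15000-15005
```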
- -## **Properties for Firewall and Port Configuration** - -This table contains properties potentially involved in firewall behavior, with a brief description of each property. Click on a property name for a link to the reference topic. - -<table> -<colgroup> -<col width="33%" /> -<col width="33%" /> -<col width="34%" /> -</colgroup> -<thead> -<tr class="header"> -<th><strong>Configuration area</strong></th> -<th><strong>Property or Setting</strong></th> -<th><strong>Definition</strong></th> -</tr> -</thead> -<tbody> -<tr class="odd"> -<td>peer-to-peer config</td> -<td><p><code class="ph codeph">conserve-sockets</code></p></td> -<td><p>Specifies whether sockets are shared by the system member's threads.</p></td> -</tr> -<tr class="even"> -<td>peer-to-peer config</td> -<td><p><code class="ph codeph">locators</code></p></td> -<td><p>The list of locators used by system members. The list must be configured consistently for every member of the distributed system.</p></td> -</tr> -<tr class="odd"> -<td>peer-to-peer config</td> -<td><p><code class="ph codeph">mcast-address</code></p></td> -<td><p>Address used to discover other members of the distributed system. Only used if mcast-port is non-zero. This attribute must be consistent across the distributed system.</p></td> -</tr> -<tr class="even"> -<td>peer-to-peer config</td> -<td><p><code class="ph codeph">mcast-port</code></p></td> -<td><p>Port used, along with the mcast-address, for multicast communication with other members of the distributed system. If zero, multicast is disabled for data distribution.</p></td> -</tr> -<tr class="odd"> -<td>peer-to-peer config</td> -<td><p><code class="ph codeph">membership-port-range</code></p></td> -<td><p>The range of ephemeral ports available for unicast UDP messaging and for TCP failure detection in the peer-to-peer distributed system.</p></td> -</tr> -<tr class="even"> -<td>peer-to-peer config</td> -<td><p><code class="ph codeph">tcp-port</code></p></td> -<td><p>The TCP port to listen on for cache communications.</p></td> -</tr> -</tbody> -</table> - -<table> -<colgroup> -<col width="33%" /> -<col width="33%" /> -<col width="33%" /> -</colgroup> -<thead> -<tr class="header"> -<th>Configuration Area</th> -<th><strong>Property or Setting</strong></th> -<th><strong>Definition</strong></th> -</tr> -</thead> -<tbody> -<tr class="odd"> -<td>cache server config</td> -<td><p><code class="ph codeph">hostname-for-clients</code></p></td> -<td><p>Hostname or IP address to pass to the client as the location where the server is listening.</p></td> -</tr> -<tr class="even"> -<td>cache server config</td> -<td><p><code class="ph codeph">max-connections</code></p></td> -<td><p>Maximum number of client connections for the server. 
When the maximum is reached, the server refuses additional client connections.</p></td> -</tr> -<tr class="odd"> -<td>cache server config</td> -<td><p><code class="ph codeph">port</code> (cache.xml) or <code class="ph codeph">--port</code> parameter to the <code class="ph codeph">gfsh start server</code> command</p></td> -<td><p>Port that the server listens on for client communication.</p></td> -</tr> -</tbody> -</table> - -## Default Port Configurations - -<table> -<colgroup> -<col width="33%" /> -<col width="33%" /> -<col width="33%" /> -</colgroup> -<thead> -<tr class="header"> -<th><p><strong>Port Name</strong></p></th> -<th>Related Configuration Setting</th> -<th><p><strong>Default Port</strong></p></th> -</tr> -</thead> -<tbody> -<tr class="odd"> -<td><p>Cache Server</p></td> -<td><p><code class="ph codeph">port</code> (cache.xml)</p></td> -<td>40404</td> -</tr> -<tr class="even"> -<td><p>HTTP</p></td> -<td><code class="ph codeph">http-service-port</code></td> -<td>7070</td> -</tr> -<tr class="odd"> -<td><p>Locator</p></td> -<td><code class="ph codeph">start-locator</code> (for embedded locators) or <code class="ph codeph">--port</code> parameter to the <code class="ph codeph">gfsh start locator</code> command.</td> -<td><em>if not specified upon startup or in the start-locator property, uses default multicast port 10334</em></td> -</tr> -<tr class="even"> -<td><p>Membership Port Range</p></td> -<td><code class="ph codeph">membership-port-range</code></td> -<td>1024 to 65535</td> -</tr> -<tr class="odd"> -<td><p>Memcached Port</p></td> -<td><code class="ph codeph">memcached-port</code></td> -<td><em>not set</em></td> -</tr> -<tr class="even"> -<td><p>Multicast</p></td> -<td><code class="ph codeph">mcast-port</code></td> -<td>10334</td> -</tr> -<tr class="odd"> -<td><p>RMI</p></td> -<td><code class="ph codeph">jmx-manager-port</code></td> -<td>1099</td> -</tr> -<tr class="even"> -<td><p>TCP</p></td> -<td><code class="ph codeph">tcp-port</code></td> -<td>ephemeral port</td> -</tr> -</tbody> -</table> - -## **Properties for Firewall and Port Configuration in Multi-Site (WAN) Configurations** - -Each gateway receiver uses a port to listen for incoming communication from one or more gateway senders communication between Geode sites. The full range of port values for gateway receivers must be made accessible within the firewall from across the WAN. - -This table contains properties potentially involved in firewall behavior, with a brief description of each property. Click on a property name for a link to the [gemfire.properties and gfsecurity.properties (Geode Properties)](../../reference/topics/gemfire_properties.html#gemfire_properties) reference topic. 
- -<table> -<colgroup> -<col width="33%" /> -<col width="33%" /> -<col width="33%" /> -</colgroup> -<thead> -<tr class="header"> -<th>Configuration Area</th> -<th><strong>Property or Setting</strong></th> -<th><strong>Definition</strong></th> -</tr> -</thead> -<tbody> -<tr class="odd"> -<td>multi-site (WAN) config</td> -<td><p>[hostname-for-senders](../../reference/topics/gfe_cache_xml.html#gateway-receiver)</p></td> -<td><p>Hostname or IP address of the gateway receiver used by gateway senders to connect.</p></td> -</tr> -<tr class="even"> -<td>multi-site (WAN) config</td> -<td>[remote-locators](../../reference/topics/gemfire_properties.html#gemfire_properties)</td> -<td><p>List of locators (and their ports) that are available on the remote WAN site.</p></td> -</tr> -<tr class="odd"> -<td>multi-site (WAN) config</td> -<td><p>[start-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) and [end-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) (cache.xml) or <code class="ph codeph">--start-port</code> and <code class="ph codeph">--end-port</code> parameters to the <code class=" ph codeph">gfsh start gateway receiver</code> command</p></td> -<td><p>Port range that the gateway receiver can use to listen for gateway sender communication.</p></td> -</tr> -</tbody> -</table> - -## Default Port Configuration - -<table> -<colgroup> -<col width="33%" /> -<col width="33%" /> -<col width="33%" /> -</colgroup> -<thead> -<tr class="header"> -<th><p><strong>Port Name</strong></p></th> -<th>Related Configuration Setting</th> -<th><p><strong>Default Port</strong></p></th> -</tr> -</thead> -<tbody> -<tr class="odd"> -<td><p>Gateway Receiver</p></td> -<td><p>[start-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) and [end-port](../../reference/topics/gfe_cache_xml.html#gateway-receiver) (cache.xml) or <code class="ph codeph">--start-port</code> and <code class="ph codeph">--end-port</code> parameters to the <code class="ph codeph">gfsh start gateway receiver</code> command</p></td> -<td><em>not set</em> Each gateway receiver uses a single port to accept connections from gateway senders in other systems. However, the configuration of a gateway receiver specifies a range of possible port values to use. Geode selects an available port from the specified range when the gateway receiver starts. Configure your firewall so that the full range of possible port values is accessible by gateway senders from across the WAN.</td> -</tr> -</tbody> -</table> - -
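To illustrate the gateway receiver settings summarized above, a `cache.xml` declaration such as the following sketch (the port values are placeholders) causes Geode to pick one free port between 5500 and 5510 when the receiver starts, so the firewall must admit gateway sender traffic on that entire range:

``` pre
<cache>
  <!-- Geode selects one available port from this range at receiver startup;
       open the full range to gateway senders across the WAN -->
  <gateway-receiver start-port="5500" end-port="5510" />
</cache>
```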
http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/configuring/running/managing_output_files.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/configuring/running/managing_output_files.html.md.erb b/geode-docs/configuring/running/managing_output_files.html.md.erb deleted file mode 100644 index b194f79..0000000 --- a/geode-docs/configuring/running/managing_output_files.html.md.erb +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Managing System Output Files ---- - -<!-- -Licensed to the Apache Software Foundation (ASF) under one or more -contributor license agreements. See the NOTICE file distributed with -this work for additional information regarding copyright ownership. -The ASF licenses this file to You under the Apache License, Version 2.0 -(the "License"); you may not use this file except in compliance with -the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. ---> - -Geode output files are optional and can become quite large. Work with your system administrator to determine where to place them to avoid interfering with other system activities. - -<a id="managing_output_files__section_F0CEA4299D274801B9AB700C074F178F"></a> -Geode includes several types of optional output files as described below. - -- **Log Files**. Comprehensive logging messages to help you confirm system configuration and to debug problems in configuration and code. Configure log file behavior in the `gemfire.properties` file. See [Logging](../../managing/logging/logging.html#concept_30DB86B12B454E168B80BB5A71268865). - -- **Statistics Archive Files**. Standard statistics for caching and distribution activities, which you can archive on disk. Configure statistics collection and archival in the `gemfire.properties`, `archive-disk-space-limit` and `archive-file-size-limit`. See the [Reference](../../reference/book_intro.html#reference). - -- **Disk Store Files**. Hold persistent and overflow data from the cache. You can configure regions to persist data to disk for backup purposes or overflow to disk to control memory use. The subscription queues that servers use to send events to clients can be overflowed to disk. Gateway sender queues overflow to disk automatically and can be persisted for high availability. Configure these through the `cache.xml`. See [Disk Storage](../../managing/disk_storage/chapter_overview.html). - - http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/configuring/running/running_the_cacheserver.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/configuring/running/running_the_cacheserver.html.md.erb b/geode-docs/configuring/running/running_the_cacheserver.html.md.erb deleted file mode 100644 index 374839b..0000000 --- a/geode-docs/configuring/running/running_the_cacheserver.html.md.erb +++ /dev/null @@ -1,199 +0,0 @@ ---- -title: Running Geode Server Processes ---- - -<!-- -Licensed to the Apache Software Foundation (ASF) under one or more -contributor license agreements. See the NOTICE file distributed with -this work for additional information regarding copyright ownership. 
-The ASF licenses this file to You under the Apache License, Version 2.0 -(the "License"); you may not use this file except in compliance with -the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. ---> - -A Geode server is a process that runs as a long-lived, configurable member of a client/server system. - -<a id="running_the_cacheserver__section_6C2B495518C04064A181E7917CA81FC1"></a> -The Geode server is used primarily for hosting long-lived data regions and for running standard Geode processes such as the server in a client/server configuration. You can start and stop servers using the following methods: - -- The `gfsh` tool allows you to manage Geode server processes from the command line. -- You can also start, stop, and manage Geode servers through the `org.apache.geode.distributed.ServerLauncher` API. The `ServerLauncher` API can only be used for Geode Servers that were started with `gfsh` or with the `ServerLauncher` class itself. See the JavaDocs for additional specifics on using the `ServerLauncher` API. - -## <a id="running_the_cacheserver__section_E15FB1B039CE4F6CB2E4B5618D7ECAA1" class="no-quick-link"></a>Default Server Configuration and Log Files - -The `gfsh` utility uses a working directory for its configuration files and log files. These are the defaults and configuration options: - -- When you start a standalone server using `gfsh`, `gfsh` will automatically load the required JAR files `$GEMFIRE/lib/server-dependencies.jar` and `$JAVA_HOME/lib/tools.jar` into the CLASSPATH of the JVM process. If you start a standalone server using the ServerLauncher API, you must specify `$GEMFIRE/lib/server-dependencies.jar` inside your command to launch the process. For more information on CLASSPATH settings in Geode, see [Setting Up the CLASSPATH](../../getting_started/setup_classpath.html). -- Servers are configured like any other Geode process, with `gemfire.properties` and shared cluster configuration files. A server is not programmable except through application plug-ins. Typically, you provide the `gemfire.properties` file and the `gfsecurity.properties` file (if you are using a separate, restricted access security settings file). You can also specify a `cache.xml` file in the cache server's working directory. -- By default, a new server started with `gfsh` receives its initial cache configuration from the cluster configuration service, assuming the locator is running the cluster configuration service. If you specify a group when starting the server, the server also receives configurations that apply to a group. The shared configuration consists of `cache.xml` files, `gemfire.properties` files, and deployed jar files. You can disable use of the cluster configuration service by specifying `--use-cluster-configuration=false` when starting the server using `gfsh`. - - See [Overview of the Cluster Configuration Service](../cluster_config/gfsh_persist.html#concept_r22_hyw_bl). - -- If you are using the Spring Framework, you can specify a Spring ApplicationContext XML file when starting up your server in `gfsh` by using the `--spring-xml-location` command-line option.
This option allows you to bootstrap your Geode server process with your Spring application's configuration. See [Spring documentation](http://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/resources.html#resources-app-ctx) for more information on this file. -- For logging output, log file output defaults to `server_name.log` in the cache server's working directory. If you restart a server with the same server name, the existing *server\_name*.log file is automatically renamed for you (for example, `server1-01-01.log` or `server1-02-01.log`). You can modify the level of logging details in this file by specifying a level in the `--log-level` argument when starting up the server. -- By default, the server will start in a subdirectory (named after the server's specified `--name`) under the directory where `gfsh` is executed. This subdirectory is considered the current working directory. You can also specify a different working directory when starting the cache server in `gfsh`. -- By default, a server process that has been shutdown and disconnected due to a network partition event or member unresponsiveness will restart itself and automatically try to reconnect to the existing distributed system. See [Handling Forced Cache Disconnection Using Autoreconnect](../../managing/autoreconnect/member-reconnect.html#concept_22EE6DDE677F4E8CAF5786E17B4183A9) for more details. -- You can pass JVM parameters to the server's JVM by using the `--J=-Dproperty.name=value` upon server startup. These parameters can be Java properties or Geode configuration properties such as `gemfire.jmx-manager`. For example: - - ``` pre - gfsh>start server --name=server1 --J=-Dgemfire.jmx-manager=true \ - --J=-Dgemfire.jmx-manager-start=true --J=-Dgemfire.http-port=8080 - ``` - -- We recommend that you do not use the `-XX:+UseCompressedStrings` and `-XX:+UseStringCache` JVM configuration properties when starting up servers. These JVM options can cause issues with data corruption and compatibility. 
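For instance, the Spring bootstrap option described in the list above could be combined with a server start as follows; the context file path shown here is a placeholder for your own Spring configuration:

``` pre
gfsh>start server --name=server1 \
--spring-xml-location=./spring/spring-gemfire-server-context.xml
```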
- -## <a id="running_the_cacheserver__section_07001480D33745139C3707EDF8166D86" class="no-quick-link"></a>Start the Server - -The startup syntax for Geode servers in `gfsh` is: - -``` pre -start server --name=value [--assign-buckets(=value)] [--bind-address=value] - [--cache-xml-file=value] [--classpath=value] [--disable-default-server(=value)] - [--disable-exit-when-out-of-memory(=value)] [--enable-time-statistics(=value)] - [--force(=value)] [--include-system-classpath(=value)] [--properties-file=value] - [--security-properties-file=value] - [--group=value] [--locators=value] [--locator-wait-time=value] [--log-level=value] - [--mcast-address=value] [--mcast-port=value] [--memcached-port=value] - [--memcached-protocol=value] [--rebalance(=value)] [--server-bind-address=value] - [--server-port=value] [--spring-xml-location=value] - [--statistic-archive-file=value] [--dir=value] [--initial-heap=value] - [--max-heap=value] [--use-cluster-configuration(=value)] [--J=value(,value)*] - [--critical-heap-percentage=value] [--critical-off-heap-percentage=value] - [--eviction-heap-percentage=value] [--eviction-off-heap-percentage=value] - [--hostname-for-clients=value] [--max-connections=value] - [--message-time-to-live=value] [--max-message-count=value] [--max-threads=value] - [--socket-buffer-size=value] [--lock-memory=value] [--off-heap-memory-size=value] -``` - -**Note:** -When both `--max-heap` and `--initial-heap` are specified during server startup, additional GC parameters are specified internally by Geode's Resource Manager. If you do not want the additional default GC properties set by the Resource Manager, then use the `-Xms` & `-Xmx` JVM options. See [Controlling Heap Use with the Resource Manager](../../managing/heap_use/heap_management.html#configuring_resource_manager) for more information. - -The following `gfsh start server` start sequences specify a `cache.xml` file for cache configuration, and use different incoming client connection ports: - -``` pre -gfsh>start server --name=server1 --mcast-port=10338 \ ---cache-xml-file=../ServerConfigs/cache.xml --server-port=40404 - -gfsh>start server --name=server2 --mcast-port=10338 \ ---cache-xml-file=../ServerConfigs/cache.xml --server-port=40405 -``` - -Here is a portion of a `gemfire.properties` file that sets the location of a`cache.xml` file for the server and sets the mcast-port: - -``` pre -mcast-port=10338 -cache-xml-file=D:\gfeserver\cacheCS.xml -``` - -To start the server using this `gemfire.properties` file, enter: - -``` pre -gfsh>start server --name=server1 \ ---properties-file=D:\gfeserver\gemfire.properties -``` - -To start a server with an embedded JMX Manager, you can enter the following command: - -``` pre -gfsh>start server --name=server2 \ ---J=-Dgemfire.jmx-manager=true --J=-Dgemfire.jmx-manager-start=true -``` - -To start a server and provide JVM configuration settings, you can issue a command like the following: - -``` pre -gfsh>start server --name=server3 \ ---J=-Xms80m,-Xmx80m --J=-XX:+UseConcMarkSweepGC,-XX:CMSInitiatingOccupancyFraction=65 -``` - -## Start the Server Programmatically - -Use `org.apache.geode.distributed.ServerLauncher` API to start the cache server process inside your code. Use the `ServerLauncher.Builder` class to construct an instance of the `ServerLauncher`, and then use the `start()` method to start the server service. The other methods in the `ServerLauncher` class provide status information about the server and allow you to stop the server. 
- -``` pre -import org.apache.geode.distributed.ServerLauncher; - - public class MyEmbeddedServer { - - public static void main(String[] args){ - ServerLauncher serverLauncher = new ServerLauncher.Builder() - .setMemberName("server1") - .setServerPort(40405) - .set("jmx-manager", "true") - .set("jmx-manager-start", "true") - .build(); - - serverLauncher.start(); - - System.out.println("Cache server successfully started"); - } - } -``` - -## <a id="running_the_cacheserver__section_F58F229D5C7048E9915E0EC470F9A923" class="no-quick-link"></a>Check Server Status - -If you are connected to the distributed system in `gfsh`, you can check the status of a running cache server by providing the server name. For example: - -``` pre -gfsh>status server --name=server1 -``` - -If you are not connected to a distributed system, you can check the status of a local cache server by providing the process ID or the server's current working directory. For example: - -``` pre -gfsh>status server --pid=2484 -``` - -or - -``` pre -% gfsh status server --dir=<server_working_directory> -``` - -where <*server\_working\_directory*> corresponds to the local working directory where the cache server is running. - -If successful, the command returns the following information (with the JVM arguments that were provided at startup): - -``` pre -% gfsh status server --dir=server4 -Server in /home/user/server4 on ubuntu.local[40404] as server4 is currently online. -Process ID: 3324 -Uptime: 1 minute 5 seconds -GemFire Version: 8.0.0 -Java Version: 1.7.0_65 -Log File: /home/user/server4/server4.log -JVM Arguments: -... -``` - -## <a id="running_the_cacheserver__section_0E4DDED6AB784B0CAFBAD538B227F487" class="no-quick-link"></a>Stop Server - -If you are connected to the distributed system in `gfsh`, you can stop a running cache server by providing the server name. For example: - -``` pre -gfsh>stop server --name=server1 -``` - -If you are not connected to a distributed system, you can stop a local cache server by specify the server's current working directory or the process ID. For example: - -``` pre -gfsh>stop server --pid=2484 -``` - -or - -``` pre -gfsh>stop server --dir=<server_working_directory> -``` - -where <*server\_working\_directory*> corresponds to the local working directory where the cache server is running. - -You can also use the `gfsh` `shutdown` command to shut down all cache servers in an orderly fashion. This is useful if you are using persistent regions. See [Starting Up and Shutting Down Your System](starting_up_shutting_down.html) for more details. http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/configuring/running/running_the_locator.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/configuring/running/running_the_locator.html.md.erb b/geode-docs/configuring/running/running_the_locator.html.md.erb deleted file mode 100644 index a8c2d7d..0000000 --- a/geode-docs/configuring/running/running_the_locator.html.md.erb +++ /dev/null @@ -1,257 +0,0 @@ ---- -title: Running Geode Locator Processes ---- - -<!-- -Licensed to the Apache Software Foundation (ASF) under one or more -contributor license agreements. See the NOTICE file distributed with -this work for additional information regarding copyright ownership. -The ASF licenses this file to You under the Apache License, Version 2.0 -(the "License"); you may not use this file except in compliance with -the License. 
You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. ---> - -The locator is a Geode process that tells new, connecting members where running members are located and provides load balancing for server use. - -<a id="running_the_locator__section_E9C98E8756524552BEA9B0CA49A2069E"></a> -You can run locators as peer locators, server locators, or both: - -- Peer locators give joining members connection information to members already running in the locator's distributed system. -- Server locators give clients connection information to servers running in the locator's distributed system. Server locators also monitor server load and send clients to the least-loaded servers. - -By default, locators run as peer and server locators. - -You can run the locator standalone or embedded within another Geode process. Running your locators standalone provides the highest reliability and availability of the locator service as a whole. - -## <a id="running_the_locator__section_0733348268AF4D5F8851B999A6A36C53" class="no-quick-link"></a>Locator Configuration and Log Files - -Locator configuration and log files have the following properties: - -- When you start a standalone locator using `gfsh`, `gfsh` will automatically load the required JAR files (`$GEMFIRE/lib/locator-dependencies.jar`) into the CLASSPATH of the JVM process. If you start a standalone locator using the `LocatorLauncher` API, you must specify `$GEMFIRE/lib/locator-dependencies.jar` inside the command used to launch the locator process. For more information on CLASSPATH settings in Geode, see [CLASSPATH Settings for Geode Processes](../../getting_started/setup_classpath.html). You can modify the CLASSPATH by specifying the `--classpath` parameter. -- Locators are members of the distributed system just like any other member. In terms of `mcast-port` and `locators` configuration, a locator should be configured in the same manner as a server. Therefore, if there are two other locators in the distributed system, each locator should reference the other locators (just like a server member would). For example: - - ``` pre - gfsh> start locator --name=locator1 --port=9009 --mcast-port=0 \ - --locators='host1[9001],host2[9003]' - ``` - -- You can configure locators within the `gemfire.properties` file or by specifying start-up parameters on the command line. If you are specifying the locator's configuration in a properties file, locators require the same `gemfire.properties` settings as other members of the distributed system and the same `gfsecurity.properties` settings if you are using a separate, restricted access security settings file. - - For example, to configure both locators and a multicast port in `gemfire.properties:` - - ``` pre - locators=host1[9001],host2[9003] - mcast-port=0 - ``` - -- There is no cache configuration specific to locators. -- For logging output, the locator creates a log file in its current working directory. Log file output defaults to `locator_name.log` in the locator's working directory. If you restart a locator with a previously used locator name, the existing *locator\_name*.log file is automatically renamed for you (for example, `locator1-01-01.log` or `locator1-02-01.log`). 
You can modify the level of logging details in this file by specifying a level in the `--log-level` argument when starting up the locator. -- By default, a locator will start in a subdirectory (named after the locator) under the directory where `gfsh` is executed. This subdirectory is considered the current working directory. You can also specify a different working directory when starting the locator in `gfsh`. -- By default, a locator that has been shutdown and disconnected due to a network partition event or member unresponsiveness will restart itself and automatically try to reconnect to the existing distributed system. When a locator is in the reconnecting state, it provides no discovery services for the distributed system. See [Handling Forced Cache Disconnection Using Autoreconnect](../../managing/autoreconnect/member-reconnect.html) for more details. - -## <a id="running_the_locator__section_wst_ykb_rr" class="no-quick-link"></a>Locators and the Cluster Configuration Service - -Locators use the cluster configuration service to save configurations that apply to all cluster members, or to members of a specified group. The configurations are saved in the Locator's directory and are propagated to all locators in a distributed system. When you start servers using `gfsh`, the servers receive the group-level and cluster-level configurations from the locators. - -See [Overview of the Cluster Configuration Service](../cluster_config/gfsh_persist.html). - -## <a id="running_the_locator__section_FF25228E30624E04ACA8784A2183D585" class="no-quick-link"></a>Start the Locator - -Use the following guidelines to start the locator: - -- **Standalone locator**. Start a standalone locator in one of these ways: - - Use the `gfsh` command-line utility. See [`gfsh` (Geode SHell)](../../tools_modules/gfsh/chapter_overview.html) for more information on using `gfsh`. For example: - - ``` pre - gfsh>start locator --name=locator1 - - gfsh> start locator --name=locator2 --bind-address=192.0.2.0 --port=13489 - ``` - - - Start the locator using the `main` method in the `org.apache.geode.distributed.LocatorLauncher` class and the Java executable. For example: - - ``` pre - working/directory/of/Locator/process$java -server \ - -classpath "$GEMFIRE/lib/locator-dependencies.jar:/path/to/application/classes.jar" \ - org.apache.geode.distributed.LocatorLauncher start Locator1 --port=11235 \ - --redirect-output - ``` - - Specifically, you use the `LocatorLauncher` class API to run an embedded Locator service in Java application processes that you have created. The directory where you execute the java command becomes the working directory for the locator process. - - - When starting up multiple locators, do not start them up in parallel (in other words, simultaneously). As a best practice, you should wait approximately 30 seconds for the first locator to complete startup before starting any other locators. To check the successful startup of a locator, check for locator log files. To view the uptime of a running locator, you can use the `gfsh status locator` command. - -- **Embedded (colocated) locator**. Manage a colocated locator at member startup or through the APIs: - - Use the `gemfire.properties` `start-locator` setting to start the locator automatically inside your Geode member. See the [Reference](../../reference/book_intro.html#reference). The locator stops automatically when the member exits. 
The property has the following syntax: - - ``` pre - #gemfire.properties - start-locator=[address]port[,server={true|false},peer={true|false}] - ``` - - Example: - - ``` pre - #gemfire.properties - start-locator=13489 - ``` - - - Use `org.apache.geode.distributed.LocatorLauncher` API to start the locator inside your code. Use the `LocatorLauncher.Builder` class to construct an instance of the `LocatorLauncher`, and then use the `start()` method to start a Locator service embedded in your Java application process. The other methods in the `LocatorLauncher` class provide status information about the locator and allow you to stop the locator. - - ``` pre - import org.apache.geode.distributed.LocatorLauncher; - - public class MyEmbeddedLocator { - - public static void main(String[] args){ - LocatorLauncher locatorLauncher = new LocatorLauncher.Builder() - .setMemberName("locator1") - .setPort(13489) - .build(); - - locatorLauncher.start(); - - System.out.println("Locator successfully started"); - } - } - ``` - - Here's another example that embeds the locator within an application, starts it and then checks the status of the locator before allowing other members to access it: - - ``` pre - package example; - - import ... - - class MyApplication implements Runnable { - - private final LocatorLauncher locatorLauncher; - - public MyApplication(final String... args) { - validateArgs(args); - - locatorLauncher = new LocatorLauncher.Builder() - .setMemberName(args[0]) - .setPort(Integer.parseInt(args[1])) - .setRedirectOutput(true) - .build(); - } - - protected void validateArgs(final String[] args) { - ... - } - - public void run() { - ... - - // start the Locator in-process - locatorLauncher.start(); - - // wait for Locator to start and be ready to accept member (client) connections - locatorLauncher.waitOnStatusResponse(30, 5, TimeUnit.SECONDS); - - ... - } - - public static void main(final String... args) { - new MyApplication(args).run(); - } - - } - ``` - - Then to execute the application, you would run: - - ``` pre - /working/directory/of/MyApplication$ java \ - -server -classpath "$GEMFIRE/lib/locator-dependencies.jar:/path/to/application/classes.jar" \ - example.MyApplication Locator1 11235 - ``` - - The directory where you execute the java command becomes the working directory for the locator process. - -## <a id="running_the_locator__section_F58F229D5C7048E9915E0EC470F9A923" class="no-quick-link"></a>Check Locator Status - -If you are connected to the distributed system in `gfsh`, you can check the status of a running Locator by providing the Locator name. For example: - -``` pre -gfsh>status locator --name=locator1 -``` - -If you are not connected to a distributed system, you can check the status of a local Locator by providing the process ID, the Locator's hostname and port, or the Locator's current working directory. For example: - -``` pre -gfsh>status locator --pid=2986 -``` - -or - -``` pre -gfsh>status locator --host=host1 --port=1035 -``` - -or - -``` pre -$ gfsh status locator --dir=<locator_working_directory> -``` - -where <*locator\_working\_directory*> corresponds to the local working directory where the locator is running. - -If successful, the command returns the following information (with the JVM arguments that were provided at startup): - -``` pre -$ gfsh status locator --dir=locator1 -Locator in /home/user/locator1 on ubuntu.local[10334] as locator1 is currently online.
-Process ID: 2359 -Uptime: 17 minutes 3 seconds -GemFire Version: 8.0.0 -Java Version: 1.7.0_65 -Log File: /home/user/locator1/locator1.log -JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false - -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806 -Class-Path: /home/user/Pivotal_GemFire_800_b48319_Linux/lib/locator-dependencies.jar:/usr/local/java/lib/tools.jar - -Cluster configuration service is up and running. -``` - -## <a id="running_the_locator__section_0E4DDED6AB784B0CAFBAD538B227F487" class="no-quick-link"></a>Stop the Locator - -If you are connected to the distributed system in `gfsh`, you can stop a running locator by providing the locator name. For example: - -``` pre -gfsh>stop locator --name=locator1 -``` - -If you are not connected to a distributed system, you can stop a local locator by specifying the locator's process ID or the locator's current working directory. For example: - -``` pre -gfsh>stop locator --pid=2986 -``` - -or - -``` pre -gfsh>stop locator --dir=<locator_working_directory> -``` - -where <*locator\_working\_directory*> corresponds to the local working directory where the locator is running. - -## Locators and Multi-Site (WAN) Deployments - -If you use a multi-site (WAN) configuration, you can connect a locator to a remote site when starting the locator. - -To connect a new locator process to a remote locator in a WAN configuration, specify the following at startup: - -``` pre -gfsh> start locator --name=locator1 --port=9009 --mcast-port=0 \ ---J='-Dgemfire.remote-locators=192.0.2.0[9009],198.51.100.0[9009]' -``` http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/configuring/running/starting_up_shutting_down.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/configuring/running/starting_up_shutting_down.html.md.erb b/geode-docs/configuring/running/starting_up_shutting_down.html.md.erb deleted file mode 100644 index 01b191d..0000000 --- a/geode-docs/configuring/running/starting_up_shutting_down.html.md.erb +++ /dev/null @@ -1,146 +0,0 @@ ---- -title: Starting Up and Shutting Down Your System ---- - -<!-- -Licensed to the Apache Software Foundation (ASF) under one or more -contributor license agreements. See the NOTICE file distributed with -this work for additional information regarding copyright ownership. -The ASF licenses this file to You under the Apache License, Version 2.0 -(the "License"); you may not use this file except in compliance with -the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. ---> - -Determine the proper startup and shutdown procedures, and write your startup and shutdown scripts. - -Well-designed procedures for starting and stopping your system can speed startup and protect your data. The processes you need to start and stop include server and locator processes and your other Geode applications, including clients. The procedures you use depend in part on your systemâs configuration and the dependencies between your system processes. 
- -Use the following guidelines to create startup and shutdown procedures and scripts. Some of these instructions use [`gfsh` (Geode SHell)](../../tools_modules/gfsh/chapter_overview.html). - -## <a id="starting_up_shutting_down__section_3D111558326D4A38BE48C17D44BB66DB" class="no-quick-link"></a>Starting Up Your System - -You should follow certain order guidelines when starting your Geode system. - -Start server-distributed systems before you start their client applications. In each distributed system, follow these guidelines for member startup: - -- Start locators first. See [Running Geode Locator Processes](running_the_locator.html) for examples of locator start up commands. -- Start cache servers before the rest of your processes unless the implementation requires that other processes be started ahead of them. See [Running Geode Server Processes](running_the_cacheserver.html) for examples of server start up commands. -- If your distributed system uses both persistent replicated and non-persistent replicated regions, you should start up all the persistent replicated members in parallel before starting the non-persistent regions. This way, persistent members will not delay their startup for other persistent members with later data. -- For a system that includes persistent regions, see [Start Up and Shut Down with Disk Stores](../../managing/disk_storage/starting_system_with_disk_stores.html). -- If you are running producer processes and consumer or event listener processes, start the consumers first. This ensures the consumers and listeners do not miss any notifications or updates. -- If you are starting up your locators and peer members all at once, you can use the `locator-wait-time` property (in seconds) upon process start up. This timeout allows peers to wait for the locators to finish starting up before attempting to join the distributed system. If a process has been configured to wait for a locator to start, it will log an info-level message - - > `GemFire startup was unable to contact a locator. Waiting for one to start. Configured locators are frodo[12345],pippin[12345]. ` - - The process will then sleep for a second and retry until it either connects or the number of seconds specified in `locator-wait-time` has elapsed. By default, `locator-wait-time` is set to zero meaning that a process that cannot connect to a locator upon startup will throw an exception. - -**Note:** -You can optionally override the default timeout period for shutting down individual processes. This override setting must be specified during member startup. See [Shutting Down the System](#starting_up_shutting_down__section_mnx_4cp_cv) for details. - -## <a id="starting_up_shutting_down__section_2F8ABBFCE641463C8A8721841407993D" class="no-quick-link"></a>Starting Up After Losing Data on Disk - -This information pertains to catastrophic loss of Geode disk store files. If you lose disk store files, your next startup may hang, waiting for the lost disk stores to come back online. If your system hangs at startup, use the `gfsh` command `show missing-disk-store` to list missing disk stores and, if needed, revoke missing disk stores so your system startup can complete. You must use the Disk Store ID to revoke a disk store. 
These are the two commands: - -``` pre -gfsh>show missing-disk-stores - -Disk Store ID | Host | Directory ------------------------------------- | --------- | ------------------------------------- -60399215-532b-406f-b81f-9b5bd8d1b55a | excalibur | /usr/local/gemfire/deploy/disk_store1 - -gfsh>revoke missing-disk-store --id=60399215-532b-406f-b81f-9b5bd8d1b55a -``` - -**Note:** -This `gfsh` commands require that you are connected to the distributed system via a JMX Manager node. - -## <a id="starting_up_shutting_down__section_mnx_4cp_cv" class="no-quick-link"></a>Shutting Down the System - -Shut down your Geode system by using either the `gfsh` `shutdown` command or by shutting down individual members one at a time. - -## <a id="starting_up_shutting_down__section_0EB4DDABB6A348BA83B786EEE7C84CF1" class="no-quick-link"></a>Using the shutdown Command - -If you are using persistent regions, (members are persisting data to disk), you should use the `gfsh` `shutdown` command to stop the running system in an orderly fashion. This command synchronizes persistent partitioned regions before shutting down, which makes the next startup of the distributed system as efficient as possible. - -If possible, all members should be running before you shut them down so synchronization can occur. Shut down the system using the following `gfsh` command: - -``` pre -gfsh>shutdown -``` - -By default, the shutdown command will only shut down data nodes. If you want to shut down all nodes including locators, specify the `--include-locators=true` parameter. For example: - -``` pre -gfsh>shutdown --include-locators=true -``` - -This will shut down all locators one by one, shutting down the manager last. - -To shutdown all data members after a grace period, specify a time-out option (in seconds). - -``` pre -gfsh>shutdown --time-out=60 -``` - -To shutdown all members including locators after a grace period, specify a time-out option (in seconds). - -``` pre -gfsh>shutdown --include-locators=true --time-out=60 -``` - -## <a id="starting_up_shutting_down__section_A07D40BC118544D0984860A3B4A5CB29" class="no-quick-link"></a>Shutting Down System Members Individually - -If you are not using persistent regions, you can shut down the distributed system by shutting down each member in the reverse order of their startup. (See [Starting Up Your System](#starting_up_shutting_down__section_3D111558326D4A38BE48C17D44BB66DB) for the recommended order of member startup.) - -Shut down the distributed system members according to the type of member. For example, use the following mechanisms to shut down members: - -- Use the appropriate mechanism to shut down any Geode-connected client applications that are running in the distributed system. -- Shut down any cache servers. To shut down a server, issue the following `gfsh` command: - - ``` pre - gfsh>stop server --name=<...> - ``` - - or - - ``` pre - gfsh>stop server --dir=<server_working_dir> - ``` - -- Shut down any locators. To shut down a locator, issue the following `gfsh` command: - - ``` pre - gfsh>stop locator --name=<...> - ``` - - or - - ``` pre - gfsh>stop locator --dir=<locator_working_dir> - ``` - -## <a id="starting_up_shutting_down__section_7CF680CF8A924C57A7052AE2F975DA81" class="no-quick-link"></a>Option for System Member Shutdown Behavior - -The `DISCONNECT_WAIT` command line argument sets the maximum time for each individual step in the shutdown process. If any step takes longer than the specified amount, it is forced to end. 
Each operation is given this grace period, so the total length of time the cache member takes to shut down depends on the number of operations and the `DISCONNECT_WAIT` setting. During the shutdown process, Geode produces messages such as: - -``` pre -Disconnect listener still running -``` - -The `DISCONNECT_WAIT` default is 10000 milliseconds. - -To change it, set this system property on the Java command line used for member startup. For example: - -``` pre -gfsh>start server --J=-DDistributionManager.DISCONNECT_WAIT=<milliseconds> -``` - -Each process can have different `DISCONNECT_WAIT` settings. http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/book_intro.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/developing/book_intro.html.md.erb b/geode-docs/developing/book_intro.html.md.erb deleted file mode 100644 index 8086b7a..0000000 --- a/geode-docs/developing/book_intro.html.md.erb +++ /dev/null @@ -1,74 +0,0 @@ ---- -title: Developing with Apache Geode ---- - -<!-- -Licensed to the Apache Software Foundation (ASF) under one or more -contributor license agreements. See the NOTICE file distributed with -this work for additional information regarding copyright ownership. -The ASF licenses this file to You under the Apache License, Version 2.0 -(the "License"); you may not use this file except in compliance with -the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. ---> - -*Developing with Apache Geode* explains main concepts of application programming with Apache Geode. It describes how to plan and implement regions, data serialization, event handling, delta propagation, transactions, and more. - -For information about Geode REST application development, see [Developing REST Applications for Apache Geode](../rest_apps/book_intro.html). - -- **[Region Data Storage and Distribution](../developing/region_options/chapter_overview.html)** - - The Apache Geode data storage and distribution models put your data in the right place at the right time. You should understand all the options for data storage in Geode before you start configuring your data regions. - -- **[Partitioned Regions](../developing/partitioned_regions/chapter_overview.html)** - - In addition to basic region management, partitioned regions include options for high availability, data location control, and data balancing across the distributed system. - -- **[Distributed and Replicated Regions](../developing/distributed_regions/chapter_overview.html)** - - In addition to basic region management, distributed and replicated regions include options for things like push and pull distribution models, global locking, and region entry versions to ensure consistency across Geode members. - -- **[Consistency for Region Updates](../developing/distributed_regions/region_entry_versions.html)** - - Geode ensures that all copies of a region eventually reach a consistent state on all members and clients that host the region, including Geode members that distribute region events. 
- -- **[General Region Data Management](../developing/management_all_region_types/chapter_overview.html)** - - For all regions, you have options to control memory use, back up your data to disk, and keep stale data out of your cache. - -- **[Data Serialization](../developing/data_serialization/chapter_overview.html)** - - Data that you manage in Geode must be serialized and deserialized for storage and transmittal between processes. You can choose among several options for data serialization. - -- **[Events and Event Handling](../developing/events/chapter_overview.html)** - - Geode provides versatile and reliable event distribution and handling for your cached data and system member events. - -- **[Delta Propagation](../developing/delta_propagation/chapter_overview.html)** - - Delta propagation allows you to reduce the amount of data you send over the network by including only changes to objects rather than the entire object. - -- **[Querying](../developing/querying_basics/chapter_overview.html)** - - Geode provides a SQL-like querying language called OQL that allows you to access data stored in Geode regions. - -- **[Continuous Querying](../developing/continuous_querying/chapter_overview.html)** - - Continuous querying continuously returns events that match the queries you set up. - -- **[Transactions](../developing/transactions/chapter_overview.html)** - - Geode provides a transactions API, with `begin`, `commit`, and `rollback` methods. These methods are much the same as the familiar relational database transactions methods. - -- **[Function Execution](../developing/function_exec/chapter_overview.html)** - - A function is a body of code that resides on a server and that an application can invoke from a client or from another server without the need to send the function code itself. The caller can direct a data-dependent function to operate on a particular dataset, or can direct a data-independent function to operate on a particular server, member, or member group. - - http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb b/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb deleted file mode 100644 index 3f77edb..0000000 --- a/geode-docs/developing/continuous_querying/chapter_overview.html.md.erb +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Continuous Querying ---- - -<!-- -Licensed to the Apache Software Foundation (ASF) under one or more -contributor license agreements. See the NOTICE file distributed with -this work for additional information regarding copyright ownership. -The ASF licenses this file to You under the Apache License, Version 2.0 -(the "License"); you may not use this file except in compliance with -the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. ---> - -Continuous querying continuously returns events that match the queries you set up. 
- -<a id="continuous__section_779B4E4D06E948618E5792335174E70D"></a> - -- **[How Continuous Querying Works](../../developing/continuous_querying/how_continuous_querying_works.html)** - - Clients subscribe to server-side events by using SQL-type query filtering. The server sends all events that modify the query results. CQ event delivery uses the client/server subscription framework. - -- **[Implementing Continuous Querying](../../developing/continuous_querying/implementing_continuous_querying.html)** - - Use continuous querying in your clients to receive continuous updates to queries run on the servers. - -- **[Managing Continuous Querying](../../developing/continuous_querying/continuous_querying_whats_next.html)** - - This topic discusses CQ management options, CQ states, and retrieving initial result sets. - - http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/continuous_querying/continuous_querying_whats_next.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/developing/continuous_querying/continuous_querying_whats_next.html.md.erb b/geode-docs/developing/continuous_querying/continuous_querying_whats_next.html.md.erb deleted file mode 100644 index 4d91722..0000000 --- a/geode-docs/developing/continuous_querying/continuous_querying_whats_next.html.md.erb +++ /dev/null @@ -1,88 +0,0 @@ ---- -title: Managing Continuous Querying ---- - -<!-- -Licensed to the Apache Software Foundation (ASF) under one or more -contributor license agreements. See the NOTICE file distributed with -this work for additional information regarding copyright ownership. -The ASF licenses this file to You under the Apache License, Version 2.0 -(the "License"); you may not use this file except in compliance with -the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. ---> - -This topic discusses CQ management options, CQ states, and retrieving initial result sets. - -## Using CQs from a RegionService Instance - -If you are running durable client queues (CQs) from the `RegionService` instance, stop and start the offline event storage for the client as a whole. The server manages one queue for the entire client process, so you need to request the stop and start of durable CQ event messaging for the cache as a whole, through the `ClientCache` instance. If you closed the `RegionService` instances, event processing would stop, but the server would continue to send events, and those events would be lost. - -Stop with: - -``` pre -clientCache.close(true); -``` - -Start up again in this order: - -1. Create `ClientCache` instance. -2. Create all `RegionService` instances. Initialize CQ listeners. -3. Call `ClientCache` instance `readyForEvents` method. - -## <a id="continuous_querying_whats_next__section_35F929682CD24478AF0B2249C5065A27" class="no-quick-link"></a>States of a CQ - -A CQ has three possible states, which are maintained on the server. You can check them from the client through `CqQuery.getState`. - -| Query State | What does this mean? | When does the CQ reach this state? 
| Notes | -|-------------|----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| STOPPED | The CQ is in place and ready to run, but is not running. | When CQ is first created and after being stopped from a running state. | A stopped CQ uses system resources. Stopping a CQ only stops the CQ event messaging from server to client. All server-side CQ processing continues, but new CQ events are not placed into the server's client queue. Stopping a CQ does not change anything on the client side (but, of course, the client stops receiving events for the CQ that is stopped). | -| RUNNING | The CQ is running against server region events and the client listeners are waiting for CQ events. | When CQ is executed from a stopped state. | This is the only state in which events are sent to the client. | -| CLOSED | The CQ is not available for any further activities. You cannot rerun a closed CQ. | When CQ is closed by the client and when cache or connection conditions make it impossible to maintain or run. | The closed CQ does not use system resources. | - -## <a id="continuous_querying_whats_next__section_4E308A70BCE44031BB1F37B95B4D06E6" class="no-quick-link"></a>CQ Management Options - -You manage your CQs from the client side. All calls are executed only for the calling client's CQs. - -| Task | For a single CQ use ... | For groups of CQs use ... | -|----------------------------------------------|-----------------------------------------------------------|-------------------------------------------| -| Create a CQ | `QueryService.newCq` | N/A | -| Execute a CQ | `CqQuery.execute` and `CqQuery.executeWithInitialResults` | `QueryService.executeCqs` | -| Stop a CQ | `CqQuery.stop` | `QueryService.stopCqs` | -| Close a CQ | `CqQuery.close` | `QueryService.closeCqs` | -| Access a CQ | `CqEvent.getCq` and `QueryService.getCq` | `QueryService.getCq` | -| Modify CQ Listeners | `CqQuery.getCqAttributesMutator` | N/A | -| Access CQ Runtime Statistics | `CqQuery.getStatistics` | `QueryService.getCqStatistics` | -| Get all durable CQs registered on the server | N/A | `QueryService.getAllDurableCqsFromServer` | - -## <a id="continuous_querying_whats_next__section_B274DA982AE6441288323A1D11B58786" class="no-quick-link"></a>Managing CQs and Durable Clients Using gfsh - -Using the `gfsh` command-line utility, you can perform the following actions: - -- Close durable clients and durable client CQs. See [close](../../tools_modules/gfsh/command-pages/close.html#topic_27555B1929D7487D9158096BC065D372). -- List all durable CQs for a given durable client ID. See [list](../../tools_modules/gfsh/command-pages/list.html). -- Show the subscription event queue size for a given durable client ID. See [show subscription-queue-size](../../tools_modules/gfsh/command-pages/show.html#topic_395C96B500AD430CBF3D3C8886A4CD2E). 
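To make the management options above concrete, here is a minimal Java sketch that walks a single CQ through the states described in the tables. It assumes a `QueryService` obtained from a client cache whose pool has subscriptions enabled; the CQ name and query string are illustrative only, and the no-op listener stands in for your own implementation.

``` pre
import org.apache.geode.cache.query.CqAttributesFactory;
import org.apache.geode.cache.query.CqEvent;
import org.apache.geode.cache.query.CqListener;
import org.apache.geode.cache.query.CqQuery;
import org.apache.geode.cache.query.QueryService;

public class CqLifecycleSketch {
  public static void run(QueryService queryService) throws Exception {
    CqAttributesFactory caf = new CqAttributesFactory();
    caf.addCqListener(new CqListener() {
      public void onEvent(CqEvent e) { /* react to changes in the query results */ }
      public void onError(CqEvent e) { /* inspect e.getThrowable() */ }
      public void close()            { /* release listener resources */ }
    });

    // Create: a new CQ starts out in the STOPPED state.
    CqQuery cq = queryService.newCq("priceTracker",
        "SELECT * FROM /tradeOrder t WHERE t.price > 100.00", caf.create());

    cq.execute();                 // RUNNING: events now flow to the listener
    System.out.println(cq.getState().isRunning());   // true

    cq.stop();                    // STOPPED: server-side processing continues,
                                  // but no new events are queued for this client
    cq.execute();                 // a stopped CQ can be executed again

    queryService.stopCqs();       // stop all of this client's CQs
    queryService.closeCqs();      // CLOSED: a closed CQ cannot be rerun
  }
}
```

The single-CQ calls (`stop`, `close`) and the group calls (`stopCqs`, `closeCqs`) correspond to the two columns of the CQ Management Options table above.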
## <a id="continuous_querying_whats_next__section_345E9C144EB544FBA61FC9C83BF1C1ED" class="no-quick-link"></a>Retrieving an Initial Result Set of a CQ

You can optionally retrieve an initial result set when you execute your CQ. To do this, execute the CQ with the `executeWithInitialResults` method. The initial `SelectResults` returned is the same as the result you would get if you ran the query ad hoc, by calling `QueryService.newQuery(query).execute()` on the server cache, except that each result also carries the entry key. This example retrieves keys and values from an initial result set:

``` pre
SelectResults cqResults = cq.executeWithInitialResults();
for (Object o : cqResults.asList()) {
  Struct s = (Struct) o;                     // Struct with key, value pair
  Portfolio p = (Portfolio) s.get("value");  // get value from the Struct
  String id = (String) s.get("key");         // get key from the Struct
}
```

If you are managing a data set from the CQ results, you can initialize the set by iterating over the result set and then updating it from your listeners as events arrive. For example, you might populate a new screen with initial results and then update the screen from a CQ listener.

If a CQ is executed using the `executeWithInitialResults` method, the returned result set may already include changes associated with events that are later delivered to your listeners. This can happen when updates occur on the region while CQ registration is in progress; the CQ does not block region operations, because doing so could affect region operation performance. Design your application to synchronize region operations and CQ registration if duplicate events must not be delivered.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/continuous_querying/how_continuous_querying_works.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/developing/continuous_querying/how_continuous_querying_works.html.md.erb b/geode-docs/developing/continuous_querying/how_continuous_querying_works.html.md.erb deleted file mode 100644 index 67bb447..0000000 --- a/geode-docs/developing/continuous_querying/how_continuous_querying_works.html.md.erb +++ /dev/null @@ -1,98 +0,0 @@

---
title: How Continuous Querying Works
---

<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

Clients subscribe to server-side events by using SQL-type query filtering. The server sends all events that modify the query results. CQ event delivery uses the client/server subscription framework.

<a id="how_continuous_querying_works__section_D473C4D532E14044820B7D76DEE83450"></a>
With CQ, the client sends a query to the server side for execution and receives the events that satisfy the criteria.
For example, in a region storing stock market trade orders, you can retrieve all orders over a certain price by running a CQ with a query like this: - -``` pre -SELECT * FROM /tradeOrder t WHERE t.price > 100.00 -``` - -When the CQ is running, the server sends the client all new events that affect the results of the query. On the client side, listeners programmed by you receive and process incoming events. For this example query on `/tradeOrder`, you might program a listener to push events to a GUI where higher-priced orders are displayed. CQ event delivery uses the client/server subscription framework. - -## <a id="how_continuous_querying_works__section_777DEEA9D1DD45F59EC1BB35789C3A5D" class="no-quick-link"></a>Logical Architecture of Continuous Querying - -Your clients can execute any number of CQs, with each CQ assigned any number of listeners. - -<img src="../../images/ContinuousQuerying-1.gif" id="how_continuous_querying_works__image_B7C36491E8CA4376AEAE4E030C3DF86B" class="image" /> - -## <a id="how_continuous_querying_works__section_F0E19919B3F645EF83EACBD7AFDF527E" class="no-quick-link"></a>Data Flow with CQs - -CQs do not update the client region. This is in contrast to other server-to-client messaging like the updates sent to satisfy interest registration and responses to get requests from the client's `Pool`. CQs serve as notification tools for the CQ listeners, which can be programmed in any way your application requires. - -When a CQ is running against a server region, each entry event is evaluated against the CQ query by the thread that updates the server cache. If either the old or the new entry value satisfies the query, the thread puts a `CqEvent` in the client's queue. The `CqEvent` contains information from the original cache event plus information specific to the CQ's execution. Once received by the client, the `CqEvent` is passed to the `onEvent` method of all `CqListener`s defined for the CQ. - -Here is the typical CQ data flow for entries updated in the server cache: - -1. Entry events come to the server's cache from the server or its peers, distribution from remote sites, or updates from a client. -2. For each event, the server's CQ executor framework checks for a match with its running CQs. -3. If the old or new entry value satisfies a CQ query, a CQ event is sent to the CQ's listeners on the client side. Each listener for the CQ gets the event. - -In the following figure: - -- Both the new and old prices for entry X satisfy the CQ query, so that event is sent indicating an update to the query results. -- The old price for entry Y satisfied the query, so it was part of the query results. The invalidation of entry Y makes it not satisfy the query. Because of this, the event is sent indicating that it is destroyed in the query results. -- The price for the newly created entry Z does not satisfy the query, so no event is sent. - -<img src="../../images/ContinuousQuerying-3.gif" id="how_continuous_querying_works__image_2F21A3820906449FAABE7ACC9654A564" class="image" /> - -## <a id="how_continuous_querying_works__section_819CDBA814024315A6DDA83BD56D125C" class="no-quick-link"></a>CQ Events - -CQ events do not change your client cache. They are provided as an event service only. This allows you to have any collection of CQs without storing large amounts of data in your regions. If you need to persist information from CQ events, program your listener to store the information where it makes the most sense for your application. 
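For instance, if your application needs a local, always-current snapshot of the entries that match the CQ, the listener itself can maintain one in a map. The following sketch is one possible approach rather than a prescribed pattern; the class name and the choice of a `ConcurrentHashMap` are illustrative.

``` pre
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.geode.cache.query.CqEvent;
import org.apache.geode.cache.query.CqListener;

// Maintains a client-side snapshot of the entries that currently satisfy the CQ.
public class QueryResultSnapshotListener implements CqListener {
  private final Map<Object, Object> matching = new ConcurrentHashMap<>();

  public void onEvent(CqEvent cqEvent) {
    Object key = cqEvent.getKey();
    if (cqEvent.getQueryOperation().isDestroy()) {
      matching.remove(key);                      // entry left the query results
    } else {
      matching.put(key, cqEvent.getNewValue());  // CREATE or UPDATE in the query results
    }
  }

  public void onError(CqEvent cqEvent) {
    // log cqEvent.getThrowable() and decide whether to re-execute the CQ
  }

  public void close() {
    matching.clear();
  }

  public Map<Object, Object> snapshot() {
    return matching;
  }
}
```

A listener like this can be seeded from `executeWithInitialResults` and then kept current purely from the CQ event stream.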
The `CqEvent` object contains this information:

- Entry key and new value.
- Base operation that triggered the cache event in the server. This is the standard `Operation` class instance used for cache events in GemFire.
- `CqQuery` object associated with this CQ event.
- `Throwable` object, returned only if an error occurred when the `CqQuery` ran for the cache event. This is non-null only for `CqListener` onError calls.
- Query operation associated with this CQ event. This operation describes the change made to the query results by the cache event. Possible values are:
    - `CREATE`, which corresponds to the standard database INSERT operation
    - `UPDATE`
    - `DESTROY`, which corresponds to the standard database DELETE operation

Region operations do not translate to specific query operations, and query operations do not specifically describe region events. Instead, the query operation describes how the region event affects the query results.

| Query operations based on old and new entry values | New value does not satisfy the query | New value satisfies the query |
|-----------------------------------------------------|--------------------------------------|-------------------------------|
| Old value does not satisfy the query                 | no event                             | `CREATE` query operation      |
| Old value satisfies the query                        | `DESTROY` query operation            | `UPDATE` query operation      |

You can use the query operation to decide what to do with the `CqEvent` in your listeners. For example, a `CqListener` that displays query results on screen might stop displaying the entry, start displaying the entry, or update the entry display, depending on the query operation.

## <a id="how_continuous_querying_works__section_bfs_llr_gr" class="no-quick-link"></a>Region Type Restrictions for CQs

You can only create CQs on replicated or partitioned regions. If you attempt to create a CQ on a non-replicated or non-partitioned region, you will receive the following error message:

``` pre
The region <region name> specified in CQ creation is neither replicated nor partitioned; only replicated or partitioned regions are allowed in CQ creation.
```

In addition, you cannot create a CQ on a replicated region with an eviction setting of local-destroy, since this eviction setting changes the region's data policy. If you attempt to create a CQ on this kind of region, you will receive the following error message:

``` pre
CQ is not supported for replicated region: <region name> with eviction action: LOCAL_DESTROY
```

See also [Configure Distributed, Replicated, and Preloaded Regions](../distributed_regions/managing_distributed_regions.html) for potential issues with setting local-destroy eviction on replicated regions.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/continuous_querying/implementing_continuous_querying.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/developing/continuous_querying/implementing_continuous_querying.html.md.erb b/geode-docs/developing/continuous_querying/implementing_continuous_querying.html.md.erb deleted file mode 100644 index e1bb4ea..0000000 --- a/geode-docs/developing/continuous_querying/implementing_continuous_querying.html.md.erb +++ /dev/null @@ -1,202 +0,0 @@

---
title: Implementing Continuous Querying
---

<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements.
See the NOTICE file distributed with -this work for additional information regarding copyright ownership. -The ASF licenses this file to You under the Apache License, Version 2.0 -(the "License"); you may not use this file except in compliance with -the License. You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. ---> - -Use continuous querying in your clients to receive continuous updates to queries run on the servers. - -CQs are only run by a client on its servers. - -Before you begin, you should be familiar with [Querying](../querying_basics/chapter_overview.html) and have your client/server system configured. - -1. Configure the client pools you will use for CQs with `subscription-enabled` set to true. - - To have CQ and interest subscription events arrive as closely together as possible, use a single pool for everything. Different pools might use different servers, which can lead to greater differences in event delivery time. - -2. Write your OQL query to retrieve the data you need from the server. - - The query must satisfy these CQ requirements in addition to the standard GemFire querying specifications: - - The FROM clause must contain only a single region specification, with optional iterator variable. - - The query must be a SELECT expression only, preceded by zero or more IMPORT statements. This means the query cannot be a statement such as <code>"/tradeOrder.name"</code> or <code>"(SELECT \* from /tradeOrder).size".</code> - - The CQ query cannot use: - - Cross region joins - - Drill-downs into nested collections - - DISTINCT - - Projections - - Bind parameters - - The CQ query must be created on a partitioned or replicated region. See [Region Type Restrictions for CQs](how_continuous_querying_works.html#how_continuous_querying_works__section_bfs_llr_gr). - - The basic syntax for the CQ query is: - - ``` pre - SELECT * FROM /fullRegionPath [iterator] [WHERE clause] - ``` - - This example query could be used to get all trade orders where the price is over $100: - - ``` pre - SELECT * FROM /tradeOrder t WHERE t.price > 100.00 - ``` - -3. Write your CQ listeners to handle CQ events from the server. - Implement `org.apache.geode.cache.query.CqListener` in each event handler you need. In addition to your main CQ listeners, you might have listeners that you use for all CQs to track statistics or other general information. - - **Note:** - Be especially careful if you choose to update your cache from your `CqListener`. If your listener updates the region that is queried in its own CQ and that region has a `Pool` named, the update will be forwarded to the server. If the update on the server satisfies the same CQ, it may be returned to the same listener that did the update, which could put your application into an infinite loop. This same scenario could be played out with multiple regions and multiple CQs, if the listeners are programmed to update each other's regions. - - This example outlines a `CqListener` that might be used to update a display screen with current data from the server. The listener gets the `queryOperation` and entry key and value from the `CqEvent` and then updates the screen according to the type of `queryOperation`. 
    ``` pre
    // CqListener class
    public class TradeEventListener implements CqListener
    {
      public void onEvent(CqEvent cqEvent)
      {
        // org.apache.geode.cache Operation associated with the query op
        Operation queryOperation = cqEvent.getQueryOperation();
        // key and new value from the event
        Object key = cqEvent.getKey();
        TradeOrder tradeOrder = (TradeOrder)cqEvent.getNewValue();
        if (queryOperation.isUpdate())
        {
          // update data on the screen for the trade order . . .
        }
        else if (queryOperation.isCreate())
        {
          // add the trade order to the screen . . .
        }
        else if (queryOperation.isDestroy())
        {
          // remove the trade order from the screen . . .
        }
      }
      public void onError(CqEvent cqEvent)
      {
        // handle the error
      }
      // From CacheCallback
      public void close()
      {
        // close the output screen for the trades . . .
      }
    }
    ```

    When you install the listener and run the query, your listener will handle all of the CQ results.

4.  If you need your CQs to detect whether they are connected to any of the servers that host their subscription queues, implement a `CqStatusListener` instead of a `CqListener`.
    `CqStatusListener` extends `CqListener`, allowing a client to detect when a CQ is connected to and/or disconnected from the server(s). The `onCqConnected()` method is invoked when the CQ is connected, and when the CQ has been reconnected after being disconnected. The `onCqDisconnected()` method is invoked when the CQ is no longer connected to any servers.

    Taking the example from step 3, we can instead implement a `CqStatusListener`:

    ``` pre
    public class TradeEventListener implements CqStatusListener
    {
      public void onEvent(CqEvent cqEvent)
      {
        // org.apache.geode.cache Operation associated with the query op
        Operation queryOperation = cqEvent.getQueryOperation();
        // key and new value from the event
        Object key = cqEvent.getKey();
        TradeOrder tradeOrder = (TradeOrder)cqEvent.getNewValue();
        if (queryOperation.isUpdate())
        {
          // update data on the screen for the trade order . . .
        }
        else if (queryOperation.isCreate())
        {
          // add the trade order to the screen . . .
        }
        else if (queryOperation.isDestroy())
        {
          // remove the trade order from the screen . . .
        }
      }
      public void onError(CqEvent cqEvent)
      {
        // handle the error
      }
      // From CacheCallback
      public void close()
      {
        // close the output screen for the trades . . .
      }

      public void onCqConnected() {
        //Display connected symbol
      }

      public void onCqDisconnected() {
        //Display disconnected symbol
      }
    }
    ```

    When you install the `CqStatusListener`, your listener will be able to detect its connection status to the servers that it is querying.

5.  Program your client to run the CQ:
    1.  Create a `CqAttributesFactory` and use it to set your `CqListener`s and `CqStatusListener`.
    2.  Pass the attributes factory and the CQ query and its unique name to the `QueryService` to create a new `CqQuery`.
    3.  Start the query running by calling one of the execute methods on the `CqQuery` object.
        You can execute with or without an initial result set.
    4.  When you are done with the CQ, close it.
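Step 1 of this procedure depends on a subscription-enabled pool. As a companion to that step, here is a minimal sketch of a programmatic client setup; the locator address and the `tradeOrder` region name are placeholders for your own deployment, and the same pool could equally be declared in a client `cache.xml`.

``` pre
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.query.QueryService;

public class CqClientSetupSketch {
  public static void main(String[] args) {
    // subscription-enabled is required for CQ (and interest) event delivery
    ClientCache clientCache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .setPoolSubscriptionEnabled(true)
        .create();

    // PROXY region: no local storage; CQ events arrive through listeners, not the region
    Region<String, Object> tradeOrder = clientCache
        .<String, Object>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("tradeOrder");

    QueryService queryService = clientCache.getQueryService();
    // ... create and execute CQs as shown in the next section ...

    clientCache.close();
  }
}
```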
## Continuous Query Implementation

``` pre
// Get cache and queryService - refs to local cache and QueryService
// Create client /tradeOrder region configured to talk to the server

// Create CqAttributes using CqAttributesFactory
CqAttributesFactory cqf = new CqAttributesFactory();

// Create a listener and add it to the CQ attributes callback defined below
CqListener tradeEventListener = new TradeEventListener();
cqf.addCqListener(tradeEventListener);
CqAttributes cqa = cqf.create();
// Name of the CQ and its query
String cqName = "priceTracker";
String queryStr = "SELECT * FROM /tradeOrder t where t.price > 100.00";

// Create the CqQuery
CqQuery priceTracker = queryService.newCq(cqName, queryStr, cqa);

try
{ // Execute CQ, getting the optional initial result set
  // Without the initial result set, the call is priceTracker.execute();
  SelectResults sResults = priceTracker.executeWithInitialResults();
  for (Object o : sResults) {
    Struct s = (Struct) o;
    TradeOrder to = (TradeOrder) s.get("value");
    System.out.println("Initial result includes: " + to);
  }
}
catch (Exception ex)
{
  ex.printStackTrace();
}
// Now the CQ is running on the server, sending CqEvents to the listener
. . .

// End of life for the CQ - clear up resources by closing
priceTracker.close();
```

With continuous queries, you can optionally implement:

- Highly available CQs by configuring your servers for high availability.
- Durable CQs by configuring your clients for durable messaging and indicating which CQs are durable at creation.

http://git-wip-us.apache.org/repos/asf/incubator-geode/blob/84cfbdfc/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb ---------------------------------------------------------------------- diff --git a/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb b/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb deleted file mode 100644 index e6c06f4..0000000 --- a/geode-docs/developing/data_serialization/PDX_Serialization_Features.html.md.erb +++ /dev/null @@ -1,40 +0,0 @@

---
title: Geode PDX Serialization Features
---

<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

Geode PDX serialization offers several advantages in terms of functionality.

## <a id="concept_F02E40517C4B42F2A75B133BB507C626__section_A0EEB4DA3E9F4EA4B65FE727D3951EA1" class="no-quick-link"></a>Application Versioning of PDX Domain Objects

Domain objects evolve along with your application code. You might create an address object with two address lines, then realize later that a third line is required for some situations. Or you might realize that a particular field is not used and want to get rid of it.
With PDX, you can use old and new versions of domain objects together in a distributed system if the versions differ by the addition or removal of fields. This compatibility lets you gradually introduce modified code and data into the system, without bringing the system down. - -Geode maintains a central registry of the PDX domain object metadata. Using the registry, Geode preserves fields in each member's cache regardless of whether the field is defined. When a member receives an object with a registered field that the member is not aware of, the member does not access the field, but preserves it and passes it along with the entire object to other members. When a member receives an object that is missing one or more fields according to the member's version, Geode assigns the Java default values for the field types to the missing fields. - -## <a id="concept_F02E40517C4B42F2A75B133BB507C626__section_D68A6A9C2C0C4D32AE7DADA2A4C3104D" class="no-quick-link"></a>Portability of PDX Serializable Objects - -When you serialize an object using PDX, Geode stores the object's type information in the central registry. The information is passed among clients and servers, peers, and distributed systems. - -This centralization of object type information is advantageous for client/server installations in which clients and servers are written in different languages. Clients pass registry information to servers automatically when they store a PDX serialized object. Clients can run queries and functions against the data in the servers without compatibility between server and the stored objects. One client can store data on the server to be retrieved by another client, with no requirements on the part of the server. - -## <a id="concept_F02E40517C4B42F2A75B133BB507C626__section_08C901A3CF3E438C8778F09D482B9A63" class="no-quick-link"></a>Reduced Deserialization of Serialized Objects - -The access methods of PDX serialized objects allow you to examine specific fields of your domain object without deserializing the entire object. Depending on your object usage, you can reduce serialization and deserialization costs significantly. - -Java and other clients can run queries and execute functions against the objects in the server caches without deserializing the entire object on the server side. The query engine automatically recognizes PDX objects, retrieves the `PdxInstance` of the object and uses only the fields it needs. Likewise, peers can access only the necessary fields from the serialized object, keeping the object stored in the cache in serialized form.
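To illustrate the field-level access described above, the sketch below reads a single field from a PDX-serialized value on a client configured to read serialized data. The locator address, region name, key, and `price` field are assumptions made for this example, not values defined by this documentation.

``` pre
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.pdx.PdxInstance;

public class PdxFieldAccessSketch {
  public static void main(String[] args) {
    // With read-serialized set to true, PDX values are returned as PdxInstance
    // objects instead of being deserialized into domain classes.
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .setPdxReadSerialized(true)
        .create();

    Region<String, Object> trades = cache
        .<String, Object>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("tradeOrder");

    Object value = trades.get("order-1");
    if (value instanceof PdxInstance) {
      PdxInstance pdx = (PdxInstance) value;
      // Read one field; the rest of the object stays in serialized form.
      Object price = pdx.getField("price");
      System.out.println("price = " + price);
    }

    cache.close();
  }
}
```

This is the same kind of field-level access the query engine relies on when it evaluates queries against PDX data without deserializing whole objects.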