Hi,

I have put a little bit of work into this:
https://github.com/lshannon/geode-aws-deployment-scripts

These scripts are far from perfect and need some love (there are some
bugs), but they might give you some ideas. Others on the list will have
better approaches.

My approach is to SCP the Geode binaries to all the remote machines in the
cluster, keeping track of the Locators and Servers through pre-configured
txt files (those are referenced below):
https://github.com/lshannon/geode-aws-deployment-scripts/blob/master/remote_management_scripts/initial_set_up/intial_setup.sh
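
In rough terms, the setup loop amounts to something like the sketch below
(the package name, remote user, and the assumption of one host per line in
the txt files are placeholders for illustration, not the exact values the
repo uses):

  #!/usr/bin/env bash
  # Sketch only: push the Geode package to every machine listed in the txt files.
  PACKAGE=geode-ubuntu-package.tar.gz   # placeholder file name
  REMOTE_USER=ubuntu                    # placeholder user

  for host in $(cat locators.txt servers.txt); do
    # Copy the binaries and helper scripts, then unpack them on the remote box.
    scp "$PACKAGE" "$REMOTE_USER@$host:"
    ssh "$REMOTE_USER@$host" "tar -xzf $PACKAGE"
  done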

Included with the binaries uploaded to the remote servers is a set of
shell scripts that start a Geode process (Locator or Server) and configure
the environment a bit so the Geode process will run:
https://github.com/lshannon/geode-aws-deployment-scripts/tree/master/geode-ubuntu-package/scripts
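
As a rough sketch, a start-Locator wrapper could look like the following
(the paths, name, and port are assumptions for illustration; the scripts in
the repo do more environment setup than this):

  #!/usr/bin/env bash
  # Sketch of a remote start script: prepare the environment, then start a Locator.
  export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64   # placeholder path
  export GEODE_HOME=/home/ubuntu/apache-geode          # placeholder path
  export PATH="$GEODE_HOME/bin:$PATH"

  # $1 is the comma-separated Locator list handed in by the control machine.
  mkdir -p /home/ubuntu/locator
  gfsh start locator \
    --name="locator-$(hostname)" \
    --port=10334 \
    --dir=/home/ubuntu/locator \
    --locators="$1"

The Server-side script is the same idea, just using "gfsh start server
--server-port=... --locators=..." instead.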

The key thing when starting the cluster is to start the Locators first and
make sure all members know the IP:Port each Locator is listening on. To
start the cluster, you can call this script from a remote control machine
(i.e. your laptop):
https://github.com/lshannon/geode-aws-deployment-scripts/blob/master/remote_management_scripts/start_cluster.sh
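
Whatever the scripts look like, the piece of information every member
ultimately needs is the Locator list in Geode's host[port] notation, for
example (these addresses are invented, just to show the shape):

  --locators=10.0.1.10[10334],10.0.1.11[10334]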

This script iterates through the Locators first (
https://github.com/lshannon/geode-aws-deployment-scripts/blob/master/remote_management_scripts/locators.txt),
calling the remote script on each AWS machine to start the Locator
process. It then iterates through the Servers (
https://github.com/lshannon/geode-aws-deployment-scripts/blob/master/remote_management_scripts/servers.txt),
calling those remote scripts to start the Geode Server processes. The
IP:Ports of the Locators in the cluster are passed in as an argument to
each script execution. Note: the IPs in these sample txt files no longer
exist; I just left them in to show the format.
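
Stripped down, the control-machine loop is roughly the following (the user,
port, and the start_locator.sh/start_server.sh names are placeholders, and
it assumes one address per line in the txt files; the real script also does
logging and error checking):

  #!/usr/bin/env bash
  # Sketch: start Locators first, then Servers, from the control machine.
  LOCATOR_PORT=10334   # placeholder port
  REMOTE_USER=ubuntu   # placeholder user

  # Build the host[port],host[port] string every member needs.
  LOCATORS=$(awk -v p="$LOCATOR_PORT" '{printf "%s%s[%s]", sep, $1, p; sep=","}' locators.txt)

  # 1. Start each Locator, handing it the full Locator list.
  while read -r host; do
    ssh "$REMOTE_USER@$host" "./start_locator.sh '$LOCATORS'"
  done < locators.txt

  # 2. Then start each Server, pointing it at the same Locators.
  while read -r host; do
    ssh "$REMOTE_USER@$host" "./start_server.sh '$LOCATORS'"
  done < servers.txt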

Locators and Servers can run on the same machine or on different ones. As
long as they don't share the same ports, it's fine (just make sure you have
enough cores and memory to handle multiple Java processes).
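
For instance, colocating on a single box just means picking distinct ports
(the values here are only examples):

  # One Locator and one Server on the same host, on different ports.
  mkdir -p /tmp/locator1 /tmp/server1
  gfsh start locator --name=locator1 --port=10334 --dir=/tmp/locator1
  gfsh start server --name=server1 --server-port=40404 \
    --locators=localhost[10334] --dir=/tmp/server1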

As the processes start, they begin membership communication to form a
cluster. Note: for AWS you need to configure the /etc/hosts file with all
the cluster member info (this is noted in the README).
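
Something along these lines on each machine (the hostnames and private IPs
below are made-up placeholders):

  # Append the cluster members to /etc/hosts so their names resolve.
  printf '%s\n' \
    '10.0.1.10   geode-locator-1' \
    '10.0.1.11   geode-locator-2' \
    '10.0.1.20   geode-server-1' \
    '10.0.1.21   geode-server-2' | sudo tee -a /etc/hosts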

You can stop the cluster by doing pretty much the inverse:
https://github.com/lshannon/geode-aws-deployment-scripts/blob/master/remote_management_scripts/stop_cluster.sh
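
That amounts to the start loop run backwards: Servers first, then Locators.
A bare-bones version (placeholder user and --dir paths again) might be:

  #!/usr/bin/env bash
  # Sketch: stop Servers before Locators so members leave the cluster cleanly.
  REMOTE_USER=ubuntu   # placeholder user

  while read -r host; do
    ssh "$REMOTE_USER@$host" "gfsh stop server --dir=/home/ubuntu/server"
  done < servers.txt

  while read -r host; do
    ssh "$REMOTE_USER@$host" "gfsh stop locator --dir=/home/ubuntu/locator"
  done < locators.txt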

You can get a remote gfsh connection to the cluster like this:
https://github.com/lshannon/geode-aws-deployment-scripts/blob/master/remote_management_scripts/gfsh.sh
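
From your laptop that boils down to a one-liner along these lines (the IP
is a placeholder for one of your Locators):

  # Connect a local gfsh to a remote Locator, then look around.
  gfsh -e "connect --locator=10.0.1.10[10334]" -e "list members"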

As noted, this configuration has not been battle tested. I put it together
for a talk last year and did not use the scripts much after that.

I hope it can at least give you some ideas.

All the best,

Luke




On Sat, Jun 23, 2018 at 12:36 PM trung kien <[email protected]> wrote:

> Dear Geode Gurus,
>
> I'm pretty new with Geode and have a couple of questions regarding the
> deployment.
>
> 1/ In production environment, what's the correct way of deploying geode?
> I'm using gfsh to start locators and servers, but when exiting the
> terminal all processes seem to disappear?
>
> 2/ How can i deploy cluster on multiple servers?
>
> Suppose I have 2 servers; does gfsh allow deploying remotely on other
> servers?
>
>
>
> --
> Thanks
> Kien
>


-- 
Luke Shannon | Platform Engineering | Pivotal
-------------------------------------------------------------------------

Mobile: 416-571-9495
twitter: @lukewshannon

Join the Toronto Pivotal Usergroup:
http://www.meetup.com/Toronto-Pivotal-User-Group/

Join the Ottawa Pivotal Usergroup:
https://www.meetup.com/Ottawa-Pivotal-User-Group/
