Hi,

I am going to change the IPs of some servers in the cage, and do some
cleanup.

We are currently using 4 /29s of the /24 we have in the cage (3
contiguous, and 1 separate), and after some quick math, I think we
could go down to half of that.
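For those who want to check the math, here is a quick sketch using
Python's ipaddress module (the prefix is a documentation placeholder,
not our real one):

```python
import ipaddress

# A /24 like the one in the cage; the prefix is a placeholder.
cage = ipaddress.ip_network("192.0.2.0/24")

# Split it into /29s: 2**(29-24) = 32 subnets of 8 addresses each.
subnets = list(cage.subnets(new_prefix=29))
print(len(subnets))              # 32 /29s fit in a /24
print(subnets[0].num_addresses)  # 8 addresses per /29

# We currently use 4 of them (32 addresses); halving that
# means 2 /29s, i.e. 16 addresses.
```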

For that, I would need to:
- change the hypervisors' IPs so they stop using the isolated /29. It
is like this for historical reasons, but we can move them. This will
not impact anything, unless I get it wrong during the move.

This should free a /29 (8 IPs)

- remove old services. As people know, we are in the process of moving
the last users off Gerrit. Once that is done, we will be able to:
  - remove Gerrit (it requires a public IP for git push)
  - remove the staging Gerrit
  - remove the PostgreSQL server (needed for Gerrit, so in the same
vlan)

So that's 3 IPs.

- get rid of a test server (chrono.rht)

- move more services to the LAN:
- softserve
- fstat
- jenkins
- jenkins stage
- bugziller

I think softserve, bugziller and fstat are already there, but we kept
the old servers to roll back in case of problems. I guess it is time
to stop them (and verify the migrations were correct).

Jenkins is not going to be easy to move, so I will keep that for the
future. The staging Jenkins is currently down, so it can be reinstalled
elsewhere without trouble.

This would free 4 or 5 IPs.

Finally, the trickiest part is that our firewalls and reverse proxies
each use 3 IPs: 1 floating, and 1 for each member of the pair. There is
no reason to have a public IP for each member; they just need the
floating IP.

This would free 4 IPs, but that's pretty high risk, so it will likely
happen last.
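To illustrate the idea, here is a minimal sketch assuming a
keepalived-style VRRP pair (the interface name, router id, and
addresses are all made up, not our actual config):

```
vrrp_instance cage_fw {
    state MASTER
    interface eth0          # example interface name
    virtual_router_id 51    # example id, must match on both members
    priority 100            # the backup member gets a lower value
    # The pair members can keep private addresses on this interface;
    # only the floating address below needs to be public.
    virtual_ipaddress {
        203.0.113.10/24     # placeholder public floating IP
    }
}
```

The floating IP moves to whichever member is master, so the per-member
addresses never need to be publicly routable.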


All of this to say: if you see any weird network issues for anything
in the cage, please tell us. For now, I have only touched 1 hypervisor,
the least risky one.


-- 
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure




_______________________________________________
Gluster-infra mailing list
Gluster-infra@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-infra
