Re: [Gluster-infra] Rackspace and Gluster

2017-10-19 Thread Michael Scherer
On Thursday, 19 October 2017 at 13:40 +0100, Michael Scherer wrote:
> Hi,
> 
> So Rackspace decided to stop their OSS Funding program two days ago [1].
> For people not aware of it, Rackspace was funding various OSS projects
> with 2000 US$ worth of credit per month, which we used to run various
> systems (the list is too long to include here).
> 
> Thanks to them for the fish and for their support all these years; their
> help was really appreciated in growing the project.
> 
> But we now have to do something before the 31st of December; after that
> we would have to pay for the infra.

So I received an email saying that Rackspace miscommunicated: they are
not accepting new tenants into the OSS Funding program, but existing
tenants will still be funded.

I still plan to move some stuff out of Rackspace, but slowly.
-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS




[Gluster-infra] [Bug 1503529] Investigate having a mirror of download server

2017-10-19 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1503529

Mike Hulsman changed:

   What |Removed |Added
   -----|--------|-----------------
   CC   |        |m...@hulsman.net



--- Comment #1 from Mike Hulsman  ---
I am one of the mirror and infra admins of ftp.nluug.nl.

We can offer a mirror for Gluster at ftp.nluug.nl free of any charge.
It is reachable over ftp, rsync, http, and https, and connected at 10 Gb/s.

We mirror a lot of open-source projects and distributions; our location is
Europe/Amsterdam.
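
For reference, pulling from the mirror over rsync would look roughly like
this (the "gluster" module name is only an example; the actual module would
be created once the mirror is set up):

    # one-way pull from the mirror; module name is illustrative only
    rsync -avH rsync://ftp.nluug.nl/gluster/ /srv/local-copy/gluster/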

We publish download stats per month; see http://ftp.nluug.nl/.statistics/
For example, the oVirt project had 586873 hits and transferred a total of
540.211 Mb this month.

We have been a mirror server for a lot of open-source projects since 1998.

Regards
Mike Hulsman



[Gluster-infra] [Bug 1498151] Move download server and salt-master to the community cage

2017-10-19 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1498151



--- Comment #6 from M. Scherer  ---
I also asked Rackspace about KSM usage.
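
(For context: KSM is the kernel's memory deduplication feature. We obviously
cannot inspect Rackspace's hypervisors ourselves, but on a host we control,
whether it is in use can be checked through the standard Linux sysfs
interface, roughly like this:)

    # 0 = off, 1 = merging pages, 2 = unmerge everything and stop
    cat /sys/kernel/mm/ksm/run
    # number of currently shared (deduplicated) pages
    cat /sys/kernel/mm/ksm/pages_shared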



[Gluster-infra] [Bug 1498151] Move download server and salt-master to the community cage

2017-10-19 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1498151



--- Comment #5 from M. Scherer  ---
So, the article I mentioned in the first comment is out now:

https://thisissecurity.stormshield.com/2017/10/19/attacking-co-hosted-vm-hacker-hammer-two-memory-modules/



[Gluster-infra] [Bug 1498151] Move download server and salt-master to the community cage

2017-10-19 Thread bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1498151



--- Comment #4 from M. Scherer  ---
So the move of the salt-master is done; I am fixing the last few things and
will then take the old VM offline (and open a bug for the issue I saw).



[Gluster-infra] Rackspace and Gluster

2017-10-19 Thread Michael Scherer
Hi,

So Rackspace decided to stop their OSS Funding program two days ago [1].
For people not aware of it, Rackspace was funding various OSS projects
with 2000 US$ worth of credit per month, which we used to run various
systems (the list is too long to include here).

Thanks to them for the fish and for their support all these years; their
help was really appreciated in growing the project.

But we now have to do something before the 31st of December; after that
we would have to pay for the infra.

Nigel started working yesterday on a spreadsheet for budget planning in
case we do not hit the target, and I have accelerated the infra move that
was already under way. We are still in the planning phase, and since I was
on PTO yesterday we do not have a document ready to share, but we are
working on it.


While we were already slowly planning to move out of Rackspace to be more
resilient against precisely this kind of event, two months is quite short.
I personally think we can do it (hopefully without breakage), but it
implies focusing on that, which likely means pushing some of the work we
wanted to do this quarter to later. It also implies that if anyone needs
us, please be mindful until January about not adding unplanned work for us.


IMHO, our challenges would be:
- move the download server in a way that does not disrupt production
(ideally, by having a mirror, something we have needed for a long time). I
already have some ideas for that, and opened a bug on it:
https://bugzilla.redhat.com/show_bug.cgi?id=1503529

- deal with the NetBSD VMs in an automated and scalable way (again, a
long-standing open item).

- scale the number of builders in the cage (e.g., start to use them for
regression testing and not just source code building).

- move the lists server to the cage in a way that does not disrupt
communication. We have to take DNS propagation and similar concerns into
account; a rough sketch follows below.
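
A minimal sketch of the DNS side of that (the TTL and address below are
placeholders, not the real values): lower the TTL on the lists record well
ahead of the switch, then check what resolvers actually hand out:

    ; example only: short TTL on the lists record ahead of the move
    lists    300    IN    A    192.0.2.10

    # check the TTL and address resolvers currently return
    dig +noall +answer lists.gluster.org A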

The rest (moving munin, syslog, FreeIPA, cleaning up old servers) consists
mostly of internal details that shouldn't impact people, and was already
under way. These are IMHO also easier and more controlled moves, so I will
focus on them for the time being.



[1] https://twitter.com/ericholscher/status/920396452307668992
-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS


