will resume once the DNS is back.
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
___
Gluster-infra mailing list
Gluster-infra@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-infra
reset mails) mailing list, or someone who should
> be able to recover the Gluster Community Jenkins account on GerritHub.
>
> Thanks!
> Niels
On Wednesday, 14 April 2021 at 17:16 +0200, Michael Scherer wrote:
> Hi,
>
> Since supercolony is still running on RHEL 6 (which is past its shelf
> life), it has to be upgraded to EL 7. So I installed another VM, used
> ansible, and we are ready to switch soon.
>
> The plan f
(like, I guess, jenkins, and the IP reputation, and maybe some RH IT
stuff)
If all goes well, it should take less than 30 minutes. There is almost
no risk of losing mails. I am gonna send an email to announce the
migration to devel and users.
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community
On Wednesday, 31 March 2021 at 13:16 +0200, Michael Scherer wrote:
> Hi,
>
> due to an increase in spam on our mailing list (blocked at mailman
> level, but a human has to review that every morning, and one of
> those
> humans, me, is a bit annoyed to do that), I am gonna bl
users), please notify me by a direct mail.
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
signature.asc
Description: This is a digitally signed message part
On Tuesday, 2 February 2021 at 21:06 +0200, Yaniv Kaul wrote:
> On Tue, Feb 2, 2021 at 8:14 PM Michael Scherer
> wrote:
>
> > Hi,
> >
> > so we finally found the cause of the georep failure, after several
> > days
> > of work fro
From what I remember, the RPC format is supposed to be
compatible and covered by a specification)
Should we test on C8 only?
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
, they just need the floating IP.
This would free 4 IPs, but that's pretty high risk, so this will likely
happen last.
All of this to say that if you see any weird network issue for anything
in the cage, please tell us. For now, I just touched 1 hypervisor, the
least risky one.
--
Michael
Hi folks,
just a quick note, I will be on vacation starting tonight until
Monday the 4th of January 2021.
My Out of office message will, as usual, contain ways to contact me.
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
On Friday, 11 December 2020 at 13:53 +0100, Michael Scherer wrote:
> On Friday, 11 December 2020 at 11:43 +0100, Michael Scherer wrote:
> > On Friday, 11 December 2020 at 16:01 +0530, Amar Tumballi wrote:
> > > On Thu, Dec 10, 2020 at 11:22 PM Michael Scherer <
>
On Friday, 11 December 2020 at 11:43 +0100, Michael Scherer wrote:
> On Friday, 11 December 2020 at 16:01 +0530, Amar Tumballi wrote:
> > On Thu, Dec 10, 2020 at 11:22 PM Michael Scherer <
> > msche...@redhat.com
> > >
> >
> > wrote:
> >
On Friday, 11 December 2020 at 16:01 +0530, Amar Tumballi wrote:
> On Thu, Dec 10, 2020 at 11:22 PM Michael Scherer >
> wrote:
>
> > On Thursday, 10 December 2020 at 22:06 +0530, sankarshan wrote:
> > > What is your recommendation? As in, the next steps
241
), I think faster access to fixes for the CI (or even for production)
is a good idea.
On Thu, 10 Dec 2020 at 21:28, Michael Scherer
> wrote:
> >
> > On Thursday, 10 December 2020 at 21:14 +0530, sankarshan wrote:
> > > There are 2 specific bits which I expected to st
ments to FUSE or other components do not need to rely
> > on
> > the work Red Hat is planning, but could be worked on by our
> > community
> > and get included earlier.
> >
> > If there are any concerns, I'd l
(and for that job), but I want to add
the same cleanup on other jobs.
Do people need to go back more than 1 year in the past for some jobs
(and if so, which jobs, and how much?)
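For jobs managed as code, that kind of cleanup is usually a per-job retention setting. A minimal sketch in jenkins-job-builder YAML, assuming JJB is what drives these jobs; the job name and the retention numbers here are made up for illustration:

```yaml
# Hypothetical JJB snippet: discard build records older than one year.
# "example-smoke-job" and the numbers below are illustrative only.
- job:
    name: example-smoke-job
    properties:
      - build-discarder:
          days-to-keep: 365          # keep roughly one year of history
          artifact-days-to-keep: 90  # artifacts can go sooner
```

The discard policy is applied as builds complete, so lowering the numbers on an existing job takes effect progressively.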
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
using Java/1.8.0_262.
So this could be either gerrit, gerrit stage, jenkins, or one of the several
jenkins plugins we have.
Jenkins is updated, but gerrit is not, and is being sunset.
If anything breaks related to github auth and our infra after 16h UTC today,
please let us know.
--
Michael
On Wednesday, 30 September 2020 at 20:34 +0200, Michael Scherer wrote:
> Hi,
>
> Upstream openssh deprecated SHA-1 by default on the client and
> server.
> And so Fedora 33 picked the change, resulting in failure on gerrit.
>
> If, after an upgrade to F33 or Rawhide (or an
review.gluster.org
PubkeyAcceptedKeyTypes=+ssh-rsa
Since gerrit is going to be sunset once we move to github, and fixing
this would require a gerrit upgrade (and it was decided to not spend
time on it), there is not much to do on the infra side.
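For reference, the quoted workaround normally lives in the client's ~/.ssh/config; a sketch of the full stanza, assuming the standard OpenSSH Host-block syntax (the mail is truncated just before those two lines, so the `Host` keyword is an assumption):

```
# Client-side workaround: re-enable ssh-rsa (SHA-1) signatures for this
# host only. "Host" is assumed; the quoted mail shows only the two lines.
Host review.gluster.org
    PubkeyAcceptedKeyTypes=+ssh-rsa
```

On newer OpenSSH releases the option is spelled PubkeyAcceptedAlgorithms; the older name is kept as an alias.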
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community
t of the
specific way that's broken. I discussed with them, and it seems to be
blocked on some legal discussions. Pandemic and people moving around
on both sides likely didn't help.
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
etplace.
I would love to say that the official images are out there, but I
checked this morning, and that's still not the case.
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
and then, where missing, but so far, it doesn't seem needed.
It also means a new llvm, with a few more warnings, but again, we already
test that.
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
ructure
> team'
> message).
>
> Ref: https://github.com/gluster/gluster-kubernetes/issues/644 (and a
> personal email to concerned people).
Done:
https://github.com/gluster/gluster-kubernetes/issues/644#issuecomment-654816868
I didn't push for a readme since we usually don't.
--
that work at
all, since I didn't find anything related in more than 2 years, due
to EC2 no longer using xen, with people recommending GCP instead (not an
option for us for now).
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
other way (like, do everything using groovy, which seems to be how
people do config as code with jenkins).
> On Tue, May 26, 2020 at 15:04 Michael Scherer
> wrote:
>
> > On Tuesday, 26 May 2020 at 12:50 +0200, Michael Scherer wrote:
> > > On Tuesday, 26 May 2020 at 12:33 +020
On Tuesday, 26 May 2020 at 12:50 +0200, Michael Scherer wrote:
> On Tuesday, 26 May 2020 at 12:33 +0200, Michael Scherer wrote:
> > Hi,
> >
> > while working on the jenkins automation (mostly adding new host
> > with
> > CLI), I did a jenkins-c
On Tuesday, 26 May 2020 at 12:33 +0200, Michael Scherer wrote:
> Hi,
>
> while working on the jenkins automation (mostly adding new host with
> CLI), I did a jenkins-cli reload-configuration. Turns out that this,
> contrary to what I expected, also seems to have blocked
that to be so long.
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
to logs.aws.gluster, please tell me.
I will likely ask the following info:
host logs.aws.gluster.org
host -t NS aws.gluster.org
host -t SOA aws.gluster.org
if the server is not responding and you need it, the IP is
18.219.45.211
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
, download).
All is back in order now, but I suspect the jenkins jobs failed since
the main server lost internet.
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
ve people if their email starts to bounce
or something, but I am not 100% sure.
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
> +1 646 558 8656 US (New York)
> Meeting ID: 910 385 371
> Find your local number: https://zoom.us/u/a9X9UtuHA
On Tuesday, 21 January 2020 at 19:23 +0530, Sankarshan Mukhopadhyay
wrote:
> On Tue, 21 Jan 2020 at 17:56, Michael Scherer
> wrote:
> >
> > On Tuesday, 21 January 2020 at 08:34 +0530, Sankarshan Mukhopadhyay
> > wrote:
> > > I was attempting to help someone
have been triggered and that would create the same kind of
error; they are supposed to display a more meaningful message though.
--
Michael Scherer / He/Il/Er/Él
Sysadmin, Community Infrastructure
r vacation, and also for Flock.
In case of emergency, do as usual, don't panic.
--
Michael Scherer
Sysadmin, Community Infrastructure
by nginx so slow to propagate)
See https://twitter.com/readthedocs/status/1156337277640908801 for the
initial report.
This is kinda out of the control of the gluster infra team, but do not
hesitate to send support, love or money to the volunteers of RTD.
We are monitoring the issue.
--
Michael
g platform, a dozen or so
jobs seem to have disappeared. The Jenkins agent seems to break when a
minor version of the rpm is installed (like on the main jenkins
server...); the issue should now be mitigated as well (ansible sends
a signal, and lets jenkins reconnect to the builder).
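The mitigation described (have ansible nudge the agent so that it reconnects) could look roughly like this as a small play; the host group and the service name are invented for the sketch:

```yaml
# Hypothetical Ansible play: restart the agent service on all builders
# so each one reconnects to the Jenkins server. Names are illustrative.
- hosts: builders
  become: true
  tasks:
    - name: Restart the jenkins agent so it reconnects
      ansible.builtin.systemd:
        name: jenkins-agent
        state: restarted
```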
--
Michael Schere
that even if this was old, some might be useful for archives or
something.
Sorry for not having looked earlier, it seems we might need to
redistribute the responsibility regarding this task since a few people
who were doing it likely moved on from the project.
--
Michael Scherer
Sysadmin, Community
job to F30
- wait 2 weeks, and switch fedora-smoke and python-compliance to F30. This will
force someone to fix the problem.
- drop the non-fixed container jobs, unless someone fixes them, in 1 month.
--
Michael Scherer
Sysadmin, Community Infrastructure
Chekhov's gun principle.
[2] yes, that's not much for a lucky perspective. But I did manage to
sleep around 16h after taking the plane last week, it took me a while
to adjust.
--
Michael Scherer
Sysadmin, Community Infrastructure
On Friday, 14 June 2019 at 10:10 +0200, Michael Scherer wrote:
> Hi,
>
> there is an ongoing issue regarding review.gluster.org, with some
> people
> being directed to the wrong server.
>
> A quick fix is to add:
> 8.43.85.171 review.gluster.org
>
people (like, it works for me
and still works for me), hence why it wasn't noticed while I tested;
apologies for that.
--
Michael Scherer
Sysadmin, Community Infrastructure
ss Deepshika will have to look.
> On Wed, Apr 24, 2019 at 5:30 PM Yaniv Kaul wrote:
>
> >
> >
> > On Tue, Apr 23, 2019 at 5:15 PM Michael Scherer <
> > msche...@redhat.com>
> > wrote:
> >
> > > Le lundi 22 avril 2019 à 22:57 +0530, At
On Monday, 29 April 2019 at 10:17 +0200, Michael Scherer wrote:
> On Saturday, 27 April 2019 at 22:18 +0300, Yaniv Kaul wrote:
> > I'd like to see what is our status.
> > Just had CI failures[1] because builder26.int.rht.gluster.org is
> > not
> > available, apparently.
&
ide :/
I disconnected/reconnected the builder, which should fix this one,
but we definitely need to dig a bit more to see what happened and how
to prevent it.
Adding supervision of the agent should be quick (*cough* famous last
words *cough*), so let's do that as a first step.
--
Michae
more on what would
be causing some failure.
> On Wed, 3 Apr 2019 at 19:26, Michael Scherer
> wrote:
>
> > On Wednesday, 3 April 2019 at 16:30 +0530, Atin Mukherjee wrote:
> > > On Wed, Apr 3, 2019 at 11:56 AM Jiffin Thottan <
> > > jt
impacted person.
--
Michael Scherer
Sysadmin, Community Infrastructure
On Friday, 5 April 2019 at 16:55 +0530, Nithya Balachandran wrote:
> On Fri, 5 Apr 2019 at 12:16, Michael Scherer
> wrote:
>
> > On Thursday, 4 April 2019 at 18:24 +0200, Michael Scherer wrote:
> > > On Thursday, 4 April 2019 at 19:10 +0300, Yaniv Kaul wrote:
&
On Thursday, 4 April 2019 at 18:24 +0200, Michael Scherer wrote:
> On Thursday, 4 April 2019 at 19:10 +0300, Yaniv Kaul wrote:
> > I'm not convinced this is solved. Just had what I believe is a
> > similar
> > failure:
> >
> > *00:12:02.532* A dependency job
hanks misc. I have always seen a pattern that on a reattempt
> > (recheck
> > centos) the same builder is picked up many times even though it's
> > promised
> > to pick up the builders in a round robin manner.
> >
> > On Thu, Apr 4, 2019 at 7:24 PM Michael Scherer >
On Thursday, 4 April 2019 at 15:19 +0200, Michael Scherer wrote:
> On Thursday, 4 April 2019 at 13:53 +0200, Michael Scherer wrote:
> > On Thursday, 4 April 2019 at 16:13 +0530, Atin Mukherjee wrote:
> > > Based on what I have seen that any multi node t
On Thursday, 4 April 2019 at 13:53 +0200, Michael Scherer wrote:
> On Thursday, 4 April 2019 at 16:13 +0530, Atin Mukherjee wrote:
> > Based on what I have seen that any multi node test case will fail
> > and
> > the
> > above one is picked first from that gro
)
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
stem upgrade, or a test change, or both).
So we are still looking at it to have a complete understanding of the
issue, but so far, we hacked our way to make it work (or so I
think).
Deepshika is working to fix it long term, by fixing the issue regarding
eth0/ens5 with a new base im
On Wednesday, 3 April 2019 at 15:12 +0300, Yaniv Kaul wrote:
> On Wed, Apr 3, 2019 at 2:53 PM Michael Scherer
> wrote:
>
> > On Wednesday, 3 April 2019 at 16:30 +0530, Atin Mukherjee wrote:
> > > On Wed, Apr 3, 2019 at 11:56 AM Jiffin Thottan <
> > &
failures that happen after
reboot (resulting in partial network bring-up, causing all kinds of
weird issues), but it takes some time to verify, and since we lost 33%
of the team with Nigel's departure, stuff does not move as fast as before.
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform
technical questions on gluster-users list:
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
ide the
lan, which means I am doing a few tests (the old one is untouched).
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
has an issue.
> On Thu, Mar 7, 2019 at 8:09 PM Michael Scherer
> wrote:
>
> > On Thursday, 7 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan
> > wrote:
> > > And it is happening with 'failed to determine' the job...
> > > anything
> > > different in j
node offline for now, the others should pick up the
work
> On Thu, Mar 7, 2019 at 8:09 PM Michael Scherer
> wrote:
>
> > On Thursday, 7 March 2019 at 18:47 +0530, Amar Tumballi Suryanarayan
> > wrote:
> > > And it is happening with 'failed to determine' the job...
t you are speaking of?
(as I do not see the exact string you pointed at, I am not sure that's
the issue)
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
On Wednesday, 6 March 2019 at 21:31 +0530, Sankarshan Mukhopadhyay
wrote:
> On Wed, Mar 6, 2019 at 8:47 PM Michael Scherer
> wrote:
> >
> > On Wednesday, 6 March 2019 at 17:53 +0530, Sankarshan Mukhopadhyay
> > wrote:
> > > On Wed, Mar 6, 2019 at 5:38 PM D
mit the number of processes" would surely
sooner or later block legitimate tests, and requires adjustment (and
likely investigation), so
we chose not to follow that road for now.
> > Please let us know if you see any such issues again.
> >
> > [1] https://review.glust
the data
- start process on new server
- change DNS
- set some redirection for the port
- hope for the best
I will send a new email when things are back, as a test.
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
On Wednesday, 12 December 2018 at 15:52 +0530, Vijay Bellur wrote:
> On Wed, Dec 12, 2018 at 2:20 PM Michael Scherer
> wrote:
>
> > Hi,
> >
> > I just found out that we suffered an outage yesterday night from 22h
> > UTC
> > to 23h40 (I was out on PTO a
of networking loop. I am waiting for a
detailed report from IT to post more information, but the situation is
stable.
If you see any jobs that failed during that time period, that's why.
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
if that did happen, it shouldn't have affected
many people.
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
, based on nftables. I will
continue to harden the rules in the coming weeks. It should be as
seamless as possible, but if anything breaks on a *.int.rht.gluster.org
server (mostly builders), please tell the infra team.
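Purely as an illustration of the kind of ruleset involved (the mail does not show the real rules), a minimal nftables input policy for a builder might read:

```
# Hypothetical nftables sketch; table, ports and services are made up.
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept  # keep existing connections
        iif "lo" accept                      # loopback traffic
        tcp dport 22 accept                  # ssh (admins, jenkins agents)
        icmp type echo-request accept        # ping, for monitoring
    }
}
```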
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
On Tuesday, 6 November 2018 at 15:30 +0100, Michael Scherer wrote:
> Hi,
>
> I just got paged for an issue regarding gerrit and jenkins, so I am
> looking at it. I do not have much info now, but just letting people
> know that we are on it.
Network is back, I am discussing with o
Hi,
I just got paged for an issue regarding gerrit and jenkins, so I am
looking at it. I do not have much info now, but just letting people
know that we are on it.
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
, and switch the builder to non-voting, unless I manage to find the
time to install a new builder.
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
- get ipv6 working
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
On Tuesday, 23 October 2018 at 12:56 +0200, Michael Scherer wrote:
> On Monday, 22 October 2018 at 17:14 +0200, Michael Scherer wrote:
> > Hi,
> >
> > so as discussed on gluster-infra, we need to move out of rackspace,
> > our
> > hoster for some services.
> &g
On Monday, 22 October 2018 at 17:14 +0200, Michael Scherer wrote:
> Hi,
>
> so as discussed on gluster-infra, we need to move out of rackspace,
> our
> hoster for some services.
>
> To prepare the future migration, we are gonna change the DNS, and
> point
>
remove the
record at the same time.
I plan to do that around 12h UTC tomorrow, and it should take a few
hours to propagate.
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
On Thursday, 18 October 2018 at 15:08 +0200, Michael Scherer wrote:
> On Tuesday, 16 October 2018 at 17:38 +0200, Michael Scherer wrote:
> > Hi,
> >
> > so I added another VM to run bugzilla job, this time inside the
> > lan.
> > If
> > you see any issue rela
On Tuesday, 16 October 2018 at 17:38 +0200, Michael Scherer wrote:
> Hi,
>
> so I added another VM to run bugzilla job, this time inside the lan.
> If
> you see any issue related to bugzilla job executed by jenkins, please
> tell us, I will disable the existing node now (and rev
e official builder
- forget the one in the cloud
- install 2 new fresh VMs on freebsd 11.X
- do a job for testing the build there
- switch to the 2 new VMs
- remove the older one
Thoughts on the plan?
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
Hi,
so I added another VM to run the bugzilla job, this time inside the lan. If
you see any issue related to the bugzilla job executed by jenkins, please
tell us; I will disable the existing node now (and revert if any issue
happens).
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform
On Wednesday, 5 September 2018 at 12:56 +0200, Michael Scherer wrote:
> On Tuesday, 4 September 2018 at 17:50 +0200, Michael Scherer wrote:
> > On Thursday, 23 August 2018 at 17:52 +0200, Michael Scherer wrote:
> > > On Thursday, 15 March 2018 at 15:35 +0100, Michael Scherer wrote:
On Monday, 15 October 2018 at 14:54 +0200, Michael Scherer wrote:
> On Monday, 15 October 2018 at 17:36 +0530, Nigel Babu wrote:
> > I think it might be worth pulling out some utilization numbers to
> > see
> > how
> > many to pull.
>
> So, I think the firs
the freebsd builder working, that would
> eliminate the need to run it on rackspace and having two of them
> would
> increase the speed at which we process the smoke queue.
The freebsd builder is working:
https://build.gluster.org/job/freebsd-non-voting-smoke/
Also
On Mon, Oct 15, 2018 at 5:
On Monday, 15 October 2018 at 15:29 +0530, Sankarshan Mukhopadhyay
wrote:
> On Mon, Oct 15, 2018 at 3:19 PM Michael Scherer
> wrote:
>
> > so we currently have 50 builders in the cage, and I think that's
> > too
> > many. While that's not a huge issue, having too muc
y but are in NEW, perhaps they
> should be in ASSIGNED.
Since we are not participating in the usual bug triage process (afaik),
I think the right solution would rather be to exclude infra bugs from the
list/check.
This is not the first problem we have due to the sharing of the product
field in bugzi
On Saturday, 15 September 2018 at 00:23 +0200, Michael Scherer wrote:
> Hi,
>
> so today, I moved (without too much problem) our external munin to
> the
> internal lan, which helps us get rid of one more VM on rackspace.
>
> This bring:
> - monitoring of the i
deployment
So there are still some servers to integrate, and some others to remove, but
so far, it went well.
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
On Wednesday, 12 September 2018 at 12:35 +0200, Michael Scherer wrote:
> On Tuesday, 11 September 2018 at 19:54 +0530, Nigel Babu wrote:
> > On Tue, Sep 11, 2018 at 7:06 PM Michael Scherer
> > wrote:
> >
> > > And... rescue mode is not working. So the server is
On Tuesday, 11 September 2018 at 19:54 +0530, Nigel Babu wrote:
> On Tue, Sep 11, 2018 at 7:06 PM Michael Scherer
> wrote:
>
> > And... rescue mode is not working. So the server is down until
> > Rackspace fix it.
> >
> > Can someone disable the freebsd smoke tes
On Tuesday, 11 September 2018 at 15:13 +0200, Michael Scherer wrote:
> On Tuesday, 11 September 2018 at 13:55 +0200, Michael Scherer wrote:
> > Hi,
> >
> > so it seems our working builder (the one on rackspace) is still
> > running
> > an EOL version of Freebsd. I a
On Tuesday, 11 September 2018 at 13:55 +0200, Michael Scherer wrote:
> Hi,
>
> so it seems our working builder (the one on rackspace) is still
> running
> an EOL version of Freebsd. I am about to upgrade it to 10.4, so we can
> expect a reboot.
>
> Then we will
.
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
amount of
redundancy we need and could have there).
Also, I think I did manage to find a way to do the switch without
downtime, so things should be smoother than I first expected.
> - amye
>
> On Fri, Sep 7, 2018 at 10:11 AM Michael Scherer
> wrote:
>
> > Hi,
>
). Since this depends on DNS propagation, 5 minutes of
turmoil are to be expected until we can get the new LE certificate and
get a working deployment.
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
digicert.com/enterprise/order-details.php?order_id=03361710
> 27 Aug 2018 www.gluster.org Active 2 Years Multi-Domain SSL 16 Oct 2020
> So, question is do we really need to keep these DigiCert certificates
> if
> they are not in use?
>
> Thank you.
On Tuesday, 4 September 2018 at 17:50 +0200, Michael Scherer wrote:
> On Thursday, 23 August 2018 at 17:52 +0200, Michael Scherer wrote:
> > On Thursday, 15 March 2018 at 15:35 +0100, Michael Scherer wrote:
> > > Hi,
> > >
> > > So now we have a
On Thursday, 23 August 2018 at 17:52 +0200, Michael Scherer wrote:
> On Thursday, 15 March 2018 at 15:35 +0100, Michael Scherer wrote:
> > Hi,
> >
> > So now we have a new proxy (yes, I am almost as proud of it as the
> > firewall), I need to move the old service on the old p
resource usage
- prepare a template for post mortems
--
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
On Thursday, 15 March 2018 at 15:35 +0100, Michael Scherer wrote:
> Hi,
>
> So now we have a new proxy (yes, I am almost as proud of it as the
> firewall), I need to move the old service on the old proxy to the new
> one. It will imply some time of unavailability, because DN