ve an efix later today and try it on the 21.3 kernel.
Venlig hilsen / Best Regards
Andi Rhod Christiansen
-Oprindelig meddelelse-
From: gpfsug-discuss-boun...@spectrumscale.org
On behalf of Jonathan Buzzard
Sent: Wednesday, June 19, 2019 12:23 PM
To: gpfsug main discussion list
Subject:
Hi Simon,
It was actually also the only solution I found if I want to keep them within
the same cluster 😊
Thanks for the reply, I will see what we figure out!
Venlig hilsen / Best Regards
Andi Rhod Christiansen
From: gpfsug-discuss-boun...@spectrumscale.org
On behalf of Simon Thompson
Hi Andrew,
Where can I request such a feature? 😊
Venlig hilsen / Best Regards
Andi Rhod Christiansen
From: gpfsug-discuss-boun...@spectrumscale.org
On behalf of Andrew Beattie
Sent: 9 January 2019 12:17
To: gpfsug-discuss@spectrumscale.org
Cc: gpfsug-discuss@spectrumscale.org
Subject: Re
Hi,
I seem to be unable to find any information on separating protocol services onto
specific CES nodes within a cluster. Does anyone know if it is possible to
take, let's say, 4 of the CES nodes within a cluster, divide them into two
groups, and have two of them running SMB and the other two running O
or in exception handling leads to DoS (CVE-2018-8897)
Kernel: ipsec: xfrm: use-after-free leading to potential privilege escalation
(CVE-2017-16939)
kernel: Out-of-bounds write via userland offsets in ebt_entry struct in
netfilter/ebtables.c (CVE-2018-1068)
...
On Mon, 14 May 2018, Andi Rhod Chr
Hi,
Yes, kernel 3.10.0-862.2.3.el7 is not supported yet, as it is RHEL 7.5 and the
latest supported release is 7.4. You have to revert to 3.10.0-693 😊
I just had the same issue
Revert to the previous working kernel from the RHEL 7.4 release, which is 3.10.0-693.
Make sure kernel-headers and kernel-devel are als
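The version comparison above can be scripted. A minimal sketch, assuming the highest supported kernel level is the 3.10.0-693 mentioned in the thread (adjust this for the Scale release actually installed):

```shell
# Hedged sketch, not an official IBM check: compares the running kernel
# against an assumed highest-supported level using GNU sort -V.
kernel_supported() {
  # $1 = running kernel, $2 = highest supported kernel level.
  # sort -V orders version strings; if the supported level sorts last,
  # the running kernel is not newer than it.
  [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n 1)" = "$2" ]
}

if kernel_supported "$(uname -r)" "3.10.0-693"; then
  echo "running kernel is within the supported range"
else
  echo "running kernel is newer than 3.10.0-693 - revert before building GPFS modules"
fi
```

For example, `kernel_supported "3.10.0-862.2.3.el7" "3.10.0-693"` fails, matching the RHEL 7.5 case reported above, while a 7.4 kernel passes.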
Hi Simon,
I will do that before I go to the customer with a separate switch as a last
resort :) Thanks
Venlig hilsen / Best Regards
Andi Rhod Christiansen
From: gpfsug-discuss-boun...@spectrumscale.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Simon Thompson
(IT Research
10:54
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Changing ip on spectrum scale cluster with every
node down and not connected to network.
On Wed, 2017-10-11 at 08:18 +, Andi Rhod Christiansen wrote:
> Hi Jonathan,
>
> Yes I thought about that but the system i
e.org
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Jonathan Buzzard
Sent: 11 October 2017 10:02
To: gpfsug-discuss@spectrumscale.org
Subject: Re: [gpfsug-discuss] Changing ip on spectrum scale cluster with every
node down and not connected to network.
On 11/10/17 08:46, Andi Rhod Christi
Hi,
Does anyone know how to change the IPs on all the nodes within a cluster when
GPFS and the interfaces are down?
Right now the cluster has been shut down and all ports disconnected (the ports
have been shut down on the new switch).
The problem is that when I try to execute any mmchnode command (as the IBM
d
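For context, per-node address changes are normally made with mmchnode. A hedged sketch (the node and interface names are placeholders, and this assumes the documented `--daemon-interface`/`--admin-interface` options):

```shell
# Hedged sketch, not a tested procedure. mmchnode needs the cluster
# configuration servers reachable, so with every node down and
# disconnected these commands fail - which is the problem described above.
mmchnode --daemon-interface=node1-new.example.com -N node1
mmchnode --admin-interface=node1-new.example.com -N node1
```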