Re: [ClusterLabs] PCS security vulnerability

2024-06-12 Thread Ondrej Mular
Hello Sathish,

The CVEs you mentioned (CVE-2024-25126, CVE-2024-26141,
CVE-2024-26146) were filed against the rack rubygem and not PCS
itself. Therefore, the PCS upstream project is not directly impacted
by these CVEs and doesn't require a change.

However, PCS does depend on the rack rubygem at runtime. So, if you're
using PCS from the upstream source, make sure your rubygems are up to
date so that you're not running a vulnerable version of rack.

The advisory you linked (RHSA-2024:3431) addresses these CVEs in the
PCS package for RHEL 8.6. This is because the PCS package shipped with
RHEL includes some bundled rubygems, including rack. Upgrading the
rack rubygem and rebuilding the PCS package were necessary to resolve
the CVEs in that specific scenario.
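As a rough way to check whether a bundled rack needs updating, you can compare
its version against the patched release. This is only a sketch: the installed
version below is an illustrative placeholder (in practice you would read it from
`gem list '^rack$'` in pcsd's Ruby environment), and 2.2.8.1 is the fixed 2.2.x
release named in the rack advisories for these CVEs (3.0.x users need 3.0.9.1).

```shell
# Illustrative values; read the real one from `gem list '^rack$'`.
installed="2.2.6.4"
patched="2.2.8.1"

# sort -V orders versions numerically; if the patched version is not the
# oldest of the pair, the installed rack predates the fix.
oldest=$(printf '%s\n' "$patched" "$installed" | sort -V | head -n1)
if [ "$oldest" != "$patched" ]; then
  echo "rack $installed predates $patched - update the rack rubygem"
fi
```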

Regards,
Ondrej

On Tue, 11 Jun 2024 at 15:18, S Sathish S  wrote:
>
> Hi Tomas/Team,
>
>
>
> In our application we are using the pcs-0.10.16 version, and vulnerabilities 
> (CVE-2024-25126, CVE-2024-26141, CVE-2024-26146) have been reported against 
> it and fixed in the RHSA erratum below. Can you check and provide a fix in 
> the latest upstream PCS 0.10.x version as well?
>
>
>
> https://access.redhat.com/errata/RHSA-2024:3431
>
>
>
> Thanks and Regards,
> S Sathish S

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [ClusterLabs] cluster okey but errors when tried to move resource - ?

2023-06-09 Thread Ondrej Mular
To me, this looks like an issue in `crm_resource`, as that's where the
error message comes from. pcs actually uses `crm_resource --move` when
moving resources; in this case, pcs should call `crm_resource --move
REDIS-clone --node podnode3 --master`, which you can see if you run pcs
with the `--debug` option. My guess is that `crm_resource --move
--master` creates a location constraint with `role="Promoted"` without
taking the currently used schema into account. However, I'm unable to
test this theory as I don't have a testing environment available at the
moment.
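One way to see whether a cluster is affected is to check the CIB's
`validate-with` schema version: `role="Promoted"` in location constraints
validates only against schema pacemaker-3.7 and newer (shipped with Pacemaker
>= 2.1.0). A minimal sketch, with an illustrative CIB header line standing in
for the output of `cibadmin --query` on a live cluster:

```shell
# Illustrative CIB header; on a real cluster, take it from `cibadmin --query`.
cib_header='<cib validate-with="pacemaker-3.6" epoch="8212" num_updates="0">'

# Extract the schema version number from the validate-with attribute.
schema=$(printf '%s' "$cib_header" | sed -n 's/.*validate-with="pacemaker-\([0-9.]*\)".*/\1/p')

# If 3.7 is not the oldest of the pair, the CIB schema predates
# role="Promoted" support in location constraints.
oldest=$(printf '%s\n' 3.7 "$schema" | sort -V | head -n1)
if [ "$oldest" != "3.7" ]; then
  echo "schema pacemaker-$schema rejects role=\"Promoted\"; try: cibadmin --upgrade --force"
fi
```

On an affected cluster, `cibadmin --upgrade --force` should bump the schema to
the newest version the installed Pacemaker supports, after which the constraint
should validate.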

Ondrej

On Fri, 9 Jun 2023 at 01:39, Reid Wahl  wrote:
>
> On Thu, Jun 8, 2023 at 2:24 PM lejeczek via Users  
> wrote:
> >
> >
> >
> > > Ouch.
> > >
> > > Let's see the full output of the move command, with the whole CIB that
> > > failed to validate.
> > >
> > For a while there I thought perhaps it was just that one
> > pgsql resource, but it seems that any resource - though only a
> > few are set up - (only promoted clones?) fails to move.
> > Perhaps it is primarily to do with 'pcs'.
> >
> > -> $ pcs resource move REDIS-clone --promoted podnode3
> > Error: cannot move resource 'REDIS-clone'
> > 1 <cib validate-with="pacemaker-3.6" epoch="8212" num_updates="0"
> > admin_epoch="0" cib-last-written="Thu Jun  8 21:59:53 2023"
> > update-origin="podnode1" update-client="crm_attribute"
> > have-quorum="1" update-user="root" dc-uuid="1">
>
> This is the problem: `validate-with="pacemaker-3.6"`. That old schema
> doesn't support role="Promoted" in a location constraint. Support
> begins with version 3.7 of the schema:
> https://github.com/ClusterLabs/pacemaker/commit/e7f1424df49ac41b2d38b72af5ff9ad5121432d2.
>
> You'll need at least Pacemaker 2.1.0.
>
> > 2   <configuration>
> > 3     <crm_config>
> > 4       <cluster_property_set id="cib-bootstrap-options">
> > 5         <nvpair id="cib-bootstrap-options-have-watchdog"
> > name="have-watchdog" value="false"/>
> > 6         <nvpair id="cib-bootstrap-options-dc-version"
> > name="dc-version" value="2.1.6-2.el9-6fdc9deea29"/>
> > 7         <nvpair id="cib-bootstrap-options-cluster-infrastructure"
> > name="cluster-infrastructure" value="corosync"/>
> > 
> > crm_resource: Error performing operation: Invalid configuration
> >
>
>
>
> --
> Regards,
>
> Reid Wahl (He/Him)
> Senior Software Engineer, Red Hat
> RHEL High Availability - Pacemaker
>



Re: [ClusterLabs] VirtualDomain - unable to migrate

2022-01-05 Thread Ondrej Mular
On Tue, 4 Jan 2022 at 16:20, Ken Gaillot  wrote:
>
> On Wed, 2021-12-29 at 17:28 +, lejeczek via Users wrote:
> > Hi guys
> >
> > I'm having problems with cluster on new CentOS Stream 9 and
> > I'd be glad if you can share your thoughts.
> >
> > -> $ pcs resource move c8kubermaster2 swir
> > Location constraint to move resource 'c8kubermaster2' has
> > been created
> > Waiting for the cluster to apply configuration changes...
> > Location constraint created to move resource
> > 'c8kubermaster2' has been removed
> > Waiting for the cluster to apply configuration changes...
> > Error: resource 'c8kubermaster2' is running on node 'whale'
> > Error: Errors have occurred, therefore pcs is unable to continue
>
> pcs on CS9 moves the resource as normal, then runs a simulation to see
> what would happen if it removed the move-related constraint. If nothing
> would change (e.g. stickiness will keep the resource where it is), then
> it goes ahead and removes the constraint.
>
> I'm not sure what the error messages mean for that new approach. The
> pcs devs may be able to chime in.
I believe this is caused by a bug in the new implementation of the `pcs
resource move` command. We have a fix for this almost ready, though some
more testing is still needed. As you've already filed a bugzilla [0],
let's track it there.

And thanks for bringing this up - it helped us uncover a weird edge case
in the new implementation.

[0]: https://bugzilla.redhat.com/show_bug.cgi?id=2037218
>
> As a workaround, you can use crm_resource --move, which always leaves
> the constraint there.
The old move command is also still available as `pcs resource move-with-constraint`.
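To illustrate the two flavours side by side: plain `move` now removes its
temporary constraint automatically when stickiness would keep the resource in
place, while `move-with-constraint` keeps the constraint like the old behaviour
(and like `crm_resource --move`), so it needs an explicit cleanup. A sketch
(the stub function below just echoes the commands so the sequence is runnable
outside a cluster; drop it on a real node):

```shell
# Stub for illustration only - remove on a real cluster node.
pcs() { echo "would run: pcs $*"; }

pcs resource move c8kubermaster2 swir                   # new behaviour: temporary constraint, auto-removed
pcs resource move-with-constraint c8kubermaster2 swir   # old behaviour: constraint stays in the CIB
pcs resource clear c8kubermaster2                       # remove the leftover constraint afterwards
```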
>
> > Not much in pacemaker logs, perhaps nothing at all.
> > VM does migrate with 'virsh' just fine.
> >
> > -> $ pcs resource config c8kubermaster1
> >   Resource: c8kubermaster1 (class=ocf provider=heartbeat
> > type=VirtualDomain)
> >Attributes:
> > config=/var/lib/pacemaker/conf.d/c8kubermaster1.xml
> > hypervisor=qemu:///system migration_transport=ssh
> >Meta Attrs: allow-migrate=true failure-timeout=30s
> >Operations: migrate_from interval=0s timeout=90s
> > (c8kubermaster1-migrate_from-interval-0s)
> >migrate_to interval=0s timeout=90s
> > (c8kubermaster1-migrate_to-interval-0s)
> >monitor interval=30s
> > (c8kubermaster1-monitor-interval-30s)
> >start interval=0s timeout=60s
> > (c8kubermaster1-start-interval-0s)
> >stop interval=0s timeout=60s
> > (c8kubermaster1-stop-interval-0s)
> >
> > Any and all suggestions & thoughts are much appreciated.
> > many thanks, L
>
> --
> Ken Gaillot 
>
>
