https://bugs.kde.org/show_bug.cgi?id=488874
Bug ID: 488874
Summary: Akonadi kontact module and address group management
issues
Classification: Applications
Product: kontact
Version: 5.24.5
Platform: openSUSE
https://bugs.kde.org/show_bug.cgi?id=487003
Bug ID: 487003
Summary: kontact crashes if I put a "contact group" in the
participant dialog
Classification: Applications
Product: kontact
Version: 5.22.3
Platform: openSUSE
Anyone else got affected? Is it something undesirable?
>
> TIA
I also have to note that ffmpeg6 isn't built with the drawtext plugin
Diego
--
Ing. Diego Ercolani
S.S.I.S. s.p.a.
T. 0549-875910
___
Packman mailing list
Packman@links2linux.de
https://l
https://bugs.kde.org/show_bug.cgi?id=472399
Bug ID: 472399
Summary: Inject events from ical file to google calendar
Classification: Frameworks and Libraries
Product: Akonadi
Version: 5.22.3
Platform: openSUSE
OS: Linux
https://bugs.kde.org/show_bug.cgi?id=471289
Bug ID: 471289
Summary: automount/x-systemd-automount nfs resource changes
inode on every mount and so reindexes is triggered
Classification: Frameworks and Libraries
Product: frameworks-baloo
Probably you need to wait for a rebuild trigger, and then the new symbols will be
imported automatically. Please see https://openbuildservice.org/, the build
system used by openSUSE and Packman.
On 18 May 2023, at 07:03, Steven Swart
wrote:
>Good
New event:
Mar 28 14:37:32 ovirt-node3.ovirt vdsm[4288]: WARN executor state: count=5
workers={, , , , at 0x7fcdc0010898> timeout=7.5, duration=7.50 at
0x7fcdc0010208> discarded task#=189 at 0x7fcdc0010390>}
Mar 28 14:37:32 ovirt-node3.ovirt sanlock[1662]: 2023-03-28 14:37:32 829
[7438]: s4
It worked.
I halted a node of the gluster cluster (which seemed problematic from the
gluster point of view), and the change of the master storage domain worked.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to
It's difficult to answer as the engine normally "freezes" or is taken down
during events... I will try to get them next time
I don't know why (though I suppose it's related to storage speed), but the
virtual machines tend to present a clock skew ranging from a few days up to a
century forward (2177).
I see in the journal of the engine:
Mar 28 13:19:40 ovirt-engine.ovirt NetworkManager[1158]:
[1680009580.2045] dhcp4 (eth0): state
No, now it seems "stable"; awaiting the next event.
In the current release of oVirt (4.5.4) I'm experiencing a failure when
changing the master storage domain from a gluster volume to anywhere else.
The GUI reports a "general" error.
Watching the engine log:
2023-03-28 11:51:16,601Z WARN
I record entries like this in the journal of every node:
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247
[4105511]: s9 delta_renew read timeout 10 sec offset 0
/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/dom_md/ids
Mar 28
The scheduling policy was "Suspend Workload if needed", with parallel
migration disabled.
The problem is that the Engine (mapped to an external NFS domain implemented by a
Linux box with no other VM mapped) simply disappears. I have a single 10Gbps
Intel Ethernet link that I use to distribute
Hello,
in my installation I have to use poor storage... the oVirt installation doesn't
handle such a case and begins to "balance" and move VMs around, taking too
many snapshots; under poor performance the whole cluster gets messed up.
Why don't the VMs go into the "Pause" state, but instead the cluster prefers
Hello,
I noticed that when you have poor "storage" performance, all the VMs are
affected, with entries like the one in the subject.
Searching around, there is a Red Hat case:
https://access.redhat.com/solutions/5427
that suggests addressing the issue (if it's not possible to have rocket
Finally, it seems the problem was in the external NFS server: rpc.gssd failed
and the NFS service became unresponsive, so the hosted-engine
configuration domain wasn't reachable.
Hello,
ovirt-release-host-node-4.5.4-1.el8.x86_64
Today I found my cluster in an inconsistent state.
I have three nodes: ovirt-node2, ovirt-node3, ovirt-node4, with the self-hosted
engine deployed using external NFS storage.
My first attempt was to launch hosted-engine --vm-status on the three nodes, and I
It happened again; there must be some glitch.
Hello, I reinstalled the hosted-engine (see
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/QOJYO43SVOMCX6NHDP2N6PF3EIXDTRLP/)
I found a "glitch" in Grafana: somehow, navigating the (brand new) Grafana
environment has put the ovirt_history_db in an inconsistent state. The symptoms
I ended the process, but I think there must be a global revision of the
architecture: it's too intricate and definitely not consumer-ready.
This is what I did:
-1. in my environment I have a Pacemaker cluster between the nodes (to overcome
Gluster I also tried to implement an HA NFS), and one node
Thank you very much.
I think the process is very overcomplicated. I successfully set up the engine by
installing a fresh engine and then restoring the backup... but then, when I
tried to register the new storage... everything went wrong.
There should be some shortcuts that permit overcoming
Thank you, I'm trying to accomplish what you reported, but I'm currently
stuck:
I launched this:
hosted-engine --deploy --4
--restore-from-file=/root/deploy_hosted_engine_230117/230117-scopeall-backup.tar.gz
ovirt-engine-appliance-4.5-20221206133948
Hello,
I have some trouble with my Gluster instance where the hosted-engine lives. I
would like to copy the data from that hosted-engine and restore it to another
hosted-engine storage (I will try NFS).
I think the main method is to put oVirt in
Finally, it worked.
After the steps previously described:
1. put cluster in global maintenance
2. stop ovirt-engine and ovirt-engine-dwhd
3. in the table dwh_history_timekeeping @enginedb I changed the dwhUuid
4. launched engine-setup; engine-setup asked to disconnect a "phantom"
DWH (I
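The numbered steps above can be collected into a minimal shell sketch. Assumptions: the engine database is the default "engine" PostgreSQL database reachable via `su - postgres`, and the UUID argument is a placeholder for the value found in 10-setup-uuid.conf; this is just the thread's steps gathered in one place, not an official procedure.

```shell
# Hedged sketch of steps 1-4 above, wrapped in a function so nothing runs
# by accident. Step 1 runs on a host; steps 2-4 run inside the engine VM.
reset_dwh_uuid() {
  new_uuid=$1   # placeholder: copy the real value from 10-setup-uuid.conf
  hosted-engine --set-maintenance --mode=global      # 1. global maintenance
  systemctl stop ovirt-engine ovirt-engine-dwhd      # 2. stop engine and dwhd
  # 3. change dwhUuid in dwh_history_timekeeping in the engine DB
  su - postgres -c "psql engine -c \"UPDATE dwh_history_timekeeping \
      SET var_value='$new_uuid' WHERE var_name='dwhUuid';\""
  engine-setup                                       # 4. re-run engine-setup
}
# Example (only on a real engine host):
# reset_dwh_uuid 00000000-0000-0000-0000-000000000000
```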
I found the reference on that file:
https://github.com/oVirt/ovirt-dwh/blob/master/docs/Notes-about-single-dwhd
Just to note that I verified the contents of the
dwh_history_timekeeping table in the engine DB, and the dwhUuid is consistent
with the one in the 10-setup-uuid.conf file.
While
All the files seem to be correctly initialized.
The only doubt is in the last directory you addressed:
/etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/
there is a file:
[root@ovirt-engine ovirt-engine-dwhd.conf.d]# ls -ltr
total 28
-rw-r--r--. 1 root root 223 Oct 5 09:17 README
-rw---. 1
Thank you for the info.
> It's not the engine that is writing there, it's dwhd. The engine only
> reads. Did you check /var/log/ovirt-engine-dwh/ ?
What is confusing me are these lines in
/var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log
lastErrorSent|2011-07-03 12:46:47.00
etlVersion|4.5.7
Sorry, some environment:
- ovirt hosted-engine (self hosted)
- [root@ovirt-engine ~]# rpm -qa | grep engine
ovirt-engine-setup-plugin-ovirt-engine-4.5.4-1.el8.noarch
ovirt-engine-extension-aaa-ldap-1.4.6-1.el8.noarch
ovirt-engine-backend-4.5.4-1.el8.noarch
Hello to all and happy new year.
My question is "simple".
I need to "reset" the ovirt_engine_history database.
I tried to use:
engine-setup --reconfigure-optional-components
after removing:
- the ovirt_engine_history database
- and setting OVESETUP_DWH_CORE/enable=bool:True to "False"
https://bugs.kde.org/show_bug.cgi?id=395950
Diego Ercolani changed:
What|Removed |Added
Status|NEEDSINFO |REPORTED
Resolution|WAITINGFORINFO
https://bugs.kde.org/show_bug.cgi?id=395950
--- Comment #3 from Diego Ercolani ---
I can confirm it happened again: I deleted the whole Akonadi database... and the
folders have been messed up again
--
You are receiving this mail because:
You are the assignee for the bug.
I saw you're involved in the resolution. If you need some information, or if I
have to raise the log verbosity, please just ask.
Thank you
I think I encountered another bug in the engine:
I needed to remove a snapshot, and while I was removing it, the guest went
down.
What happened is that the snapshot removal failed and left things "inconsistent".
I think there is an issue to address.
Here is the relevant log (the engine.log; see
https://bugs.kde.org/show_bug.cgi?id=421812
Diego Ercolani changed:
What|Removed |Added
Resolution|WAITINGFORINFO |---
Status|NEEDSINFO
I tried to do what was written in the list.
This is what I did:
[root@ovirt-node3 ~]# hosted-engine --set-maintenance --mode=global
[root@ovirt-engine ~]# systemctl list-units | grep ovirt
ovirt-engine-dwhd.service
loaded active running
(I'm always talking about the latest oVirt version, which is currently engine:
4.5.2.5-1.el8, node: ovirt-node-ng-4.5.2-0.20220810.0.)
I tried to add an iSCSI storage domain, adding a new VLAN network where the
storage is attached; this completely messed up the storage engine.
I would like
I did it following the {read,write}-perf example reported in paragraph 12.6 and
12.7
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-running_the_volume_top_command,
here are the results:
https://cloud.ssis.sm/index.php/s/9bncnNSopnFReRS
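For reference, a sketch of the commands behind those results: the volume name gv0 comes from this thread, while the block size, count, and list-cnt values are arbitrary test parameters, not the ones actually used.

```shell
# Sketch of the read-perf/write-perf measurements from the Red Hat guide
# (paragraphs 12.6/12.7), guarded so it only runs where the gluster CLI exists.
top_perf() {
  vol=$1
  gluster volume top "$vol" read-perf  bs 256 count 1024 list-cnt 10
  gluster volume top "$vol" write-perf bs 256 count 1024 list-cnt 10
}
if command -v gluster >/dev/null; then
  top_perf gv0
fi
```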
Hello, I think I've found another issue:
I have the three nodes under heavy test and, after having problems with
Gluster, I configured them to use iSCSI (without multipath for now), so I
configured via the GUI a new iSCSI data domain using a single target under a
single VLAN.
I suspect
I tried to measure IOs using gluster volume top, but its results seem very
cryptic to me (they need deep analysis and I don't have the time now).
Thank you very much for your analysis. If I understood correctly, the problem is
that the consumer SSD cache is too weak to help in times under a small number ~15 not
Parameter cluster.choose-local is set to off.
I confirm the filesystems of the bricks are all XFS, as required.
I started the farm only as a test bench for the oVirt implementation, so I
used 3 hosts based on Ryzen 5 desktop processors, each equipped with 4
DDR modules (4× 32GB) and 1 disk
During this time (Hosted-Engine hung), this appears on the host where
Hosted-Engine is supposed to be running:
2022-09-15 13:59:27,762+ WARN (Thread-10) [virt.vm]
(vmId='8486ed73-df34-4c58-bfdc-7025dec63b7f') Shutdown by QEMU Guest Agent
failed (agent probably inactive) (vm:5490)
The current set is:
[root@ovirt-node2 ~]# gluster volume get glen cluster.choose-local| awk
'/choose-local/ {print $2}'
off
[root@ovirt-node2 ~]# gluster volume get gv0 cluster.choose-local| awk
'/choose-local/ {print $2}'
off
[root@ovirt-node2 ~]# gluster volume get gv1 cluster.choose-local|
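The three near-identical checks above can be collapsed into one loop; the awk filter is the same one used in those commands, and the volume names are the ones from this cluster.

```shell
# Print the cluster.choose-local value for each volume in one pass.
# choose_local_value extracts the value column from "gluster volume get" output.
choose_local_value() {
  awk '/choose-local/ {print $2}'
}
for vol in glen gv0 gv1; do
  if command -v gluster >/dev/null; then
    printf '%s: ' "$vol"
    gluster volume get "$vol" cluster.choose-local | choose_local_value
  fi
done
```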
Sorry, I see that the editor stripped all the leading spaces that indent the
timestamp.
I retried the test, hoping to find the same error, and I found it, on node3. I
changed the code of the read routine:
cd /rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1; while sleep 0.1 ; do
date
Thank you for the analysis.
The version is the latest distributed in the oVirt/CentOS 8 distribution:
[root@ovirt-node2 ~]# rpm -qa | grep '\(glusterfs-server\|ovirt-node\)'
ovirt-node-ng-image-update-placeholder-4.5.2-1.el8.noarch
glusterfs-server-10.2-1.el8s.x86_64
Hello. I did a full backup using Veeam, but I recorded many errors in the
gluster log.
This is the log (https://cloud.ssis.sm/index.php/s/KRimf5MLXK3Ds3d). The log is
from the same node where the veeam-proxy and the backed-up VMs reside.
Both are running in the gv1 storage domain.
Note that the hours
Hello,
I don't know if it's normal but in all the nodes of the cluster (except the
one that runs the engine) I have something like:
2022-09-12 15:41:54,563+ INFO (jsonrpc/0) [api.virt] START getStats()
from=::1,57578, vmId=8486ed73-df34-4c58-bfdc-7025dec63b7f (api:48)
2022-09-12
Yes, it seems so, but I cannot record any "error" on the interface: I have 0 TX
errors and 0 RX errors. All three nodes are connected through a single
switch. I set the MTU to 9000 to help Gluster transfers, but I cannot record any
error.
In /var/log/vdsm/vdsm.log I periodically log, in all
Just to follow up: I had to run the same command on all the nodes, because on
nodes where I didn't run it, the errors continued to appear.
This should be resolved now.
Thank you
I really don't understand: I was monitoring the vdsm.log of one node (node2)
and I saw a complaint:
2022-09-06 14:08:27,105+ ERROR (check/loop) [storage.monitor] Error
checking path
/rhev/data-center/mnt/glusterSD/ovirt-node2.ovirt:_gv1/45b4f14c-8323-482f-90ab-99d8fd610018/dom_md/metadata
I don't have disk problems, as I enabled smartd and I perform a periodic test
(smartctl -t long),
but in sanlock I have some problems, and also the gluster glheal logs are not
clean:
The last event I recorded is today at 00:28 (22/09/04 22:28 GMT); this is the
time when node3 sent the mail:
Perfect! It works.
Thank you, Sandro.
The help text discourages its use:
[root@ovirt-node3 ~]# hosted-engine --clean-metadata --help
Usage: /usr/sbin/hosted-engine --clean_metadata [--force-cleanup]
[--host-id=]
Remove host's metadata from the global status database.
Available only in
Can anyone give a hint, please?
Thank you for the support; hoping to help improve the resilience of the
implementation.
The versions are the latest:
ovirt-host-4.5.0-3.el8.x86_64
ovirt-engine-4.5.2.4-1.el8.noarch
glusterfs-server-10.2-1.el8s.x86_64
Hello, I have a cluster made of 3 nodes in a "self-hosted-engine" topology.
I implemented the storage with Gluster in a 2-replica + arbiter topology.
I have two gluster volumes
glen - is the volume used by hosted-engine vm
gv0 - is the volume used by VMs
The physical disks are 4TB
So you need to migrate the self-hosted engine between clusters. I think the
only way is to back up the hosted-engine configuration and restore it in the
other cluster, via a new hosted-engine deploy in the other cluster and a restore
of the backup.
But I'm very interested in how to manipulate the hosted
engine 4.5.2.4
The issue is that in my cluster when I use the:
[root@ovirt-node3 ~]# hosted-engine --vm-status
--== Host ovirt-node3.ovirt (id: 1) status ==--
Host ID: 1
Host timestamp : 1633143
Score : 3400
Engine
This is the bug report I filed:
https://bugzilla.redhat.com/show_bug.cgi?id=2123008
There is only one daemon.log per directory.
Here is the archive of the daemon.log:
https://cloud.ssis.sm/index.php/s/y6XxgH7CcrL5AC3
I will create the bug report referencing this thread.
Thank you
I'll also add that I upgraded the engine on 2022-08-22, so I have had the latest
"stable" since then:
[root@ovirt-engine dbutils]# rpm -qi ovirt-engine-4.5.2.4-1.el8.noarch
Name: ovirt-engine
Version : 4.5.2.4
Release : 1.el8
Architecture: noarch
Install Date: Mon Aug 22 08:17:41 2022
One process that I killed was:
[root@ovirt-node4 ~]# ps axuww | grep qemu-nbd
vdsm 588156 0.0 0.0 308192 39840 ?Ssl Aug26 0:12
/usr/bin/qemu-nbd --socket
/run/vdsm/nbd/c7653559-508b-4e4a-a591-32dec3e5a29d.sock --persistent --shared=8
--export-name= --cache=none --aio=native
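A small hypothetical helper (not from the thread; `parse_socket` and `nbd_sockets` are made-up names) to see which NBD socket each leftover qemu-nbd process serves before deciding to kill it, matching the ps output quoted above.

```shell
# parse_socket prints the argument that follows --socket on each input line.
parse_socket() {
  awk '{for (i = 1; i < NF; i++) if ($i == "--socket") print $(i + 1)}'
}
# nbd_sockets lists the sockets of all running qemu-nbd processes, if any.
# The [q] trick keeps grep from matching its own command line.
nbd_sockets() {
  ps axuww | grep '[q]emu-nbd' | parse_socket
}
nbd_sockets
```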
Thanks Arik,
we have tried your solution, but with no success.
We have also gathered other info and combined it into this solution:
we deleted in the DB the rows in vm_backups and vm_disk_map related to the
hung backup. Then we tried to delete the locked snapshot; after the DB row
Thank you for your support,
I'm conscious of the difficulty of keeping everything in line. I'm currently
trying to find the correct workflow to make backups (using CBR) of VMs.
I tried both vProtect (with the current technology preview) and Veeam
(community, using the RHV plugin), and I'm currently experiencing
Hello, I saw there are other threads asking how to delete disk snapshots left
by backup operations.
We definitely need a tool to kill pending backup operations and locked
snapshots.
I think this is very frustrating: oVirt is a good piece of software, but it's
very immature in a dirty asynchronous
Here is my bugreport: https://bugzilla.redhat.com/show_bug.cgi?id=2117917
Hello,
* the update is missing the kvdo module, as kmod-kvdo-6.2.6.14-84.el8.x86_64 is
not compatible with the current kernel kernel-4.18.0-408.el8.x86_64
Today Veeam seems to have unlocked the snapshot, and finally I can remove the
"autogenerated" snapshots. Anyway, I just extracted the last day's logs from
the engine, so you can analyze them if you want:
https://cloud.ssis.sm/index.php/s/RT9EHBys5eZ87oo
Unfortunately no; Veeam said that it failed finalizing the backup.
Anyway, I removed the image from the image_transfer table; in the GUI the state
changed to "OK", but when I try to remove the snapshots Veeam left, it says the
engine cannot remove them during backup:
2022-08-01 14:07:26,181Z INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]
On Monday, 1 August 2022 at 15:55:56 CEST, Benny Zlotnik wrote:
> So looks like the transfer failed, but it was later finalized moving
> it from FINISHED_FAILURE to FINALIZING_SUCCESS, we have a bug to
> prevent clients from changing the transfer's status like this, fix
> should land in
Expanded display is on.
-[ RECORD 1 ]-+
command_id| eecdc5fc-4b7a-44f4-afed-9abb0cd12534
command_type | 1024
phase | 7
last_updated | 2022-07-31 10:49:06.791+00
message
ovirt-engine-4.5.1.3-1.el8.noarch
Hello, I have a situation where a disk is stuck in the finalizing state,
resulting from trying to back it up via Veeam.
The backup process was interrupted and I have cleared the job states with the
dbutils script (/usr/share/ovirt-engine/setup/dbutils/task_cleaner.sh) even
if
https://bugs.kde.org/show_bug.cgi?id=456742
--- Comment #3 from Diego Ercolani ---
Created attachment 150941
--> https://bugs.kde.org/attachment.cgi?id=150941&action=edit
New crash information added by DrKonqi
korganizer (5.19.3 (21.12.3)) using Qt 5.15.2
- What I was doing when the applicat
https://bugs.kde.org/show_bug.cgi?id=456742
--- Comment #2 from Diego Ercolani ---
Created attachment 150892
--> https://bugs.kde.org/attachment.cgi?id=150892&action=edit
New crash information added by DrKonqi
kontact (5.19.3 (21.12.3)) using Qt 5.15.2
- What I was doing when the applicat
https://bugs.kde.org/show_bug.cgi?id=456742
--- Comment #1 from Diego Ercolani ---
Created attachment 150741
--> https://bugs.kde.org/attachment.cgi?id=150741&action=edit
New crash information added by DrKonqi
kontact (5.19.3 (21.12.3)) using Qt 5.15.2
- What I was doing when the applicat
https://bugs.kde.org/show_bug.cgi?id=456742
Diego Ercolani changed:
What|Removed |Added
CC||diego.ercol...@gmail.com
--
You
https://bugs.kde.org/show_bug.cgi?id=456742
Bug ID: 456742
Summary: kontact crashes on start when selected calendars
Product: kontact
Version: unspecified
Platform: openSUSE RPMs
OS: Linux
Status: REPORTED
very nice article on the heal Xlator can be found at
> https://ravispeaks.wordpress.com/2019/04/05/glusterfs-afr-the-complete-guide/ .
> Most probably you will focus on the troubleshooting section (
> https://ravispeaks.wordpress.com/2019/05/14/gluster-afr-the-complete-guide-
usr/share/man/man8/vdoformat.8.gz
- /usr/share/man/man8/vdoprepareforlvm.8.gz
/usr/share/man/man8/vdosetuuid.8.gz
/usr/share/man/man8/vdostats.8.gz
--- 31,39
n dropped in
> el9.
also the kvdo tools aren't present; I rolled back to el8
Cross posted here:
https://lists.gluster.org/pipermail/gluster-users/2022-June/039957.html
As there are both CentOS 8 and CentOS 9 builds of 4.5.1, is it possible to mix
CentOS 8 and CentOS 9 nodes in a single cluster?
failed. [{path=},
{gfid=d79fc941-c1d0-486a-9cde-74674884f461}, {errno=2}, {error=No such file or
directory}]
I suppose this should be related to a problem in the filesystem where the brick
is mapped, but I'm not sure.
How can I proceed?
I've done something, but the problem remains:
[root@ovirt-node2 ~]# gluster volume heal glen info
Brick ovirt-node2.ovirt:/brickhe/glen
/3577c21e-f757-4405-97d1-0f827c9b4e22/master/tasks
Status: Connected
Number of entries: 1
Brick ovirt-node3.ovirt:/brickhe/glen
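A hedged follow-up sketch, assuming the entry above is a stale pending-heal entry rather than real split-brain: these are standard gluster heal subcommands, but whether they apply to this case is an assumption.

```shell
# Re-check the heal state and kick off a full self-heal on the volume.
heal_check() {
  vol=$1
  gluster volume heal "$vol" info summary      # per-brick pending-heal counts
  gluster volume heal "$vol" full              # trigger a full self-heal
  gluster volume heal "$vol" info split-brain  # verify nothing is split-brain
}
if command -v gluster >/dev/null; then
  heal_check glen
fi
```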
Can anyone point me to somewhere I can read some "in-depth" troubleshooting
for GlusterFS? I cannot find a "quick" manual.
see this if it's the case:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RL73Z7MEKENSEON5F7PKQL5KJYAWO3LS/
Same in 4.5.0.3; the workaround seems to work even in this version.
My Environment is ovirt-host-4.5.0-3.el8.x86_64 and
glusterfs-server-10.2-1.el8s.x86_64