On Thursday, 19.12.2013 at 12:25 +0200, Gabi C wrote:
> Hello again!
>
>
> After persisting the selinux config, at reboot I get "Current mode:
> enforcing" although "Mode from config file: permissive"!
>
> Due to this, I think I get a denial for glusterfsd:
>
> type=AVC msg=audit(1387365
Hello again!
After persisting the selinux config, at reboot I get "Current mode:
enforcing" although "Mode from config file: permissive"!
Due to this, I think I get a denial for glusterfsd:
type=AVC msg=audit(1387365750.532:5873): avc: denied { relabelfrom } for
pid=30249 comm="glusterfsd" na
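For reference, the key fields of that AVC record can be pulled apart with a little shell. This is a sketch against the (truncated) line quoted above, so the string below only reproduces the fields that survived the cut:

```shell
# The AVC line from the audit log, reduced to the fields quoted above.
avc='type=AVC msg=audit(1387365750.532:5873): avc: denied { relabelfrom } for pid=30249 comm="glusterfsd"'

# denied permission: the token between "denied { " and " }"
perm=$(printf '%s' "$avc" | sed -n 's/.*denied { \([^}]*\) }.*/\1/p')
# process id and command name
pid=$(printf '%s' "$avc" | sed -n 's/.*pid=\([0-9]*\).*/\1/p')
comm=$(printf '%s' "$avc" | sed -n 's/.*comm="\([^"]*\)".*/\1/p')

echo "$comm (pid $pid) was denied: $perm"
# → glusterfsd (pid 30249) was denied: relabelfrom
```

A `relabelfrom` denial means the process tried to change a file's SELinux label, which would be consistent with the extended-attribute surgery on the bricks discussed later in this thread.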
On Wednesday, 18.12.2013 at 14:14 +0200, Gabi C wrote:
> Still, now I cannot start either of the 2 machines! I get
>
> ID 119 VM proxy2 is down. Exit message: Child quit during startup
> handshake: Input/output error.
Could you try to find out in what context this IO error appears?
- fabian
Still, now I cannot start either of the 2 machines! I get
ID 119 VM proxy2 is down. Exit message: Child quit during startup
handshake: Input/output error.
Something similar to bug https://bugzilla.redhat.com/show_bug.cgi?id=1033064,
except that in my case selinux is permissive!
On Wed, Dec 18,
in my case $brick_path = /data
getfattr -d /data returns NOTHING on both nodes!!!
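One possible explanation for the empty output: `getfattr -d` only dumps attributes in the user.* namespace by default, while GlusterFS keeps its brick metadata under trusted.*, which also requires root to read. A sketch of the invocations that would actually show them (paths as in the mail):

```shell
# getfattr's default match pattern is "user.*"; GlusterFS stores brick
# metadata under trusted.*, so ask for it explicitly (run as root):
getfattr -d -m trusted -e hex /data

# or dump attributes from every namespace:
getfattr -d -m . -e hex /data
```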
On Wed, Dec 18, 2013 at 1:46 PM, Fabian Deutsch wrote:
On Wednesday, 18.12.2013 at 13:26 +0200, Gabi C wrote:
> > Update on Glusterfs issue
> >
> >
> I managed to recover the lost volume after recreating the sam
On Wednesday, 18.12.2013 at 13:26 +0200, Gabi C wrote:
> Update on Glusterfs issue
>
>
> I managed to recover the lost volume after recreating the same volume
> name with the same bricks, which raised an error message, resolved by
> running, on both nodes:
>
> setfattr -x trusted.glusterfs.volume-id $brick_pa
Update on Glusterfs issue
I managed to recover the lost volume after recreating the same volume name
with the same bricks, which raised an error message, resolved by running, on
both nodes:
setfattr -x trusted.glusterfs.volume-id $brick_path
setfattr -x trusted.gfid $brick_path
On Wed, Dec 18, 2013 at 12:12
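For context on why those two commands help: GlusterFS stamps each brick directory with the owning volume's identity, and `gluster volume create` refuses a path that still carries that stamp (assuming the error raised here was the familiar "... or a prefix of it is already part of a volume"). A sketch of the usual brick-recycling sequence, with $brick_path = /data as in this thread:

```shell
# the brick keeps its old identity in two extended attributes; clearing
# them lets "gluster volume create" accept the path again
setfattr -x trusted.glusterfs.volume-id $brick_path
setfattr -x trusted.gfid $brick_path

# many recovery guides also clear the brick's internal metadata directory;
# do this only when deliberately recycling the brick for a new volume
rm -rf $brick_path/.glusterfs
```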
node 1:
[root@virtual5 admin]# cat /config/files
/etc/fstab
/etc/shadow
/etc/default/ovirt
/etc/ssh/ssh_host_key
/etc/ssh/ssh_host_key.pub
/etc/ssh/ssh_host_dsa_key
/etc/ssh/ssh_host_dsa_key.pub
/etc/ssh/ssh_host_rsa_key
/etc/ssh/ssh_host_rsa_key.pub
/etc/rsyslog.conf
/etc/libvirt/libvirtd.conf
/e
On Wednesday, 18.12.2013 at 12:03 +0200, Gabi C wrote:
> So here it is:
>
>
> In the Volumes tab I added a new volume - Replicated, then added storage -
> data/glusterfs. Then I imported VMs, ran them and at some point,
> needing some space for a Red Hat Satellite instance I decided to put
> both nodes in
So here it is:
In the Volumes tab I added a new volume - Replicated, then added storage -
data/glusterfs. Then I imported VMs, ran them and at some point, needing
some space for a Red Hat Satellite instance I decided to put both nodes in
maintenance, stop them, add new disk devices and restart, but after restar
On Wednesday, 18.12.2013 at 11:42 +0200, Gabi C wrote:
> Yes, it is the VM part... I just ran into an issue. My setup consists of
> 2 nodes with glusterfs, and after adding a supplemental hard disk, after
> reboot I've lost the glusterfs volumes!
Could you explain exactly what you configured?
>
> How
Yes, it is the VM part... I just ran into an issue. My setup consists of 2
nodes with glusterfs, and after adding a supplemental hard disk, after reboot
I've lost the glusterfs volumes!
How can I persist any configuration on the node? I refer here to
''setenforce 0'' - for ssh login to work - and further
""
On Wednesday, 18.12.2013 at 08:34 +0200, Gabi C wrote:
> Hello!
>
>
> In order to increase disk space I want to add a new disk drive to the
> ovirt node. After adding it, should I proceed as "normal" - pvcreate,
> vgcreate, lvcreate and so on - or will this configuration not persist?
Hey Gabi,
Hello!
In order to increase disk space I want to add a new disk drive to the ovirt
node. After adding it, should I proceed as "normal" - pvcreate, vgcreate,
lvcreate and so on - or will this configuration not persist?
Thx
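The "normal" sequence the question refers to would look like the sketch below. The device name /dev/sdb, the VG/LV names, the filesystem choice and the mount point are all assumptions for illustration, and whether any of it survives a reboot on oVirt Node is exactly the open question in this thread:

```shell
# classic LVM bring-up for a freshly added disk
pvcreate /dev/sdb                            # mark the new disk as a physical volume
vgcreate gluster_vg /dev/sdb                 # put it into a new volume group
lvcreate -n brick1 -l 100%FREE gluster_vg    # one logical volume using all the space
mkfs.xfs /dev/gluster_vg/brick1              # GlusterFS bricks are commonly XFS
mount /dev/gluster_vg/brick1 /data           # mount point matching the brick path above
```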
___
Users mailing list
Users@ovir