Re: [ovirt-users] iSCSI domain on 4kn drives

2017-11-10 Thread Nir Soffer
On Mon, Sep 5, 2016 at 10:16 AM Martijn Grendelman <
martijn.grendel...@isaac.nl> wrote:

> Op 7-8-2016 om 8:19 schreef Yaniv Kaul:
>
>
> On Fri, Aug 5, 2016 at 4:42 PM, Martijn Grendelman <
> martijn.grendel...@isaac.nl> wrote:
>
> Op 4-8-2016 om 18:36 schreef Yaniv Kaul:
>>
>> On Thu, Aug 4, 2016 at 11:49 AM, Martijn Grendelman <
>> martijn.grendel...@isaac.nl> wrote:
>>
>>> Hi,
>>>
>>> Does oVirt support iSCSI storage domains on target LUNs using a block
>>> size of 4k?
>>>
>>
>> No, we do not - not if it exposes 4K blocks.
>> Y.
>>
>>
>> Is this on the roadmap?
>>
>
> Not in the short term roadmap.
> Of course, patches are welcome. It's mainly in VDSM.
> I wonder if it'll work in NFS.
> Y.
>
>
> I don't think I ever replied to this, but I can confirm that in RHEV 3.6
> it works with NFS.
>

David, do you know if 4k disks over NFS work with sanlock?


>
> Best regards,
> Martijn.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] iSCSI domain on 4kn drives

2017-11-10 Thread Nir Soffer
On Fri, Nov 10, 2017 at 8:43 PM Marcelo Leandro 
wrote:

> Good afternoon,
>
> Is there a plan to implement support for 4k blocks in vdsm?
>

Not yet.

If this is important to you, I suggest opening an oVirt bug and explaining
your use case.

You could also get someone to implement it. Note that this will not be an easy
change; there is a lot of code assuming a 512-byte block size.

The change also requires changes in sanlock. Vdsm and sanlock must agree on the
block size used for a storage domain.
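
For reference, a quick way to check what block size a LUN actually exposes to
the host (the device path below is just an example):

    blockdev --getss /dev/sdb     # logical sector size seen by the kernel
    blockdev --getpbsz /dev/sdb   # physical sector size of the drive
    cat /sys/block/sdb/queue/logical_block_size

A 4Kn LUN reports 4096 as the logical size, while 512e drives still report 512.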

Nir


>
> Thanks.
>
> 2016-09-05 4:15 GMT-03:00 Martijn Grendelman 
> :
>
>> Op 7-8-2016 om 8:19 schreef Yaniv Kaul:
>>
>>
>> On Fri, Aug 5, 2016 at 4:42 PM, Martijn Grendelman <
>> martijn.grendel...@isaac.nl> wrote:
>>
>>> Op 4-8-2016 om 18:36 schreef Yaniv Kaul:
>>>
>>> On Thu, Aug 4, 2016 at 11:49 AM, Martijn Grendelman <
>>> martijn.grendel...@isaac.nl> wrote:
>>>
 Hi,

 Does oVirt support iSCSI storage domains on target LUNs using a block
 size of 4k?

>>>
>>> No, we do not - not if it exposes 4K blocks.
>>> Y.
>>>
>>>
>>> Is this on the roadmap?
>>>
>>
>> Not in the short term roadmap.
>> Of course, patches are welcome. It's mainly in VDSM.
>> I wonder if it'll work in NFS.
>> Y.
>>
>>
>> I don't think I ever replied to this, but I can confirm that in RHEV 3.6
>> it works with NFS.
>>
>> Best regards,
>> Martijn.
>>
>> ___
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Ovirt 4.0 and EL 7.4

2017-11-10 Thread VONDRA Alain
Hi Pavel,
You wrote that vdsm has to be patched to run without issues, but where can I
find the patch, or which lines do I have to modify? I still have issues like
these:

vdsm vds ERROR failed to retrieve hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1344, in getHardwareInfo...
vdsm[3980]: vdsm vds.dispatcher ERROR SSL error during reading data: unexpected 
eof
vdsm[3980]: vdsm vds ERROR failed to retrieve hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1344, in getHardwareInfo...
vdsm[3980]: vdsm vds ERROR failed to retrieve hardware info
Traceback (most recent call last):
File "/usr/share/vdsm/API.py", line 1344, in getHardwareInfo...

Thanks





Alain VONDRA
Operations and Information Systems Security Officer
Administrative and Financial Department
+33 1 44 39 77 76
UNICEF France
3 rue Duguay Trouin  75006 PARIS
www.unicef.fr





From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On behalf of
Jorick Astrego
Sent: Tuesday, 10 October 2017 19:41
To: users@ovirt.org
Subject: Re: [ovirt-users] Ovirt 4.0 and EL 7.4


Hi,

I've redeployed a node with 7.3 to fix this issue but got the same errors with 
ovirt 4.0.

MainThread::DEBUG::2017-10-10 
18:30:30,945::upgrade::90::upgrade::(apply_upgrade) Running upgrade 
upgrade-unified-persistence
MainThread::DEBUG::2017-10-10 18:30:30,951::libvirtconnection::160::root::(get) 
trying to connect libvirt
MainThread::ERROR::2017-10-10 
18:30:41,125::upgrade::94::upgrade::(apply_upgrade) Failed to run 
upgrade-unified-persistence
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/tool/upgrade.py", line 92, in 
apply_upgrade
upgrade.run(ns, args)
  File "/usr/lib/python2.7/site-packages/vdsm/tool/unified_persistence.py", 
line 195, in run
run()
  File "/usr/lib/python2.7/site-packages/vdsm/tool/unified_persistence.py", 
line 46, in run
networks, bondings = _getNetInfo()
  File "/usr/lib/python2.7/site-packages/vdsm/tool/unified_persistence.py", 
line 132, in _getNetInfo
netinfo = NetInfo(netswitch.netinfo())
  File "/usr/lib/python2.7/site-packages/vdsm/network/netswitch.py", line 298, 
in netinfo
_netinfo = netinfo_get(compatibility=compatibility)
  File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 
109, in get
return _get(vdsmnets)
  File "/usr/lib/python2.7/site-packages/vdsm/network/netinfo/cache.py", line 
70, in _get
libvirt_nets = libvirt.networks()
  File "/usr/lib/python2.7/site-packages/vdsm/network/libvirt.py", line 113, in 
networks
conn = libvirtconnection.get()
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 163, 
in get
password)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 99, 
in open_connection
return utils.retry(libvirtOpen, timeout=10, sleep=0.2)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 547, in retry
return func()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth
if ret is None:raise libvirtError('virConnectOpenAuth() failed')
libvirtError: authentication failed: authentication failed



Oct 10 19:35:55 host1 sasldblistusers2: _sasldb_getkeyhandle has failed

Oct 10 19:36:20 host1 libvirtd: 2017-10-10 17:36:20.002+: 13660: error : 
virNetSASLSessionListMechanisms:390 : internal error: cannot list SASL 
mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 in server.c 
near line 1757)
Oct 10 19:36:20 host1 libvirtd: 2017-10-10 17:36:20.002+: 13660: error : 
remoteDispatchAuthSaslInit:3411 : authentication failed: authentication failed
Oct 10 19:36:20 host1 libvirtd: 2017-10-10 17:36:20.002+: 13650: error : 
virNetSocketReadWire:1808 : End of file while reading data: Input/output error
Oct 10 19:36:20 host1 vdsm-tool: libvirt: XML-RPC error : authentication 
failed: authentication failed
Oct 10 19:36:20 host1 systemd: vdsm-network.service: control process exited, 
code=exited status=1
Oct 10 19:36:20 host1 systemd: Failed to start Virtual Desktop Server Manager 
network restoration.
Oct 10 19:36:20 host1 systemd: Dependency failed for Virtual Desktop Server 
Manager.
Oct 10 19:36:20 host1 systemd: Dependency failed for MOM instance configured 
for VDSM purposes.
Oct 10 19:36:20 host1 systemd: Job mom-vdsm.service/start failed with result 
'dependency'.
Oct 10 19:36:20 host1 systemd: Job vdsmd.service/start failed with result 
'dependency'.
Oct 10 19:36:20 host1 systemd: Unit vdsm-network.service entered failed state.
Oct 10 19:36:20 host1 systemd: vdsm-network.service failed.



cat /etc/redhat-release
CentOS Linux release 7.3.1611 

Re: [ovirt-users] how to clean stuck task

2017-11-10 Thread Gianluca Cecchi
On Fri, Nov 10, 2017 at 3:48 PM,  wrote:

>
>>
> I've seen this behavior too. IIRC the stale cleaning was not instant, it
> took some time to be applied.
>
> Regards.
>
> Gianluca
>>
>
Confirmed.
Quite soon after the command I saw the status of the "Current" snapshot line
change from Locked (it had been so since 8/11) to OK, but the task remained for
at least half an hour.
Now, after about an hour and a half, I connected again to the web admin GUI and
I see 0 tasks, so the problem has been resolved.

Thanks again,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to clean stuck task

2017-11-10 Thread Wesley Stewart
You could also go database diving.  I had an issue where I tried to import a
VM from my export domain and it just hung.  I tried running the unlock_entity
script, but it kept failing.  It sat there stuck for months, until I found
http://lists.ovirt.org/pipermail/users/2015-April/032346.html

Of course, deleting something from your database is quite permanent. I would
wait and upgrade to 4.1.7, but something like the steps below should work,
though it's probably not recommended.

Drop into Postgres:
psql -d engine -U postgres

List your jobs and grab the job_id of the stuck one:
select * from job order by start_time desc;

Then delete it, where the string is that job ID:
select DeleteJob('8424f7a9-2a4c-4567-b528-45bbc1c2534f');
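
Putting those together, a hedged sketch (the job ID above is just an example,
and the job table columns are assumed from a standard engine schema):

psql -d engine -U postgres <<'SQL'
-- list recent jobs to find the stuck one
SELECT job_id, action_type, status, start_time
  FROM job ORDER BY start_time DESC LIMIT 10;
-- once you are sure of the job ID, run the delete (kept commented out here):
-- SELECT DeleteJob('8424f7a9-2a4c-4567-b528-45bbc1c2534f');
SQL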

On Fri, Nov 10, 2017 at 9:48 AM,  wrote:

> On 2017-11-10 14:41, Gianluca Cecchi wrote:
>
>> On Fri, Nov 10, 2017 at 3:34 PM,  wrote:
>>
>> oVirt upgrade to 4.1.7 will probably cleanup this stale task.
>>> However, if you want to do it before upgrading, run this command:
>>>
>>>PGPASSWORD=...
>>> /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -u
>>> engine
>>>
>>> Note that unlock_entity.sh has many flags and this is just an
>>> example (should clean all stale tasks).
>>>
>>> You can find the PGPASSWORD value in the
>>> /etc/ovirt-engine/engine.conf.d/10-setup-database.conf file. As of
>>> 4.2 you won't need to supply credentials anymore [1].
>>>
>>> Regards,
>>>
>>> Nicolás
>>>
>>
>> It seems it didn't work as expected.
>> I got this at command line output
>>
>> "
>>
>> select fn_db_unlock_all();
>>
>>
>> INSERT 0 1
>> unlock all  completed successfully.
>> "
>>
>>
> This is expected.
>
> But the task remains in webadmin gui and I got an alert message in
>> alert section, of this type
>> "
>> /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh : System user
>> root run manually unlock_entity script on entity [type,id] [all,] with
>> db user engine
>> "
>>
>>
> I've seen this behavior too. IIRC the stale cleaning was not instant, it
> took some time to be applied.
>
> Regards.
>
> Gianluca
>>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to clean stuck task

2017-11-10 Thread nicolas

On 2017-11-10 14:41, Gianluca Cecchi wrote:

On Fri, Nov 10, 2017 at 3:34 PM,  wrote:


oVirt upgrade to 4.1.7 will probably cleanup this stale task.
However, if you want to do it before upgrading, run this command:

   PGPASSWORD=...
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -u
engine

Note that unlock_entity.sh has many flags and this is just an
example (should clean all stale tasks).

You can find the PGPASSWORD value in the
/etc/ovirt-engine/engine.conf.d/10-setup-database.conf file. As of
4.2 you won't need to supply credentials anymore [1].

Regards,

Nicolás


It seems it didn't work as expected.
I got this at command line output

"

select fn_db_unlock_all();
 

INSERT 0 1
unlock all  completed successfully.
"



This is expected.


But the task remains in webadmin gui and I got an alert message in
alert section, of this type
"
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh : System user
root run manually unlock_entity script on entity [type,id] [all,] with
db user engine
"



I've seen this behavior too. IIRC the stale cleaning was not instant, it 
took some time to be applied.


Regards.


Gianluca

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to clean stuck task

2017-11-10 Thread Gianluca Cecchi
On Fri, Nov 10, 2017 at 3:34 PM,  wrote:

> oVirt upgrade to 4.1.7 will probably cleanup this stale task. However, if
> you want to do it before upgrading, run this command:
>
>PGPASSWORD=... /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh
> -t all -u engine
>
> Note that unlock_entity.sh has many flags and this is just an example
> (should clean all stale tasks).
>
> You can find the PGPASSWORD value in the 
> /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
> file. As of 4.2 you won't need to supply credentials anymore [1].
>
> Regards,
>
> Nicolás
>


It seems it didn't work as expected.
I got this output at the command line:

"
select fn_db_unlock_all();


INSERT 0 1
unlock all  completed successfully.
"

But the task remains in the webadmin GUI, and I got an alert message of this
type in the alerts section:
"
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh : System user root
run manually unlock_entity script on entity [type,id] [all,] with db user
engine
"

Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] how to clean stuck task

2017-11-10 Thread nicolas
Upgrading oVirt to 4.1.7 will probably clean up this stale task. However,
if you want to do it before upgrading, run this command:


   PGPASSWORD=... /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh 
-t all -u engine


Note that unlock_entity.sh has many flags and this is just an example 
(should clean all stale tasks).


You can find the PGPASSWORD value in the 
/etc/ovirt-engine/engine.conf.d/10-setup-database.conf file. As of 4.2 
you won't need to supply credentials anymore [1].
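
As a small convenience, assuming that file is shell-sourceable and uses the
ENGINE_DB_PASSWORD variable (my recollection, so please double-check), you can
do something like:

    source /etc/ovirt-engine/engine.conf.d/10-setup-database.conf
    PGPASSWORD="$ENGINE_DB_PASSWORD" \
        /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -u engine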


Regards,

Nicolás

  [1]: https://gerrit.ovirt.org/82615

On 2017-11-10 14:16, Gianluca Cecchi wrote:

Hello, 
I have a task that seems stuck in webadmin gui, in the sens tha I have
"Tasks(1)" listed
The task is  Restoring VM Snapshot Active VM before the preview of
VM snaptest
and the VM is powered down.
Screenshot of expanded steps of task, that actually seem all
completed, is here:
https://drive.google.com/file/d/1bfl_gEfVotIrxGC9TDzPHPCeRub41mUa/view?usp=sharing
[1]

Any hint on what to do to clean things? I'm on oVirt
4.1.6.2-1.el7.centos and I would like to clean before upgrading to
4.1.7.

Thanks
Gianluca

Links:
--
[1]
https://drive.google.com/file/d/1bfl_gEfVotIrxGC9TDzPHPCeRub41mUa/view?usp=sharing

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] how to clean stuck task

2017-11-10 Thread Gianluca Cecchi
Hello,
I have a task that seems stuck in the webadmin GUI, in the sense that I have
"Tasks(1)" listed.
The task is
"Restoring VM Snapshot Active VM before the preview of VM snaptest"
and the VM is powered down.
A screenshot of the expanded steps of the task, which actually all seem
completed, is here:
https://drive.google.com/file/d/1bfl_gEfVotIrxGC9TDzPHPCeRub41mUa/view?usp=sharing

Any hint on how to clean this up? I'm on oVirt 4.1.6.2-1.el7.centos and I would
like to clean it up before upgrading to 4.1.7.

Thanks
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] attach iso domain failed

2017-11-10 Thread suporte
Hi, 

I have the engine installed on one machine and the host on another, running
version 4.1.7.6-1.el7.centos.
Everything looks OK except the ISO domain, which I cannot attach to the data
center. Message on the engine dashboard:

Failed to attach Storage Domain ISO_DOMAIN to Data Center Default. (User: 
admin@internal-authz) 
VDSM command AttachStorageDomainVDS failed: Error in storage domain action: 
(u'sdUUID=875e1af4-ba14-4255-b6ad-c3031672df93, 
spUUID=5a0287ce-0233-0170-00a2-01d8',) 

Checking the configuration, everything looks OK:
[root@engine ~]# grep iso 
/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf 
OVESETUP_CONFIG/isoDomainName=str:ISO_DOMAIN 
OVESETUP_CONFIG/isoDomainSdUuid=str:875e1af4-ba14-4255-b6ad-c3031672df93 
OVESETUP_CONFIG/isoDomainMountPoint=str:/home/iso 
OVESETUP_CONFIG/isoDomainExists=bool:True 
OVESETUP_CONFIG/isoDomainStorageDir=str:/home/iso/875e1af4-ba14-4255-b6ad-c3031672df93/images/----
 

[root@engine ~]# ovirt-iso-uploader list 
Please provide the REST API password for the admin@internal oVirt Engine user 
(CTRL+D to abort): 
ISO Storage Domain Name | ISO Domain Status 
ISO_DOMAIN | ok 

[root@node1 ~]# vdsClient -s 0 getStorageDomainsList 
5eed853b-09ee-430d-bde2-c37394c1ff6c 


I can mount the export manually 
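
For reference, a rough way to double-check the domain from such a manual mount
(the mount point is arbitrary, the export host is assumed to be the engine
machine as the config above suggests, and the UUID comes from that same config):

    mkdir -p /mnt/isotest
    mount -t nfs engine:/home/iso /mnt/isotest
    cat /mnt/isotest/875e1af4-ba14-4255-b6ad-c3031672df93/dom_md/metadata
    # the SDUUID= (and POOL_UUID=, if attached) lines should match what the engine expects
    umount /mnt/isotest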


I don't know where else to look. Any ideas? Thanks.


vdsm log: 
2017-11-10 10:51:00,839+ INFO (periodic/3) [vdsm.api] START 
repoStats(options=None) from=internal, 
task_id=f0f5161a-b3c1-4534-a33a-cf5d17ab8c9c (api:46) 
2017-11-10 10:51:00,840+ INFO (periodic/3) [vdsm.api] FINISH repoStats 
return={u'5eed853b-09ee-430d-bde2-c37394c1ff6c': {'code': 0, 'actual': True, 
'version': 4, 'acquired': True, 'delay': '0.000890471', 'lastCheck': '4.8', 
'valid': True}} from=internal, task_id=f0f5161a-b3c1-4534-a33a-cf5d17ab8c9c 
(api:52) 
2017-11-10 10:51:05,323+ INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 
2017-11-10 10:51:07,277+ INFO (jsonrpc/4) [vdsm.api] START 
repoStats(options=None) from=:::192.168.16.9,42836, flow_id=399705b5, 
task_id=998fe1bc-eff0-4ba7-942c-458906c5e1f1 (api:46) 
2017-11-10 10:51:07,277+ INFO (jsonrpc/4) [vdsm.api] FINISH repoStats 
return={u'5eed853b-09ee-430d-bde2-c37394c1ff6c': {'code': 0, 'actual': True, 
'version': 4, 'acquired': True, 'delay': '0.000844547', 'lastCheck': '1.2', 
'valid': True}} from=:::192.168.16.9,42836, flow_id=399705b5, 
task_id=998fe1bc-eff0-4ba7-942c-458906c5e1f1 (api:52) 
2017-11-10 10:51:07,285+ INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call 
Host.getStats succeeded in 0.01 seconds (__init__:539) 
2017-11-10 10:51:07,345+ INFO (jsonrpc/1) [vdsm.api] START 
getSpmStatus(spUUID=u'5a0287ce-0233-0170-00a2-01d8', options=None) 
from=:::192.168.16.9,42836, flow_id=4b670edd, 
task_id=a6baf012-6dbb-47d6-9f8e-457e42bbc7d7 (api:46) 
2017-11-10 10:51:07,352+ INFO (jsonrpc/1) [vdsm.api] FINISH getSpmStatus 
return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 2L}} 
from=:::192.168.16.9,42836, flow_id=4b670edd, 
task_id=a6baf012-6dbb-47d6-9f8e-457e42bbc7d7 (api:52) 
2017-11-10 10:51:07,352+ INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.getSpmStatus succeeded in 0.01 seconds (__init__:539) 
2017-11-10 10:51:08,410+ INFO (jsonrpc/2) [vdsm.api] START 
getStoragePoolInfo(spUUID=u'5a0287ce-0233-0170-00a2-01d8', 
options=None) from=:::192.168.16.9,42838, flow_id=4b670edd, 
task_id=4d623b87-105f-47ef-8385-7d7391378906 (api:46) 
2017-11-10 10:51:08,421+ INFO (jsonrpc/2) [vdsm.api] FINISH 
getStoragePoolInfo return={'info': {'name': 'No Description', 'isoprefix': '', 
'pool_status': 'connected', 'lver': 2L, 'domains': 
u'5eed853b-09ee-430d-bde2-c37394c1ff6c:Active', 'master_uuid': 
u'5eed853b-09ee-430d-bde2-c37394c1ff6c', 'version': '4', 'spm_id': 1, 'type': 
'GLUSTERFS', 'master_ver': 1}, 'dominfo': 
{u'5eed853b-09ee-430d-bde2-c37394c1ff6c': {'status': u'Active', 'diskfree': 
'912109731840', 'isoprefix': '', 'alerts': [], 'disktotal': '912151019520', 
'version': 4}}} from=:::192.168.16.9,42838, flow_id=4b670edd, 
task_id=4d623b87-105f-47ef-8385-7d7391378906 (api:52) 
2017-11-10 10:51:08,422+ INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call 
StoragePool.getInfo succeeded in 0.01 seconds (__init__:539) 
2017-11-10 10:51:11,159+ INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 
2017-11-10 10:51:15,859+ INFO (periodic/0) [vdsm.api] START 
repoStats(options=None) from=internal, 
task_id=8da65b6e-23ba-40fb-82fd-287645bb05cf (api:46) 
2017-11-10 10:51:15,859+ INFO (periodic/0) [vdsm.api] FINISH repoStats 
return={u'5eed853b-09ee-430d-bde2-c37394c1ff6c': {'code': 0, 'actual': True, 
'version': 4, 'acquired': True, 'delay': '0.000718255', 'lastCheck': '1.8', 
'valid': True}} from=internal, task_id=8da65b6e-23ba-40fb-82fd-287645bb05cf 
(api:52) 

[ovirt-users] [OT] how to analyze/avoid dropped rx packets

2017-11-10 Thread Gianluca Cecchi
Hello,
on some Oracle Linux 7 VMs I see that dropped RX packets continue to
increase.
The eth0 interface is virtio for all of them.
When running tcpdump on eth0, the counter stops increasing, so it seems that
tcpdump does capture those frames.

I found some references to this kind of dropped RX frames on CentOS 7 and
other distros with newer kernels, e.g.:
https://www.netiq.com/support/kb/doc.php?id=7007165
and
https://serverfault.com/questions/528290/ifconfig-eth0-rx-dropped-packets

In my case the nature of the dropped packets doesn't seem to be STP, but IPv6.

I have disabled IPv6 on my VMs.

I see about 180 dropped frames occur in roughly 100 seconds, and when running
tcpdump for the same amount of time the capture contains about 180 DHCPv6
packets, so I presume those are the culprit.
Is there any way to avoid this, at the VM or hypervisor level?
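
For reference, a rough sketch of the kind of check described above (the
interface name is just an example, and note that, as mentioned, the counter
stops increasing while tcpdump itself is running):

    cat /sys/class/net/eth0/statistics/rx_dropped    # drop counter, before
    sleep 100
    cat /sys/class/net/eth0/statistics/rx_dropped    # drop counter, after ~100 s
    timeout 100 tcpdump -ni eth0 'udp and (port 546 or port 547)' -w /tmp/dhcpv6.pcap
    tcpdump -nr /tmp/dhcpv6.pcap | wc -l             # DHCPv6 packets in a 100 s window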

Thanks in advance,
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine installation + GlusterFS cluster

2017-11-10 Thread Kasturi Narra
Hello Artem,

May I know how you deployed the Hosted Engine and the glusterfs volumes?
There is an easy way to do this using the cockpit UI. You can log into the
cockpit UI, click on the Hosted Engine tab, and there are two radio buttons:
one for gluster deployment and another for Hosted Engine deployment.

1) You can follow the gluster deployment screens, which will create all the
volumes required for the HC setup and open all the required ports (a manual
equivalent for the volumes is sketched after this list).
2) Then you can continue to the HE deployment; make sure you answer 'yes' to
the question 'Do you want to configure this host and its cluster for gluster?'
and answer 'no' for iptables.
3) Once done with HE, you can go to the UI and add the first master domain
(data), which imports hosted_storage and the HE VM into the UI automatically.
4) Then you can add the additional hosts, and that is it, you are done.
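
For reference, a rough manual equivalent of what the gluster deployment wizard
sets up for one of the volumes (the host names, brick paths and options here
are purely illustrative):

    gluster volume create engine replica 3 \
        ovirt1:/gluster_bricks/engine/engine \
        ovirt2:/gluster_bricks/engine/engine \
        ovirt3:/gluster_bricks/engine/engine
    gluster volume set engine group virt               # apply the virt option group
    gluster volume set engine storage.owner-uid 36     # vdsm user
    gluster volume set engine storage.owner-gid 36     # kvm group
    gluster volume start engine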

Hope this helps !!!

Thanks
kasturi

On Thu, Nov 9, 2017 at 8:16 PM, Artem Tambovskiy  wrote:

> One more thing: firewall rules.
>
> For 3 gluster bricks I have configured the following:
> firewall-cmd --zone=public --add-port=24007-24009/tcp
> --add-port=49152-49664/tcp --permanent
>
> and this seems not to be enough; I have to stop the firewall in order to make
> the cluster work.
>
> I have noticed 490xx ports being used by gluster; any idea whether that range
> is documented anywhere?
>
>  lsof -i | grep gluster | grep "490"
> glusterfs 32301root   10u  IPv4 148985  0t0  TCP
> ovirt1:49159->ovirt1:49099 (ESTABLISHED)
> glusterfs 32301root   17u  IPv4 153084  0t0  TCP
> ovirt1:49159->ovirt2:49096 (ESTABLISHED)
> glusterfs 46346root   17u  IPv4 156437  0t0  TCP
> ovirt1:49161->ovirt1:49093 (ESTABLISHED)
> glusterfs 46346root   18u  IPv4 149985  0t0  TCP
> ovirt1:49161->ovirt2:49090 (ESTABLISHED)
> glusterfs 46380root8u  IPv4 151389  0t0  TCP
> ovirt1:49090->ovirt3:49161 (ESTABLISHED)
> glusterfs 46380root   11u  IPv4 148986  0t0  TCP
> ovirt1:49091->ovirt2:49161 (ESTABLISHED)
> glusterfs 46380root   21u  IPv4 153074  0t0  TCP
> ovirt1:49099->ovirt1:49159 (ESTABLISHED)
> glusterfs 46380root   25u  IPv4 153075  0t0  TCP
> ovirt1:49097->ovirt2:49160 (ESTABLISHED)
> glusterfs 46380root   26u  IPv4 153076  0t0  TCP
> ovirt1:49095->ovirt3:49159 (ESTABLISHED)
> glusterfs 46380root   27u  IPv4 153077  0t0  TCP
> ovirt1:49093->ovirt1:49161 (ESTABLISHED)
>
> Regards,
> Artem
>
> On Thu, Nov 9, 2017 at 3:56 PM, Artem Tambovskiy <
> artem.tambovs...@gmail.com> wrote:
>
>> Hi,
>>
>> Just realized that I probably went the wrong way. I reinstalled everything
>> from scratch and added 4 volumes (hosted_engine, data, export, iso). All
>> looks good so far.
>> But if I go to the Cluster properties and tick the checkbox "Enable Cluster
>> Service", the host gets marked as Non-Operational. Am I messing things up?
>> Or am I fine as long as I already have a Data (Master) Storage Domain over
>> GlusterFS?
>>
>> Regards,
>> Artem
>>
>> On Thu, Nov 9, 2017 at 2:46 PM, Fred Rolland  wrote:
>>
>>> Hi,
>>>
>>> The steps for this kind of setup are described in [1].
>>> However it seems you have already succeeded in installing, so maybe you
>>> need some additional steps [2]
>>> Did you add a storage domain that will act as Master Domain? It is
>>> needed, then the initial Storage Domain should be imported automatically.
>>>
>>>
>>> [1] https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt
>>> -4.1-and-gluster-storage/
>>> [2] https://www.ovirt.org/documentation/gluster-hyperconverged/c
>>> hap-Additional_Steps/
>>>
>>> On Thu, Nov 9, 2017 at 10:50 AM, Artem Tambovskiy <
>>> artem.tambovs...@gmail.com> wrote:
>>>
 Yet another attempt to get help on hosted-engine deployment with a glusterfs
 cluster.
 I already spent a day trying to bring such a setup to work, with no luck.

 The hosted engine was deployed successfully, but I can't activate the host;
 the storage domain for the host is missing and I can't even add it.
 So either something went wrong during deployment or my glusterfs cluster
 isn't configured properly.

 What are the prerequisites for this?

 - a glusterfs cluster of 3 nodes with a replica 3 volume?
 - any specific volume configs?
 - how many volumes should I prepare for the hosted engine deployment?

 Any other thoughts?

 Regards,
 Artem

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


>>>
>>
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] Hosted Engine is down and won't start

2017-11-10 Thread Kasturi Narra
Hello Logan,

   One reason the liveliness check fails is that the host cannot ping your
hosted engine VM. You can try connecting to the HE VM using remote-viewer
vnc://hypervisor-ip:5900; from the hosted-engine --vm-status output, the HE VM
looks up and running fine.


   - Please check internal DNS settings, such as resolv.conf.
   - The host may not be able to resolve the virtual host name or IP address
     (a few quick checks are sketched below).
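
A few quick checks from the host, as a sketch (the HE FQDN and the hypervisor
IP below are placeholders):

    getent hosts engine.example.com           # can the host resolve the HE name?
    ping -c 3 engine.example.com              # is the HE VM reachable at all?
    hosted-engine --add-console-password      # set a temporary console password
    remote-viewer vnc://hypervisor-ip:5900    # console access, as suggested above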

Thanks
kasturi


On Fri, Nov 10, 2017 at 12:56 PM, Logan Kuhn 
wrote:

> We lost the backend storage that hosts our self hosted engine tonight.
> We've recovered it and there was no data corruption on the volume
> containing the HE disk.  However, when we try to start the HE it doesn't
> give an error, but it also doesn't start.
>
> The VM isn't pingable and the liveliness check always fails.
>
>  [root@ovirttest1 ~]# hosted-engine --vm-status | grep -A20 ovirttest1
> Hostname   : ovirttest1.wolfram.com
> Host ID: 1
> Engine status  : {"reason": "failed liveliness check",
> "health": "bad", "vm": "up", "detail": "up"}
> Score  : 3400
> stopped: False
> Local maintenance  : False
> crc32  : 2c2f3ec9
> local_conf_timestamp   : 18980042
> Host timestamp : 18980039
> Extra metadata (valid at timestamp):
>metadata_parse_version=1
>metadata_feature_version=1
>timestamp=18980039 (Fri Nov 10 01:17:59 2017)
>host-id=1
>score=3400
>vm_conf_refresh_time=18980042 (Fri Nov 10 01:18:03 2017)
>conf_on_shared_storage=True
>maintenance=False
>state=GlobalMaintenance
>stopped=False
>
> The environment is in Global Maintenance so that we can isolate it to
> starting on a specific host to eliminate as many variables as possible.
> I've attached the agent and broker logs
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users