[ovirt-users] Re: SPICE and Windows 10

2019-05-26 Thread Colin Coe
Hi all

I've installed the v0.19 driver on a couple of test VMs.  Particularly in
multi-monitor VMs, this version is significantly better than what's
provided in the RHV Tools 4.3-6 ISO image.

I've opened an RFE with GSS to have v0.19 added to the RHV tools ISO.

Thanks

CC

On Fri, May 24, 2019 at 10:27 PM Sandro Bonazzola 
wrote:

>
>
> On Friday, May 24, 2019 at 14:58, Victor Toso <
> victort...@redhat.com> wrote:
>
>> On Fri, May 24, 2019 at 07:08:12PM +0800, Colin Coe wrote:
>> > Hi Victor
>> >
>> > The SPICE server is
>> > rpm -q spice-server
>> > spice-server-0.14.0-6.el7_6.1.x86_64
>> >
>> > On the VM we're using SPICE QXL.
>> >
>> > Looks like
>> >
>> https://www.spice-space.org/download/windows/qxl-wddm-dod/qxl-wddm-dod-0.19/spice-qxl-wddm-dod-0.19.zip
>> > has the performance fixes you mentioned.
>> >
>> > Any ideas if/when this will be shipped with/on the RHV Tools ISO?
>>
>> Not sure if it will land in 4.3, perhaps in 4.4.
>> 4.3 indeed ships version 0.18 of qxl-wddm-dod.
>>
>
> Not sure why we are discussing RHV-specific parts on the oVirt mailing list
> instead of in a customer case, but I opened
> https://bugzilla.redhat.com/show_bug.cgi?id=1713700 to track this.
> Yuri, can you please follow up on that bug, adding references to the
> build to be included in the ISO?
>
> Also the oVirt version of the guest tools is missing the updated driver,
> opened https://bugzilla.redhat.com/show_bug.cgi?id=1713705 to track it.
>
> Thanks,
>
>
>>
>> CC'ing Sandro.
>>
>> Cheers,
>> Victor
>>
>
>
> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA 
>
> sbona...@redhat.com
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D3EJBWEWDT6ZOBD3UP5L5ZC7IVSMGHVL/


[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-26 Thread michael
I made them manually: first the LVM volumes, then the VDO devices, then the
Gluster volumes on top.
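
For reference, a rough sketch of that kind of stack (device names and mount
points here are assumptions, not taken from this thread). The part that most
often sends a node to emergency mode at boot is the fstab entry: a VDO-backed
brick has to wait for vdo.service, otherwise the mount unit fails before the
VDO device exists:

# assumed devices/paths - adjust to your layout
pvcreate /dev/sdb
vgcreate gluster_vg /dev/sdb
lvcreate -n gluster_lv -l 100%FREE gluster_vg
vdo create --name=vdo_gluster --device=/dev/gluster_vg/gluster_lv
mkfs.xfs -K /dev/mapper/vdo_gluster
# /etc/fstab - wait for VDO, and don't hang the boot if the device is absent
/dev/mapper/vdo_gluster /gluster_bricks/data xfs defaults,x-systemd.requires=vdo.service,nofail 0 0

With a plain "defaults" entry the mount can be attempted before VDO has
started, which is one common way to end up in emergency mode on every reboot.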
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PVU6V6YY34XSX2NC5TKTU5YD4RAU4S7X/


[ovirt-users] oVirt & Grafana

2019-05-26 Thread michael
I have oVirt connected to my Grafana instance and I can build some rudimentary
dashboards. Does anyone have dashboards they've already made and could share?
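
In case it helps anyone starting from scratch: one common approach is to point
a Grafana PostgreSQL data source at the DWH history database,
ovirt_engine_history. A rough sketch (the user name is hypothetical, and the
psql invocation may differ if PostgreSQL is installed from a software
collection on the engine host):

# run on the engine/DWH host
su - postgres -c "psql ovirt_engine_history"
CREATE USER grafana_ro WITH PASSWORD 'changeme';
GRANT CONNECT ON DATABASE ovirt_engine_history TO grafana_ro;
GRANT USAGE ON SCHEMA public TO grafana_ro;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO grafana_ro;

Then add a PostgreSQL data source in Grafana pointing at that database and
build panels on top of the history views (pg_hba.conf may need a matching
entry for the Grafana host).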
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4I63VGGOQBFA756QH6FA3YAA2ELOTIQ2/


[ovirt-users] Re: Bond Mode 1 (Active-Backup), vm unreachable for minutes when bond link change

2019-05-26 Thread henaumars
Glad to hear from you, and sorry for so many spelling mistakes.

I updated my VM OS to CentOS 7.6 and changed my bond configuration to:
ifcfg-bond0:
# Generated by VDSM version 4.30.9.1
DEVICE=bond0
BONDING_OPTS='mode=1 miimon=100 downdelay=200 updelay=200'
BRIDGE=ovirtmgmt
MACADDR=a4:be:26:16:e9:b2
ONBOOT=yes
MTU=1500
DEFROUTE=no
NM_CONTROLLED=no
IPV6INIT=no

And there are no 'ifcfg-XXX.bkp' files in the network-scripts folder.
But the VM is still unreachable when the bond link changes.

When I unplug the second NIC, the messages are:
localhost  kernel: bond0: Releasing backup interface eno1
localhost  kernel: device eno1 left promiscuous mode
localhost  kernel: bond0: making interface eno2 the new active one
localhost  kernel: device eno2 entered promiscuous mode
localhost  kernel: i40e :1a:00.0 eno1: returning to hw mac address a4:be:26:16:e9:b1
localhost  lldpad: recvfrom(Event interface) : No buffer space available
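
For what it's worth, a quick generic check (not specific to this setup) that
the bonding options were actually applied on the host:

cat /proc/net/bonding/bond0   # shows mode, MII status, up/down delay and the currently active slave
grep -i bonding /etc/sysconfig/network-scripts/ifcfg-bond0

If the delays or the active-backup mode do not show up there, the options
never reached the kernel and failover behaviour will not match the
configuration.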
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NN4FKOTERWXFEYF2ROGFJPLNXB53SN3N/


[ovirt-users] ovirt metrics ansible error

2019-05-26 Thread Jayme
I'm running into this Ansible error during the oVirt Metrics Store installation
(following the procedure at:
https://ovirt.org/documentation/metrics-install-guide/Installing_Metrics_Store.html
).

This is happening late in the process, after successfully deploying the
installation VM and then running the second step from the metrics VM.

CHECK [memory_availability : master0.xx.com]
*
fatal: [master0.xxx.com]: FAILED! => {"changed": true, "checks":
{"disk_availability": {}, "docker_image_availability": {"changed": true},
"docker_storage": {"failed": true, "failures": [["OpenShiftCheckException",
"Could not find imported module support code for docker_info.  Looked for
either AnsibleDockerClient.py or docker_common.py\nTraceback (most recent
call last):\n  File
\"/usr/share/ansible/openshift-ansible/roles/openshift_health_checker/action_plugins/openshift_health_check.py\",
line 225, in run_check\nresult = check.run()\n  File
\"/usr/share/ansible/openshift-ansible/roles/openshift_health_checker/openshift_checks/docker_storage.py\",
line 53, in run\ndocker_info = self.execute_module(\"docker_info\",
{})\n  File
\"/usr/share/ansible/openshift-ansible/roles/openshift_health_checker/openshift_checks/__init__.py\",
line 211, in execute_module\nresult = self._execute_module(module_name,
module_args, self.tmp, self.task_vars)\n  File
\"/usr/lib/python2.7/site-packages/ansible/plugins/action/__init__.py\",
line 809, in _execute_module\n(module_style, shebang, module_data,
module_path) = self._configure_module(module_name=module_name,
module_args=module_args, task_vars=task_vars)\n  File
\"/usr/lib/python2.7/site-packages/ansible/plugins/action/__init__.py\",
line 203, in _configure_module\nenvironment=final_environment)\n  File
\"/usr/lib/python2.7/site-packages/ansible/executor/module_common.py\",
line 1023, in modify_module\nenvironment=environment)\n  File
\"/usr/lib/python2.7/site-packages/ansible/executor/module_common.py\",
line 859, in _find_module_utils\nrecursive_finder(module_name,
b_module_data, py_module_names, py_module_cache, zf)\n  File
\"/usr/lib/python2.7/site-packages/ansible/executor/module_common.py\",
line 621, in recursive_finder\nraise AnsibleError('
'.join(msg))\nAnsibleError: Could not find imported module support code for
docker_info.  Looked for either AnsibleDockerClient.py or
docker_common.py\n"]], "msg": "Could not find imported module support code
for docker_info.  Looked for either AnsibleDockerClient.py or
docker_common.py\nTraceback (most recent call last):\n  File
\"/usr/share/ansible/openshift-ansible/roles/openshift_health_checker/action_plugins/openshift_health_check.py\",
line 225, in run_check\nresult = check.run()\n  File
\"/usr/share/ansible/openshift-ansible/roles/openshift_health_checker/openshift_checks/docker_storage.py\",
line 53, in run\ndocker_info = self.execute_module(\"docker_info\",
{})\n  File
\"/usr/share/ansible/openshift-ansible/roles/openshift_health_checker/openshift_checks/__init__.py\",
line 211, in execute_module\nresult = self._execute_module(module_name,
module_args, self.tmp, self.task_vars)\n  File
\"/usr/lib/python2.7/site-packages/ansible/plugins/action/__init__.py\",
line 809, in _execute_module\n(module_style, shebang, module_data,
module_path) = self._configure_module(module_name=module_name,
module_args=module_args, task_vars=task_vars)\n  File
\"/usr/lib/python2.7/site-packages/ansible/plugins/action/__init__.py\",
line 203, in _configure_module\nenvironment=final_environment)\n  File
\"/usr/lib/python2.7/site-packages/ansible/executor/module_common.py\",
line 1023, in modify_module\nenvironment=environment)\n  File
\"/usr/lib/python2.7/site-packages/ansible/executor/module_common.py\",
line 859, in _find_module_utils\nrecursive_finder(module_name,
b_module_data, py_module_names, py_module_cache, zf)\n  File
\"/usr/lib/python2.7/site-packages/ansible/executor/module_common.py\",
line 621, in recursive_finder\nraise AnsibleError('
'.join(msg))\nAnsibleError: Could not find imported module support code for
docker_info.  Looked for either AnsibleDockerClient.py or
docker_common.py\n"}, "memory_availability": {}, "package_availability":
{"changed": false, "invocation": {"module_args": {"packages": ["PyYAML",
"bash-completion", "bind", "ceph-common", "dnsmasq", "docker", "firewalld",
"flannel", "glusterfs-fuse", "httpd-tools", "iptables",
"iptables-services", "iscsi-initiator-utils", "libselinux-python",
"nfs-utils", "ntp", "openssl", "origin", "origin-clients",
"origin-hyperkube", "origin-node", "pyparted", "python-httplib2",
"yum-utils"]}}}, "package_version": {"changed": false, "invocation":
{"module_args": {"package_list": [{"check_multi": false, "name": "origin",
"version": ""}, {"check_multi": false, "name": "origin-master", "version":
""}, {"check_multi": false, "name": "origin-node", 

[ovirt-users] Re: Is it possible to install oVirt metrics store without a RH subscription?

2019-05-26 Thread Roy Golan
No, we use OKD (which is the OpenShift upstream).


On Sun, 26 May 2019 at 12:25, Jayme  wrote:

> Is a paid Redhat subscription required to install oVirt metrics store?
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UCJ5WQGZFPBHQRSYEHKAEDG3D3E4SDDO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/HTESWHS5PUQAG4OBUQJZS62DBTMW7AKR/


[ovirt-users] Re: Is it possible to install oVirt metrics store without a RH subscription?

2019-05-26 Thread Shirly Radco
--

Shirly Radco

BI Senior Software Engineer

Red Hat 




On Sun, May 26, 2019 at 12:25 PM Jayme  wrote:

> Is a paid Redhat subscription required to install oVirt metrics store?
>

No.
In oVirt, the metrics store is based on upstream OpenShift OKD, which provides
Elasticsearch and Kibana.
On the hosts and the engine we deploy collectd and rsyslog, which collect the
metrics and logs and ship them to the central metrics store.

___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/UCJ5WQGZFPBHQRSYEHKAEDG3D3E4SDDO/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/2U6RBAYJMRHCXAE2RVPOSI3XSMXZJSJ6/


[ovirt-users] Re: Single instance scaleup.

2019-05-26 Thread Strahil Nikolov
Yeah, it seems different from the docs. I'm adding the gluster-users list, as
they are more experienced with that.
@Gluster-users,
can you provide some hints on how to add additional replicas to the volumes
below, so they become 'replica 2 arbiter 1' or 'replica 3' type volumes?
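
A minimal sketch of what that usually looks like with the gluster CLI (hosts
and brick paths are taken from the volume info quoted below; the bricks must
already exist and be empty on the new nodes, and whether a single-brick volume
can be converted to an arbitrated replica in one step may depend on the
Gluster version):

gluster volume add-brick engine replica 3 \
    192.168.80.192:/gluster_bricks/engine/engine 192.168.80.193:/gluster_bricks/engine/engine
gluster volume add-brick ssd-samsung replica 3 arbiter 1 \
    192.168.80.192:/gluster_bricks/sdc/data 192.168.80.193:/gluster_bricks/sdc/data
gluster volume heal engine full
gluster volume heal ssd-samsung full

The heal commands then copy the existing data onto the newly added bricks.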

Best Regards,
Strahil Nikolov

On Sunday, May 26, 2019, 15:16:18 GMT+3, Leo David wrote:
 
Thank you Strahil,
The engine and ssd-samsung are distributed...
So these are the ones that I need to have replicated across the new nodes.
I am not very sure about the procedure to accomplish this.
Thanks,
Leo
On Sun, May 26, 2019, 13:04 Strahil  wrote:


Hi Leo,
As you do not have a distributed volume , you can easily switch to replica 2 
arbiter 1 or replica 3 volumes.

You can use the following for adding the bricks:

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html

Best Regards,
Strahil Nikolov
On May 26, 2019 10:54, Leo David  wrote:

Hi Strahil,
Thank you so much for your input!
 gluster volume info

Volume Name: engine
Type: Distribute
Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.80.191:/gluster_bricks/engine/engine
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: off
network.remote-dio: off
network.ping-timeout: 30
user.cifs: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable

Volume Name: ssd-samsung
Type: Distribute
Volume ID: 76576cc6-220b-4651-952d-99846178a19e
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.80.191:/gluster_bricks/sdc/data
Options Reconfigured:
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on
The other two hosts will be 192.168.80.192/193 - this is the dedicated gluster
network over a 10Gb SFP+ switch.
- host 2 will have an identical hardware configuration to host 1 (each disk is
actually a raid0 array)
- host 3 has:
   - 1 ssd for the OS
   - 1 ssd for adding to the engine volume in a full replica 3
   - 2 ssd's in a raid 1 array to be added as arbiter for the data volume
(ssd-samsung)
So the plan is to have "engine" scaled to a full replica 3, and "ssd-samsung"
scaled to a replica 3 arbitrated volume.



On Sun, May 26, 2019 at 10:34 AM Strahil  wrote:


Hi Leo,

Gluster is quite smart, but in order to provide any hints , can you provide 
output of 'gluster volume info '.
If you have 2 more systems, keep in mind that it is best to mirror the storage
on the second replica (2 disks on 1 machine -> 2 disks on the new machine),
while for the arbiter this is not necessary.

What is your network and NICs? Based on my experience, I can recommend at
least 10 gbit/s interface(s).

Best Regards,
Strahil Nikolov
On May 26, 2019 07:52, Leo David  wrote:

Hello Everyone,
Can someone help me to clarify this?
I have a single-node 4.2.8 installation (only two gluster storage domains -
distributed single-drive volumes). Now I just got two identical servers and I
would like to go for a 3-node bundle.
Is it possible (after joining the new nodes to the cluster) to expand the
existing volumes across the new nodes and change them to replica 3 arbitrated?
If so, could you share with me what the procedure would be?
Thank you very much!
Leo



-- 
Best regards, Leo David

  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KLQAIK2SYERFL4IBPC7RQ6UT6ZRVU7GW/


[ovirt-users] Re: Single instance scaleup.

2019-05-26 Thread Leo David
Thank you Strahil,
The engine and ssd-samsung are distributed...
So these are the ones that I need to have replicated across the new nodes.
I am not very sure about the procedure to accomplish this.
Thanks,

Leo

On Sun, May 26, 2019, 13:04 Strahil  wrote:

> Hi Leo,
> As you do not have a distributed volume , you can easily switch to replica
> 2 arbiter 1 or replica 3 volumes.
>
> You can use the following for adding the bricks:
>
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html
>
> Best Regards,
> Strahil Nikolov
> On May 26, 2019 10:54, Leo David  wrote:
>
> Hi Strahil,
> Thank you so much for your input!
>
>  gluster volume info
>
>
> Volume Name: engine
> Type: Distribute
> Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/engine/engine
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> performance.low-prio-threads: 32
> performance.strict-o-direct: off
> network.remote-dio: off
> network.ping-timeout: 30
> user.cifs: off
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> cluster.eager-lock: enable
> Volume Name: ssd-samsung
> Type: Distribute
> Volume ID: 76576cc6-220b-4651-952d-99846178a19e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/sdc/data
> Options Reconfigured:
> cluster.eager-lock: enable
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> user.cifs: off
> network.ping-timeout: 30
> network.remote-dio: off
> performance.strict-o-direct: on
> performance.low-prio-threads: 32
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> transport.address-family: inet
> nfs.disable: on
>
> The other two hosts will be 192.168.80.192/193  - this is gluster
> dedicated network over 10GB sfp+ switch.
> - host 2 will have an identical hardware configuration to host 1 ( each disk
> is actually a raid0 array )
> - host 3 has:
>-  1 ssd for OS
>-  1 ssd - for adding to engine volume in a full replica 3
>-  2 ssd's in a raid 1 array to be added as arbiter for the data volume
> ( ssd-samsung )
> So the plan is to have "engine"  scaled in a full replica 3,  and
> "ssd-samsung" scalled in a replica 3 arbitrated.
>
>
>
>
> On Sun, May 26, 2019 at 10:34 AM Strahil  wrote:
>
> Hi Leo,
>
> Gluster is quite smart, but in order to provide any hints , can you
> provide output of 'gluster volume info '.
> If you have 2 more systems , keep in mind that it is best to mirror the
> storage on the second replica (2 disks on 1 machine -> 2 disks on the new
> machine), while for the arbiter this is not necessary.
>
> What is your network and NICs ? Based on my experience , I can recommend
> at least 10 gbit/s interface(s).
>
> Best Regards,
> Strahil Nikolov
> On May 26, 2019 07:52, Leo David  wrote:
>
> Hello Everyone,
> Can someone help me to clarify this ?
> I have a single-node 4.2.8 installation ( only two gluster storage domains
> - distributed single drive volumes ). Now I just got two identical
> servers and I would like to go for a 3-node bundle.
> Is it possible ( after joining the new nodes to the cluster ) to expand
> the existing volumes across the new nodes and change them to replica 3
> arbitrated ?
> If so, could you share with me what the procedure would be?
> Thank you very much !
>
> Leo
>
>
>
> --
> Best regards, Leo David
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GFDDRUF3FIRXIKGS6M3I757PINVUNFLU/


[ovirt-users] oVirt 4.3.4 RC1 to RC2 - Dashboard error / VM/Host/Gluster Volumes OK

2019-05-26 Thread Strahil Nikolov
Hello All,
I just upgraded my engine from 4.3.4 RC1 to RC2 and my Dashboard is giving an
error (see attached screenshot), despite everything seeming to end well:
Error!
Could not fetch dashboard data. Please ensure that data warehouse is properly
installed and configured.
I have checked, and the VMs, Hosts and Gluster Volumes are properly detected
(yet all my VMs have been powered off since before the RC2 upgrade).

Any clues that might help solve this before I roll back (I have a gluster
snapshot on 4.3.3-7)?
Best Regards,
Strahil Nikolov
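
A generic first place to look when the dashboard complains about the data
warehouse (not a confirmed fix, just the usual check) is whether the DWH
service came back after the upgrade:

systemctl status ovirt-engine-dwhd
journalctl -u ovirt-engine-dwhd --since "1 hour ago"
less /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log

If ovirt-engine-dwhd is down or logging connection errors, re-running
engine-setup usually reconfigures it.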
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ISW3HVK6FILOLO3UL3WGR2HUPCGHDPQQ/


[ovirt-users] Re: Single instance scaleup.

2019-05-26 Thread Strahil
Hi Leo,
As you do not have a distributed volume , you can easily switch to replica 2 
arbiter 1 or replica 3 volumes.

You can use the following for adding the bricks:

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html

Best Regards,
Strahil Nikolov
On May 26, 2019 10:54, Leo David  wrote:
>
> Hi Strahil,
> Thank you so much for your input!
>
>  gluster volume info
>
>
> Volume Name: engine
> Type: Distribute
> Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/engine/engine
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> performance.low-prio-threads: 32
> performance.strict-o-direct: off
> network.remote-dio: off
> network.ping-timeout: 30
> user.cifs: off
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> cluster.eager-lock: enable
> Volume Name: ssd-samsung
> Type: Distribute
> Volume ID: 76576cc6-220b-4651-952d-99846178a19e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/sdc/data
> Options Reconfigured:
> cluster.eager-lock: enable
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> user.cifs: off
> network.ping-timeout: 30
> network.remote-dio: off
> performance.strict-o-direct: on
> performance.low-prio-threads: 32
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> transport.address-family: inet
> nfs.disable: on
>
> The other two hosts will be 192.168.80.192/193  - this is gluster dedicated 
> network over 10GB sfp+ switch.
> - host 2 will have an identical hardware configuration to host 1 ( each disk is 
> actually a raid0 array )
> - host 3 has:
>    -  1 ssd for OS
>    -  1 ssd - for adding to engine volume in a full replica 3
>    -  2 ssd's in a raid 1 array to be added as arbiter for the data volume ( 
> ssd-samsung )
> So the plan is to have "engine"  scaled in a full replica 3,  and 
> "ssd-samsung" scalled in a replica 3 arbitrated.
>
>
>
>
> On Sun, May 26, 2019 at 10:34 AM Strahil  wrote:
>>
>> Hi Leo,
>>
>> Gluster is quite smart, but in order to provide any hints , can you provide 
>> output of 'gluster volume info '.
>> If you have 2 more systems , keep in mind that it is best to mirror the 
>> storage on the second replica (2 disks on 1 machine -> 2 disks on the new 
>> machine), while for the arbiter this is not necessary.
>>
>> What is your network and NICs ? Based on my experience , I can recommend at 
>> least 10 gbit/s interface(s).
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On May 26, 2019 07:52, Leo David  wrote:
>>>
>>> Hello Everyone,
>>> Can someone help me to clarify this ?
>>> I have a single-node 4.2.8 installation ( only two gluster storage domains 
>>> - distributed single drive volumes ). Now I just got two identical 
>>> servers and I would like to go for a 3-node bundle.
>>> Is it possible ( after joining the new nodes to the cluster ) to expand the 
>>> existing volumes across the new nodes and change them to replica 3 
>>> arbitrated ?
>>> If so, could you share with me what the procedure would be?
>>> Thank you very much !
>>>
>>> Leo
>
>
>
> -- 
> Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7YUO7CF2UA4XGSQERBOUB66BKOUG5NMY/


[ovirt-users] Is it possible to install oVirt metrics store without a RH subscription?

2019-05-26 Thread Jayme
Is a paid Redhat subscription required to install oVirt metrics store?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UCJ5WQGZFPBHQRSYEHKAEDG3D3E4SDDO/


[ovirt-users] Re: Single instance scaleup.

2019-05-26 Thread Leo David
Hi Strahil,
Thank you so much for your input!

 gluster volume info


Volume Name: engine
Type: Distribute
Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.80.191:/gluster_bricks/engine/engine
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.owner-uid: 36
storage.owner-gid: 36
features.shard: on
performance.low-prio-threads: 32
performance.strict-o-direct: off
network.remote-dio: off
network.ping-timeout: 30
user.cifs: off
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
cluster.eager-lock: enable
Volume Name: ssd-samsung
Type: Distribute
Volume ID: 76576cc6-220b-4651-952d-99846178a19e
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 192.168.80.191:/gluster_bricks/sdc/data
Options Reconfigured:
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on

The other two hosts will be 192.168.80.192/193 - this is the dedicated gluster
network over a 10Gb SFP+ switch.
- host 2 will have an identical hardware configuration to host 1 ( each disk
is actually a raid0 array )
- host 3 has:
   -  1 ssd for OS
   -  1 ssd - for adding to engine volume in a full replica 3
   -  2 ssd's in a raid 1 array to be added as arbiter for the data volume
( ssd-samsung )
So the plan is to have "engine" scaled to a full replica 3, and
"ssd-samsung" scaled to a replica 3 arbitrated volume.




On Sun, May 26, 2019 at 10:34 AM Strahil  wrote:

> Hi Leo,
>
> Gluster is quite smart, but in order to provide any hints , can you
> provide output of 'gluster volume info '.
> If you have 2 more systems , keep in mind that it is best to mirror the
> storage on the second replica (2 disks on 1 machine -> 2 disks on the new
> machine), while for the arbiter this is not necessary.
>
> What is your network and NICs ? Based on my experience , I can recommend
> at least 10 gbit/s interface(s).
>
> Best Regards,
> Strahil Nikolov
> On May 26, 2019 07:52, Leo David  wrote:
>
> Hello Everyone,
> Can someone help me to clarify this ?
> I have a single-node 4.2.8 installation ( only two gluster storage domains
> - distributed single drive volumes ). Now I just got two identical
> servers and I would like to go for a 3-node bundle.
> Is it possible ( after joining the new nodes to the cluster ) to expand
> the existing volumes across the new nodes and change them to replica 3
> arbitrated ?
> If so, could you share with me what the procedure would be?
> Thank you very much !
>
> Leo
>
>

-- 
Best regards, Leo David
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PJ2OO6SNVG4VQZDLJEEEJPTGLPZVQMUV/


[ovirt-users] Re: Single instance scaleup.

2019-05-26 Thread Strahil
Hi Leo,

Gluster is quite smart, but in order to provide any hints , can you provide 
output of 'gluster volume info '.
If you have 2 more systems , keep in mind that it is best to mirror the storage 
on the second replica (2 disks on 1 machine -> 2 disks on the new machine), 
while for the arbiter this is not necessary.

What is your network and NICs? Based on my experience, I can recommend at
least 10 gbit/s interface(s).

Best Regards,
Strahil Nikolov
On May 26, 2019 07:52, Leo David  wrote:
>
> Hello Everyone,
> Can someone help me to clarify this ?
> I have a single-node 4.2.8 installation ( only two gluster storage domains - 
> distributed single drive volumes ). Now I just got two identical servers 
> and I would like to go for a 3-node bundle.
> Is it possible ( after joining the new nodes to the cluster ) to expand the 
> existing volumes across the new nodes and change them to replica 3 arbitrated 
> ?
> If so, could you share with me what the procedure would be?
> Thank you very much !
>
> Leo
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NCIRPADBDM67ASZNYGN677QQ4JXPROLM/