[ovirt-devel] Re: Status update on "Hyperconverged Gluster oVirt support"

2018-09-28 Thread Hetz Ben Hamo
Hi,

Gobinda, great work!

One thing though - the device names (sda, sdb, etc.).

On many servers, it's hard to know which disk is which. Let's say I have 10
spinning disks + 2 SSDs. Which one is sda? What about NVMe? Worse, sometimes
replacing disks changes sda to something else. We used to have the same
problem with NICs, and that has now been resolved on CentOS/RHEL 7.X.

Could the HCI part - the disk selection part specifically - give more
details? Maybe a disk ID or WWN, or anything else that can identify a disk?

Also, about SSD caching: most of the time it is recommended to use 2 drives
if possible for good performance. Can a user select an arbitrary number of
drives?

Thanks


On Fri, Sep 28, 2018 at 6:43 PM Gobinda Das  wrote:

> Hi All,
>  Status update on "Hyperconverged Gluster oVirt support"
>
> Features Completed:
> ===================
>
>   cockpit-ovirt
>   -------------
>   1- Asymmetric brick configuration. Bricks can be configured on a per-host
> basis, i.e. if the user wants to use sdb from host1, sdc from host2, and
> sdd from host3.
>   2- Deduplication and compression integration via VDO support (see
> https://github.com/dm-vdo/kvdo). Gluster bricks are created on VDO devices.
>   3- LVM cache configuration support (configure a cache using a fast block
> device such as an SSD to improve the performance of larger, slower logical
> volumes).
>   4- Auto addition of 2nd and 3rd hosts in a 3 node setup during deployment
>   5- Auto creation of storage domains based on gluster volumes created
> during setup
>   6- Single node deployment support via Cockpit UI. For details on single
> node deployment -
> https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_hyperconverged/
>   7- Gluster Management Dashboard (the dashboard shows the nodes in the
> cluster, volumes, and bricks. Users can expand the cluster and can also
> create new volumes on existing cluster nodes.)
>
>   oVirt
>   ---
>   1- Reset brick support from UI to allow users to replace a faulty brick
>   2- Creating a brick from the engine now supports configuring an SSD device
> as an lvmcache device when bricks are created on spinning disks
>   3- VDO monitoring
>
>  GlusterFS
> ---
>  Performance enhancements with FUSE, by 15x:
>  1. Cluster eager-lock changes for better detection of multiple clients.
>  2. Changing the qemu aio option to "native" instead of "threads".
>
>  End-to-end deployment:
>  ----------------------
>  1- End-to-end deployment of a Gluster + oVirt hyperconverged environment
> using Ansible roles (
> https://github.com/gluster/gluster-ansible/tree/master/playbooks ). The
> only prerequisite is a CentOS node/oVirt Node.
>
> Future Plan:
> ============
>  cockpit-ovirt:
>
>   1- ansible-roles integration for deployment
>   2- Support for different volume types
>
>  vdsm:
>   1- Python3 compatibility of vdsm-gluster
>   2- Native 4K support
>
> --
> Thanks,
> Gobinda
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WTSJBFP73RTJV6EO4XYZUAHNTOVXYBLS/


[ovirt-devel] Re: Status update on "Hyperconverged Gluster oVirt support"

2018-09-29 Thread Yaniv Kaul
On Fri, Sep 28, 2018, 7:16 PM Hetz Ben Hamo  wrote:

> Hi,
>
> Gobinda, great work!
>
> One thing though - the device names (sda, sdb, etc.).
>
> On many servers, it's hard to know which disk is which. Let's say I have 10
> spinning disks + 2 SSDs. Which one is sda? What about NVMe? Worse, sometimes
> replacing disks changes sda to something else. We used to have the same
> problem with NICs, and that has now been resolved on CentOS/RHEL 7.X.
>
> Could the HCI part - the disk selection part specifically - give more
> details? Maybe a disk ID or WWN, or anything else that can identify a disk?
>

/dev/disk/by-id is the right identifier.
During installation, it'd be nice if it could show as much data as possible:
sdX, /dev/disk/by-id, size, and perhaps manufacturer.
Y.
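
As a rough illustration, here is a minimal sketch of how such per-disk detail
could be collected. It is hypothetical (not part of cockpit-ovirt) and only
assumes the standard sysfs layout and the /dev/disk/by-id symlink directory:

#!/usr/bin/env python3
# Hypothetical sketch: list each whole disk with its kernel name, size, model
# and /dev/disk/by-id aliases - the kind of detail a deployment UI could show
# next to the plain sdX names.
import os
from collections import defaultdict

BY_ID = "/dev/disk/by-id"

def by_id_aliases():
    """Map kernel names (sda, nvme0n1, ...) to their by-id symlinks."""
    aliases = defaultdict(list)
    if os.path.isdir(BY_ID):
        for name in sorted(os.listdir(BY_ID)):
            target = os.path.realpath(os.path.join(BY_ID, name))
            aliases[os.path.basename(target)].append(name)
    return aliases

def disks():
    aliases = by_id_aliases()
    for dev in sorted(os.listdir("/sys/block")):
        if dev.startswith(("loop", "dm-", "sr", "zram")):
            continue  # skip loop, device-mapper, optical and zram devices
        with open(f"/sys/block/{dev}/size") as f:
            size_gib = int(f.read()) * 512 / 2**30  # size is in 512-byte sectors
        model_path = f"/sys/block/{dev}/device/model"
        model = open(model_path).read().strip() if os.path.exists(model_path) else "?"
        yield dev, size_gib, model, aliases.get(dev, [])

if __name__ == "__main__":
    for dev, size_gib, model, ids in disks():
        print(f"/dev/{dev:<10} {size_gib:8.1f} GiB  {model:<20} {', '.join(ids) or '-'}")

Run on a host, it prints one line per disk with its stable identifiers next to
the sdX name, which is roughly what the disk-selection step could display.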


> [remainder of quoted message trimmed]
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JK3XRJWJ3ZEKL7NDNG7EYKRKDO2ZGI5Q/


[ovirt-devel] Re: Status update on "Hyperconverged Gluster oVirt support"

2018-09-29 Thread Hetz Ben Hamo
/dev/disk/by-id could be problematic: it only shows disks that have been
formatted.

For example, I've just created a node with 3 disks, and in Anaconda I chose
only the first disk. After the node installation and reboot, I see in
/dev/disk/by-id only the DM device and the DVD, not the two unformatted disks
(which can be seen using the lsscsi command).
Anaconda, however, does see the disks, their details, etc.


On Sat, Sep 29, 2018 at 12:57 PM Yaniv Kaul  wrote:

>
> [quoted text trimmed]
>
> /dev/disk/by-id is the right identifier.
> During installation, it'd be nice if it could show as much data as
> possible: sdX, /dev/disk/by-id, size, and perhaps manufacturer.
> Y.
>
> [remainder of quoted thread trimmed]
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/arch

[ovirt-devel] Re: Status update on "Hyperconverged Gluster oVirt support"

2018-09-29 Thread Yaniv Kaul
On Sat, Sep 29, 2018, 5:03 PM Hetz Ben Hamo  wrote:

> /dev/disk/by-id could be problematic: it only shows disks that have been
> formatted.
>
> For example, I've just created a node with 3 disks, and in Anaconda I chose
> only the first disk. After the node installation and reboot, I see in
> /dev/disk/by-id only the DM device and the DVD, not the two unformatted
> disks (which can be seen using the lsscsi command).
> Anaconda, however, does see the disks, their details, etc.
>

That's not what I know. It might be something with udev or some filtering,
but I was certainly not aware that it's related to formatting.
Y.
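
For what it's worth, a hypothetical cross-check sketch: it reads disk identity
(model, serial, WWN) from lsblk's JSON output, which comes from sysfs/udev and
does not depend on the /dev/disk/by-id symlinks being present (it assumes a
util-linux lsblk new enough to support --json):

#!/usr/bin/env python3
# Hypothetical sketch: print identity details for every whole disk via lsblk's
# JSON output instead of the /dev/disk/by-id symlinks.
import json
import subprocess

def whole_disks():
    out = subprocess.run(
        ["lsblk", "--json", "--nodeps", "-o", "NAME,TYPE,SIZE,MODEL,SERIAL,WWN"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out)["blockdevices"]:
        if dev.get("type") == "disk":  # keep whole disks, skip optical drives etc.
            yield dev

if __name__ == "__main__":
    for dev in whole_disks():
        print(f"/dev/{dev['name']}: size={dev['size']} model={dev.get('model')} "
              f"serial={dev.get('serial')} wwn={dev.get('wwn')}")

If the two unformatted disks show up here with a serial/WWN but still have no
by-id symlink, that would point at udev rules or filtering rather than at the
disks themselves.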


>
> [remainder of quoted thread trimmed]