[ceph-users] Problem formatting erasure coded image

2019-09-22 Thread David Herselman
Hi, I'm seeing errors in Windows VM guests' event logs, for example: "The IO operation at logical block address 0x607bf7 for Disk 1 (PDO name \Device\001e) was retried" (Log Name: System, Source: Disk, Event ID: 153, Level: Warning). Initialising the disk to use GPT is successful but attempting to
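Retried-IO warnings like Event ID 153 against an RBD image on an erasure-coded pool are often connected to how the EC data pool was set up. A minimal sketch of the usual RBD-on-EC layout, assuming hypothetical pool and image names (`ec_data`, `rbd`, `win_disk1`) and a running Ceph cluster:

```shell
# Hypothetical names; requires an existing Ceph cluster.
# An erasure-coded pool must allow partial overwrites before
# RBD can use it as a data pool:
ceph osd pool create ec_data 64 64 erasure
ceph osd pool set ec_data allow_ec_overwrites true

# The image metadata stays in a replicated pool ("rbd" here);
# only the data objects land in the EC pool:
rbd create --size 100G --data-pool ec_data rbd/win_disk1
```

These are cluster-administration commands, shown only to illustrate the configuration being discussed, not a confirmed fix for the Windows event-log warnings.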

Re: [ceph-users] Need advice with setup planning

2019-09-22 Thread Martin Verges
Hello Salsa, Amazing! Where were you 3 months ago? > The only problem is that I think we have no more budget for this, so I can't get approval for a software license. We were here and on some Ceph days as well. We do provide a completely free version, but with limited features (no HA, LDAP, ...) but

Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-22 Thread Wladimir Mutel
Ashley Merrick wrote: Correct, in a large cluster no problem. I was talking about Wladimir's setup, where they are running a single node with a failure domain of OSD, which would mean a loss of all OSDs and all data. Sure, I am aware that running with 1 NVMe is risky, so we have a plan to add a
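The usual way to let one NVMe device serve several HDD OSDs is to split it into partitions (or LVs) and point each OSD's BlueStore DB/WAL at one slice. A sketch with hypothetical device paths (`/dev/sdb`, `/dev/sdc`, `/dev/nvme0n1pN`):

```shell
# Hypothetical device paths; size the NVMe slices for your cluster.
# One BlueStore DB/WAL slice per HDD OSD on the shared NVMe:
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1

# Repeat for each HDD, each with its own NVMe partition:
ceph-volume lvm create --bluestore \
    --data /dev/sdc \
    --block.db /dev/nvme0n1p2
```

Note the trade-off raised in this thread: if the shared NVMe fails, every OSD whose DB lives on it fails with it.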

Re: [ceph-users] Looking for the best way to utilize 1TB NVMe added to the host with 8x3TB HDD OSDs

2019-09-22 Thread Ashley Merrick
Correct, in a large cluster no problem. I was talking about Wladimir's setup, where they are running a single node with a failure domain of OSD, which would mean a loss of all OSDs and all data. On Sun, 22 Sep 2019 03:42:52 +0800, solarflow99 wrote: now my
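The "failure domain of OSD" mentioned here refers to the CRUSH rule: on a single-node cluster, replicas cannot be spread across hosts, so the rule is set to choose distinct OSDs instead. A hedged sketch, assuming a hypothetical rule name and the default CRUSH root:

```shell
# Hypothetical rule name; "default" is the usual CRUSH root.
# Place replicas on different OSDs (disks) rather than different hosts:
ceph osd crush rule create-replicated single-node-rule default osd
ceph osd pool set rbd crush_rule single-node-rule
```

This keeps a single-node cluster functional, but as noted above, losing the node (or a shared component like one NVMe) still means losing all OSDs and all data.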