>> But what about Windows? Does NTFS support barriers too?
Windows versions newer than 2003 support FUA (as newer Linux kernels do), so it's safe.
The virtio-win driver has supported it too for a year or two.
I had a discussion about it a year or two ago; see:
https://github.com/YanVugenfirer/kvm-guest-drivers-windows/issu
Sage, would it not be more effective to separate the data into internal and
external, in a sense? So all maintenance-related activities (like scrubbing,
deep-scrubbing, importing and exporting data, etc.) would be classed as
internal and would not affect the cache, and all other activities (really client IO)
Sage, I guess this will be a problem with the cache pool when you do the export
for the first time. However, after the first export is done, the diff data will
be read and copied across, and looking at the cache pool, I would say the diff
data will be there anyway, as the changes are going to be c
I'd like to turn the manage button off to prevent accidental changes to cluster
settings. Has anyone done that, and how? Thanks.
— Yuming
On Fri, 22 Aug 2014, Andrei Mikhailovsky wrote:
> So it looks like using rbd export / import will negatively affect the
> client performance, which is unfortunate. Is this really the case? Any
> plans on changing this behavior in future versions of Ceph?
There will always be some impact from imp
On Fri, 22 Aug 2014, Andrei Mikhailovsky wrote:
> Does that also mean that scrubbing and deep-scrubbing also squish data
> out of the cache pool? Could someone from the ceph community confirm
> this?
Scrubbing activities have no effect on the cache; don't worry. :)
sage
>
> Thanks
>
>
>
I believe the scrubbing happens at the pool level; when the backend pool is
scrubbed, it is independent of the cache pool. It would be nice to get some
definitive answers from someone who knows a lot more.
Robert LeBlanc
On Fri, Aug 22, 2014 at 3:16 PM, Andrei Mikhailovsky
wrote:
> Does that also
Does that also mean that scrubbing and deep-scrubbing also squish data out of
the cache pool? Could someone from the ceph community confirm this?
Thanks
- Original Message -
From: "Robert LeBlanc"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Friday, 22 August, 2014
In the performance webinar last week (it is online for viewing), they
mentioned that they are looking at ways to prevent single reads from
entering the cache, along with other optimizations. From what I understand it
is still a very new feature, so I'm sure it will see some good improvements.
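In the meantime, the firefly cache tier already exposes a few knobs that bound
the cache and control when it flushes and evicts; a minimal sketch, assuming a
hypothetical cache pool named "hot-cache" (the values are illustrative only):
# inspect the current HitSet settings on the cache pool
ceph osd pool get hot-cache hit_set_count
ceph osd pool get hot-cache hit_set_period
# bound the cache size and tune the flush/evict thresholds
ceph osd pool set hot-cache target_max_bytes 107374182400
ceph osd pool set hot-cache cache_target_dirty_ratio 0.4
ceph osd pool set hot-cache cache_target_full_ratio 0.8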
On Fri, Aug 22, 20
So it looks like using rbd export / import will negatively affect the client
performance, which is unfortunate. Is this really the case? Any plans on
changing this behavior in future versions of Ceph?
Cheers
Andrei
- Original Message -
From: "Robert LeBlanc"
To: "Andrei Mikhailovsky"
Hi all,
After having played for a while with Ceph and its S3 gateway, I've come to the
conclusion that the default behaviour is that a FULL_CONTROL ACL on a bucket
does not give you FULL_CONTROL on its underlying keys. This is an issue for the
way we want to use our Ceph cluster. So
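One workaround until then is to grant the ACL on the keys themselves as well as
on the bucket; a rough sketch with s3cmd, using a hypothetical bucket and
grantee (exact option names depend on the s3cmd version):
# grant FULL_CONTROL on the bucket itself
s3cmd setacl s3://mybucket --acl-grant=full_control:otheruser
# and on every existing key, since the bucket ACL is not inherited by the keys
s3cmd setacl s3://mybucket --acl-grant=full_control:otheruser --recursive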
My understanding is that all reads are copied to the cache pool. This would
indicate that cache contents will be evicted. I don't know to what extent this
will affect the hot cache because we have not used a cache pool yet. I'm
currently looking into bcache fronting the disks to provide caching there.
Robe
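For reference, a minimal sketch of what fronting an OSD data disk with bcache
could look like (device names are placeholders, not taken from this thread):
# create a bcache backing device on the OSD disk and attach an SSD cache set
make-bcache -B /dev/sdb -C /dev/nvme0n1
# register both devices so /dev/bcache0 appears, then put the OSD filesystem on it
echo /dev/sdb > /sys/fs/bcache/register
echo /dev/nvme0n1 > /sys/fs/bcache/register
mkfs.xfs /dev/bcache0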
Hello guys,
I am planning to perform regular off-site backups of an rbd pool with rbd export
and export-diff. I've got a small Ceph Firefly cluster with an active writeback
cache pool made of a couple of OSDs. I've got the following question which I
hope the ceph community could answer:
Will this rbd e
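For context, the workflow in question looks roughly like this; pool, image and
snapshot names below are hypothetical:
# initial full export, taken from a snapshot
rbd snap create rbd/vm-disk@base
rbd export rbd/vm-disk@base /backup/vm-disk.base
# later: incremental diff since the previous snapshot
rbd snap create rbd/vm-disk@2014-08-22
rbd export-diff --from-snap base rbd/vm-disk@2014-08-22 /backup/vm-disk.diff
# apply the diff to the copy held on the off-site cluster
rbd import-diff /backup/vm-disk.diff backup/vm-disk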
Thanks, Haomai. I worry about whether all guest file systems are as careful as
you said. For RHEL5, barriers are not enabled by default. For Windows, I don't
even know how to enable them. Do you think there is no chance of the guest file
system being corrupted on a host/guest crash?
Sent from my iPad
> On 2014
Thanks, Alexandre. But what about Windows? Does NTFS support barriers too?
Should I be confident that a Win2k3 guest could survive a host/guest crash
without data loss?
Sent from my iPad
> On Aug 22, 2014, at 23:07, Alexandre DERUMIER wrote:
>
> Hi,
> for RHEL5, I'm not sure;
>
> barrier support may n
Hi,
for RHEL5, I'm not sure;
barrier support may not be implemented in virtio devices, LVM, dm-raid,
and some filesystems,
depending on the kernel version.
Not sure what is backported into the RHEL5 kernel.
see
http://monolight.cc/2011/06/barriers-caches-filesystems/
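For ext3 guests on RHEL5, barriers can at least be requested explicitly via the
mount option; a minimal sketch (whether the whole virtio/LVM/kernel stack
honours them is a separate question, as noted above):
# /etc/fstab entry enabling write barriers for an ext3 filesystem
/dev/vda1  /data  ext3  defaults,barrier=1  1 2
# or remount an already-mounted filesystem with barriers enabled
mount -o remount,barrier=1 /data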
- Original Message -
Seriously, it's safe to use rbd cache for VMs. The rbd cache behaves like a
disk cache, which is flushed via "fsync" or "fdatasync" calls.
A serious application will take care of it.
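For reference, a minimal sketch of what that setup usually looks like; the
section placement is illustrative, not taken from this thread:
# ceph.conf on the hypervisor
[client]
rbd cache = true
rbd cache writethrough until flush = true
In the libvirt domain XML, the disk driver would then carry cache='writeback'
so qemu advertises a write cache to the guest and passes its flushes through.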
On Fri, Aug 22, 2014 at 7:05 PM, Yufang Zhang wrote:
> Hi guys,
>
> Apologies if this question has been asked before. I'd
This time ceph01-vm is down, no big log this time, and the other 2 are OK. I
don't know what the reason is. This is not my first time installing Ceph, but
this is the first time I have seen a mon go down again and again.
ceph.conf on each OSD and MON:
[global]
fsid = 075f1aae-48de-412e-b024-b0f014dbc8cf
mon_initial_members =
On 08/22/2014 10:21 AM, debian Only wrote:
I have 3 mons in Ceph 0.80.5 on Wheezy, and one RadosGW.
When this happened the first time, I increased the mon logging.
This time mon.ceph02-vm is down; only this mon is down, the other 2 are OK.
Please, can someone give me some guidance.
27M Aug 22 02:11 ceph-mon.ceph04
Hi guys,
Apologies if this question has been asked before. I'd like to know if it is
safe to enable rbd cache with qemu (cache mode set to writeback) in
production. Currently, there are 4 types of guest OS supported in our
production: RHEL5, RHEL6, Win2k3, Win2k8. Our host is RHEL6.2, on which
qemu
I have 3 mons in Ceph 0.80.5 on Wheezy, and one RadosGW.
When this happened the first time, I increased the mon logging.
This time mon.ceph02-vm is down; only this mon is down, the other 2 are OK.
Please, can someone give me some guidance.
27M Aug 22 02:11 ceph-mon.ceph04-vm.log
43G Aug 22 02:11 ceph-mon.ceph02-vm.l
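If the huge logs come from raised debug levels, they can be dialled back at
runtime without restarting the mon; a rough sketch (the exact levels depend on
what you still need to capture):
# lower mon debug output at runtime
ceph tell mon.ceph02-vm injectargs '--debug-mon 1 --debug-paxos 0 --debug-ms 0'
# and make it persistent in ceph.conf under [mon]
debug mon = 1
debug paxos = 0
debug ms = 0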
Hi Irek,
Got it, Thanks :)
—
idzzy
On August 22, 2014 at 6:17:52 PM, Irek Fasikhov (malm...@gmail.com) wrote:
node1: 4[TB], node2: 4[TB], node3: 4[TB] :)
On Aug 22, 2014 at 12:53, "idzzy" wrote:
Hi Irek,
Understood.
Let me ask about only this.
> No, it's for the entire cluster.
node1: 4[TB], node2: 4[TB], node3: 4[TB] :)
On Aug 22, 2014 at 12:53, "idzzy" wrote:
> Hi Irek,
>
> Understood.
>
> Let me ask about only this.
>
> > No, it's for the entire cluster.
>
> Does this mean that the total disk size of all nodes combined should be more
> than 11.8 TB?
> e.g node1: 4[TB],
Hi Irek,
Understood.
Let me ask about only this.
> No, it's for the entire cluster.
Does this mean that the total disk size of all nodes combined should be more than 11.8 TB,
e.g. node1: 4[TB], node2: 4[TB], node3: 4[TB],
and not 11.8 TB on each node,
e.g. node1: 11.8[TB], node2: 11.8[TB], node3: 11.8[TB]?
Thank you.
On
I recommend you use replication, because radosgw uses asynchronous
replication.
Yes, divided by the nearfull ratio.
No, it's for the entire cluster.
2014-08-22 11:51 GMT+04:00 idzzy :
> Hi,
>
> If I don't use replication, do I just divide by the nearfull_ratio?
> (Does only radosgw support replication?)
Hi,
If I don't use replication, do I just divide by the nearfull_ratio?
(Does only radosgw support replication?)
10T / 0.85 = 11.8 TB on each node?
# ceph pg dump | egrep "full_ratio|nearfull_ratio"
full_ratio 0.95
nearfull_ratio 0.85
Sorry, I'm not familiar with the Ceph architecture.
Thanks for the rep
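To make the arithmetic above explicit: the nearfull ratio applies to the raw
capacity of the whole cluster, and replicas multiply the raw space needed. A
quick sketch, assuming 10 TB of usable data and a hypothetical replica size of
3 (size=1 reproduces the 11.8 TB figure):
$ echo "10 / 0.85" | bc -l        # no replication: ~11.8 TB raw, cluster-wide
11.76470588235294117647
$ echo "10 * 3 / 0.85" | bc -l    # three replicas: ~35.3 TB raw, cluster-wide
35.29411764705882352941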
Thanks Greg, a good night's sleep and your eyes made the difference. Here's the
relevant part from /etc/cinder/cinder.conf to make that happen:
[DEFAULT]
...
enabled_backends=quobyte,rbd
default_volume_type=rbd
[quobyte]
volume_backend_name=quobyte
quobyte_volume_url=quobyte://host.example.com/o
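For completeness, a sketch of what the matching [rbd] backend section could
look like; the pool name, cephx user and secret UUID below are placeholders,
not values from the original configuration:
[rbd]
volume_backend_name=rbd
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_user=cinder
rbd_secret_uuid=<libvirt-secret-uuid>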
Hi Craig,
many thanks for your help. I decided to reinstall ceph.
Regards,
Mike
From: Craig Lewis [cle...@centraldesktop.com]
Sent: Tuesday, 19 August 2014 22:24
To: Riederer, Michael
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] HEALTH_WARN 4 pgs i