On Thu, Apr 2, 2015 at 8:03 PM, Mark Nelson mnel...@redhat.com wrote:
Thought folks might like to see this:
http://hothardware.com/reviews/intel-ssd-750-series-nvme-pci-express-solid-state-drive-review
Quick summary:
- PCIe SSD based on the P3700
- 400GB for $389!
- 1.2GB/s writes and
On Fri, 3 Apr 2015 11:16:11 +0300 Andrey Korolyov wrote:
Hi,
Thank you for your answer! Meanwhile I did some investigation and found
the reason: quota is enforced on PUTs perfectly, but there are no checks on
POSTs. I've made a pull-request: https://github.com/ceph/ceph/pull/4240
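To illustrate the gap: a plain PUT past the quota is rejected, while a form POST of the same size goes through unchecked. A minimal sketch of the verb-independent check the fix needs (hypothetical Python for illustration only; the actual radosgw code is C++, and these names are made up, not radosgw internals):

```python
# Hypothetical sketch of quota enforcement that covers every write verb.
# Names (check_quota, handle_object_write) are illustrative only.

class QuotaExceeded(Exception):
    pass

def check_quota(used_bytes, incoming_bytes, max_bytes):
    """Reject a write that would push usage past the quota.

    A negative max_bytes means the quota is unset/unlimited.
    """
    if max_bytes >= 0 and used_bytes + incoming_bytes > max_bytes:
        raise QuotaExceeded(
            "write of %d bytes exceeds quota (%d/%d used)"
            % (incoming_bytes, used_bytes, max_bytes))

def handle_object_write(verb, used_bytes, size, max_bytes):
    # The bug was that only the PUT path ran the quota check; the fix
    # is to apply it to every verb that creates data, POST included.
    if verb in ("PUT", "POST"):
        check_quota(used_bytes, size, max_bytes)
    return "stored"
```

With the check applied to both verbs, a POST that overruns the quota fails the same way a PUT does.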
2015-04-02 18:40 GMT+03:00 Yehuda Sadeh-Weinraub yeh...@redhat.com:
Hi,
And indeed there's nothing in the log for mon.a between 17:49:32.77602
and 17:50:10.929258, which seems not great. I'd look and see if
something is happening with your disks, maybe?
Mmm, indeed.
I had checked all the disks with SMART and the RAID controller wasn't
reporting any as
Great, I opened issue #11323.
Thanks,
Yehuda
- Original Message -
From: Sergey Arkhipov sarkhi...@asdco.ru
To: Yehuda Sadeh-Weinraub yeh...@redhat.com
Cc: ceph-users@lists.ceph.com
Sent: Friday, April 3, 2015 1:00:02 AM
Subject: Re: [ceph-users] RADOS Gateway quota management
Hi,
Is there a way to create S3 subusers? From the documentation here
(http://ceph.com/docs/master/radosgw/admin/#user-management), it appears
that a subuser only has a Swift interface.
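As far as I can tell the documented subuser path is Swift-only; for a second set of S3 credentials the usual route is to generate an additional S3 key pair on the parent user instead. A hedged example (the uid "testuser" is made up, and this is worth verifying against your radosgw version):

```shell
# Swift subuser -- the documented case:
radosgw-admin subuser create --uid=testuser \
    --subuser=testuser:swift --access=full

# For S3, generate an additional key pair on the parent user instead
# (no separate identity, but separate credentials):
radosgw-admin key create --uid=testuser \
    --key-type=s3 --gen-access-key --gen-secret
```

Note the extra S3 key shares the parent user's permissions and quota, so it is separate credentials rather than a separately restricted identity.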
--
Thanks and Regards,
Ravikiran
Problem: when live-migrating a VM, the migration completes but leaves the VM
unstable. The VM may become unreachable on the network, or go through a
cycle where it hangs for ~10 minutes at a time. A hard reboot is the only way
to resolve this.
Related libvirt logs:
2015-03-30
I pulled down the gitbuilder package (ceph version 0.93-223-g5c2ecc3
(5c2ecc3b8901e6491f1fde8858b51794ffa892e2)) and redid the cluster.
The small-file test ( time cp small1/* small2/. ) went from 2 min
30 sec to 1 min 40 sec. With some initial tuning I was able to
get it down to 1 min
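For anyone wanting a repeatable version of that small-file comparison, a rough harness along these lines works (file count and size here are made up, not the original test's workload; point the temp directory at the mounted filesystem you want to measure):

```python
import shutil
import tempfile
import time
from pathlib import Path

def time_small_file_copy(n_files=100, size=4096):
    """Create n small files, copy them (cp small1/* small2/. style),
    and return (elapsed_seconds, files_copied)."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "small1"
        dst = Path(tmp) / "small2"
        src.mkdir()
        dst.mkdir()
        for i in range(n_files):
            (src / ("file%d" % i)).write_bytes(b"x" * size)
        start = time.monotonic()
        for f in sorted(src.iterdir()):
            shutil.copy(f, dst / f.name)
        elapsed = time.monotonic() - start
        return elapsed, len(list(dst.iterdir()))
```

Running it before and after a tuning change gives comparable numbers without hand-timing cp.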