Re: [ceph-users] [RGW] SignatureDoesNotMatch using curl

2017-09-25 Thread Дмитрий Глушенок
You must use a triple "\n" with GET in stringToSign. See http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html > On 18 Sep 2017, at 12:23, junho_k...@tmax.co.kr wrote: > > I'm trying to use Ceph Object Storage in CLI. > I used curl to make a request to the RGW with S3 wa…
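For reference, a minimal sketch of such a v2-signed GET using curl and openssl; the host, bucket, object and keys are placeholders, not values from the original message:

    #!/bin/sh
    # Hedged sketch: AWS v2 (access-key) signed GET against RGW.
    ACCESS_KEY="PLACEHOLDER_ACCESS_KEY"
    SECRET_KEY="PLACEHOLDER_SECRET_KEY"
    HOST="rgw.example.com"
    RESOURCE="/mybucket/myobject"

    DATE="$(date -u '+%a, %d %b %Y %H:%M:%S GMT')"   # must match the Date header exactly

    # Verb, empty Content-MD5, empty Content-Type, Date, canonicalized resource:
    # for a plain GET this gives the "triple \n" before the date.
    STRING_TO_SIGN="GET\n\n\n${DATE}\n${RESOURCE}"

    SIGNATURE="$(printf '%b' "${STRING_TO_SIGN}" \
      | openssl dgst -sha1 -hmac "${SECRET_KEY}" -binary | base64)"

    curl -v "http://${HOST}${RESOURCE}" \
      -H "Date: ${DATE}" \
      -H "Authorization: AWS ${ACCESS_KEY}:${SIGNATURE}"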

Re: [ceph-users] iSCSI production ready?

2017-08-02 Thread Дмитрий Глушенок
Will it be a separate project? There is a third RC for Luminous without a word about the iSCSI Gateway. > On 17 July 2017, at 14:54, Jason Dillaman wrote: > > On Sat, Jul 15, 2017 at 11:01 PM, Alvaro Soto wrote: >> Hi guys, >> does anyone know any news about in wha…

Re: [ceph-users] CRC mismatch detection on read (XFS OSD)

2017-07-31 Thread Дмитрий Глушенок
On Fri, Jul 28, 2017 at 8:16 AM Дмитрий Глушенок <gl...@jet.msk.su> wrote: > Hi! > > Just found a strange thing while testing deep-scrub on 10.2.7. > 1. Stop OSD > 2. Change primary copy's contents (using vi) > 3. Start OSD > > Then 'rados get'…

[ceph-users] CRC mismatch detection on read (XFS OSD)

2017-07-28 Thread Дмитрий Глушенок
Hi! Just found a strange thing while testing deep-scrub on 10.2.7. 1. Stop an OSD. 2. Change the primary copy's contents (using vi). 3. Start the OSD. Then 'rados get' returns "No such file or directory", with no error messages in the OSD log and cluster status "HEALTH_OK". 4. ceph pg repair. Then 'rados get' works…
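A sketch of that reproduction, assuming systemd-managed filestore OSDs; the OSD id, pool, object and PG id are placeholders (the original message gives no exact commands):

    # Steps 1-3: stop the OSD holding the primary copy, edit the object's file
    # under the OSD's XFS data directory (e.g. with vi), then start the OSD.
    systemctl stop ceph-osd@0
    # ... edit the object file under /var/lib/ceph/osd/ceph-0/current/ ...
    systemctl start ceph-osd@0

    # The read now fails even though the cluster reports HEALTH_OK:
    rados -p rbd get myobject /tmp/myobject.out   # "No such file or directory"

    # Step 4: repair the PG, after which the read works again.
    ceph pg repair 0.1a
    rados -p rbd get myobject /tmp/myobject.out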

Re: [ceph-users] oVirt/RHEV and Ceph

2017-07-25 Thread Дмитрий Глушенок
Cinder is used as a management gateway only, while the hypervisors (QEMU) communicate directly with the Ceph cluster and pass RBD volumes to VMs (without mapping/mounting RBD at the hypervisor level). > On 25 July 2017, at 6:18, Brady Deetz wrote: > > Thanks for pointing to some documentation. I'd…

Re: [ceph-users] Mount CephFS with dedicated user fails: mount error 13 = Permission denied

2017-07-24 Thread Дмитрий Глушенок
Check your kernel version; prior to 4.9 the client needed to be allowed read on the root path: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/014804.html > On 24 July 2017, at 12:36, c.mo...@web.de wrote: > > Hello! > > I want to mount CephFS with a dedicated user in order to avoid p…
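For kernels older than 4.9, a sketch of granting that extra read capability on the root path; the client name, subdirectory and data pool are placeholders:

    # Read-only on "/" plus read-write on the subdirectory the client actually uses.
    ceph auth caps client.myuser \
      mon 'allow r' \
      mds 'allow r, allow rw path=/mydir' \
      osd 'allow rw pool=cephfs_data'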

Re: [ceph-users] How's cephfs going?

2017-07-21 Thread Дмитрий Глушенок
All three mons have the value "simple". > On 21 July 2017, at 15:47, Ilya Dryomov wrote: > > On Thu, Jul 20, 2017 at 6:35 PM, Дмитрий Глушенок <gl...@jet.msk.su> wrote: >> Hi Ilya, >> >> While trying to reproduce the issue I've foun…

Re: [ceph-users] How's cephfs going?

2017-07-20 Thread Дмитрий Глушенок
…ing/#disconnected-remounted-fs). Do you still think it should be posted to http://tracker.ceph.com/issues/15255 ? > On 20 July 2017, at 17:02, Ilya Dryomov wrote: > > On Thu, Jul 20, 2017 at 3:23 PM, Дмитрий Глушенок wrote: >> Looks like I have a similar issue to the one described i…

Re: [ceph-users] How's cephfs going?

2017-07-20 Thread Дмитрий Глушенок
Looks like I have a similar issue to the one described in this bug: http://tracker.ceph.com/issues/15255 The writer (dd in my case) can be restarted and then writing continues, but until the restart dd appears to hang on the write. > On 20 July 2017, at 16:12, Дмитрий Глушенок wrote: > > Hi, >…

Re: [ceph-users] How's cephfs going?

2017-07-20 Thread Дмитрий Глушенок
…with it. > On 19 July 2017, at 13:20, Дмитрий Глушенок wrote: > > You're right. Forgot to mention that the client was using kernel 4.9.9. > >> On 19 July 2017, at 12:36, 许雪寒 <xuxue...@360.cn> >> wrote: >> >> Hi, thanks for your sharing :-)…

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Дмитрий Глушенок
Unfortunately no. Using FUSE was discarded due to poor performance. > On 19 July 2017, at 13:45, Blair Bethwaite wrote: > > Interesting. Any FUSE client data-points? > > On 19 July 2017 at 20:21, Дмитрий Глушенок wrote: >> RBD (via krbd) was in action at the…

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Дмитрий Глушенок
…bd. It's not clear whether your issue is specifically related > to CephFS or actually something else. > > Cheers, > > On 19 July 2017 at 19:32, Дмитрий Глушенок wrote: >> Hi, >> >> I can share negative test results (on Jewel 10.2.6). All tests were >> perfor…

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Дмитрий Глушенок
> Thanks again :-) > > From: Дмитрий Глушенок [mailto:gl...@jet.msk.su] > Sent: 19 July 2017 17:33 > To: 许雪寒 > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] How's cephfs going? > > Hi, > > I can share negative test results (on Jewel 10.2.6). All tests were…

Re: [ceph-users] How's cephfs going?

2017-07-19 Thread Дмитрий Глушенок
Hi, I can share negative test results (on Jewel 10.2.6). All tests were performed while actively writing to CephFS from a single client (about 1300 MB/sec). The cluster consists of 8 nodes with 8 OSDs each (2 SSD for journals and metadata, 6 HDD RAID6 for data); MON/MDS are on dedicated nodes, 2 MDS at al…

[ceph-users] OSD returns back and recovery process

2017-06-21 Thread Дмитрий Глушенок
Hello! It is clear what happens after an OSD goes OUT: PGs are backfilled to other OSDs, and PGs whose primary copies were on the lost OSD get new primary OSDs. But when the OSD comes back, it looks like all the data for which that OSD was holding primary copies is read from that OSD and re-written t…

Re: [ceph-users] librbd + rbd-nbd

2017-04-07 Thread Дмитрий Глушенок
…shant > > -- > Prashant Murthy > Sr Director, Software Engineering | Salesforce > Mobile: 919-961-3041 > > > -- > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/list…

Re: [ceph-users] centos 7.3 libvirt (2.0.0-10.el7_3.2) and openstack volume attachment w/ cephx broken

2016-12-20 Thread Дмитрий Глушенок
Hi, I was playing with oVirt/Cinder integration and hit the same issue. At the same time, virsh on CentOS 7.3 was working fine with RBD images. So, as a workaround, the following procedure can be used to permanently set the secret on the libvirt host (sketched below): # vi /tmp/secret.xml db11828c-f9e8-48cb-81dd-29fc00e…
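A sketch of that workaround; the UUID and client name are placeholders and must match what Cinder/oVirt is configured to use:

    # Define the secret permanently (ephemeral='no') on the libvirt host.
    cat > /tmp/secret.xml <<'EOF'
    <secret ephemeral='no' private='no'>
      <uuid>PLACEHOLDER-UUID</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF
    virsh secret-define --file /tmp/secret.xml

    # Attach the cephx key to the secret.
    virsh secret-set-value --secret PLACEHOLDER-UUID \
      --base64 "$(ceph auth get-key client.cinder)"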

Re: [ceph-users] 2x replication: A BIG warning

2016-12-07 Thread Дмитрий Глушенок
…I'm very interested in this calculation. > What assumptions have you made? > Network speed, OSD fill level, etc.? > > Thanks > > Wolfgang > > On 12/07/2016 11:16 AM, Дмитрий Глушенок wrote: >> Hi, >> >> Let me add a little math to your w…

Re: [ceph-users] 2x replication: A BIG warning

2016-12-07 Thread Дмитрий Глушенок
RAID10 will also suffer from LSE on big disks, won't it? > On 7 Dec 2016, at 13:35, Christian Balzer wrote: > > > > Hello, > > On Wed, 7 Dec 2016 13:16:45 +0300 Дмитрий Глушенок wrote: > >> Hi, >> >> Let me add a little math to your warni…

Re: [ceph-users] 2x replication: A BIG warning

2016-12-07 Thread Дмитрий Глушенок
Hi, Let me add a little math to your warning: with an LSE rate of 1 in 10^15 on modern 8 TB disks there is a 5.8% chance of hitting an LSE during recovery of an 8 TB disk. So roughly every 18th recovery will fail. Similarly to RAID6 (two parity disks), size=3 mitigates the problem. By the way, why is it a c…
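One way to reproduce the 5.8% figure, assuming the LSE rate is read as 10^-15 per bit and a full read of the disk is roughly 6 x 10^13 bits (counting 8 TB a little conservatively; with the full 6.4 x 10^13 bits of 8 x 10^12 bytes the result comes out closer to 6.2%):

    P(\text{LSE during full read}) = 1 - \left(1 - 10^{-15}\right)^{N}
      \approx 1 - e^{-N \cdot 10^{-15}}
      \approx 1 - e^{-0.06} \approx 0.058
      \quad \text{for } N \approx 6 \times 10^{13}\ \text{bits},

which is about one recovery in 18.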

Re: [ceph-users] Math behind : : OSD count vs OSD process vs OSD ports

2015-11-18 Thread Дмитрий Глушенок
Hi Vickey, > On 18 Nov 2015, at 11:36, Vickey Singh wrote: > > Can anyone please help me understand this. > > Thank You > > > On Mon, Nov 16, 2015 at 5:55 PM, Vickey Singh wrote: > Hello Community > > Need your help in understanding this. > >…

Re: [ceph-users] rados bench leaves objects in tiered pool

2015-11-03 Thread Дмитрий Глушенок
…ensive operations and >> modifying them to do more than the simple info scan would be fairly >> expensive in terms of computation and IO. >> >> I think there are some caching commands you can send to flush updates >> which would cause the objects to be entirely deleted,…

[ceph-users] rados bench leaves objects in tiered pool

2015-11-03 Thread Дмитрий Глушенок
Hi, While benchmarking a tiered pool using rados bench, it was noticed that objects are not removed after the test. The test was performed using "rados -p rbd bench 3600 write". The pool is not used by anything else. Just before the end of the test: POOLS: NAME ID USED %U…
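A sketch of the usual cleanup in this situation; the cache pool name is a placeholder, the base pool is rbd as in the test above:

    # Remove the benchmark objects written by "rados bench":
    rados -p rbd cleanup

    # With a cache tier in front, flush and evict the cache pool so that the
    # deletions reach the base tier and the objects actually disappear:
    rados -p my-cache-pool cache-flush-evict-all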