Re: Many dns domain names in radosgw

2012-11-19 Thread Sławomir Skowron
Sat, Nov 17, 2012 at 1:50 PM, Sławomir Skowron wrote: > > Welcome, > > > > I have a question. Is there any way to support multiple domain names > > in one radosgw with virtual-host-style connections in S3 ?? > > > Are you aiming at having multiple virtual domai

Re: Geo-replication with RBD

2013-01-31 Thread Sławomir Skowron
We are using nginx on top of rgw. In nginx we managed to build logic that uses AMQP and async operations via queues. Workers on each side then get data from their own queue and copy data from source to destination through the S3 API. It works for PUT/DELETE, and works automatically when production goe
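A minimal sketch of such a replication worker, assuming s3cmd-style tooling with per-cluster config files; the queue feed, bucket name, and config paths below are hypothetical, not taken from the original setup:

    # hypothetical worker loop: each message from the AMQP queue carries one S3 object key
    while read -r KEY; do                                            # keys fed by a queue consumer
      s3cmd -c src-cluster.cfg get "s3://mybucket/$KEY" /tmp/obj     # fetch from source rgw
      s3cmd -c dst-cluster.cfg put /tmp/obj "s3://mybucket/$KEY"     # store into destination rgw
    done < <(queue-consumer repl-events)                             # queue-consumer is a stand-in for the AMQP client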

Re: Geo-replication with RBD

2013-02-18 Thread Sławomir Skowron
The queue is re-checked for some time. I am in the middle of writing an article about this, but my sickness has slowed the process slightly. On Thu, Jan 31, 2013 at 10:50 AM, Gandalf Corvotempesta wrote: > 2013/1/31 Sławomir Skowron : >> We are using nginx on top of rgw. In ng

Re: Geo-replication with RBD

2013-02-18 Thread Sławomir Skowron
with restart, or many other cases. Volumes come in many sizes: 1-500GB external block devices for KVM VMs, like EBS. On Mon, Feb 18, 2013 at 3:07 PM, Sławomir Skowron wrote: > Hi, sorry for the very late response, but I was sick. > > Our case is to make a failover rbd instance in another cluste

Re: Geo-replication with RBD

2013-02-19 Thread Sławomir Skowron
> Regards, > Sébastien Han. > > > On Mon, Feb 18, 2013 at 3:20 PM, Sławomir Skowron wrote: >> Hi, sorry for the very late response, but I was sick. >> >> Our case is to make a failover rbd instance in another cluster. We are >> storing block device images, for so

Re: Geo-replication with RBD

2013-02-20 Thread Sławomir Skowron
As I said, yes. For now it is the only option to migrate data from one cluster to another, and it has to be enough, with some automatic features. But is there any timeline, or any brainstorming in Ceph internal meetings, about possible replication at the block level, or something like that ?? On 20 Feb 2013

[0.48.3] cluster health - 1 pgs incomplete state

2013-02-20 Thread Sławomir Skowron
Hi, I have a problem. After an OSD expansion and the resulting CRUSH reorganization of the cluster, I have 1 pg in the incomplete state. How can I solve this problem ?? ceph -s health HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs stuck unclean monmap e21: 3 mons at {0=10.178.64.4:6790/0,1=10.178.64.5:6790/0,2
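The usual first diagnostic steps for a stuck PG look roughly like this (a sketch; the PG id 0.1a is a hypothetical placeholder — the real id comes out of the health output):

    ceph health detail           # names the incomplete/stuck PG
    ceph pg dump_stuck inactive  # lists stuck PGs with their acting OSD sets
    ceph pg 0.1a query           # detailed per-PG state for the PG found above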

Re: RGW Blocking on 1-2 PG's - argonaut

2013-03-04 Thread Sławomir Skowron
h. pool 3 '.rgw.buckets' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 4800 pgp_num 4800 last_change 908 owner 0 When will it be possible to expand the number of PGs ?? Best Regards Slawomir Skowron On Mon, Mar 4, 2013 at 3:16 PM, Yehuda Sadeh wrote: > On Mon, Mar 4, 2013

Re: RGW Blocking on 1-2 PG's - argonaut

2013-03-04 Thread Sławomir Skowron
On Mon, Mar 4, 2013 at 6:02 PM, Sage Weil wrote: > On Mon, 4 Mar 2013, Sławomir Skowron wrote: >> Ok, thanks for the response. But if I have a crush map like this in the attachment. >> >> All data should be balanced equally, not including hosts with 0.5 weight. >> >> How to make data auto-balance ?? when I know

Re: RGW Blocking on 1-2 PG's - argonaut

2013-03-04 Thread Sławomir Skowron
good. On Mon, Mar 4, 2013 at 6:25 PM, Gregory Farnum wrote: > On Mon, Mar 4, 2013 at 9:23 AM, Sławomir Skowron wrote: >> On Mon, Mar 4, 2013 at 6:02 PM, Sage Weil wrote: >>> On Mon, 4 Mar 2013, Sławomir Skowron wrote: >>>> Ok, thanks for the response. But if I have cru

Re: RGW Blocking on 1-2 PG's - argonaut

2013-03-04 Thread Sławomir Skowron
Mon, Mar 4, 2013 at 6:42 PM, Sławomir Skowron wrote: > Alone (one of the slow OSDs in the mentioned triple) > > 2013-03-04 18:39:27.683035 osd.23 [INF] bench: wrote 1024 MB in blocks > of 4096 KB in 15.241943 sec at 68795 KB/sec > > in a for loop (some slow requests appear): > >
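For context, those bench lines come from the OSD bench command; looping it over the suspect triple looks roughly like this (0.48-era CLI spelling; later releases use `ceph tell osd.N bench`):

    for i in 21 22 23; do
      ceph osd tell $i bench   # each OSD writes ~1 GB locally and reports its throughput
    done
    ceph -w                    # the results show up in the cluster log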

Re: RGW Blocking on 1-2 PG's - argonaut

2013-03-06 Thread Sławomir Skowron
Hi, I ran some tests to reproduce this problem. As you can see, only one drive (each drive in the same PG) is much more utilized than the others, and there are some ops queued on this slow osd. The test is getting heads from s3 objects, alphabetically sorted. This is strange: why are these files going in

Re: RGW Blocking on 1-2 PG's - argonaut

2013-03-06 Thread Sławomir Skowron
Great, thanks. Now I understand everything. Best Regards SS On 6 Mar 2013, at 15:04, Yehuda Sadeh wrote: > On Wed, Mar 6, 2013 at 5:06 AM, Sławomir Skowron wrote: >> Hi, I ran some tests to reproduce this problem. >> >> As you can see, only one drive (each driv

mkfs on osd - failed in 0.47

2012-05-21 Thread Sławomir Skowron
Ubuntu precise: Linux obs-10-177-66-4 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux # mount /dev/sdc on /vol0/data/osd.0 type ext4 (rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0) # ceph-osd -i 0 --mkjournal --mkfs --monmap /tm

Re: mkfs on osd - failed in 0.47

2012-05-21 Thread Sławomir Skowron
roken for a few > hours in a way that will manifest somewhat like this. > > On Mon, May 21, 2012 at 1:49 PM, Stefan Priebe wrote: >> On 21.05.2012 22:41, Sławomir Skowron wrote: >> >>> # ceph-osd -i 0 --mkjournal --mkfs --monmap /tmp/monmap >>> 2012-05-21

Re: Designing a cluster guide

2012-05-21 Thread Sławomir Skowron
Maybe two cheap MLC Intel drives on SandForce (320/520), 120GB or 240GB, would be good for the journal, with the HPA shrunk to 20-30GB used only for separate journaling partitions in hardware RAID1. I would like to test a setup like this, but maybe someone has real-life info ?? On Mon, May 21, 2012 at 5:07 PM, T

Re: mkfs on osd - failed in 0.47

2012-05-21 Thread Sławomir Skowron
Great, thanks. On Mon, May 21, 2012 at 11:24 PM, Sage Weil wrote: > On Mon, 21 May 2012, Stefan Priebe wrote: >> On 21.05.2012 22:58, Sławomir Skowron wrote: >> > Yes, on root. 30 minutes ago on 0.46 the same operation worked. >> > >> ... >> > Repo: >> > >> > deb http://ceph.com/debian/ precise main >>

Re: Designing a cluster guide

2012-05-21 Thread Sławomir Skowron
I get performance near 320MB/s on a VM from a 3-node rbd cluster, but with 10GE and 26 2.5" SAS drives per machine that is not everything it could be. Every osd drive is raid0 with one drive behind a battery-backed nvram cache in the hardware raid ctrl. Every osd takes a lot of ram for cach

Re: Designing a cluster guide

2012-05-21 Thread Sławomir Skowron
http://en.wikipedia.org/wiki/Host_protected_area On Tue, May 22, 2012 at 8:30 AM, Stefan Priebe - Profihost AG wrote: > On 21.05.2012 23:22, Sławomir Skowron wrote: >> Maybe two cheap MLC Intel drives on SandForce >> (320/520), 120GB or 240GB, and HP

Re: mkfs on osd - failed in 0.47

2012-05-22 Thread Sławomir Skowron
10:16 fsid -rw-r--r-- 1 root root 536870912 May 22 10:16 journal drwx-- 2 root root 16384 May 22 10:15 lost+found -rw-r--r-- 1 root root 4 May 22 10:16 store_version -rwx-- 1 root root 0 May 22 10:16 xattr_test On Mon, May 21, 2012 at 11:25 PM, Sławomir Skowron

Re: mkfs on osd - failed in 0.47

2012-05-22 Thread Sławomir Skowron
Ok, now it is clear to me. I am disabling filestore_xattr_use_omap for now, and I will try to move the puppet class to xfs for a new cluster init :) Thanks On Tue, May 22, 2012 at 7:47 PM, Greg Farnum wrote: > On Tuesday, May 22, 2012 at 1:21 AM, Sławomir Skowron wrote: >> One more thing: >

Re: RGW, future directions

2012-05-22 Thread Sławomir Skowron
On Tue, May 22, 2012 at 8:07 PM, Yehuda Sadeh wrote: > RGW is maturing. Beside looking at performance, which highly ties into > RADOS performance, we'd like to hear whether there are certain pain > points or future directions that you (you as in the ceph community) > would like to see us taking. >

Re: RGW, future directions

2012-05-22 Thread Sławomir Skowron
On Tue, May 22, 2012 at 9:09 PM, Yehuda Sadeh wrote: > On Tue, May 22, 2012 at 11:25 AM, Sławomir Skowron wrote: >> On Tue, May 22, 2012 at 8:07 PM, Yehuda Sadeh wrote: >>> RGW is maturing. Beside looking at performance, which highly ties into >>> RADOS performance

Re: RGW, future directions

2012-05-24 Thread Sławomir Skowron
On Thu, May 24, 2012 at 7:15 AM, Wido den Hollander wrote: > > > On 22-05-12 20:07, Yehuda Sadeh wrote: >> >> RGW is maturing. Beside looking at performance, which highly ties into >> RADOS performance, we'd like to hear whether there are certain pain >> points or future directions that you (you a

RBD stale on VM, and RBD cache enable problem

2012-06-11 Thread Sławomir Skowron
I have two questions. My newly created cluster has xfs on all osds, ubuntu precise, kernel 3.2.0-23-generic, Ceph 0.47.2-1precise. pool 0 'data' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1228 owner 0 crash_replay_interval 45 pool 1 'metadata' rep size 3 crush_
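For the cache half of the question, the era's switch lived in the client section of ceph.conf (a sketch; the size value is an illustrative assumption, and it needs a librbd build new enough to honor the option):

    [client]
        rbd cache = true              ; enable the librbd writeback cache
        rbd cache size = 33554432     ; 32 MB, hypothetical sizing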

Re: RBD stale on VM, and RBD cache enable problem

2012-07-04 Thread Sławomir Skowron
On Tue, Jul 3, 2012 at 7:39 PM, Gregory Farnum wrote: > On Mon, Jun 11, 2012 at 12:53 PM, Sławomir Skowron wrote: >> I have two questions. My newly created cluster has xfs on all osds, >> ubuntu precise, kernel 3.2.0-23-generic, Ceph 0.47.2-1precise >> >> pool 0 '

Increase number of PG

2012-07-20 Thread Sławomir Skowron
I know that this feature is disabled; are you planning to enable it in the near future ?? I have many drives, and my S3 installation uses only a few of them at one time, and I need to improve that. When I use it as rbd it uses all of them. Regards Slawomir Skowron -- To unsubscribe from this list:

Re: Increase number of PG

2012-07-22 Thread Sławomir Skowron
On 21 Jul 2012, at 20:08, Yehuda Sadeh wrote: > On Sat, Jul 21, 2012 at 10:13 AM, Gregory Farnum wrote: >> On Fri, Jul 20, 2012 at 1:15 PM, Tommi Virtanen > (mailto:t...@inktank.com)> wrote: >>> On Fri, Jul 20, 2012 at 8:31 AM, Sławomir Skowron >> (m

Re: Increase number of PG

2012-07-23 Thread Sławomir Skowron
Ok, everything is clear now, thanks. I will try this during planned service work. Regards Slawomir Skowron. On 23 Jul 2012, at 18:00, Tommi Virtanen wrote: > On Sun, Jul 22, 2012 at 11:57 PM, Sławomir Skowron wrote: >> My workload looks like this: >> >> - Max 20% are PUTs

Re: RadosGW hanging

2012-08-14 Thread Sławomir Skowron
5] After restarting these two OSDs, the delayed operations were gone. When scrubbing in a pg runs again, the number of waiting objecter requests in rgw goes up, and in this case the scrubbing does not finish for many hours, so I have quite a big problem. Is this some known bug ?? or maybe a new one ?? O

ceph 0.48.1 osd die

2012-08-21 Thread Sławomir Skowron
Ubuntu precise, ceph 0.48.1 After a crush change the whole cluster reorganized, but one machine got very high load, and 4 OSDs on this machine died with this in the log. After that I rebooted the machine and re-initialized these OSDs (I left one to diagnose if needed), for full stability. Now everything is ok, but maybe

Re: Ceph remap/recovery stuck

2012-08-24 Thread Sławomir Skowron
OSD ?? On Thu, Aug 23, 2012 at 3:52 PM, Sławomir Skowron wrote: > 3 osd rebuilt ok after the crash, but with the rebuild of two more osd (12 and > 30) I can't get the cluster to active+clean > > I do the rebuild like in the doc: > > stop osd, > remove from crush, > rm from map, > r
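Spelled out, that removal/rebuild sequence is roughly the following (osd.12 taken from the thread; the init-script path may differ per distro):

    ceph osd out 12                 # let data drain off the OSD
    /etc/init.d/ceph stop osd.12    # stop the daemon
    ceph osd crush remove osd.12    # drop it from the CRUSH map
    ceph auth del osd.12            # remove its key
    ceph osd rm 12                  # remove it from the OSD map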

Re: Ideal hardware spec?

2012-08-24 Thread Sławomir Skowron
On 24 Aug 2012, at 17:05, Mark Nelson wrote: > On 08/24/2012 09:17 AM, Stephen Perkins wrote: >> Morning Wido (and all), >> I'd like to see a "best" hardware config as well... however, I'm interested in a SAS switching fabric where the nodes do not have any storage (excep

Re: Ceph remap/recovery stuck

2012-08-24 Thread Sławomir Skowron
Nice, thanks. On 24 Aug 2012, at 18:35, Sage Weil wrote: > On Fri, 24 Aug 2012, Sławomir Skowron wrote: >> I have found a workaround. >> >> Change CRUSH replication to osd in the rule for this pool, and after >> recovery of the remapped data, just change the same rule back to rack awareness, >> and wh

Re: e: mon memory issue

2012-08-31 Thread Sławomir Skowron
I have this problem too. My mons in a 0.48.1 cluster take 10GB RAM each, with 78 osds and 2k requests per minute (max) in radosgw. Now I have run one via valgrind. I will send the output when the mon grows. On Fri, Aug 31, 2012 at 6:03 PM, Sage Weil wrote: > On Fri, 31 Aug 2012, Xiaopong Tran wrote: >

Re: e: mon memory issue

2012-09-04 Thread Sławomir Skowron
at 8:34 PM, Sławomir Skowron wrote: > I have this problem too. My mons in a 0.48.1 cluster take 10GB RAM > each, with 78 osds and 2k requests per minute (max) in radosgw. > > Now I have run one via valgrind. I will send the output when the mon grows. > > On Fri, Aug 31, 2012 at 6:03

Re: e: mon memory issue

2012-09-05 Thread Sławomir Skowron
Unfortunately here is the problem in my Ubuntu 12.04.1 --9399-- You may be able to write your own handler. --9399-- Read the file README_MISSING_SYSCALL_OR_IOCTL. --9399-- Nevertheless we consider this a bug. Please report --9399-- it at http://valgrind.org/support/bug_reports.html. ==9399== Warn

Re: Inject configuration change into cluster

2012-09-05 Thread Sławomir Skowron
Ok, but in the global case, when I use chef/puppet or anything else, I wish to change the configuration in ceph.conf and reload the daemon to pick up the new settings from ceph.conf; this feature would be very helpful in ceph administration. Inject is ok for testing or debugging. On Tue, Sep 4, 2012 at
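For reference, the inject mechanism being contrasted here (era syntax; the debug option is just an illustrative target):

    ceph osd tell 0 injectargs '--debug-osd 20'    # change one OSD's setting at runtime
    ceph osd tell \* injectargs '--debug-osd 20'   # or every OSD; ceph.conf stays untouched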

Re: e: mon memory issue

2012-09-05 Thread Sławomir Skowron
On Wed, Sep 5, 2012 at 5:51 PM, Sage Weil wrote: > On Wed, 5 Sep 2012, Sławomir Skowron wrote: >> Unfortunately here is the problem in my Ubuntu 12.04.1 >> >> --9399-- You may be able to write your own handler. >> --9399-- Read the file README_MISSING_SYSCALL_OR_IOCTL. >> --9399-- Nevertheless we

Re: e: mon memory issue

2012-09-10 Thread Sławomir Skowron
a_B=0 mem_stacks_B=0 heap_tree=empty On Wed, Sep 5, 2012 at 8:44 PM, Sławomir Skowron wrote: > On Wed, Sep 5, 2012 at 5:51 PM, Sage Weil wrote: >> On Wed, 5 Sep 2012, Sławomir Skowron wrote: >>> Unfortunately here is the problem in my Ubuntu 12.04.1 >>> >>> --9399--

Access Denied for bucket upload - 403 code

2012-09-11 Thread Sławomir Skowron
Every acl operation ends with 403 on PUT. ~# s3 -u test oc Bucket Status oc Access Denied Anyone know w

Re: Access Denied for bucket upload - 403 code

2012-09-11 Thread Sławomir Skowron
On Tue, Sep 11, 2012 at 6:48 PM, Yehuda Sadeh wrote: > On Tue, Sep 11, 2012 at 9:45 AM, Yehuda Sadeh wrote: >> On Tue, Sep 11, 2012 at 7:28 AM, Sławomir Skowron wrote: >>> Every acl operation ends with 403 on PUT. >>> >>> ~# s3 -u test oc

Re: Access Denied for bucket upload - 403 code

2012-09-11 Thread Sławomir Skowron
ec76c6700 10 --> Content-Length: 78 2012-09-11 22:37:34.712769 7faec76c6700 10 --> Accept-Ranges: bytes 2012-09-11 22:37:34.712772 7faec76c6700 10 --> Content-type: application/xml 2012-09-11 22:37:34.712887 7faec76c6700 2 req 71188:0.004554:s3:PUT /ocdn/files/pulscms/MjU7MDA_/ecebacddde95

Re: Access Denied for bucket upload - 403 code

2012-09-11 Thread Sławomir Skowron
2012-09-11 23:22:48.315801 7f67bd722700 2 req 75477:0.027806:s3:PUT /ocdn/test3:put_obj:http status=200 2012-09-11 23:22:48.316010 7f67bd722700 1 == req done req=0x274a630 http_status=200 == On Tue, Sep 11, 2012 at 10:46 PM, Yehuda Sadeh wrote: > On Tue, Sep 11, 2012 at 1:41 PM, Sławomir S

Re: Access Denied for bucket upload - 403 code

2012-09-11 Thread Sławomir Skowron
Ok, but why did this happen? There was no new code deployed before this problem. Is there any way to recover the cluster to normal operation without Access Denied on any s3 acl operation ?? On Tue, Sep 11, 2012 at 11:32 PM, Yehuda Sadeh wrote: > On Tue, Sep 11, 2012 at 2:28 PM, Sławomir Skowron wr

Re: Access Denied for bucket upload - 403 code

2012-09-11 Thread Sławomir Skowron
our s3 library got updated and now uses a newer s3 dialect? > > Basically you need to update the bucket acl, e.g.: > > # s3 -u create foo > # s3 -u getacl foo | s3 -u setacl oldbucket < acl > > On Tue, Sep 11, 2012 at 2:38 PM, Sławomir Skowron wrote: >> Ok, but wh

Re: Access Denied for bucket upload - 403 code

2012-09-11 Thread Sławomir Skowron
I grepped only for 7faec76c6700. Where is the acl data for a bucket stored in ceph ?? Maybe the acl is broken for the anonymous user ?? Does ceph support a global acl for a bucket ?? On 12 Sep 2012, at 01:27, Yehuda Sadeh wrote: > On Tue, Sep 11, 2012 at 1:41 PM, Sławomir Skowron wrote: >> And

Re: Access Denied for bucket upload - 403 code

2012-09-12 Thread Sławomir Skowron
2012-09-12 09:41:21.441029 7fa257fff700 10 --> Accept-Ranges: bytes 2012-09-12 09:41:21.441032 7fa257fff700 10 --> Content-type: application/xml 2012-09-12 09:41:21.441138 7fa257fff700 2 req 7430:0.006620:s3:GET /ocdn/images/pulscms/ZWI7MDMsMWUwLDAsMSwx/2cbff537b4543942d6571124b9cc3910.

[Solved] Re: Access Denied for bucket upload - 403 code

2012-09-12 Thread Sławomir Skowron
Problem solved. It was all because of Nginx and the request_uri in the fcgi params passed to radosgw. Now request_uri is ok, and the problem has disappeared. Big thanks for the help, Yehuda. Regards On 12 Sep 2012, at 01:27, Yehuda Sadeh wrote: > On Tue, Sep 11, 2012 at 1:41 PM, Sławomir Skowron wrote: >
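In essence the fix is making nginx hand the unmodified request URI to radosgw over FastCGI; a minimal sketch of the relevant location block (the socket path is a hypothetical placeholder):

    location / {
        fastcgi_pass  unix:/var/run/ceph/radosgw.sock;  # hypothetical rgw socket
        include       fastcgi_params;                   # standard FastCGI params
        fastcgi_param REQUEST_URI $request_uri;         # must reach rgw unmangled
    }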

Re: Inject configuration change into cluster

2012-09-16 Thread Sławomir Skowron
Ok, I will try, but I have all-day meetings today and tomorrow. One more question: is there any way to check the configuration not from ceph.conf, but from a running daemon in the cluster ?? On Fri, Sep 14, 2012 at 9:12 PM, Sage Weil wrote: > Hi, > > > On 5 Sep 2012, at 17:53, "Sage Weil" wrote:
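The running daemon's effective configuration can be read through its admin socket, independent of ceph.conf (a sketch; the socket paths follow the default naming):

    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show              # full live config of osd.0
    ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config show | grep debug # one daemon, filtered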

unexpected problem with radosgw fcgi

2012-11-07 Thread Sławomir Skowron
I have realized that requests from fastcgi in nginx from radosgw return: HTTP/1.1 200, not HTTP/1.1 200 OK. Any other cgi that I run, for example php via fastcgi, returns this as the RFC says, with OK. Has anyone experienced this problem ?? I see in the code: ./src/rgw/rgw_rest.cc line 36 const sta

Re: unexpected problem with radosgw fcgi

2012-11-08 Thread Sławomir Skowron
Ok, I will dig into nginx, thanks. On 8 Nov 2012, at 22:48, Yehuda Sadeh wrote: > On Wed, Nov 7, 2012 at 6:16 AM, Sławomir Skowron wrote: >> I have realized that requests from fastcgi in nginx from radosgw return: >> >> HTTP/1.1 200, not HTTP/1.1 200 OK >&

Many dns domain names in radosgw

2012-11-17 Thread Sławomir Skowron
Welcome, I have a question. Is there any way to support multiple domain names in one radosgw with virtual-host-style connections in S3 ?? Regards SS -- To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majord...@vger.kernel.org More majordomo info at

Two questions

2011-07-26 Thread Sławomir Skowron
Hello. I have some questions. 1. Is there any chance to change the default 4MB object size to, for example, 1MB or less ?? 2. I have created a cluster of two mons and 32 osd (1TB each) on two machines, with radosgw and apache2 on top for testing. When I put data from an s3 client into rados, everything is ok, b

Re: Two questions

2011-07-27 Thread Sławomir Skowron
Thanks. 2011/7/27 Wido den Hollander : > Hi, > > On Wed, 2011-07-27 at 07:58 +0200, Sławomir Skowron wrote: >> Hello. I have some questions. >> >> 1. Is there any chance to change the default 4MB object size to, for >> example, 1MB or less ?? > > If you are u

Re: Two questions

2011-07-27 Thread Sławomir Skowron
client, and radosgw. Can you explain that to me with this example from real life ?? 2011/7/27 Wido den Hollander : > Hi, > > On Wed, 2011-07-27 at 12:19 +0200, Sławomir Skowron wrote: >> Thanks. >> >> 2011/7/27 Wido den Hollander : >> > Hi, >> &g

Re: Two questions

2011-07-28 Thread Sławomir Skowron
On 27 Jul 2011, at 18:15, Gregory Farnum wrote: > 2011/7/27 Sławomir Skowron : >> Ok, I will show an example: >> >> rados df >> pool name KB objects clones degraded >> unfound rd rd KB

Re: Two questions

2011-07-28 Thread Sławomir Skowron
On 27 Jul 2011, at 17:52, Sage Weil wrote: > On Wed, 27 Jul 2011, Sławomir Skowron wrote: >> Hello. I have some questions. >> >> 1. Is there any chance to change the default 4MB object size to, for >> example, 1MB or less ?? > > Yeah. You can use the cephfs tool to set the default layout on the r
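For RBD the analogous knob is the object size order at image creation (a sketch; the pool and image names are hypothetical; order 20 means 2^20 = 1 MB objects, versus the default order 22 = 4 MB):

    rbd create --size 10240 --order 20 rbd/myimage   # 10 GB image built from 1 MB objects
    rbd info rbd/myimage                             # confirms the order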

Re: Two questions

2011-07-28 Thread Sławomir Skowron
2011/7/28 Sławomir Skowron : > On 27 Jul 2011, at 17:52, Sage Weil wrote: > >> On Wed, 27 Jul 2011, Sławomir Skowron wrote: >>> Hello. I have some questions. >>> >>> 1. Is there any chance to change the default 4MB object size to, for >>> exa

Re: Two questions

2011-07-28 Thread Sławomir Skowron
2011/7/28 Sławomir Skowron : > On 27 Jul 2011, at 18:15, Gregory Farnum > wrote: > >> 2011/7/27 Sławomir Skowron : >>> Ok, I will show an example: >>> >>> rados df >>> pool name                 KB      objects       clones     degraded

Re: Two questions

2011-07-29 Thread Sławomir Skowron
Yes, I have made a test, and now everything is ok. Thanks for the help. iSS On 28 Jul 2011, at 18:36, Gregory Farnum wrote: > 2011/7/28 Sławomir Skowron : >> Because of my earlier test I mounted ext4 filesystems in /data/osd.(osd >> id), but /data was a symlink to /var/da

Re: radosgw should support cdmi?

2011-09-22 Thread Sławomir Skowron
Any decision ?? I would like to see this in RadosGW, especially range support in the Object RESTful API. On Tue, Sep 20, 2011 at 5:39 AM, Sage Weil wrote: > http://www.snia.org/cdmi > > There's a cdmi plugfest going on here at SDC.  Also: > > https://github.com/scality/Droplet > -- > To unsubscribe

Problem with radosgw in 0.37

2011-11-08 Thread Sławomir Skowron
Maybe I have forgotten something, but there is no doc about that. I created a configuration with nginx and radosgw for S3, with nginx with cache capability sitting on top of radosgw. Everything was ok in version 0.32 of ceph. I have created a new filesystem with the newest 0.37 version, and now I have som

Re: Problem with radosgw in 0.37

2011-11-08 Thread Sławomir Skowron
it off, setting the following under the global > (or client) section in your ceph.conf: > >        rgw print continue = false > > If it doesn't help we'll need to dig deeper. Thanks, > Yehuda > > 2011/11/8 Sławomir Skowron : >> Maybe I have forgotten somethi

Problem with attaching rbd device in qemu-kvm

2011-12-01 Thread Sławomir Skowron
I have some problems. Can you help me ?? ceph cluster: ceph 0.38, oneiric, kernel 3.0.0 x86_64 - currently only one machine. kvm hypervisor: kvm version 1.0-rc4 (0.15.92), libvirt 0.9.2, kernel 3.0.0, Ubuntu oneiric x86_64. I created the image with qemu-img on the machine with the kvm VMs, and it works very well:

Re: Problem with attaching rbd device in qemu-kvm

2011-12-09 Thread Sławomir Skowron
147 : Ignoring invalid update watch -1 2011/12/1 Josh Durgin : > On 12/01/2011 11:37 AM, Sławomir Skowron wrote: >> >> I have some problems. Can you help me ?? >> >> ceph cluster: >> ceph 0.38, oneiric, kernel 3.0.0 x86_64 - now only one machine. >> >&g

Re: Problem with attaching rbd device in qemu-kvm

2011-12-13 Thread Sławomir Skowron
Durgin : > On 12/09/2011 04:48 AM, Sławomir Skowron wrote: >> >> Sorry for my lag, but I was sick. >> >> I handled the apparmor problem before and now it's not a problem; even >> when I sent the mail before, it was solved. >> >> Ok, when I create an image

Re: Problem with attaching rbd device in qemu-kvm

2011-12-16 Thread Sławomir Skowron
2011/12/13 Josh Durgin : > On 12/13/2011 04:56 AM, Sławomir Skowron wrote: >> >> Finally, i manage the problem with rbd with kvm 1.0, and libvirt >> 0.9.8, or i think i manage :), but i get stuck with one thing after. >> >> 2011-12-13 12:13:31.173+: 21512: err

Re: Problem with attaching rbd device in qemu-kvm

2011-12-19 Thread Sławomir Skowron
new rbd drives appear in the VM. Is there any chance to hot-add an rbd device to a working VM without reboot ?? 2011/12/16 Sławomir Skowron : > 2011/12/13 Josh Durgin : >> On 12/13/2011 04:56 AM, Sławomir Skowron wrote: >>> >>> Finally, I managed the problem with rbd with kvm 1.0

Re: Problem with attaching rbd device in qemu-kvm

2011-12-20 Thread Sławomir Skowron
Ehhh, too long with this :) I forgot to load acpiphp. Thanks for everything, now it's working beautifully. After 10h of iozone on rbd devices there was no hang or problem. 2011/12/20 Josh Durgin : > On 12/19/2011 07:37 AM, Sławomir Skowron wrote: >> >> Hi, >> >> Actual s
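For the record, the hot-add sequence that works out here is roughly (the VM name and device XML file are hypothetical):

    modprobe acpiphp                       # inside the guest: enable PCI hotplug support
    virsh attach-device vm1 rbd-disk.xml   # on the host: attach the RBD disk definition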

Adding new mon to existing cluster in ceph v0.39(+?)

2012-01-10 Thread Sławomir Skowron
I have some problems with adding a new mon to an existing ceph cluster. Now the cluster contains 3 mons, but I started with only one on one machine, then added a second and third machine with new mons and OSDs. Adding a new OSD is quite simple, but adding a new mon is a compilation of some pieces i
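The usual sequence those pieces add up to (a sketch; the mon id and address are hypothetical, and with cephx enabled the --mkfs step also needs a --keyring argument):

    ceph mon getmap -o /tmp/monmap                   # fetch the current monmap
    ceph-mon -i newmon --mkfs --monmap /tmp/monmap   # initialize the new mon's data dir
    ceph mon add newmon 10.0.0.4:6789                # register it with the quorum
    /etc/init.d/ceph start mon.newmon                # start the daemon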

.rgw expand number of pg's

2012-01-10 Thread Sławomir Skowron
How to expand the number of pgs in the rgw pool ?? -- - Regards Sławek "sZiBis" Skowron -- To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html

Re: .rgw expand number of pg's

2012-01-10 Thread Sławomir Skowron
e moment, expanding the number of pgs in a pool is not working. > We hope to get it working in the somewhat near future (probably a few > months). Are you attempting to expand the number of osds and running > out of pgs? > -Sam > > 2012/1/10 Sławomir Skowron : >> How to e
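Until pg splitting works, the practical workaround is a replacement pool with the desired pg count plus a data migration (a sketch; the pg count is illustrative, and migration tooling such as obsync comes up later in this archive):

    ceph osd pool create .rgw.buckets.new 4800   # new pool created with a higher pg_num
    # then copy objects across and repoint rgw at the new pool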

Re: Adding new mon to existing cluster in ceph v0.39(+?)

2012-01-18 Thread Sławomir Skowron
u encounter? > -Sam > > 2012/1/10 Sławomir Skowron : >> I have some problems with adding a new mon to an existing ceph cluster. >> >> Now the cluster contains 3 mons, but I started with only one on one >> machine, then added a second and third machine with new mon

Re: Disable logging for radosgw inside rados pool

2012-02-08 Thread Sławomir Skowron
Excellent, thanks. Regards, iSS On 8 Feb 2012, at 15:45, Yehuda Sadeh Weinraub wrote: > 2012/2/8 Sławomir Skowron : >> Is there any way to disable logging inside rados for radosgw. >> >> pool name category KB objects

Re: ceph_setattr mask

2012-02-14 Thread Sławomir Skowron
Regards, iSS On 14 Feb 2012, at 01:05, Noah Watkins wrote: > Howdy, > > It looks like ceph_fs.h contains the mask flags (e.g. CEPH_SETATTR_MODE) used > in ceph_setattr, but I do not see these flags in any header installed from > .deb files (grep /usr/include/*). > > Am I missing a location?

Tier and MetroCluster

2012-02-17 Thread Sławomir Skowron
I have a question about future plans. I know ceph is a LAN DFS, not a WAN one in any kind, but ;) 1. Is there any plan for tier support? Example: I have a ceph cluster with fast SAS drives, lots of RAM, SSD acceleration, and a 10GE network. I use only RBD and RadosGW. The cluster has relativ

Re: Tier and MetroCluster

2012-02-17 Thread Sławomir Skowron
On 17 Feb 2012, at 19:06, Tommi Virtanen wrote: > 2012/2/17 Sławomir Skowron : >> 1. Is there any plan for tier support? Example: >> I have a ceph cluster with fast SAS drives, lots of RAM, SSD >> acceleration, and a 10GE network. I use only RBD, and Rad

Serious problem after increase pg_num in pool

2012-02-20 Thread Sławomir Skowron
After increasing pg_num from 8 to 100 in .rgw.buckets I have some serious problems. pool name category KB objects clones degraded unfound rd rd KB wr wr KB .intent-log - 4662 19

Re: Serious problem after increase pg_num in pool

2012-02-20 Thread Sławomir Skowron
2012-02-20 20:34:07.619113 osd.20 10.177.64.4:6839/6735 64 : [ERR] mkpg 7.5f up [51,20,64] != acting [20,51,64] 2012/2/20 Sławomir Skowron : > After increasing pg_num from 8 to 100 in .rgw.buckets I have some > serious problems. > > pool name       category                 KB

Re: Serious problem after increase pg_num in pool

2012-02-20 Thread Sławomir Skowron
40 GB in 3 copies in the rgw bucket, and some data in RBD, but that can be destroyed. Ceph -s reports 224 GB in the normal state. Regards, iSS On 20 Feb 2012, at 21:19, Sage Weil wrote: > Ooh, the pg split functionality is currently broken, and we weren't > planning on fixing it for a whi

Re: Serious problem after increase pg_num in pool

2012-02-20 Thread Sławomir Skowron
radosgw etc - the new cluster is up with the old data. Can objects in the .rgw.buckets pool be migrated via obsync ?? On 21 Feb 2012, at 07:46, "Sławomir Skowron" wrote: > 40 GB in 3 copies in the rgw bucket, and some data in RBD, but that can be > destroyed. > > Ceph -s re

Re: Missing required features 2000

2012-02-21 Thread Sławomir Skowron
Ok, sorry for the trouble; this was an old version of ceph on one machine. All packages had been updated to 0.42, except the ceph package :( 2012/2/21 Sławomir Skowron : > What does v0.42 in the mon log mean: > > 2012-02-21 14:31:56.188513 7f4c59cee700 -- 10.177.64.4:6789/0 >> > 10.177.64

Re: Serious problem after increase pg_num in pool

2012-02-21 Thread Sławomir Skowron
Unfortunately 3 hours ago I made the decision to re-init the cluster :( Some data was available via rados, but the cluster was unstable, and migration of the data was difficult, under time pressure from outside :) After initializing a new cluster on one machine, with clean pools I was able to increase the number of pg in

RadosGW problems with copy in s3

2012-02-28 Thread Sławomir Skowron
After some parallel copy commands via boto for many files, everything slows down and eventually I get a timeout from nginx@radosgw. # ceph -s 2012-02-28 12:16:57.818566 pg v20743: 8516 pgs: 8516 active+clean; 2154 MB data, 53807 MB used, 20240 GB / 21379 GB avail 2012-02-28 12:16:57.845274

Re: RadosGW problems with copy in s3

2012-02-29 Thread Sławomir Skowron
1% /vol0/data/osd.7 /dev/sdt 275G 604M 260G 1% /vol0/data/osd.19 2012/2/28 Yehuda Sadeh Weinraub : > (resending to list) > > On Tue, Feb 28, 2012 at 11:53 AM, Sławomir Skowron > wrote: >> >> 2012/2/28 Yehuda Sadeh Weinraub : >> > On Tue, Feb 28, 20

Re: RadosGW problems with copy in s3

2012-03-05 Thread Sławomir Skowron
2012/3/1 Sławomir Skowron : > 2012/2/29 Yehuda Sadeh Weinraub : >> On Wed, Feb 29, 2012 at 5:06 AM, Sławomir Skowron >> wrote: >>> >>> Ok, it's intentional. >>> >>> We are checking meta info about files, then, checking md5 of file >>

Re: RadosGW problems with copy in s3

2012-03-05 Thread Sławomir Skowron
On 5 Mar 2012, at 19:59, Yehuda Sadeh Weinraub wrote: > On Mon, Mar 5, 2012 at 2:23 AM, Sławomir Skowron > wrote: >> 2012/3/1 Sławomir Skowron : >>> 2012/2/29 Yehuda Sadeh Weinraub : >>>> On Wed, Feb 29, 2012 at 5:06 AM, Sławomir Skowron >>>

Re: RadosGW problems with copy in s3

2012-03-06 Thread Sławomir Skowron
2012/3/5 Sławomir Skowron : > On 5 mar 2012, at 19:59, Yehuda Sadeh Weinraub > wrote: > >> On Mon, Mar 5, 2012 at 2:23 AM, Sławomir Skowron >> wrote: >>> 2012/3/1 Sławomir Skowron : >>>> 2012/2/29 Yehuda Sadeh Weinraub : >>>>> On We

Re: RadosGW problems with copy in s3

2012-03-06 Thread Sławomir Skowron
On 6 Mar 2012, at 18:53, Yehuda Sadeh Weinraub wrote: > On Tue, Mar 6, 2012 at 2:08 AM, Sławomir Skowron wrote: > >> All logs from osd.24, osd.62, and osd.36 with osd debug = 20 and >> filestore debug = 20 from 2012-03-06 10:25 and more. >> >> http://217.144.

Re: RadosGW problems with copy in s3

2012-03-26 Thread Sławomir Skowron
After some tests: PUT/GET/DELETE via Radosgw now works much better in version 0.44. End of this topic. Thanks. 2012/3/6 Sławomir Skowron : > On 6 Mar 2012, at 18:53, Yehuda Sadeh Weinraub > wrote: > >> On Tue, Mar 6, 2012 at 2:08 AM, Sławomir Skowron wrote: >> >>>

RBD attach via libvirt to kvm vm - VM kernel hang

2012-03-28 Thread Sławomir Skowron
Dom0 - Ubuntu oneiric, kernel 3.0.0-16-server. ii kvm 1:84+dfsg-0ubuntu16+1.0+noroms+0ubuntu10 dummy transitional package from kvm to qemu-kvm ii qemu1.0+noroms-0ubuntu10 dummy transitional package from qemu to qemu-kvm ii qemu-common

Re: RBD attach via libvirt to kvm vm - VM kernel hang

2012-03-28 Thread Sławomir Skowron
On 28 Mar 2012, at 18:24, Josh Durgin wrote: > On 03/28/2012 09:13 AM, Tommi Virtanen wrote: >> 2012/3/28 Sławomir Skowron: >>> VM on-01 with config like in attachment (dumpxml) - hangs after >>> attaching the rbd device, with kernel_bug in attachment. >> >&g

Re: Problem after upgrade to 0.45

2012-04-13 Thread Sławomir Skowron
More info: this happened after I set filestore_xattr_use_omap = 1 in the conf, and ceph -w looks like the attachment in the mail before. I have downgraded to 0.44, and everything is ok now, but why did this happen ?? 2012/4/13 Sławomir Skowron : > 2012-04-13 11:03:20.017166 7f63d62b47a0 -- 0.0.0.0:6848/9

Best practice - upgrade ceph cluster

2012-04-20 Thread Sławomir Skowron
Maybe it's a lame question, but does anybody know the simplest procedure for the most non-disruptive upgrade of a ceph cluster with a real workload on it ?? It's most important if we want to semi-automate this process with some tools. Maybe there is a cookbook for this operation ?? I know that automating this is n
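A common rolling pattern, one daemon type and one node at a time, letting the cluster settle in between (a sketch under the era's sysvinit tooling, not an official procedure):

    apt-get install ceph          # upgrade packages on one node
    service ceph restart mon.0    # restart that node's monitor first
    ceph -s                       # wait for quorum / HEALTH_OK
    service ceph restart osd.0    # then its OSDs, one by one
    ceph -s                       # confirm recovery completes before the next node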

Re: Best practice - upgrade ceph cluster

2012-04-20 Thread Sławomir Skowron
On 20 Apr 2012, at 21:35, Greg Farnum wrote: > On Friday, April 20, 2012 at 12:00 PM, Sławomir Skowron wrote: >> Maybe it's a lame question, but does anybody know the simplest procedure >> for the most non-disruptive upgrade of a ceph cluster with a real workload on >> it ?? &g