That's useful information.
Thanks.
2014/1/3 Wido den Hollander
Hi all,
I ran a test to verify RADOS's recovery:
1. Echoed a string into an object file inside a placement group's directory on an OSD.
2. After the OSD scrub, ceph health shows "1 pgs inconsistent". Will it be fixed later?
Thanks
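On whether the inconsistency gets fixed: a scrub only detects and reports it; by default it is not repaired until you run `ceph pg repair <pgid>` (find the pgid with `ceph health detail`). Conceptually, what the deep scrub did is compare the object's replicas, as in this plain-Python sketch (an illustration only, not Ceph's actual code):

```python
import hashlib

def scrub_object(replicas):
    """Compare checksums of an object's replicas, the way a deep scrub
    conceptually detects the manual edit described above.
    Returns True if all replicas agree."""
    digests = {hashlib.md5(data).hexdigest() for data in replicas}
    return len(digests) == 1

# The untouched replicas agree; after echoing extra bytes into one copy
# on a single OSD, the digests diverge and the PG is flagged inconsistent.
clean = [b"original payload"] * 3
tampered = [b"original payload", b"original payload\nextra string\n", b"original payload"]
print(scrub_object(clean))     # True
print(scrub_object(tampered))  # False
```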
_______________________________________________
ceph-users mailing list
Hi all,
I have several questions about OSD scrub:
- Does the scrub job run in the background automatically? Does it run periodically?
- Do I need to trigger a scrub or deep-scrub manually?
- How can I check the current scrub progress?
- How can I estimate when a scrub will finish?
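To partly answer the first two questions above: each OSD schedules scrubs automatically, and the scheduling is controlled by ceph.conf options. A sketch of the relevant knobs (the values shown are the usual defaults; verify them against your release):

```ini
[osd]
; a light scrub is scheduled between these two intervals (seconds)
osd scrub min interval = 86400       ; at the earliest, once a day
osd scrub max interval = 604800      ; forced after a week regardless of load
; a deep scrub also reads back and checksums object data
osd deep scrub interval = 604800     ; once a week
```

A scrub can also be triggered manually with `ceph osd scrub <osd-id>` or `ceph pg scrub <pgid>` (and the deep-scrub variants), and `ceph -s` / `ceph pg dump` show PGs currently in the scrubbing state.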
App
re:
> http://ceph.com/docs/master/radosgw/s3/bucketops/#put-bucket
>
>
>
> Best Regards
>
> Wei
>
Some strings cannot be used as bucket names. What's the allowed pattern?
wiftstack@bm01:~/hugo$ time swift upload b1 1G
Error trying to create container 'b1': 400 Bad Request: InvalidBucketName
Object HEAD failed: http://192.168.2.51:80/swift/v1/b1/1G 400 Bad Request
Thanks
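On the bucket-name pattern: S3-style (DNS-compliant) bucket names must be 3-63 characters of lowercase letters, digits, dots and hyphens, starting and ending with a letter or digit. RadosGW's exact validation depends on its version and settings, so treat this as a sketch of the common S3 constraints; under these rules a two-character name such as 'b1' is too short, which may explain the 400 InvalidBucketName above:

```python
import re

# DNS-compliant S3 bucket name rules (a sketch; RadosGW's exact
# validation may differ by version/configuration):
#   3-63 chars, lowercase letters / digits / dots / hyphens,
#   must start and end with a letter or digit.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name):
    return bool(BUCKET_RE.match(name))

print(is_valid_bucket_name("b1"))         # False: only 2 characters
print(is_valid_bucket_name("my-bucket"))  # True
print(is_valid_bucket_name("My_Bucket"))  # False: uppercase and underscore
```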
your experience? Is it possible to hit a CPU bound?
Cheers / Hugo
2013/12/26 Yehuda Sadeh
> On Thu, Dec 26, 2013 at 3:00 AM, Kuo Hugo wrote:
> > Hi all,
> >
> >
> > I think the FastCGI module is the latest one on my server.
> >
> > root@p01:/var/log# d
Hi folks,
I'm in the process of tuning the performance of RadosGW on my server. After some kind help from you guys, I have identified several areas to work on so that RadosGW can handle higher-concurrency requests from users:
- Apache optimization #
- radosgw open file #
- rgw thread pools #
- rgw_ops throttl
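For the thread-pool item in the list above, the knob lives in the radosgw client section of ceph.conf. A sketch (the section name `client.radosgw.gateway` and the value 200 are illustrative assumptions; adapt to your setup):

```ini
[client.radosgw.gateway]
; number of handler threads radosgw uses for concurrent requests
; (the default is 100)
rgw thread pool size = 200
```

Raising this only helps if Apache's limits (MaxClients/ServerLimit) and the process's open-file ulimit are raised to match, which covers the other items on the list.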
Thanks Yehuda,
The hint about the admin socket was useful.
The problem is fixed now.
Cheers
2013/12/25 Yehuda Sadeh
> On Tue, Dec 24, 2013 at 8:16 AM, Kuo Hugo wrote:
> > Hi folks,
> >
> > After some more tests, I still cannot pinpoint the bottleneck.
> Never
> > hit
make it faster by tuning Apache's setting? The CPU
util on this node
2. To saturate the network bandwidth, would more HDDs or SSD journals help?
Any suggestion would be appreciated ~
2013/12/24 Kuo Hugo
Thanks
It was solved by adding the Ceph extra repo.
Appreciated ~~!
Hugo
2013/12/24 Sage Weil
> On Mon, 23 Dec 2013, JJ Galvez wrote:
> > On Dec 23, 2013 10:50 AM, "Kuo Hugo" wrote:
> > >
> > > The libcurl-gnutls version is 7.22 on Ubuntu. But radosgw req
Hi folks,
There are 30 HDDs across three 24-thread servers. Each server has two 10G NICs, one for the public network and one for the cluster network. A dedicated 32-thread server runs RadosGW.
My setup aims for the same availability as Swift, so size=3 and min_size=2 for all RadosGW-related pools. Each pool's pg i
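On sizing each pool's pg count (where the message above cuts off): a commonly cited rule of thumb (a sketch, not a hard requirement) is roughly 100 PGs per OSD, divided by the replica count and rounded up to the next power of two:

```python
def suggested_pg_num(num_osds, pool_size, pgs_per_osd=100):
    """Rule-of-thumb pg_num: ~100 PGs per OSD divided by the
    replication factor, rounded up to a power of two."""
    target = (num_osds * pgs_per_osd) / float(pool_size)
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    return pg_num

# 30 HDDs/OSDs with size=3 as in the setup above:
print(suggested_pg_num(30, 3))  # 1024
```

This is per cluster across all heavily used pools, so lightly used pools can be given far fewer PGs.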
t use the default number of threads
> (instead of 100)?
>
> For example:
>
> 'rados bench -p test3 30 write'
>
>
> Shain
>
> Shain Miley | Manager of Systems and Infrastructure, Digital Media |
> smi...@npr.org | 202.513.3649
>
>
The libcurl-gnutls version is 7.22 on Ubuntu, but radosgw requires > 7.28.
The dependency check fails:
root@p01:~# radosgw
-bash: /usr/bin/radosgw: No such file or directory
root@p01:~# apt-get install radosgw
Reading package lists... Done
Building dependency tree
Reading state information... Done
Hi folks,
I'm running a RADOS bench test now. The cluster was deployed by ceph-deploy.
Ceph version: Emperor
FS: XFS
I created a pool test3 with size 1:
pool 13 'test3' rep size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 2000 pgp_num 2000 last_change 166 owner 0
The Rados bench
hat thrash. You could
> try to use also the radosgw admin socket (if you manage to set it up).
>
> Yehuda
>
> >
> >> -Original Message-
> >> From: Yehuda Sadeh [mailto:yeh...@inktank.com]
> >> Sent: Donnerstag, 12. September 2013 06:38
> >> To:
43.6013
Min latency: 1.03948
+Hugo Kuo+
(+886) 935004793
2013/9/12 Yehuda Sadeh
> On Wed, Sep 11, 2013 at 10:37 PM, Kuo Hugo wrote:
> > Yes, I restarted it via /etc/init.d/radosgw a few times before. :D
> >
> > btw, I check several things here to prevent any pe
1200 reqs/sec --> 300 reqs/sec. That's a potential issue I observed. Any workaround would be great.
2013/9/12 Yehuda Sadeh
> On Wed, Sep 11, 2013 at 10:25 PM, Kuo Hugo wrote:
> >
> > thanks
> >
> > 1) I'm s
r improve the performance of concurrent connections.
4) I'm considering doing some research on Apache's configuration.
5) Have you run a similar benchmark for high-concurrency connections before?
Cheers
2013/9/12 Yehuda Sadeh
> On Wed, Sep 11, 2013 a
nfig information includes rgw_thread_pool_size; is this what you mentioned?
2. Why does that value show up on the OSD?
3. Where do the OSDs reference the value of rgw_thread_pool_size from?
2013/9/12 Yehuda Sadeh
> On Wed, Sep 11, 2013 at 9:34 PM, Ku
2013/9/11 Yehuda Sadeh
> On Wed, Sep 11, 2013 at 7:57 AM, Kuo Hugo wrote:
> >
> > Hi Yehuda,
> >
> > I tried ... a question about modifying a param:
> > How do I make it take effect on the RadosGW? By restarting radosgw?
> > The value
Thanks for the quick reply.
2013/9/10 Yehuda Sadeh
> by default .rgw.buckets holds the objects data.
>
> On Mon, Sep 9, 2013 at 8:39 PM, Kuo Hugo wrote:
> > Hi Folks,
> > I found that RadosGW created the following pools. The copies number
Hi Folks,
I found that RadosGW created the following pools. The number of copies is 2 by default. I'd like to raise the replicas to 3 for better reliability. I tried to find the definition/usage of each pool but had no luck.
Could someone provide information about the usage of each pool and which
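On raising the replica count itself: `ceph osd pool set <pool> size 3` (and `min_size` likewise) does it per pool. A small sketch that just builds those command lines; the pool list here is a guess at the era's default RadosGW pools, so check `ceph osd lspools` for the real names:

```python
# Hypothetical list of RadosGW pools; verify with `ceph osd lspools`.
# Only .rgw.buckets (object data) is confirmed in this thread.
RGW_POOLS = [".rgw", ".rgw.control", ".rgw.gc", ".rgw.buckets",
             ".rgw.buckets.index", ".users", ".users.uid"]

def resize_commands(pools, size=3, min_size=2):
    """Build the ceph CLI invocations to raise replica counts."""
    cmds = []
    for pool in pools:
        cmds.append("ceph osd pool set %s size %d" % (pool, size))
        cmds.append("ceph osd pool set %s min_size %d" % (pool, min_size))
    return cmds

for cmd in resize_commands(RGW_POOLS):
    print(cmd)
```

Note that increasing size triggers backfill while the extra replicas are created, so expect recovery traffic afterwards.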