Hi Radoslaw Zarzynski,
Thanks for your reply.
I added debug rgw = 20 in ceph.conf, according to your suggestion,
and the log is as follows:
2016-09-20 10:14:38.761600 7f2049ffb700 20 RGWEnv::set(): HTTP_HOST:
10.62.9.140:7480
2016-09-20 10:14:38.761613 7f2049ffb700 20 RGWEnv::set():
HTTP_ACCEPT_ENCODING: ide
Sorry, make that 'ceph tell osd.* version'.
> On Sep 19, 2016, at 2:55 PM, WRIGHT, JON R (JON R)
> wrote:
>
> When you say client, we're actually doing everything through Openstack vms
> and cinder block devices.
>
> librbd and librados are:
>
> /usr/lib/librbd.so.1.0.0
>
> /usr/lib/librados.
Do you still have OSDs that aren't upgraded?
What does a 'ceph tell osd.* show'?
> On Sep 19, 2016, at 2:55 PM, WRIGHT, JON R (JON R)
> wrote:
>
> When you say client, we're actually doing everything through Openstack vms
> and cinder block devices.
>
> librbd and librados are:
>
> /usr/li
When you say client, we're actually doing everything through OpenStack
VMs and Cinder block devices.
librbd and librados are:
/usr/lib/librbd.so.1.0.0
/usr/lib/librados.so.2
But I think this problem may have been related to a disk going bad. We
got disk I/O errors over the weekend and are
On Thu, Sep 15, 2016 at 1:41 AM, Wido den Hollander wrote:
>
>> On 14 September 2016 at 14:56, "Dennis Kramer (DT)"
>> wrote:
>>
>>
>> Hi Burkhard,
>>
>> Thank you for your reply, see inline:
>>
>> On Wed, 14 Sep 2016, Burkhard Linke wrote:
>>
>> > Hi,
>> >
>> >
>> > On 09/14/2016 12:43 PM, Den
On Mon, Sep 19, 2016 at 8:25 PM, Daniel Schneller
wrote:
> Hello!
>
>
> We are observing a somewhat strange IO pattern on our OSDs.
>
>
> The cluster is running Hammer 0.94.1, 48 OSDs, 4 TB spinners, xfs,
>
> colocated journals.
I think we need to upgrade to a newer Hammer version.
>
>
> Over peri
We regrettably have to increase PGs in a Ceph cluster this way more often than
anyone should ever need to. As such, we have scripted it out. A basic version
of the script that should work for you is below.
First, create a function to check for any pg states that you don't want to
continue if
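For what it's worth, a minimal sketch of that first step (the function name
and the set of states treated as unsafe are my own illustrations; adjust for
your cluster):

# Wait until no PG is in a state we consider unsafe to continue past.
wait_for_clean() {
    while ceph pg stat | grep -qE 'creating|peering|activating|down|stale|inconsistent'; do
        echo "PGs still settling, waiting..."
        sleep 10
    done
}

Call it between every pg_num/pgp_num bump so the cluster settles before the
next step.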
Are you talking about global IOPS or per-VM/per-RBD device?
And at what queue depth?
It all comes down to latency. Not sure what the numbers can be on recent
versions of Ceph and on modern OSes, but I doubt it will be <1ms for the OSD
daemon alone. That gives you 1000 real synchronous IOPS. With h
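To spell out the arithmetic behind that figure (the latency and queue depth
values are just examples):

# Synchronous IOPS are bounded by per-op latency:
#   IOPS ~ queue_depth / latency
#   queue depth 1,  1 ms per op -> 1 / 0.001 s = 1000 IOPS
#   queue depth 32, 1 ms per op -> up to 32 / 0.001 s = 32000 IOPS,
#   but only if the backend can actually service the ops in parallel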
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Matteo
Dacrema
Sent: 19 September 2016 15:24
To: ceph-users@lists.ceph.com
Subject: [ceph-users] capacity planning - iops
Hi All,
I’m trying to estimate how many iops ( 4k direct random write ) my ceph
cluster
Hi All,
I’m trying to estimate how many IOPS (4k direct random write) my Ceph
cluster should deliver.
I’ve journals on SSDs and SATA 7.2k drives for the OSDs.
The question is: does the journal on SSD increase the maximum write IOPS,
or do I need to consider only the IOPS provided by the SATA drives?
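One way to sanity-check whatever estimate you arrive at is to measure directly
with fio (a sketch; the target file is an assumption, and point it at a file on
an RBD-backed mount rather than at a raw device holding data):

fio --name=randwrite-4k --filename=/mnt/rbd/testfile --size=1G \
    --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=1 \
    --runtime=60 --time_based --group_reporting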
Hello!
I have a small, strange problem in my cluster.
The cluster worked well until I added the "user_subvol_rm_allowed" mount
option for the btrfs filestore in my ceph.conf.
After restarting a host, I found the following in an OSD log:
***
2016-09-19 16:47:02.189162 7fd732e21840 2 osd.10 0 mounting
/var/lib/ceph/
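For reference, a sketch of how that mount option is typically set in ceph.conf
(the section and the exact option list may differ in your setup):

[osd]
osd mount options btrfs = rw,noatime,user_subvol_rm_allowed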
With the latest version of Ceph, it's easy to develop against a cluster with
multiple nodes on a single machine by using vstart.sh.
What if I want to develop a cluster with multiple nodes on a single machine for
an old version of Ceph like Infernalis or an older one?
Any answer is very appreciated. Thanks
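One approach that should work, sketched under the assumption of a from-source
build (the tag and daemon counts are illustrative):

git checkout v9.2.1      # Infernalis, for example
# ... build ceph as usual for that release ...
cd src
MON=3 OSD=3 MDS=1 ./vstart.sh -n -d -x   # -n new cluster, -d debug output, -x cephx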
Ceph is pretty awesome, but I'm not sure it can be expected to keep I/O
going if there is no available capacity. Granted, the OSDs aren't always
balanced evenly, but generally if you've got one drive hitting the full ratio,
you've probably got a lot more not far behind.
Although probably not recommended,
Hello!
We are observing a somewhat strange IO pattern on our OSDs.
The cluster is running Hammer 0.94.1, 48 OSDs, 4 TB spinners, xfs,
colocated journals.
Over periods of days on end we see groups of 3 OSDs being busy with
lots and lots of small writes for several minutes at a time.
Once one
An RBD image is composed of individual backing objects, each object
no more than 4MB in size (assuming the default RBD object size when
the image was created). You can refer to this document [1] to get a
better idea of how striping is used. Effectively, a standard RBD image
has a stripe count of 1
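To make that concrete, a sketch with the rbd CLI (pool and image names are
assumptions):

# 10 GB image with the default 4 MB objects (order 22); format 2 so the
# backing objects get the rbd_data prefix.
rbd create --size 10240 --order 22 --image-format 2 rbd/img1
rbd info rbd/img1                 # shows 'order 22 (4096 kB objects)'
# once data has been written, the individual backing objects are visible:
rados -p rbd ls | grep rbd_data | head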
> -----Original Message-----
> From: Dan van der Ster [mailto:d...@vanderster.com]
> Sent: 19 September 2016 12:11
> To: Nick Fisk
> Cc: ceph-users
> Subject: Re: [ceph-users] RBD Snapshots and osd_snap_trim_sleep
>
> Hi Nick,
>
> I assume you had osd_snap_trim_sleep > 0 when that snapshot wa
Hi Nick,
I assume you had osd_snap_trim_sleep > 0 when that snapshot was being
deleted? I ask because we haven't seen this problem, but use
osd_snap_trim_sleep = 0.1
-- Dan
On Mon, Sep 19, 2016 at 11:20 AM, Nick Fisk wrote:
> Hi,
>
> Does the osd_snap_trim_sleep throttle effect the deletion of
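If you want to experiment with the same value, the setting can be changed at
runtime (a sketch; worth verifying that your version honors it without a
restart):

ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'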
Hello,
We definitely need more info.
Does only HEAD-on-account fail, or are other
requests affected as well?
Can we get a more verbose RadosGW log? In such a
situation debug_rgw=20 might be necessary.
Regards,
Radoslaw Zarzynski
On Mon, Sep 19, 2016 at 12:33 PM, Orit Wasserman wrote:
> Any ideas?
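For reference, a sketch of one way to enable that in ceph.conf (the client
section name depends on how your gateway instance is named):

[client.radosgw.gateway]
debug rgw = 20

Then restart the radosgw process for the change to take effect.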
On Fri, Sep 16, 2016 at 7:15 PM, Jim Kilborn wrote:
> John,
>
>
>
> thanks for the tips.
>
>
>
> I ran a recursive long listing of the cephfs volume, and didn’t receive any
> errors. So I guess it wasn’t serious.
Interesting, so it sounds like you hit a transient RADOS read issue.
Keep an eye out f
Hi,
Does the osd_snap_trim_sleep throttle affect the deletion of RBD snapshots?
I've done some searching but am seeing conflicting results on whether this only
affects RADOS pool snapshots.
I've just deleted a snapshot which comprised of somewhere around 150k objects
and it brought the cluste
Hi,
I have 3 different clusters.
On the first, I was able to go from 1024 to 2048 PGs with 10 minutes of
"IO freeze".
On the second, I went from 368 to 512 in a second without any performance
issue, but from 512 to 1024 it took over 20 minutes to create the PGs.
The third I’ve to upgra
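A pattern that tends to spread the load is growing pg_num in steps (a sketch;
pool name, step values, and the wait condition are illustrative):

for pg in 1280 1536 1792 2048; do
    ceph osd pool set volumes pg_num $pg
    # let creation/peering finish before bumping pgp_num to match
    while ceph pg stat | grep -qE 'creating|peering'; do sleep 10; done
    ceph osd pool set volumes pgp_num $pg
done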
We are trying to replicate the Qiniu cloud API because our products heavily rely
on it, but in some circumstances we want a private cloud solution.
Users (our own developers) still need to split files, and our API server will
receive the requests and send them to Ceph through RGW.
Are you suggesting this link:
h
Hi John, all...
I have been using the patched
ceph-fuse (http://gitbuilder.ceph.com/ceph-rpm-centos7-x86_64-basic/ref/wip-17270).
ceph-fuse with writing IO won't crash when adding an OSD,
but the fuse client with reading IO crashes when adding an OSD.
A detailed log has been attached at http://tracker.ceph.
On Mon, Sep 19, 2016 at 4:20 PM, 王海生-软件研发部
wrote:
> yes we are building top of radosgw .using a nginx to wrap a speficial api
> over the underlying interfaces
I got your idea. You can implement S3 multipart upload in the
special API, so it will be transparent to your users.
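A sketch of that multipart flow against RGW using the AWS CLI (endpoint,
bucket, key, and part files are assumptions):

aws --endpoint-url http://rgw.example.com:7480 s3api create-multipart-upload \
    --bucket mybucket --key bigfile               # returns an UploadId
aws --endpoint-url http://rgw.example.com:7480 s3api upload-part \
    --bucket mybucket --key bigfile --part-number 1 \
    --upload-id "<UploadId>" --body bigfile.part1  # repeat for each piece
aws --endpoint-url http://rgw.example.com:7480 s3api complete-multipart-upload \
    --bucket mybucket --key bigfile --upload-id "<UploadId>" \
    --multipart-upload file://parts.json  # parts.json lists ETag/PartNumber pairs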
Yes, we are building on top of radosgw, using nginx to wrap a special API over
the underlying interfaces.
-- Original --
From: "Haomai Wang";
Date: Mon, Sep 19, 2016 04:18 PM
To: "王海生-软件研发部";
Cc: "ceph-users";
Subject: Re: [ceph-users] ceph object merge f
On Mon, Sep 19, 2016 at 4:10 PM, 王海生-软件研发部
wrote:
> Dear all
> we have setup a ceph cluster using object storage, there is a case in our
> product, where client can upload large file ,right now we are thinking split
> large file into serveral pieces and send it to ceph,can i send each piece to
> c
help
Dear all,
We have set up a Ceph cluster using object storage. There is a case in our
product where the client can upload a large file. Right now we are thinking of
splitting the large file into several pieces and sending them to Ceph. Can I
send each piece to Ceph, get the object ID, and at last merge these pieces into
Hi, everyone.
On the "Block Storage" page, I found this: "Ceph RBD interfaces with the same
Ceph object storage system that provides the librados interface and the Ceph FS
file system, and it stores block device images as objects.".
Does it mean literally that an RBD image is implemented as an o
So it seems that there is more than 1 PG with problems and that something
abnormal occurred in the cluster. Taking for granted that your
underlying storage/filesystems/networking work as expected, you should
check the timestamps/md5sums/attrs of the PGs' objects across the
cluster, and if you conclude that
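A heavily simplified sketch of such a check for filestore OSDs (pool, object
name, PG id, and paths are all illustrative):

ceph osd map rbd myobject     # reports the PG id and the OSDs holding copies
# then, on each OSD host holding that PG (say pg 3.1a):
find /var/lib/ceph/osd/ceph-*/current/3.1a_head -name '*myobject*' \
    -exec md5sum {} \; -exec stat -c '%y %n' {} \;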