Thx Alan and Anthony for sharing your experience with these P3700 drives.
Anthony, just to follow up on your email: my OS is CentOS 7.2. Can
you please elaborate on NVMe support on CentOS 7.2? I'm in no way an expert on
NVMe, but I can see here that
https://www.pcper.com/files/imagecache/article_max_width/news/2015-06-08/
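For what it's worth, a minimal sanity check of NVMe support on a stock CentOS 7.2 box could look like this (the nvme-cli package and the device name are assumptions, not something confirmed in this thread):
modinfo nvme | head -3          # is the in-kernel nvme driver present?
lsblk -o NAME,MODEL,SIZE        # a P3700 should show up as nvme0n1
nvme list                       # from the nvme-cli package (EPEL on 7.2), shows model/firmware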
k fine. I'm
> guessing the reason may relate to the TBW figure being higher on the
> more expensive models; maybe they don't want to have to
> replace worn NVMe drives under warranty?
>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun.
Hi Corin. We run the latest Hammer on CentOS 7.2 with 3 mons and have not
seen this problem. Are there any other possible
differences between the healthy nodes and the one that shows excessive
memory consumption? thx will
On Fri, Nov 18, 2016 at 6:35 PM, Corin Langosch
wrote:
> Hi,
>
hat didn't
> support it. Civetweb does support it.
>
> Yehuda
>
>>
>> On Sun, Nov 13, 2016 at 8:03 PM, William Josefsson
>> wrote:
>>>
>>> Hi list, can anyone please clarify if the default 'rgw print continue
>>> = true' is supported by civetweb?
Hi list, I wonder if there is anyone who has experience with Intel
P3700 SSD drives as journals and can share their experience?
I was thinking of using the P3700 400GB SSD as the journal in my ceph
deployment. It is benchmarked on Sébastien Han's SSD page as well.
However a vendor I spoke to didn't q
Hi all, I got these error messages daily on radosgw for multiple users:
2016-11-12 13:49:08.905114 7fbba7fff700 20 RGWUserStatsCache: sync
user=myuserid1
2016-11-12 13:49:08.905956 7fbba7fff700 0 ERROR: can't read user header: ret=-2
2016-11-12 13:49:08.905978 7fbba7fff700 0 ERROR: sync_user() f
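ret=-2 is ENOENT, i.e. the per-user stats header object is missing. A hedged suggestion, reusing the uid from the log above, is to rebuild it with:
radosgw-admin user stats --uid=myuserid1 --sync-stats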
Hi Nick, I found the graph very useful for explaining the concept. thx for sharing.
I'm currently planning to set up a new cluster and wanted to get low
latency by using:
2U servers,
6x Intel P3700 400GB for journals and
18x 1.8TB Hitachi 10k SAS spinners. My OSD:journal ratio would be 3:1.
All over 10Gbi
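At a 3:1 ratio each P3700 carries three journal partitions; a sketch of how that could be carved up with sgdisk (partition sizes and the nvme0n1/sdd device names are assumptions, the GUID is the standard ceph journal type code):
sgdisk --new=1:0:+20G --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/nvme0n1
sgdisk --new=2:0:+20G --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/nvme0n1
sgdisk --new=3:0:+20G --typecode=3:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/nvme0n1
# then one spinner per journal partition, e.g.:
ceph-disk prepare /dev/sdd /dev/nvme0n1p1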
Hi list, can anyone please clarify if the default 'rgw print continue
= true' is supported by civetweb?
I'm using radosgw with civetweb, and this document (which may be outdated?)
mentions installing Apache,
http://docs.ceph.com/docs/hammer/install/install-ceph-gateway/. This
ticket seems to keep 'prin
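For context, a minimal civetweb frontend stanza on Hammer looks roughly like this (the client section name is only an example):
[client.rgw.gw1]
rgw frontends = civetweb port=7480
rgw print continue = true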
Hi Jelle, we use Arista 7050TX 10GbE switches as ToRs. We cluster them
with VARP HA, active-active, which works very well here. The host nics
are X540 copper. Interfaces are all 2x10G LACP bonds. While some may
argue fiber is nicer and will lower latency, especially over longer
distances, for us coppe
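For anyone replicating this, a sketch of a matching CentOS 7 bond config for the two X540 ports (interface names and the hash policy are assumptions, not taken from this cluster):
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
BOOTPROTO=none
ONBOOT=yes
# plus one ifcfg-<port> per X540 interface containing MASTER=bond0 and SLAVE=yes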
On Mon, Oct 17, 2016 at 6:16 PM, Nick Fisk wrote:
> Did you also set/check the c-states, this can have a large impact as well?
Hi Nick. I did try intel_idle.max_cstate=0, and I got quite a
significant improvement, as attached below. Thanks for this advice!
This is still with DIRECT=1, SYNC=1,
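For anyone reproducing this on CentOS 7, the parameter goes into GRUB_CMDLINE_LINUX; a hedged sketch (the verification commands are suggestions, not what was run here):
# /etc/default/grub: append intel_idle.max_cstate=0 to GRUB_CMDLINE_LINUX, then:
grub2-mkconfig -o /boot/grub2/grub.cfg
# after rebooting, confirm it took effect:
cat /sys/module/intel_idle/parameters/max_cstate
cpupower idle-info | head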
> On Mon, 17 Oct 2016 16:30:48 +0800 William Josefsson wrote:
>
>> Thx Christian for helping troubleshooting the latency issues. I have
>> attached my fio job template below.
>>
> There's no trouble here per se, just facts of life (Ceph).
>
> You'll be well a
ue to the additional kernel parameter. thx will
On Tue, Oct 18, 2016 at 6:40 PM, William Josefsson
wrote:
> On Mon, Oct 17, 2016 at 6:16 PM, Nick Fisk wrote:
>>> -Original Message-
>>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>>
On Mon, Oct 17, 2016 at 6:16 PM, Nick Fisk wrote:
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> William Josefsson
>> Sent: 17 October 2016 10:39
>> To: n...@fisk.me.uk
>> Cc: ceph-users@lists.ceph.
cpu MHz : 2614.125
On Mon, Oct 17, 2016 at 5:17 PM, Nick Fisk wrote:
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> William Josefsson
>> Sent: 17 October 2016 09:31
>> To: Christian Balzer
>> C
numjobs=66
[simple-write-70]
numjobs=70
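For readers without the attachment, the job-file fragments above suggest a layout roughly like the following; everything except the numjobs sections and the DIRECT/SYNC flags mentioned later in this digest is an assumption:
[global]
ioengine=libaio
direct=1
sync=1
bs=4k
rw=write
runtime=60
group_reporting
; example target only, point at your own RBD device or test file
filename=/dev/rbd0

[simple-write-66]
numjobs=66

[simple-write-70]
numjobs=70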
On Mon, Oct 17, 2016 at 10:47 AM, Christian Balzer wrote:
>
> Hello,
>
>
> On Sun, 16 Oct 2016 19:07:17 +0800 William Josefsson wrote:
>
>> Ok thanks for sharing. yes my journals are Intel S3610 200GB, which I
>> partition into 4 partitions
.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued: total=r=0/w=208023/d=0, short=r=0/w=0/d=0
On Sun, Oct 16, 2016 at 4:18 PM, Christian Balzer wrote:
>
> Hello,
>
> On Sun, 16 Oct 2016 15:03:24 +0800 William Josefsson wrote:
>
>> Hi list, while I know that writes in the RAD
Hi list, while I know that writes in the RADOS backend are sync()'d, can
anyone please explain when the cluster will return on a write call for
RBD from VMs? Will data be considered synced once written to the
journal, or only once it reaches the OSD drive?
Each host in my cluster has 5x Intel S3610, and 18x1
Hi, I have tried to understand how Ceph stores and retrieves data, and
I have a few beginner's questions about this explanation:
http://ceph.com/wp-content/uploads/2012/12/pg-placement1.png
1. hash("foo"): what exactly is foo? Is that the filename that the
client tries to write, or is it the object
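A quick way to see the same mapping on a live cluster (the pool and object names here are just examples):
ceph osd map rbd foo
# prints the osdmap epoch, the pool, the pg that 'foo' hashes into, and the
# up/acting OSD set for that pg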
Hi,
I'm on CentOS7/Hammer 0.94.9 (upgraded from 0.94.7, under which the RGW S3
objects were created) and I have radosgw multipart and shadow objects in
.rgw.buckets even though I deleted all buckets 2 weeks ago. Can
anybody advise on how to prune or garbage collect the orphan and
multipart objects? Pls help. Thx wi
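A hedged starting point (the pool name is taken from the mail above, the rest are standard radosgw-admin/rados subcommands) for inspecting and kicking off garbage collection:
radosgw-admin gc list --include-all    # objects queued for garbage collection
radosgw-admin gc process               # run a gc pass now instead of waiting for the timer
rados -p .rgw.buckets ls | egrep 'multipart|shadow' | wc -l   # what is actually left in the pool
Objects that never made it into the gc queue are true orphans; newer radosgw-admin builds ship an 'orphans find' scan for those, but whether a given 0.94.9 build has it should be checked first.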
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *William Josefsson
> *Sent:* Tuesday, July 12, 2016 4:15 PM
> *To:* ceph-users@lists.ceph.com
> *Subject:* [ceph-users] Can't remove /var/lib/ceph/osd/ceph-53 dir
>
>
>
> Hi Cephers,
>
> I got probl
Hi Cephers,
I have a problem removing the /var/lib/ceph/osd/ceph-53 dir, which was used by
osd.53, an OSD I have since removed (some checks are sketched after the steps below).
This is how I removed the OSD:
1. ceph osd out 53
2. sudo service ceph stop osd.53
3. ceph osd crush remove osd.53
4. ceph auth del osd.53
5. ceph osd rm 53
6. sudo umount /var/lib/ceph/
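A sketch of the checks that usually explain a directory that will not go away (paths copied from above, commands are suggestions only):
mount | grep ceph-53                        # is the data partition still mounted there?
sudo lsof +D /var/lib/ceph/osd/ceph-53      # is anything still holding files open?
sudo umount /var/lib/ceph/osd/ceph-53 && sudo rm -rf /var/lib/ceph/osd/ceph-53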
--osd-journal=/dev/sdc5
Does anyone know why it doesn't work if I map the journal using
/dev/disk/by-partuuid/xxx-xxx?
Thanks.
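A few things worth comparing on a filestore OSD, assuming the usual data dir layout (substitute the real OSD id for <id>; these are only suggestions):
ls -l /dev/disk/by-partuuid/ | grep sdc5          # is there a by-partuuid link for sdc5 at all?
cat /var/lib/ceph/osd/ceph-<id>/journal_uuid      # the partuuid the OSD expects
ls -l /var/lib/ceph/osd/ceph-<id>/journal         # the journal symlink should resolve to that partition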
On Mon, Jul 11, 2016 at 9:09 PM, William Josefsson <
william.josef...@gmail.com> wrote:
> Hi Everyone,
>
> I have a problem with OSD stuck in boo
Hi Everyone,
I have a problem with OSD stuck in booting state.
sudo ceph daemon osd.7 status
{
"cluster_fsid": "724e501f-f4a3-4731-a832-c73685aabd21",
"osd_fsid": "058cac6e-6c66-4eeb-865b-3d22f0e91a99",
"whoami": 7,
"state": "booting",
"oldest_map": 1255,
"newest_map": 249
Hi everyone,
I have a problem with drive and partition names swapping on reboot. My Ceph is
Hammer on CentOS7, Dell R730, 6xSSD (2xSSD OS RAID1 on PERC, 4xSSD journal drives),
18x1.8T SAS for OSDs.
Whenever I reboot, drives randomly seem to change names. This is extremely
dangerous and frustrating w
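The usual mitigation, sketched here as a suggestion rather than what this cluster does, is to never reference /dev/sdX directly and lean on the stable udev links instead:
ls -l /dev/disk/by-id/         # stable per-disk names, safe for fstab and scripts
ls -l /dev/disk/by-partuuid/   # per-partition uuids, which is what ceph-disk records for journals
# ceph-disk-prepared OSDs are mounted by uuid via udev rules, so /dev/sdX reshuffling
# does not affect them as long as nothing references the kernel names directly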