> On Fri, Jun 14, 2019 at 8:27 AM Janne Johansson
> wrote:
> >
> > Den fre 14 juni 2019 kl 13:58 skrev Sean Redmond <
> sean.redmo...@gmail.com>:
> >>
Hi Ceph-Users,
I noticed that Soft Iron now have hardware acceleration for Erasure
Coding[1], this is interesting as the CPU overhead can be a problem in
addition to the extra disk I/O required for EC pools.
Does anyone know if any other work is ongoing to support generic FPGA
hardware
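For context, the EC CPU overhead mentioned above comes from the erasure-code profile in use; a minimal sketch of creating an EC profile and pool (profile name, pool name and k/m values are illustrative):

```shell
# Illustrative only: create an erasure-code profile and an EC pool.
# CPU cost grows with k+m and depends on the plugin chosen.
ceph osd erasure-code-profile set ec-4-2 k=4 m=2 plugin=jerasure
ceph osd pool create ecpool 64 64 erasure ec-4-2
```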
Hi,
You can export and import PGs using ceph-objectstore-tool, but if the OSD
won't start you may have trouble exporting a PG.
It may be useful to share the errors you get when trying to start the OSD.
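A sketch of the export/import flow with ceph-objectstore-tool, assuming a filestore OSD that can at least be stopped cleanly (OSD ids, PG id and paths are illustrative):

```shell
# The OSD must be stopped before ceph-objectstore-tool can open its store.
systemctl stop ceph-osd@12
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
    --journal-path /var/lib/ceph/osd/ceph-12/journal \
    --op export --pgid 1.2f --file /tmp/pg1.2f.export
# Import the PG into another (also stopped) OSD.
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-13 \
    --op import --file /tmp/pg1.2f.export
```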
Thanks
On Fri, Aug 3, 2018 at 10:13 PM, Sean Patronis wrote:
>
>
> Hi all.
>
> We have an
Hi,
I also had the same issues and took to disabling this feature.
Thanks
On Mon, Jul 30, 2018 at 8:42 AM, Micha Krause wrote:
> Hi,
>
> I have a Jewel Ceph cluster with RGW index sharding enabled. I've
>> configured the index to have 128 shards. I am upgrading to Luminous. What
>> will
Hi,
You may need to consider the latency between the AZs; it may make it
difficult to get very high IOPS - I suspect that is the reason EBS is
replicated within a single AZ.
Do you have any data that shows the latency between the AZs?
Thanks
On Sat, 28 Jul 2018, 05:52 Mansoor Ahmed, wrote:
>
Hi,
Do you have ongoing resharding? 'radosgw-admin reshard list' should show you
the status.
Do you see the number of objects in .rgw.bucket.index pool increasing?
I hit a lot of problems trying to use auto resharding in 12.2.5 - I have
disabled it for the moment.
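A sketch of the checks described above; rgw_dynamic_resharding is a real luminous-era option, but treat the rest (pool name in particular) as illustrative:

```shell
# List any in-flight reshard jobs.
radosgw-admin reshard list
# Watch the index pool's object count (pool name may differ per cluster).
rados df | grep buckets.index
# To disable dynamic resharding, set this in ceph.conf on the rgw nodes
# and restart the rgw's:
#   rgw_dynamic_resharding = false
```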
Thanks
[1]
Hi Sean (Good name btw),
Can you please link me to the tracker 12.2.6 fixes? I have disabled
resharding in 12.2.5 due to it running endlessly.
Thanks
On Tue, Jul 10, 2018 at 9:07 AM, Sean Purdy
wrote:
> While we're at it, is there a release date for 12.2.6? It fixes a
> reshard/versioning
Hi,
It sounds like the .rgw.bucket.index pool has grown, maybe due to some
problem with dynamic bucket resharding.
I wonder if the stale/old/unused bucket indexes need to be purged
using something like the below:
radosgw-admin bi purge --bucket= --bucket-id=
Not sure how you would find the
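One possible way to enumerate bucket index instances and compare them against a bucket's current id (a sketch; the bucket name is illustrative):

```shell
# All known bucket instance ids, including stale ones left by resharding.
radosgw-admin metadata list bucket.instance
# The id the bucket currently points at.
radosgw-admin bucket stats --bucket=mybucket | grep '"id"'
```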
Hi,
I know the s4600 thread well as I had over 10 of those drives fail before I
took them all out of production.
Intel did say a firmware fix was on the way but I could not wait and opted
for SM863A and never looked back...
I will be sticking with SM863A for now on further orders.
Thanks
On
size_kb": -1,
"max_objects": -1
}
}
I have attempted a bucket index check and fix on this; however, it does not
appear to have made a difference, and no fixes or errors were reported by it.
Does anyone have any advice on how to proceed with removing this content?
At this stage
> thanks and regards,
>
> Matt
>
> On Tue, Apr 24, 2018 at 10:45 AM, Sean Redmond <sean.redmo...@gmail.com>
> wrote:
> > Hi,
> > We are currently using Jewel 10.2.7 and recently, we have been
> experiencing
> > some issues with objects being deleted using
Hi,
We are currently using Jewel 10.2.7 and recently we have been experiencing
some issues with objects being deleted using the gc. After a bucket was
unsuccessfully deleted using --purge-objects (the first error discussed next
occurred), all of the RGWs are occasionally becoming unresponsive and
SM863a 2.5" Enterprise SSD, SATA3 6Gb/s, 2-bit MLC V-NAND
Regards
Sean Redmond
On Wed, Jan 10, 2018 at 11:08 PM, Sean Redmond <sean.redmo...@gmail.com>
wrote:
> Hi David,
>
> Thanks for your email, they are connected inside Dell R730XD (2.5 inch 24
> disk model) in None RA
rder the NVMe conversion kit
> and have ordered HGST UltraStar SN200 2.5 inch SFF drives with a 3 DWPD
> rating.
>
>
>
>
>
> Regards
>
> David Herselman
>
>
>
> *From:* Sean Redmond [mailto:sean.redmo...@gmail.com]
> *Sent:* Thursday, 11 January 2018 12:45 AM
> *To:* Dav
Hi,
I have a case where 3 out of 12 of these Intel S4600 2TB model drives
failed within a matter of days after being burn-in tested and then placed
into production.
I am interested to know: did you ever get any further feedback from the
vendor on your issue?
Thanks
On Thu, Dec 21, 2017 at 1:38 PM, David
Hi,
Did you see this: http://docs.ceph.com/docs/master/install/get-packages/ It
contains details on how to add the apt repos provided by the Ceph project.
You may also want to consider 16.04 if this is a production install, as
17.10 has a pretty short life (
Can you share your `ceph osd tree` / crushmap and `ceph health detail` via
pastebin?
Is recovery stuck or is it ongoing?
On 7 Dec 2017 07:06, "Karun Josy" wrote:
> Hello,
>
> I am seeing health error in our production cluster.
>
> health: HEALTH_ERR
>
Hi Florent,
I have always done mons, osds, rgw, mds, clients.
Packages that don't auto-restart services on update are, IMO, a good thing.
Thanks
On Tue, Dec 5, 2017 at 3:26 PM, Florent B wrote:
> On Debian systems, upgrading packages does not restart services !
>
> On
Hi,
Is it possible to add new empty osds to your cluster? Or do these also
crash out?
Thanks
On 18 Nov 2017 14:32, "Ashley Merrick" wrote:
> Hello,
>
>
>
> So seems noup does not help.
>
>
>
> Still have the same error :
>
>
>
> 2017-11-18 14:26:40.982827 7fb4446cd700
Hi,
You should upgrade them all to the latest point release if you don't want
to upgrade to the latest major release.
Start with the mons, then the osds.
Thanks
On 3 Mar 2017 18:05, "Curt Beason" wrote:
> Hello,
>
> So this is going to be a noob question probably. I read
Hi,
Is the current strange DNS issue with docs.ceph.com related to this also? I
noticed that docs.ceph.com is getting a different A record from
ns4.redhat.com vs ns{1..3}.redhat.com
dig output here > http://pastebin.com/WapDY9e2
Thanks
On Thu, Jan 19, 2017 at 11:03 PM, Dan Mick
Looks like there may be an issue with the ceph.com and tracker.ceph.com
websites at the moment.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
er one. I'm just trying to get a feel for how stable the
> technology is in general.
>
>
> Stable. Multiple customers of me run it in production with the kernel
> client and serious load on it. No major problems.
>
> Wido
>
> On Mon, Jan 16, 2017 at 3:19 PM Sean Redmond <
What's your use case? Do you plan on using kernel or fuse clients?
On 16 Jan 2017 23:03, "Tu Holmes" wrote:
> So what's the consensus on CephFS?
>
> Is it ready for prime time or not?
>
> //Tu
>
If you need the docs you can try reading them here
https://github.com/ceph/ceph/tree/master/doc
On Mon, Jan 2, 2017 at 7:45 PM, Andre Forigato
wrote:
> Hello Marcus,
>
> Yes, it's down. :-(
>
>
> André
>
> - Original message -
> > From: "Marcus Müller"
Hi,
Hmm, could you try to dump the crush map, decompile it, modify it to
remove the DNE OSDs, recompile it, and load it back into Ceph?
http://docs.ceph.com/docs/master/rados/operations/crush-map/#get-a-crush-map
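The round trip looks roughly like this (file names are illustrative):

```shell
ceph osd getcrushmap -o crush.bin      # fetch the compiled map
crushtool -d crush.bin -o crush.txt    # decompile to editable text
# edit crush.txt to remove the DNE osd entries, then:
crushtool -c crush.txt -o crush.new    # recompile
ceph osd setcrushmap -i crush.new      # load it back into the cluster
```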
Thanks
On Thu, Dec 29, 2016 at 1:01 PM, Łukasz Chrustek wrote:
Hi Ceph-Users,
I have been running into a few issues with CephFS metadata pool corruption
over the last few weeks. For background please see
tracker.ceph.com/issues/17177
# ceph -v
ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
I am currently facing a side effect of this issue
haring.
Hopefully the above is useful to you. If you need more information I will
do my best to provide it; you can also find me in #ceph (s3an2) if that is
helpful.
Thanks
On Mon, Dec 12, 2016 at 12:17 PM, John Spray <jsp...@redhat.com> wrote:
> On Sat, Dec 10, 2016 at 1:50 PM, Sean Re
sue and I suspect that i will face an mds assert of the same type sooner
>> or later, can you please explain a bit further what operations did you do
>> to clean the problem?
>> Cheers
>> Goncalo
>>
>> From: ceph-users [cep
Is it possible to identify stray directory fragments?
Thanks
On Thu, Dec 8, 2016 at 6:30 PM, John Spray <jsp...@redhat.com> wrote:
> On Thu, Dec 8, 2016 at 3:45 PM, Sean Redmond <sean.redmo...@gmail.com>
> wrote:
> > Hi,
> >
> > We had no changes going on with the ceph
c 8, 2016 at 3:11 PM, Sean Redmond <sean.redmo...@gmail.com>
> wrote:
> > Hi,
> >
> > I have a CephFS cluster that is currently unable to start the mds server
> as
> > it is hitting an assert, the extract from the mds log is below, any
> point
Hi,
I have a CephFS cluster that is currently unable to start the mds server as
it is hitting an assert, the extract from the mds log is below, any
pointers are welcome:
ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
2016-12-08 14:50:18.577038 7f7d9faa3700 1 mds.0.47077
Looks like the ceph.com, tracker.ceph.com and download.ceph.com websites /
repos are having an issue at the moment. I guess it may be related to the
below:
DreamCompute US-East 2 Cluster - Network connectivity issues
Hi Satheesh,
Do you have anything in the ceilometer error logs?
Thanks
On Wed, Nov 30, 2016 at 6:05 PM, Patrick McGarry
wrote:
> Hey Satheesh,
>
> Moving this over to ceph-user where it'll get the appropriate
> eyeballs. Might also be worth a visit to the #ceph irc
:
> I finally reproduced this issue. Adding following lines to httpd.conf
> can workaround this issue.
>
> EnableMMAP off
> EnableSendfile off
>
>
>
>
> On Sat, Sep 3, 2016 at 11:07 AM, Yan, Zheng <uker...@gmail.com> wrote:
> > On Fri, Sep 2, 2016 at 5:10 PM
Hi,
Yes, this is pretty stable; I am running it in production.
Thanks
On Tue, Nov 8, 2016 at 10:38 AM, M Ranga Swami Reddy
wrote:
> Hello,
> Can you please confirm, if the ceph 10.2.3 is ready for production use.
>
> Thanks
> Swami
>
>
Hi,
I would be interested in the case where an MDS in standby-replay fails.
Thanks
On Wed, Oct 19, 2016 at 4:06 PM, Scottix wrote:
> I would take the analogy of a Raid scenario. Basically a standby is
> considered like a spare drive. If that spare drive goes down. It is good
Maybe this would be an option for you:
http://docs.ceph.com/docs/jewel/rbd/rbd-mirroring/
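A rough sketch of enabling pool-level rbd mirroring between two clusters, per the jewel docs (pool, cluster and client names are illustrative):

```shell
# On each side, enable mirroring on the pool.
rbd mirror pool enable rbd pool
rbd --cluster site-b mirror pool enable rbd pool
# Register the remote cluster as a peer.
rbd mirror pool peer add rbd client.site-b@site-b
```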
On Tue, Oct 18, 2016 at 8:18 PM, yan cui wrote:
> Hi Guys,
>
>Our company has a use case which needs the support of Ceph across two
> data centers (one data center is far away
Hi,
Yes, there is a problem at the moment; there is another ML thread with more
details.
The EU repo mirror should still be working: eu.ceph.com
Thanks
On 11 Oct 2016 3:07 p.m., "wenngong" wrote:
> Hi Dear,
>
> I am trying to study and install ceph from official website. But
Hi,
Looks like the Ceph website and related subdomains have been giving errors
for the last few hours.
I noticed that the below sites I use are in scope.
http://ceph.com/
http://docs.ceph.com/
http://download.ceph.com/
http://tracker.ceph.com/
Thanks
Hi,
In the end this was tracked back to a switch MTU problem; once that was
fixed, any version of ceph-deploy osd prepare/create worked as expected.
Thanks
On Mon, Oct 10, 2016 at 11:02 AM, Eugen Block wrote:
> Did the prepare command succeed? I don't see any output referring to
Hi,
Does the host that was taken down have 12 disks in it?
Have a look at the down PGs ('18 pgs down') - I suspect this is what is
causing the I/O freeze.
Is your crush map set up correctly to split data over different hosts?
Thanks
On Tue, Sep 13, 2016 at 11:45 AM, Daznis
Have you seen this :
https://github.com/nfs-ganesha/nfs-ganesha/wiki/Fsalsupport#CEPH
On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote:
> Hi,
>
> One of the use-cases I'm currently testing is the possibility to replace
> a NFS storage cluster using a Ceph cluster.
>
>
, Gregory Farnum <gfar...@redhat.com> wrote:
> On Fri, Sep 2, 2016 at 11:35 AM, Sean Redmond <sean.redmo...@gmail.com>
> wrote:
> > Hi,
> >
> > That makes sense, I have worked around this by forcing the sync within
> the
> > application running under a
p 1, 2016 at 8:02 AM, Sean Redmond <sean.redmo...@gmail.com>
> wrote:
> > Hi,
> >
> > It seems to be using syscall mmap() from what I read this indicates it is
> > using memory-mapped IO.
> >
> > Please see a strace here: http://pastebin.com/6wjhSNrP
; I think about this again. This issue could be caused by stale session.
> Could you check kernel logs of your servers. Are there any ceph
> related kernel message (such as "ceph: mds0 caps stale")
>
> Regards
> Yan, Zheng
>
>
> On Thu, Sep 1, 2016 at 11:02 PM, Sea
Hi,
It seems to be using the mmap() syscall; from what I read, this indicates
it is using memory-mapped IO.
Please see a strace here: http://pastebin.com/6wjhSNrP
Thanks
On Wed, Aug 31, 2016 at 5:51 PM, Sean Redmond <sean.redmo...@gmail.com>
wrote:
> I am not sure how to tell?
wrote:
> On Wed, Aug 31, 2016 at 12:49 AM, Sean Redmond <sean.redmo...@gmail.com>
> wrote:
> > Hi,
> >
> > I have been able to pick through the process a little further and
> replicate
> > it via the command line. The flow seems looks like this:
> >
I have updated the tracker with some log extracts, as I seem to be hitting
this or a very similar issue.
I was unsure of the correct syntax for the ceph-objectstore-tool command to
try and extract that information.
try and extract that information.
On Wed, Aug 31, 2016 at 5:56 AM, Brad Hubbard wrote:
>
> On
but it's not
clear to me what the expected behavior is when a CephFS client is trying to
read the contents of a file that is still being flushed to the file system
by the CephFS client that created it.
On Tue, Aug 30, 2016 at 5:49 PM, Sean Redmond <sean.redmo...@gmail.com>
wrote:
between the time it takes the uploader01 server to
commit the file to the file system and the fast incoming read request from
the visiting user to server1 or server2.
Thanks
On Tue, Aug 30, 2016 at 10:21 AM, Sean Redmond <sean.redmo...@gmail.com>
wrote:
> You are correct it only seems
You are correct it only seems to impact recently modified files.
On Tue, Aug 30, 2016 at 3:36 AM, Yan, Zheng <uker...@gmail.com> wrote:
> On Tue, Aug 30, 2016 at 2:11 AM, Gregory Farnum <gfar...@redhat.com>
> wrote:
> > On Mon, Aug 29, 2016 at 7:14 AM, Sean Redmond
Hi,
Yes the file has no contents until the page cache is flushed.
I will give the fuse client a try and report back.
Thanks
On Mon, Aug 29, 2016 at 7:11 PM, Gregory Farnum <gfar...@redhat.com> wrote:
> On Mon, Aug 29, 2016 at 7:14 AM, Sean Redmond <sean.redmo...@gmail.com>
Hi,
I am running cephfs (10.2.2) with kernel 4.7.0-1. I have noticed that
frequently static files are showing empty when serviced via a web server
(apache). I have tracked this down further and can see when running a
checksum against the file on the cephfs file system on the node serving the
Hi,
This seems pretty quick on a jewel cluster here, but I guess the key
question is: how large is large? Is it perhaps a large number of smaller
files that is slowing this down? Is the bucket index sharded / on SSD?
[root@korn ~]# time s3cmd du s3://seanbackup
1656225129419 29 objects
Hi,
Is this disabled because it's not a stable feature, or just user preference?
Thanks
On Mon, Jul 18, 2016 at 2:37 PM, Yan, Zheng wrote:
> On Mon, Jul 18, 2016 at 9:00 PM, David wrote:
> > Hi all
> >
> > Recursive statistics on directories are no
Hi Matt,
I too have followed the upgrade from hammer to jewel; I think it is pretty
accepted to upgrade between LTS releases (H>J), skipping the 'stable'
release (I) in the middle.
Thanks
On Fri, Jul 15, 2016 at 9:48 AM, Mart van Santen wrote:
>
> Hi Wido,
>
> Thank you, we
wrote:
> Thanks, Can I ignore this warning then?
>
> health HEALTH_WARN
> crush map has legacy tunables (require bobtail, min is firefly)
>
> Cheers,
> Mike
>
> On Jul 12, 2016, at 9:57 AM, Sean Redmond <sean.redmo...@gmail.com> wrote:
>
>
issing
> 400
>
> How can I set the tunable low enough? And what does that mean for
> performance?
>
> Cheers,
> Mike
>
> On Jul 12, 2016, at 9:43 AM, Sean Redmond <sean.redmo...@gmail.com> wrote:
>
> Hi,
>
> It should work for you with kernel
Hi,
It should work for you with kernel 3.10 as long as the tunables are set low
enough - do you see anything in 'dmesg'?
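If dmesg shows a feature-set mismatch, lowering the tunables profile is one option; a sketch (the right profile depends on the kernel's feature support, and changing tunables triggers data movement):

```shell
ceph osd crush tunables bobtail
```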
Thanks
On Tue, Jul 12, 2016 at 5:37 PM, Mike Jacobacci wrote:
> Hi All,
>
> Is mounting rbd only really supported in Ubuntu? All of our servers are
>
Hi,
What happened to the missing 2 OSDs?
53 osds: 51 up, 51 in
Thanks
On Tue, Jul 5, 2016 at 4:04 PM, Matyas Koszik wrote:
>
> Should you be interested, the solution to this was
> ceph pg $pg mark_unfound_lost delete
> for all pgs that had unfound objects, now the cluster is
Hi,
I noticed in the jewel release notes:
"You can now access radosgw buckets via NFS (experimental)."
Are there any docs that explain the configuration of NFS to access RADOSGW
buckets?
Thanks
t;> 2016-07-03 09:49:52.720116 7f3da57fa700 0 -- 192.168.0.5:0/2773396901
>> >> 192.168.0.7:6789/0 pipe(0x7f3da00023f0 sd=4 :0 s=1 pgs=0 cs=0 l=1
>> c=0x7f3da00036b0).fault
>>
>> Regards - Willi
>>
>> On 03.07.16 at 09:36, Sean Redmond wrote:
It would need to be set to 1
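A sketch of what that looks like (pool name is illustrative; min_size 1 means I/O continues with a single surviving replica, so use with caution):

```shell
ceph osd pool set rbd min_size 1
```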
On 3 Jul 2016 8:17 a.m., "Willi Fehler" wrote:
> Hello David,
>
> so in a 3 node Cluster how should I set min_size if I want that 2 nodes
> could fail?
>
> Regards - Willi
>
> On 28.06.16 at 13:07, David wrote:
>
> Hi,
>
> This is probably
, Василий Ангапов <anga...@gmail.com> wrote:
> Is there any way to move existing non-sharded bucket index to sharded
> one? Or is there any way (online or offline) to move all objects from
> non-sharded bucket to sharded one?
>
> 2016-06-13 11:38 GMT+03:00 Sean Redmond <
Hi,
You could set the below to create ephemeral disks as RBD's
[libvirt]
libvirt_images_type = rbd
On Mon, May 2, 2016 at 2:28 PM, yang sheng wrote:
> Hi
>
> I am using ceph infernalis.
>
> it works fine with my openstack liberty.
>
> I am trying to test nova evacuate.
Hi German,
For data to be split over the racks you should set the crush rule to
'step chooseleaf firstn 0 type rack' instead of 'step chooseleaf firstn 0
type host'.
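Decompiled, a rack-level rule would look roughly like this (rule name and ids are illustrative):

```text
rule replicated_rack {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type rack
    step emit
}
```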
Thanks
On Wed, Mar 23, 2016 at 3:50 PM, German Anders wrote:
> Hi all,
>
> I had a question, I'm in
I used a Unit a little like this (
https://www.sgi.com/products/storage/servers/mis_server.html) for a SATA
pool in ceph - rebuilds after a failure of a node can be painful without a
fair amount of testing & tuning.
I have opted for more units with fewer disks for future builds, using the R730XD.
On
f execute `ceph pg dump | grep scrub` is empty.
>But the command "ceph health" shows there is "16 pgs
> active+clean+scrubbing+deep, 2 pgs active+clean+scrubbing".
>I have 2 osds with slow request warnings.
>Is it related?
>
>
>
> Bes
Hi Mika,
Have the scrubs been running for a long time? Can you see what pool they
are running on? You can check using `ceph pg dump | grep scrub`.
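A sketch of mapping scrubbing PGs back to pools (the pool id is the part of the pgid before the dot):

```shell
ceph pg dump pgs_brief 2>/dev/null | grep -i scrub
ceph osd lspools    # maps pool ids to names
```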
Thanks
On Mon, Nov 23, 2015 at 9:32 AM, Mika c wrote:
> Hi cephers,
> We are facing a scrub issue. Our CEPH cluster is
Hi Mart,
I agree with Eneko; I had 72 of the Samsung Evo drives in service for
journals (4:1) and ended up replacing them all within 9 months with Intel
DC S3700s, due to a high number of failures and very poor performance
resulting in frequent blocked ops.
Just stick with the Intel Data Center
ad, until you hit the limit of your QEMU setup, which may be a single
> IO thread. That’s also what I think Mike is alluding to.
>
> Warren
>
> From: Sean Redmond <sean.redmo...@gmail.com<mailto:sean.redmo...@gmail.com
> >>
> Date: Wednesday, November 18, 2015 at 6
Hi,
I have a performance question for anyone running an SSD only pool. Let me
detail the setup first.
12 X Dell PowerEdge R630 ( 2 X 2620v3 64Gb RAM)
8 X Intel DC S3710 800GB
Dual port Solarflare 10GB/s NIC (one front and one back)
Ceph 0.94.5
Ubuntu 14.04 (3.13.0-68-generic)
The above is in one
Hi,
The below should help you:
http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
Thanks
On Tue, Nov 17, 2015 at 9:58 PM, Nikola Ciprich
wrote:
> I'm not an ceph expert, but I needed to use
>
> osd crush update on start = false