> On Mon, Oct 22, 2018 at 7:47 PM Dylan McCulloch wrote:
> > > On Mon, Oct 22, 2018 at 2:37 PM Dylan McCulloch wrote:
> > > > > On Mon, Oct 22, 2018 at 9:46 AM Dylan McCulloch wrote:
> > > > > > On Mon, Oct 8, 2018 at 2:57 PM
I went through a raid controller firmware update. I replaced a pair of SSDs with new ones. Nothing has changed. Per the controller card utility, no patrol reading happens and the battery
backup is in good shape. The cache policy is WriteBack. I am aware of the bad battery effect, but it
On Mon, Nov 19, 2018 at 7:17 PM 楼锴毅 wrote:
> Sorry to disturb, but recently when using Ceph (12.2.8) I found that the
> leader monitor always fails in thread_name:safe_timer.
> [...]
Try upgrading the mons to v12.2.9 (but see recent warnings concerning
upgrades to v12.2.9 for the OSDs):
Yes. Using GlusterFS now.
But Ceph has the best write replication, which I am struggling to get the Gluster
guys to implement.
If this read-replica selection issue could be fixed, Ceph could be a good cloud
FS, not just a local-network RAID.
On Mon, Nov 19, 2018 at 2:54 AM Konstantin Shalygin wrote:
> On 11/17/18
Hi,
> The raid card for the journal disks is a Perc H730 (MegaRAID), RAID 1, and the battery-backed
> cache is on
>
> Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
> Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
>
> I have 2 other nodes
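For reference, on a MegaRAID-based card like the H730 the same information can usually be pulled from the OS as well; a rough sketch, assuming MegaCli64 is installed (perccli offers equivalents with different syntax):

    # per-logical-drive cache policy (the Default/Current Cache Policy lines above)
    MegaCli64 -LDInfo -Lall -aAll | grep -i 'Cache Policy'
    # BBU / battery health
    MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll | grep -iE 'Battery State|isSOHGood'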
For this the procedure is generally: stop the OSD, flush the journal,
update the journal symlink on the OSD to the new journal location, run mkjournal, and start the
OSD. You shouldn't need to change anything in the ceph.conf file.
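A minimal sketch of those steps for a filestore OSD, assuming the OSD id is 12 and the new journal partition already exists (adjust the id and device path):

    systemctl stop ceph-osd@12
    ceph-osd -i 12 --flush-journal        # drain anything still sitting in the old journal
    ln -sf /dev/disk/by-partuuid/<new-journal-partuuid> /var/lib/ceph/osd/ceph-12/journal
    ceph-osd -i 12 --mkjournal            # initialize the new journal device
    systemctl start ceph-osd@12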
On Thu, Nov 8, 2018 at 2:41 AM wrote:
> Hi all,
>
> I have been trying to
Hi everyone,
I've recently started a documentation patch to better explain Swift
compatibility and OpenStack integration for radosgw; a WIP PR is at
https://github.com/ceph/ceph/pull/25056/. I have, however, run into an
issue that I would really *like* to document, except I don't know
whether
On 18/11/2018 22:08, Dilip Renkila wrote:
> Hi all,
>
> We are provisioning the OpenStack Swift API through Ceph RGW (Mimic). We have
> problems when trying to create two containers with the same name in two projects.
> After scraping the web, I came to know that I have to enable
>
> *
Hi,
On 11/19/18 12:49 PM, Thomas Klute wrote:
Hi,
we have a production cluster (3 nodes) stuck unclean after we had to
replace one OSD.
The cluster recovered fine except for some pgs that have been stuck unclean for about
2-3 days now:
*snipsnap*
[root@ceph1 ~]# fgrep remapp /tmp/pgdump.txt
3.83
Hi,
we have a production cluster (3 nodes) stuck unclean after we had to
replace one OSD.
The cluster recovered fine except for some pgs that have been stuck unclean for about
2-3 days now:
[root@ceph1 ~]# ceph health detail
HEALTH_WARN 7 pgs stuck unclean; recovery 8/8565617 objects degraded
(0.000%);
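For completeness, the generic way to dig further into pgs like these would be something along these lines (pg 3.83 is just one of the remapped pgs mentioned above):

    ceph pg dump_stuck unclean     # list stuck pgs with their up/acting osd sets
    ceph pg 3.83 query             # shows what peering/recovery is waiting on
    ceph osd tree                  # verify the replacement osd has the expected weight/placement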
Hi Yan,
I can get the usage of a sub-directory on the client side. Is there a way I can
get it from the server?
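For reference, the usual client-side way is the recursive-statistics xattrs, e.g. (assuming a CephFS mount at /mnt/cephfs; the path is just an example):

    getfattr -n ceph.dir.rbytes /mnt/cephfs/some/subdir   # recursive byte usage of that subtree
    getfattr -n ceph.dir.rfiles /mnt/cephfs/some/subdir   # recursive file count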
Thanks.
Yan, Zheng wrote on Mon, Nov 19, 2018 at 3:08 PM:
> On Mon, Nov 19, 2018 at 3:06 PM Zhenshi Zhou wrote:
> >
> > Many thanks Yan!
> >
> > This command can get the IP, hostname, mount point