update, you can modify the MDS code to not
scan all objects.
>
>
> huxia...@horebdata.cn
>
>
> From: Yan, Zheng
> Date: 2021-12-11 06:42
> To: huxia...@horebdata.cn
> CC: ceph-users
> Subject: Re: [ceph-users] CephFS single file size limit and p
On Sat, Dec 11, 2021 at 2:21 AM huxia...@horebdata.cn
wrote:
>
> Dear Ceph experts,
>
> I encounter a use case wherein the size of a single file may go beyond 50TB,
> and would like to know whether CephFS can support a single file with size
> over 50TB? Furthermore, if multiple clients, say
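For context: CephFS enforces a configurable per-file cap, max_file_size,
which defaults to 1 TiB, so files beyond that require raising the limit.
A minimal sketch, assuming a filesystem named cephfs and an illustrative
60 TB cap:

# Show the current limit (defaults to 1 TiB):
$ ceph fs get cephfs | grep max_file_size
# Raise it so a 50 TB file fits (value in bytes, illustrative):
$ ceph fs set cephfs max_file_size 60000000000000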
On Fri, Nov 19, 2021 at 11:36 AM 飞翔 wrote:
>
> For CephFS, what is the maximum number of files supported per shared filesystem?
> Can anyone tell me?
>
We have an FS containing more than 40 billion small files. When an FS
cluster contains this many files, the OSD stores can become severely
fragmented and cause some issues.
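BlueStore can report how fragmented an OSD's allocator is via the admin
socket. A minimal sketch, assuming a recent BlueStore release and an
illustrative daemon id osd.0:

# Fragmentation rating for the block device allocator,
# from 0 (unfragmented) toward 1 (heavily fragmented):
$ ceph daemon osd.0 bluestore allocator score block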
On Fri, Sep 17, 2021 at 12:14 AM Mark Nelson wrote:
>
>
>
> On 9/15/21 11:05 PM, Yan, Zheng wrote:
> > On Wed, Sep 15, 2021 at 8:36 PM Mark Nelson wrote:
> >>
> >> Hi Zheng,
> >>
> >>
> >> This looks great! Have you not
. In a file creation test (mpirun -np 160 -host
xxx:160 mdtest -F -L -w 4096 -z 2 -b 10 -I 200 -u -d ...), 16 active
MDS can serve over 100k file creations per second.
Yan, Zheng
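For reference, a smaller single-node run of the benchmark above might look
like the sketch below; the mount point is illustrative, and the flags mirror
the ones quoted (-F files only, -L files at the leaf level only, -w 4096
write 4 KiB per file, -z/-b tree depth and branching, -I items per
directory, -u a unique working directory per task):

$ mpirun -np 16 mdtest -F -L -w 4096 -z 2 -b 10 -I 200 -u -d /mnt/cephfs/mdtest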
On Sun, Aug 30, 2020 at 8:05 PM wrote:
>
> Hi,
> I've had a complete monitor failure, which I have recovered from with the
> steps here:
> https://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-mon/#monitor-store-failures
> The data and metadata pools are there and are
On Sat, Aug 15, 2020 at 12:32 AM wrote:
>
> Yes, I've seen this problem quite frequently as of late, running v13.2.10
> MDS. It seems to be dependent on the client behavior - a lot of xlock
> contention on some directory, although it's hard to pin down which client is
> doing what. The only
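When xlock contention is suspected, the MDS admin socket can show which
requests are blocked and which sessions they belong to. A minimal sketch,
assuming an illustrative daemon name mds.a:

# List in-flight MDS requests, including what each one is waiting on:
$ ceph daemon mds.a dump_ops_in_flight
# List client sessions, to map client ids back to hosts:
$ ceph daemon mds.a session ls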
On Thu, May 7, 2020 at 1:27 AM Patrick Donnelly wrote:
>
> Hello Robert,
>
> On Mon, Mar 9, 2020 at 7:55 PM Robert Ruge wrote:
> > For a 1.1PB raw cephfs system currently storing 191TB of data and 390
> > million objects (mostly small Python, ML training files etc.) how many MDS
> > servers
On Wed, May 6, 2020 at 3:53 PM Marc Roos wrote:
>
>
> I have been using snapshots on cephfs since luminous, 1xfs and
> 1xactivemds and used an rsync on it for backup.
> Under luminous I did not encounter any problems with this setup. I
> think I was even snapshotting user dirs every 7 days
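For context, a CephFS snapshot is taken by creating a directory under the
special .snap directory (the same mechanism as the mkdir .snap/testsnap
example further down). A minimal sketch with an illustrative path:

# Snapshot a user directory:
$ mkdir /mnt/cephfs/home/alice/.snap/weekly-2020-05-06
# Drop the snapshot later by removing its directory:
$ rmdir /mnt/cephfs/home/alice/.snap/weekly-2020-05-06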
On Thu, Apr 16, 2020 at 3:27 PM Dan van der Ster wrote:
>
> On Thu, Apr 16, 2020 at 3:53 AM Yan, Zheng wrote:
> >
> > On Thu, Apr 16, 2020 at 12:15 AM Dan van der Ster
> > wrote:
> > >
> > > On Wed, Apr 15, 2020 at 5:13 PM Yan, Zheng wrote:
> >
On Thu, Apr 16, 2020 at 12:15 AM Dan van der Ster wrote:
>
> On Wed, Apr 15, 2020 at 5:13 PM Yan, Zheng wrote:
> >
> > On Wed, Apr 15, 2020 at 2:33 AM Dan van der Ster
> > wrote:
> > >
> > > Hi all,
> > >
> > >
On Wed, Apr 15, 2020 at 2:33 AM Dan van der Ster wrote:
>
> Hi all,
>
> Following some cephfs issues today we have a stable cluster but the
> num_strays is incorrect.
> After starting the mds, the values are reasonable, but they very soon
> underflow and start showing 18E (2^64 - a few)
>
>
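The stray counters live in the MDS perf counters, so the underflow can be
watched directly. A minimal sketch, assuming an illustrative daemon name
mds.a:

# Dump only the mds_cache counters and pick out the stray ones;
# an underflowed num_strays shows up as a value near 2^64:
$ ceph daemon mds.a perf dump mds_cache | grep -E 'num_strays|strays_'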
On Tue, Apr 14, 2020 at 11:45 PM Yan, Zheng wrote:
>
> On Tue, Apr 14, 2020 at 9:41 PM Dan van der Ster wrote:
> >
> > On Tue, Apr 14, 2020 at 2:50 PM Dan van der Ster
> > wrote:
> > >
> > > On Sun, Apr 12, 2020 at 9:33 PM Dan van der S
On Wed, Apr 15, 2020 at 9:40 AM Xinying Song wrote:
>
> Hi, Greg:
> Thanks for your reply!
> I think master can always know if a request has been finished or not
> no matter whether
> there is a Commit-logevent, because it has written an EUpdate logevent
> that records the
> unfinished request.
>
ne MDS with ms_type = simple and that MDS maintained a
> > normal amount of buffer_anon for several hours, while the other active
> > MDS (with async ms type) saw its buffer_anon grow by some ~10GB
> > overnight.
> > So, it seems there are still memory leaks with ms_type
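The messenger type mentioned above can be pinned per daemon type in
ceph.conf. A minimal sketch of the workaround being described, assuming the
option is set on the affected MDS host:

# ceph.conf on the MDS host
[mds]
        ms_type = simple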
On Sun, Mar 22, 2020 at 8:21 AM Dungan, Scott A. wrote:
>
> Zitat, thanks for the tips.
>
> I tried appending the key directly in the mount command
> (secret=) and that produced the same error.
>
> I took a look at the thread you suggested and I ran the commands that Paul at
> Croit suggested
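For context, the kernel client accepts the key either inline or from a
secret file. A minimal sketch, with an illustrative monitor address, user
name, and key:

# Key inline (what the poster tried):
$ mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs -o name=admin,secret=AQAx...
# Or from a file, which keeps the key out of the process list:
$ mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret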
On Thu, Mar 12, 2020 at 1:41 PM Robert LeBlanc wrote:
>
> This is the second time this happened in a couple of weeks. The MDS locks
> up and the standby can't take over, so the Monitors blacklist them. I try
> to un-blacklist them, but they still say this in the logs
>
> mds.0.1184394 waiting
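Blacklist entries are held by the monitors and can be listed and removed by
hand. A minimal sketch, with an illustrative client address:

# Show current blacklist entries:
$ ceph osd blacklist ls
# Remove one, using the addr:port/nonce printed by ls:
$ ceph osd blacklist rm 10.0.0.5:6800/1234567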
On Tue, Dec 10, 2019 at 8:06 PM Frank Schilder wrote:
>
> I have a strange problem with ceph fs and extended attributes. I have two
> Centos machines where I mount cephfs in exactly the same way (I manually
> executed the exact same mount command on both machines). On one of the
> machines,
are not reliable. You'd better use another method
to verify your backup.
Yan, Zheng
$ fallocate -l 1K test1
$ getfattr -n ceph.dir.rbytes .
# file: .
ceph.dir.rbytes="1024"
$ mkdir .snap/testsnap
$ fallocate -l 2K test2
$ getfattr -n ceph.dir.rbytes . .snap/testsnap
# file: .
ceph.
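One independent method of the kind suggested above is to compare content
checksums between source and backup instead of trusting recursive byte
counts. A minimal sketch, with illustrative paths:

# Checksum the source tree, then verify the backup against it:
$ cd /mnt/cephfs/data && find . -type f -exec md5sum {} + | sort -k2 > /tmp/src.md5
$ cd /backup/data && md5sum --quiet -c /tmp/src.md5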
On Tue, Nov 12, 2019 at 6:18 PM Karsten Nielsen wrote:
>
> -----Original message-----
> From: Karsten Nielsen
> Sent: Tue 12-11-2019 10:30
> Subject:[ceph-users] Re: mds crash loop
> To: Yan, Zheng ;
> CC: ceph-users@ceph.io;
> > -Original
n
>
> -----Original message-----
> From: Yan, Zheng
> Sent: Thu 07-11-2019 14:20
> Subject:Re: [ceph-users] Re: mds crash loop
> To: Karsten Nielsen ;
> CC: ceph-users@ceph.io;
> > On Thu, Nov 7, 2019 at 6:40 PM Karsten Nielsen wrote:
> > >
> >
s/missing_obj_dirs
> Any tool that is able to do that ?
>
> Thanks
> - Karsten
>
> -----Original message-----
> From: Yan, Zheng
> Sent: Thu 07-11-2019 09:22
> Subject:Re: [ceph-users] Re: mds crash loop
> To: Karsten Nielsen ;
> CC: ceph-users@c
I have tracked down the root cause. See https://tracker.ceph.com/issues/42675
Regards
Yan, Zheng
On Thu, Nov 7, 2019 at 4:01 PM Karsten Nielsen wrote:
>
> -----Original message-----
> From: Yan, Zheng
> Sent: Thu 07-11-2019 07:21
> Subject:Re: [ceph-users] Re:
On Wed, Nov 6, 2019 at 4:42 PM Karsten Nielsen wrote:
>
> -----Original message-----
> From: Yan, Zheng
> Sent: Wed 06-11-2019 08:15
> Subject:Re: [ceph-users] mds crash loop
> To: Karsten Nielsen ;
> CC: ceph-users@ceph.io;
> > On Tue, Nov 5, 2019
h of which is running the
> same crash loop.
> I am running ceph based on https://hub.docker.com/r/ceph/daemon version
> v3.2.7-stable-3.2-mimic-centos-7-x86_64 with an etcd kv store.
>
> Log details are: https://paste.debian.net/1113943/
>
please try again with debug_mds=20. Tha
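The requested debug level can be applied to a running MDS without a restart.
A minimal sketch, assuming an illustrative daemon name mds.a:

# Via the admin socket on the MDS host:
$ ceph daemon mds.a config set debug_mds 20
# Or injected remotely from a node with client access:
$ ceph tell mds.a injectargs '--debug_mds 20'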
lient option -- does CephFS have
> something like that planned?)
>
ceph-fuse has a client_use_faked_inos option. When it is enabled,
ceph-fuse maps 64-bit inode numbers to 32 bits. This works as long as
the client has fewer than 2^32 inodes cached. So far there is no kernel
cli
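A minimal sketch of enabling that option, assuming it is set in ceph.conf
on the ceph-fuse client host:

# ceph.conf on the client
[client]
        client_use_faked_inos = true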
On Sun, Sep 29, 2019 at 8:21 PM Florian Pritz
wrote:
>
> On Sun, Sep 29, 2019 at 10:49:58AM +0800, "Yan, Zheng"
> wrote:
> > > Hanging client (10.1.67.49) kernel log:
> > >
> > > > 2019-09-26T16:08:27.481676+02:00 hostnamefoo kernel: [7
On Fri, Sep 27, 2019 at 1:12 AM Florian Pritz
wrote:
>
> Hi,
>
> We are running a ceph cluster on Ubuntu 18.04 machines with ceph 14.2.4.
> Our cephfs clients are using the kernel module and we have noticed that
> some of them are sometimes (at least once) hanging after an MDS restart.
> The only
On Fri, Sep 20, 2019 at 12:38 AM Guilherme Geronimo
wrote:
>
> Here it is: https://pastebin.com/SAsqnWDi
>
please set debug_mds to 10 and send the detailed log to me
> The command:
>
> timeout 10 rm /mnt/ceph/lost+found/12430c8 ; umount -f /mnt/ceph
>
>
> On 17/09/2
> Any other suggestion?
>
> =D
>
> []'s
> Arthur (aKa Guilherme Geronimo)
>
> On 10/09/2019 23:51, Yan, Zheng wrote:
> > On Wed, Sep 4, 2019 at 6:39 AM Guilherme
> > wrote:
> >> Dear CEPHers,
> >> Adding some comments to my colleague's post:
On Wed, Sep 4, 2019 at 6:39 AM Guilherme wrote:
>
> Dear CEPHers,
> Adding some comments to my colleague's post: we are running Mimic 13.2.6 and
> struggling with 2 issues (that might be related):
> 1) After a "lack of space" event we've tried to remove a 40TB file. The file
> is not there
On Wed, Sep 11, 2019 at 6:51 AM Kenneth Waegeman
wrote:
>
> We sync the file system without preserving hard links. But we take
> snapshots after each sync, so I guess deleted files which are still in
> snapshots can also end up in the stray directories?
>
> [root@mds02 ~]# ceph daemon mds.mds02 perf
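Deleted files still referenced by snapshots sit in the stray directories
until the snapshots are gone, so the stray and purge counters are worth
reading together. A minimal sketch of that kind of query (the counter group
names mds_cache and purge_queue are standard; the daemon name is from the
thread):

[root@mds02 ~]# ceph daemon mds.mds02 perf dump mds_cache
[root@mds02 ~]# ceph daemon mds.mds02 perf dump purge_queue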