How do you confirm that cephfs files and rados objects are being compressed?
I don't see how in the docs.
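A rough way to check this, assuming BlueStore OSDs and that compression was enabled on the pool (pool name and OSD id below are placeholders):

ceph osd pool get cephfs_data compression_mode        # confirm compression is enabled on the pool
ceph osd pool get cephfs_data compression_algorithm
ceph daemon osd.0 perf dump | grep bluestore_compressed
# non-zero bluestore_compressed / bluestore_compressed_original counters mean
# data is actually being compressed; newer releases also report USED COMPR /
# UNDER COMPR per pool in "ceph df detail".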
> - you need to somehow replay the journal (I'm unsure whether the
> cephfs-data-scan tool operates on journal entries).
>
>
>
> On 6.11.2018, at 03:43, Rhian Resnick wrote:
>
> Workload is mixed.
>
> We ran a rados cppool to back up the metadata pool.
>
> So your
> "rados export" in order to preserve omap data, then try truncating
> journals (along with the purge queue, if supported by your ceph version), wiping
> the session table, and resetting the fs.
>
>
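For reference, a sketch of the metadata pool backup suggested above — the point of rados export (vs. a plain pool copy), per the advice, being that it preserves omap data. Pool name and file path are placeholders:

rados -p cephfs_metadata export /backup/cephfs_metadata.bin
# and, if it ever has to be rolled back:
rados -p cephfs_metadata import /backup/cephfs_metadata.bin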
> On 6.11.2018, at 03:26, Rhian Resnick wrote:
>
> That was our original plan. So we
doing "recover dentries", truncating the journal, and then
> "fs reset". After that I was able to revert to single-active MDS and kept
> on running for a year until it failed on 13.2.2 upgrade :))
>
>
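The sequence described above maps roughly onto the standard disaster-recovery commands; a sketch, assuming the filesystem is named cephfs (exact cephfs-journal-tool arguments vary a bit by release):

cephfs-journal-tool event recover_dentries summary    # "recover dentries"
cephfs-journal-tool journal reset                     # truncate the journal
ceph fs reset cephfs --yes-i-really-mean-it           # "fs reset"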
> On 6.11.2018, at 03:18, Rhian Resnick wrote:
>
> Our me
> to get MDS to
> start and back up valuable data instead of doing a long-running recovery?
>
>
> On 6.11.2018, at 02:59, Rhian Resnick wrote:
>
> Sounds like I get to have some fun tonight.
>
On Mon, Nov 5, 2018, 6:39 PM Sergey Malinin wrote:
>> inode linkage (i.e. folder hierarchy)
Does a tool exist to recover files from a cephfs data partition? We are
rebuilding metadata but have a user who needs data asap.
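cephfs-data-scan is the tool normally used for this; a minimal sketch, assuming the data pool is named cephfs_data and the MDS daemons are stopped:

cephfs-data-scan scan_extents cephfs_data    # pass 1: recover file sizes/mtimes from data objects
cephfs-data-scan scan_inodes cephfs_data     # pass 2: inject recovered inodes into the metadata pool
cephfs-data-scan scan_links                  # pass 3: fix up linkage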
ments after disabling it.
>
>
> > On 5.11.2018, at 17:32, Rhian Resnick wrote:
> >
> > We are running cephfs-data-scan to rebuild metadata. Would changing the
> cache tier mode of our cephfs data partition improve performance? If so
> what should we switch to?
We are running cephfs-data-scan to rebuild metadata. Would changing the
cache tier mode of our cephfs data partition improve performance? If so
what should we switch to?
Thanks
Rhian
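For what it's worth, a cache tier's mode can be inspected and switched live; a hedged sketch assuming the cache pool is called cephfs_cache (whether another mode actually helps during a rebuild depends on the workload):

ceph osd dump | grep cache_mode                    # shows the current mode of each cache tier
ceph osd tier cache-mode cephfs_cache readproxy    # e.g. switch from writeback to readproxy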
> scan_extents & scan_inodes can be done in a few
> hours by running the tool on each OSD node, but scan_links will be
> painfully slow due to its single-threaded nature.
> In my case I ended up getting the MDS to start and copied all data to a fresh
> filesystem, ignoring errors.
> On Nov 4,
For a 150 TB file system with 40 million files, how many cephfs-data-scan
threads should be used? What is the expected run time? (We have 160 OSDs
with 4 TB disks.)
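scan_extents and scan_inodes can be run with many parallel workers (scan_links cannot); a sketch assuming 16 workers and a data pool named cephfs_data, where each worker handles a disjoint slice of the pool:

# start 16 of these in parallel (spread across nodes), worker_n = 0..15
cephfs-data-scan scan_extents --worker_n 0 --worker_m 16 cephfs_data
cephfs-data-scan scan_extents --worker_n 1 --worker_m 16 cephfs_data
# ... and the same pattern again for scan_inodes once all scan_extents workers finish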
Is it possible to snapshot the cephfs data pool?
9:47 PM Rhian Resnick wrote:
> I was posting with my office account but I think it is being blocked.
>
> Our cephfs metadata pool went from 1 GB to 1 TB in a matter of hours and,
> after using all storage on the OSDs, it now reports two damaged ranks.
>
> The cephfs-journal-tool crashes when performing any operations due to
> memory utilization.
e the new metadata pool to the original filesystem?
How do we remove the new cephfs so the original mounts work?
Rhian Resnick
Associate Director Research Computing
Enterprise Systems
Office of Information Technology
Florida Atlantic University
777 Glades Road, CM22, Rm 173B
Boca Raton, FL
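On removing the extra filesystem: a hedged sketch, assuming it is named cephfs_recovery (a placeholder). On Luminous/Mimic the fs has to be marked down and its MDS failed before it can be removed:

ceph fs set cephfs_recovery cluster_down true
ceph mds fail <mds-name-or-rank>
ceph fs rm cephfs_recovery --yes-i-really-mean-it
# depending on release there is also an "fs set-default" command to pick which
# filesystem clients mount by default afterwards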
I was posting with my office account but I think it is being blocked.
Our cephfs metadata pool went from 1 GB to 1 TB in a matter of hours and,
after using all storage on the OSDs, it now reports two damaged ranks.
The cephfs-journal-tool crashes when performing any operations due to
memory utilization.
crush_location = root=ssds host=ceph-storage3-ssd
[osd.5]
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd
[osd.68]
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd
[osd.87]
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd
Rhian Resnick
Associa
11. cephfs-journal-tool journal reset --rank=4
12. cephfs-table-tool all reset session
13. Start metadata servers
14. Scrub mds:
* ceph daemon mds.{hostname} scrub_path / recursive
* ceph daemon mds.{hostname} scrub_path / force
15.
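After the scrub passes, any damage the MDS recorded can be reviewed through the same admin socket (hostname is a placeholder):

ceph daemon mds.{hostname} damage ls        # list damage entries this rank knows about
# individual entries can be cleared, once dealt with, via:
# ceph daemon mds.{hostname} damage rm <damage-id>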
Rhian Resnick
Associate Director Research Computing
Enterprise
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd
[osd.87]
host = ceph-storage2
crush_location = root=ssds host=ceph-storage2-ssd
Rhian Resnick
Associate Director Research Computing
Enterprise Systems
Office of Information Technology
Florida Atlantic University
777 Glades Road, CM22,
That is what I thought. I am increasing debug to see where we are getting stuck.
I am not sure if it is an issue deactivating or an rdlock issue.
Thanks; if we discover more we will post a question with details.
Rhian Resnick
Associate Director Research Computing
Enterprise Systems
Office of
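In case it helps, a quick sketch of bumping MDS debug logging at runtime (daemon name is a placeholder, and 20 is very verbose):

ceph daemon mds.<name> config set debug_mds 20
ceph daemon mds.<name> config set debug_ms 1
# remember to drop it back down afterwards, e.g. config set debug_mds 1/5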
Evening,
We are running into issues deactivating MDS ranks. Is there a way to safely
force the removal of a rank?
Rhian Resnick
Associate Director Research Computing
Enterprise Systems
Office of Information Technology
Florida Atlantic University
777 Glades Road, CM22, Rm 173B
Boca Raton, FL
John,
Thanks!
Rhian Resnick
Associate Director Research Computing
Enterprise Systems
Office of Information Technology
Florida Atlantic University
777 Glades Road, CM22, Rm 173B
Boca Raton, FL 33431
Phone 561.297.2647
Fax 561.297.0222
Evening,
I am looking to decrease our max MDS count as we had a server failure and
need to remove a node.
When we attempt to decrease the number of MDS daemons from 5 to 4 (or any other
number), they never transition to standby. They just stay active.
ceph fs set cephfs max_mds X
Nothing
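For reference, on Luminous/Mimic simply lowering max_mds does not stop the extra ranks; the highest rank has to be deactivated explicitly (Nautilus and later do this automatically). A sketch for going from 5 active ranks to 4:

ceph fs set cephfs max_mds 4
ceph mds deactivate cephfs:4     # ask rank 4 to hand off its subtrees and stop
ceph status                      # the rank should move through stopping and back to standby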
them you need to use the following commands to delete them.
Use vgdisplay to find the bad volume groups, then:
vgremove --select vg_uuid=<your uuid> -f   # -f forces the removal
Rhian Resnick
Associate Director Middleware and HPC
Office of Information Technology
Florida Atlantic University
777 Glades
    return vgs.get(vg_name=vg_name, vg_tags=vg_tags)
  File "/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py", line 429, in get
    raise MultipleVGsError(vg_name)
ceph_volume.exceptions.MultipleVGsError: Got more than 1 result looking for
volume group: ceph-6a2e8f21-bca2-492b-8869-eecc995216cc
ed osd.140
--> MultipleVGsError: Got more than 1 result looking for volume group:
ceph-6a2e8f21-bca2-492b-8869-eecc995216cc
Any hints on what to do? This occurs when we attempt to create osd's on this
node.
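The error means more than one LVM volume group carries the same name (typically stale VGs from a half-removed OSD); a quick way to spot the duplicates before removing one by UUID, as in the vgremove hint earlier in this digest:

vgs -o vg_name,vg_uuid     # duplicate names will show up with distinct UUIDs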
Rhian Resnick
Associate Director Middleware and HPC
Office of Informat
Morning,
We ran into an issue with the default max file size of a cephfs file. Is it
possible to increase this value to 20 TB from 1 TB without recreating the file
system?
Rhian Resnick
Assistant Director Middleware and HPC
Office of Information Technology
Florida Atlantic University
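This is adjustable at runtime; a sketch assuming the filesystem is named cephfs (the value is in bytes, so 20 TB = 20 * 2^40):

ceph fs get cephfs | grep max_file_size           # current limit; the default is 1 TB (1099511627776)
ceph fs set cephfs max_file_size 21990232555520   # raise it to 20 TB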
I didn't see any guidance online on how to resolve the checksum error. Any
hints?
Rhian Resnick
Assistant Director Middleware and HPC
Office of Information Technology
Florida Atlantic University
777 Glades Road, CM22, Rm 173B
Boca Raton, FL 33431
Phone 561.297.2647
Fax 561.297
437:2017-07-03 08:37:39.278991 7f95a4be6700 -1 log_channel(cluster) log [ERR] : 1.15f repair 3 errors, 0 fixed
Is it possible this thread is related to the error we are seeing?
Rhian Resnick
Assistant Director Middleware and HPC
Office of Information Technology
Florida Atlantic University
777 Gl
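For an inconsistent PG like 1.15f above, the usual first step is to see what the scrub actually found before re-running a repair; a sketch:

rados list-inconsistent-obj 1.15f --format=json-pretty   # which object/shard failed and why
ceph pg repair 1.15f                                     # re-issue the repair once the bad copy is understood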
10   0.27219   osd.10   up   1.00000   1.00000
11   0.27219   osd.11   up   1.00000   1.00000
Rhian Resnick
Assistant Director Middleware and HPC
Office of Information Technology
Florida Atlantic University
777 Glades Road, CM22, Rm
[fragment of ceph pg dump output: pg 0.3d, 0 objects, active+clean as of 2017-03-15 10:59:52, up/acting [9,2,8], last scrub 2017-03-15 11:02:00, last deep scrub 2017-03-12 22:50:44]
Thanks everyone for the input. We are online in our test environment and are
running user workflows to make sure everything is running as expected.
Rhian
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Rhian
Resnick
Sent: Thursday, March 9, 2017 8:31 AM
To: Maxime
From: ceph-users on behalf of David Turner
Date: Wednesday 8 March 2017 22:27
To: Rhian Resnick , "ceph-us...@ceph.com"
Subject: Re: [ceph-users] cephfs and erasure coding
I use CephFS on erasure coding at home using a cache tier. It works fine for
my use case, but we know nothing abo
Two questions on Cephfs and erasure coding that Google couldn't answer.
1) How well does cephfs work with erasure coding?
2) How would you move an existing cephfs pool that uses replication to erasure
coding?
Rhian Resnick
Assistant Director Middleware and HPC
Office of Inform
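For question 2, one approach on Luminous or later with BlueStore (pool, filesystem and path names below are placeholders) is to add an erasure-coded pool as an extra data pool and move data into directories whose layout points at it, rather than converting the replicated pool in place:

ceph osd pool set cephfs_ec allow_ec_overwrites true    # required before CephFS can write to an EC pool directly
ceph fs add_data_pool cephfs cephfs_ec
setfattr -n ceph.dir.layout.pool -v cephfs_ec /mnt/cephfs/ec_dir   # new files under ec_dir land in the EC pool
# existing files then have to be copied into ec_dir; before Luminous, the
# cache-tier arrangement mentioned above is the usual route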
Logan,
Thank you for the feedback.
Rhian Resnick
Assistant Director Middleware and HPC
Office of Information Technology
Florida Atlantic University
777 Glades Road, CM22, Rm 173B
Boca Raton, FL 33431
Phone 561.297.2647
Fax 561.297.0222
community had with large numbers of files in a single
directory (500,000 - 5 million). We know that directory fragmentation will be
required but are concerned about the stability of the implementation.
Your opinions and suggestions are welcome.
Thank you
Rhian Resnick
Assistant Director Middl
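On the directory fragmentation point, a hedged sketch of the relevant knobs on Jewel/Kraken-era releases (defaults and the exact toggle vary by version; from Luminous onward fragmentation is enabled by default):

ceph fs set cephfs allow_dirfrags true     # enable dirfrags on older releases
# MDS tunables that govern when a directory is fragmented, e.g. in ceph.conf:
# mds_bal_split_size = 10000               # split a fragment once it holds this many entries
# mds_bal_fragment_size_max = 100000       # hard cap on entries per fragment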