Hi Yoann, thanks a lot for your help.
root@pf-us1-dfs3:/home/rodrigo# ceph osd crush tree
ID CLASS WEIGHT TYPE NAME
-1 72.77390 root default
-3 29.10956 host pf-us1-dfs1
0 hdd 7.27739 osd.0
5 hdd 7.27739 osd.5
6 hdd 7.27739 osd.6
8 hdd
> to pf-us1-dfs3 or swap one from the larger nodes to this one.
>
> Kevin
>
> Am Di., 8. Jan. 2019 um 15:20 Uhr schrieb Rodrigo Embeita
> :
> >
> > Hi Yoann, thanks for your response.
> > Here are the results of the commands.
> >
> > root@pf-us1-dfs2
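Kevin's suggestion above follows from how CRUSH places data: each host receives a share of the data proportional to its CRUSH weight, so a host with half the weight of its peers still fills to roughly the same fraction. A small sketch of that arithmetic — pf-us1-dfs1's weight comes from the `ceph osd crush tree` output above, while the other two host weights are assumptions chosen only to sum to the 72.77390 root total:

```python
# Sketch: CRUSH distributes data proportionally to CRUSH weight, so each
# host is expected to fill to roughly the same fraction of its capacity.
# Only pf-us1-dfs1's weight is from the truncated tree output above; the
# other two are assumptions that sum to the 72.77390 root total.
host_weights = {
    "pf-us1-dfs1": 29.10956,
    "pf-us1-dfs2": 29.10956,  # assumption
    "pf-us1-dfs3": 14.55478,  # assumption: 2 OSDs x 7.27739
}
total = sum(host_weights.values())
for host, weight in host_weights.items():
    print(f"{host}: {weight / total:.1%} expected data share")
# prints 40.0%, 40.0%, 20.0%
```

If the third host really does carry only two OSDs, adding a disk there (or moving one from a larger node) evens out these shares, which is exactly the rebalancing Kevin describes.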
> while Ceph uses the smallest free space (worst OSD).
> Please check your (re-)weights.
>
> Kevin
>
> Am Di., 8. Jan. 2019 um 14:32 Uhr schrieb Rodrigo Embeita
> :
> >
> > Hi guys, I need your help.
> > I'm new with CephFS and we started using it as file storage.
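Kevin's point above is the key one: `df` reports aggregate free space, but Ceph stops accepting writes as soon as any single OSD crosses the full ratio. A minimal sketch of that behavior (not Ceph code; 0.95 is the default mon full ratio, and the usage numbers are illustrative, loosely modeled on this thread):

```python
# Illustrative sketch (not Ceph code): a cluster blocks writes once ANY
# OSD crosses the full ratio (0.95 by default), no matter how much free
# space 'df' reports in aggregate.
FULL_RATIO = 0.95

def writes_blocked(osd_usage):
    """osd_usage: mapping of OSD id -> fraction of capacity used."""
    return any(used >= FULL_RATIO for used in osd_usage.values())

print(writes_blocked({0: 0.9233, 5: 0.77, 6: 0.80}))  # False - but osd.0 is close
print(writes_blocked({0: 0.9233, 5: 0.96, 6: 0.80}))  # True - one full OSD is enough
```

This is why an unbalanced cluster can report "no space left on device" while most of its raw capacity is still free: the worst OSD, not the average, sets the limit.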
Hi Yoann, thanks for your response.
Here are the results of the commands.
root@pf-us1-dfs2:/var/log/ceph# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0 hdd 7.27739 1.0 7.3 TiB 6.7 TiB 571 GiB 92.33 1.74 310
5 hdd 7.27739 1.0 7.3 TiB 5.6 TiB 1.7 TiB
a6 0 B 0 0 B 0
default.rgw.log 7 0 B 0 0 B 207
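One note on reading the `ceph osd df` output above: if the VAR column is the ratio of an OSD's utilization to the cluster-wide mean utilization (which is how recent Ceph releases compute it — treat that as an assumption here), osd.0's 92.33% at VAR 1.74 implies the cluster averages only about 53% used, i.e. the data is badly skewed toward osd.0:

```python
# Sketch: interpreting the VAR column of 'ceph osd df' as the ratio of
# one OSD's %USE to the cluster-wide mean %USE (assumption; matches how
# recent Ceph releases document the column).
def implied_mean_use(use_pct, var):
    """Back out the cluster mean %USE from one OSD's %USE and VAR."""
    return use_pct / var

# osd.0 above shows %USE 92.33 with VAR 1.74:
print(round(implied_mean_use(92.33, 1.74), 1))  # ~53.1
```

An OSD running 74% above the mean like this is the usual sign that reweighting (or adding capacity to the light host) is needed.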
On Tue, Jan 8, 2019 at 10:30 AM Rodrigo Embeita wrote:
> Hi guys, I need your help.
> I'm new with CephFS and we started using it as file storage.
Hi guys, I need your help.
I'm new with CephFS and we started using it as file storage.
Today we are getting "No space left on device" errors, but I'm seeing that we
have plenty of space on the filesystem.
Filesystem Size Used Avail Use% Mounted on
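The "plenty of space" reading is the usual trap here: the filesystem-level number is aggregate free space, but with replication the writable headroom is governed by the fullest OSD. A rough sketch of that calculation — the sizes and usage figures are illustrative, the 0.95 full ratio is the Ceph default, and a replica count of 3 is assumed:

```python
# Rough sketch: effective write headroom under replication is capped by
# the fullest OSD, not by the aggregate free space a filesystem-level
# 'df' reports. Illustrative numbers; replica count of 3 is assumed.
FULL_RATIO = 0.95  # Ceph's default mon full ratio

def headroom_tib(osds, replicas=3):
    """osds: list of (size_tib, used_tib) pairs. Returns a rough writable
    TiB, assuming new data lands evenly so the fullest OSD caps it."""
    worst_free_frac = min(FULL_RATio - used / size for size, used in osds) \
        if False else min((FULL_RATIO - used / size) for size, used in osds)
    total_size = sum(size for size, _ in osds)
    return max(worst_free_frac, 0.0) * total_size / replicas

# Aggregate free space looks large (several TiB), but one OSD at ~92%
# leaves only a fraction of a TiB actually writable:
osds = [(7.3, 6.7), (7.3, 5.6), (7.3, 4.0)]
print(round(headroom_tib(osds), 2))
```

So a cluster can hit ENOSPC while `df` still shows terabytes free; rebalancing (or raising capacity on the hot OSD's host) is what restores writable space.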
Hi Daniel, thanks a lot for your help.
Do you know how I can recover the data in this scenario, since I lost
1 node with 6 OSDs?
My configuration had 12 OSDs (6 per host).
Regards
On Wed, Nov 21, 2018 at 3:16 PM Daniel Baumann wrote:
> Hi,
>
> On 11/21/2018 07:04 PM, Rodrigo Embeita wrote:
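On the recovery question above: whether the data survives losing a whole node depends on the pool's replica count and CRUSH failure domain, neither of which is shown in this thread. Assuming a replicated pool with one copy per host (the common setup for a 2-node cluster), a toy sketch of the arithmetic:

```python
# Toy sketch (assumptions: replicated pool with a host-level CRUSH
# failure domain, so at most one copy of each object per host; the
# actual pool settings aren't shown in this thread).
def surviving_copies(pool_size, hosts_lost):
    """Copies of each object left after losing whole hosts."""
    return max(pool_size - hosts_lost, 0)

print(surviving_copies(pool_size=2, hosts_lost=1))  # 1 copy: readable, no redundancy
print(surviving_copies(pool_size=1, hosts_lost=1))  # 0 copies: that data is gone
```

With one surviving copy the data is recoverable (Ceph re-replicates once replacement OSDs join); with zero, or with an `osd`-level failure domain that let both copies land on the dead host, the affected PGs cannot be rebuilt.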
Hi guys, maybe someone can help me.
I'm new with CephFS and I was testing the installation of Ceph Mimic with
ceph-deploy on 2 Ubuntu 16.04 nodes.
These two nodes have 6 OSD disks each.
I've installed CephFS and 2 MDS services.
The problem is that I copied a lot of data (15 million small files)