edictable.
Thanks for your help.
Best regards,
--
Yoann Moulin
EPFL IC-IT
active+clean+scrubbing
Is there any mechanism to increase the number of PGs automatically in such a
situation, or is this something to do manually?
Is 256 a good value in our case? We have 80 TB of data with more than 300M files.
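(For reference, Nautilus ships a pg_autoscaler mgr module that can grow pg_num
on its own; a minimal sketch, assuming the pool is named cephfs_data:)

  # enable the autoscaler (Nautilus and later)
  ceph mgr module enable pg_autoscaler
  # let it adjust pg_num for the pool, or use "warn" to only get advice
  ceph osd pool set cephfs_data pg_autoscale_mode on
  # review its current recommendations
  ceph osd pool autoscale-status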
Thank you for your help,
--
Yoann Moulin
EPFL IC-IT
Amudhan P wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am using a Ceph Nautilus cluster with the below configuration.
>>>>>>>
>>>>>>> 3 nodes (Ubuntu 18.04), each with 12 OSDs, plus mds, mon and mgr
key = XXX==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
But if I use the client.admin user, it works.
[client.admin]
key = ==
caps mds
monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
> 2020-04-02 12:44:59.900 7fd78a6a2700 -1 monclient(hunting):
> handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
> failed to fetch mon config (--no-mon-config to skip)
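(If it helps: since the mons only advertise cephx here, a first step is usually
to check that the key the client presents matches what the cluster stores; a
sketch, with client.myuser as a placeholder name:)

  # key and caps as stored by the cluster
  ceph auth get client.myuser
  # key the client actually presents
  cat /etc/ceph/ceph.client.myuser.keyring
  # retry explicitly with that identity and keyring
  ceph -s --id myuser --keyring /etc/ceph/ceph.client.myuser.keyring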
> [truncated 'ceph df' pool listing; recoverable: a pool with 186 objects and
> 102 MiB used, and pool device_health_metrics (id 12) with 145 objects and
> 1.2 MiB stored]
Yoann
> On Tue, 10 Mar 2020, Paul Emmerich wrote:
>
>> On Tue, Mar 10, 2020 at 8:18 AM Yoann Moulin wrote:
>>> I have added 3 new monitors on 3 VMs and I'd like to stop the 3 old
>>> monitor daemons. But as soon as I stop the 3rd old monitor, the cluster
>>> gets stuck
> ...@3(probing) e4 handle_auth_request failed to assign global_id
Did I miss something?
In attachment: some logs and ceph.conf
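(For what it's worth: old monitors normally have to be removed from the monmap
before their daemons are stopped for good; with six mons in the map, four must
be up for quorum, so stopping all three old ones wedges the cluster. A sketch,
with mon-old1..3 as placeholder names:)

  # drop each old monitor from the monmap, then stop its daemon
  ceph mon remove mon-old1
  ceph mon remove mon-old2
  ceph mon remove mon-old3
  # confirm the three new mons form quorum
  ceph quorum_status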
Thanks for your help.
Best,
--
Yoann Moulin
EPFL IC-IT
# Please do not change this file directly since it is managed by Ansible and
# will be overwritten
[global]
rados -p cephfs_metadata listxattr mds3_openfiles.0
> artemis@icitsrv5:~$ rados -p cephfs_metadata getomapheader mds3_openfiles.0
> header (42 bytes) :
> 0000  13 00 00 00 63 65 70 68 20 66 73 20 76 6f 6c 75  |....ceph fs volu|
> 0010  6d 65 20 76 30 31 31 01 01 0d 00 00              |me v011.....|
"data": "cephfs"
}
}
"cephfs_metadata"
{
"cephfs": {
"metadata": "cephfs"
}
}
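(For the archives: that per-pool application metadata can be inspected and set
from the CLI; a sketch using the usual cephfs pool names:)

  # show a pool's application metadata
  ceph osd pool application get cephfs_metadata
  # tag the pool for cephfs and set the key/value if missing
  ceph osd pool application enable cephfs_metadata cephfs
  ceph osd pool application set cephfs_metadata cephfs metadata cephfs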
Thanks a lot, that has fixed my issue!
Best,
--
Yoann Moulin
EPFL IC-IT
On 23.01.20 at 15:51, Ilya Dryomov wrote:
On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote:
Hello,
On a fresh install (Nautilus 14.2.6) deployed with the ceph-ansible playbook
stable-4.0, I have an issue with cephfs: I can create a folder and
create empty files, but cannot write data
...class-read object_prefix rbd_children, allow rw
pool=cephfs_data
I opened a bug on tracker : https://tracker.ceph.com/issues/43761
This is independent of the replication type of cephfs_data.
Yup, this is what I understood.
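(Side note: the least error-prone way to get client caps that can write file
data is to let Ceph generate them; a sketch, with client.foo as a placeholder:)

  # generates mds/mon/osd caps covering both the metadata and data pools
  ceph fs authorize cephfs client.foo / rw
  # inspect what was granted
  ceph auth get client.foo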
Yoann
________
From: Yoann Moulin
Sent: 2
s, not the same hw config (no SSD on the dslab2020 cluster), and cephfs_data is on an 8+3 EC pool on Artemis
(see the end of artemis.txt). In attachment, I put the results of the commands I ran on both clusters; they do not show the same behavior at the end.
Best,
Yoann
________
From: Yoann Moulin
...cephfs pool=cephfs_data "
I don't know where to look to get more information about that issue. Can anyone
help me? Thanks
Best regards,
--
Yoann Moulin
EPFL IC-IT
or mistakes to avoid? I use
ceph-ansible to deploy all my clusters.
Best regards,
--
Yoann Moulin
EPFL IC-IT
>> that the metadata is updated with the difference in the binaries.
>>
>> Caveats: a time-intensive process, almost like cutting a new release,
>> which takes about a day (and sometimes longer). Error-prone, since the
>> process wouldn't be the same (a one-off, just when a version needs to be
>> removed).
>>
>> Pros: all URLs for download.ceph.com and its structure are kept the same.
--
Yoann Moulin
EPFL IC-IT
> .4 is a bug-fix release for https://tracker.ceph.com/issues/41660
>
> There are no other changes besides this fix.
My reaction was not about this specific release but about this sentence: « Never
install packages until there is an announcement. » And also about
this one: « If you need to do instal
>>>>> Tue, 3 Sep 2019 11:28:20 +0200
>>>>> Yoann Moulin ==> ceph-users@ceph.io :
>>>>>> Is it better to put all WAL on one SSD and all DBs on the other one? Or
>>>>>> put WAL and DB of the first 5 OSDs on the first SSD an
On 04/09/2019 at 11:01, Lars Täuber wrote:
> Wed, 4 Sep 2019 10:32:56 +0200
> Yoann Moulin ==> ceph-users@ceph.io :
>> Hello,
>>
>>> Tue, 3 Sep 2019 11:28:20 +0200
>>> Yoann Moulin ==> ceph-users@ceph.io :
>>>> Is it better to put all WAL
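(For context, whichever split is chosen, each OSD ends up pointing at explicit
DB and WAL targets; a sketch with ceph-volume, using placeholder device and LV
names:)

  # hypothetical layout: DB LV on the first SSD's VG, WAL LV on the second's
  ceph-volume lvm create --data /dev/sdc \
      --block.db vg_ssd1/db-osd0 \
      --block.wal vg_ssd2/wal-osd0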
a read-only cluster to distribute public datasets over S3 inside
our network, it is fine for me if write operations are not fully
protected for a couple of days. All write operations are managed by us to
update the datasets.
But as mentioned above, 8+3 may be a good compromise.
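(A sketch of such a profile, with s3data as a placeholder pool name:)

  # define an 8+3 erasure-code profile and create a pool on top of it
  ceph osd erasure-code-profile set ec-8-3 k=8 m=3
  ceph osd pool create s3data 256 256 erasure ec-8-3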
Best,
Yoann
>
the next versions allow access to the data with the EC numbers.
I think it is still possible to set min_size = k in Nautilus, but it is not
recommended.
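(To make the numbers concrete: for an 8+3 pool, k = 8, and the safer setting is
min_size = k + 1 = 9, which keeps one failure of headroom while the pool stays
writable; a sketch, reusing the placeholder pool name s3data:)

  # check the current value, then raise it to k+1
  ceph osd pool get s3data min_size
  ceph osd pool set s3data min_size 9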
Best,
Yoann
> -----Original Message-----
> From: Yoann Moulin
> Sent: Tuesday, 3 September 2019 11:28
> To: ceph-users@ceph.io
in a mixed case?
It looks like I must configure LVM before running the playbook,
but I am not sure if I missed something.
Can wal_vg and db_vg be identical (one VG per SSD shared by multiple OSDs)?
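(For the LVM part, pre-creating one VG per SSD and one small LV per OSD is a
common pattern, as sketched below with placeholder device names:)

  # one VG per SSD, shared by several OSDs
  vgcreate vg_ssd1 /dev/sdb
  vgcreate vg_ssd2 /dev/sdc
  # one DB LV and one WAL LV per OSD (sizes are examples only)
  lvcreate -L 60G -n db-osd0 vg_ssd1
  lvcreate -L 2G -n wal-osd0 vg_ssd2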
Thanks for your help.
Best regards,
--
Yoann Moulin
EPFL IC-IT