[ceph-users] Re: adding mds service , unable to create keyring for mds

2022-09-14 Thread Xiubo Li



On 15/09/2022 03:09, Jerry Buburuz wrote:

Hello,

I am trying to add my first MDS service on a node, but I am unable to create the
keyring needed to start the MDS service.

$ sudo ceph auth get-or-create mds.mynode mon 'profile mds' mgr 'profile
mds' mds 'allow *' osd 'allow *'

Error EINVAL: key for mds.mynode exists but cap mds does not match


It says the key mds.mynode already exists. What's the output of
`ceph auth ls`?
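
If the key really does exist with different caps, one way to reconcile it
(a rough sketch, reusing the mds.mynode entity name and the caps from your
own command) is:

# Show what is currently stored for the existing entity
ceph auth get mds.mynode
# Either update its caps in place to what get-or-create expected ...
ceph auth caps mds.mynode mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *'
# ... or, if the old key is not needed anywhere, remove it and re-run get-or-create
ceph auth rm mds.mynode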


Thanks!



I tried this command on the storage nodes and the admin (monitor) nodes, with the same error.

ceph mds stat
cephfs:0

This makes sense, since I don't have any MDS services running yet.

I had no problem creating keyrings for other services like monitors and mgr.

Thanks
jerry






___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-14 Thread Neha Ojha
Hi Yuri,

On Wed, Sep 14, 2022 at 8:02 AM Adam King  wrote:
>
> orch suite failures fall under
> https://tracker.ceph.com/issues/49287
> https://tracker.ceph.com/issues/57290
> https://tracker.ceph.com/issues/57268
> https://tracker.ceph.com/issues/52321
>
> For rados/cephadm the failures are both
> https://tracker.ceph.com/issues/57290
>
> Overall, nothing new/unexpected. orch approved.
>
>
>
> On Tue, Sep 13, 2022 at 4:03 PM Yuri Weinstein  wrote:
>
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/57472#note-1
> > Release Notes - https://github.com/ceph/ceph/pull/48072
> >
> > Seeking approvals for:
> >
> > rados - Neha, Travis, Ernesto, Adam

We noticed a new failure in your rados run, which did not appear in
your rerun. We decided to investigate further (thanks, Laura, for the
extra validation!), and it appears that a recently merged PR,
https://github.com/ceph/ceph/pull/47901, is intermittently failing one
of our tests; more details are in https://tracker.ceph.com/issues/57546.

There is no urgency to ship the PR in this release, so we will revert
it for now and investigate in parallel. The amount of validation
required for the revert PR should be minimal.

All the other failures in your rados runs are known issues.
- https://tracker.ceph.com/issues/57346
- https://tracker.ceph.com/issues/44595
- https://tracker.ceph.com/issues/57496

Thanks,
Neha






> > rgw - Casey
> > fs - Venky
> > orch - Adam
> > rbd - Ilya, Deepika
> > krbd - missing packages, Adam Kr is looking into it
> > upgrade/octopus-x - missing packages, Adam Kr is looking into it
> > ceph-volume - Guillaume is looking into it
> >
> > Please reply to this email with approval and/or trackers of known
> > issues/PRs to address them.
> >
> > Josh, Neha - LRC upgrade pending major suites approvals.
> > RC release - pending major suites approvals.
> >
> > Thx
> > YuriW
> >
> > ___
> > Dev mailing list -- d...@ceph.io
> > To unsubscribe send an email to dev-le...@ceph.io
> >
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] adding mds service , unable to create keyring for mds

2022-09-14 Thread Jerry Buburuz
Hello,

I am trying to add my first MDS service on a node, but I am unable to create the
keyring needed to start the MDS service.

$ sudo ceph auth get-or-create mds.mynode mon 'profile mds' mgr 'profile
mds' mds 'allow *' osd 'allow *'

Error EINVAL: key for mds.mynode exists but cap mds does not match

I tried this command on the storage nodes and the admin (monitor) nodes, with the same error.

ceph mds stat
cephfs:0

This makes sense, since I don't have any MDS services running yet.

I had no problem creating keyrings for other services like monitors and mgr.

Thanks
jerry





___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-14 Thread Adam King
orch suite failures fall under
https://tracker.ceph.com/issues/49287
https://tracker.ceph.com/issues/57290
https://tracker.ceph.com/issues/57268
https://tracker.ceph.com/issues/52321

For rados/cephadm the failures are both
https://tracker.ceph.com/issues/57290

Overall, nothing new/unexpected. orch approved.



On Tue, Sep 13, 2022 at 4:03 PM Yuri Weinstein  wrote:

> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/57472#note-1
> Release Notes - https://github.com/ceph/ceph/pull/48072
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs - Venky
> orch - Adam
> rbd - Ilya, Deepika
> krbd - missing packages, Adam Kr is looking into it
> upgrade/octopus-x - missing packages, Adam Kr is looking into it
> ceph-volume - Guillaume is looking into it
>
> Please reply to this email with approval and/or trackers of known
> issues/PRs to address them.
>
> Josh, Neha - LRC upgrade pending major suites approvals.
> RC release - pending major suites approvals.
>
> Thx
> YuriW
>
> ___
> Dev mailing list -- d...@ceph.io
> To unsubscribe send an email to dev-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Manual deployment, documentation error?

2022-09-14 Thread Ranjan Ghosh
Hi Eugen,

thanks for your answer. I don't want to use the cephadm tool because it
needs Docker. I don't like that, because it's total overkill for our small
3-node cluster. I'd like to avoid the added complexity, added packages,
everything. It's just another thing I would have to learn in detail in case
things go wrong.

The monitor service is running, but the logs don't say anything :-( Okay,
but at least I now know that it should work in principle without the mgr.

Thank you
Ranjan

Eugen Block wrote on 14.09.2022 at 15:26:
> Hi,
>
>> I'm currently trying the manual deployment because ceph-deploy
>> unfortunately doesn't seem to exist anymore, and under step 19 it says
>> you should run "sudo ceph -s". That doesn't seem to work. I guess this
>> is because the manager service isn't running yet, right?
>
> ceph-deploy was deprecated quite some time ago; if you want to use a
> deployment tool, try cephadm [1]. The command 'ceph -s' does not depend
> on the mgr but on the mon service. So if that doesn't work, you need to
> check the mon logs and see if the mon service is up and running.
>
>> Interestingly, the screenshot under step 19 says "mgr: mon-node1(active)".
>> If you follow the documentation step by step, there's no mention of the
>> manager node up until that point.
>
> Right after the screenshot you mentioned, there's a section for the mgr
> service [2].
>
> Regards,
> Eugen
>
> [1] https://docs.ceph.com/en/quincy/cephadm/install/
> [2]
> https://docs.ceph.com/en/quincy/install/manual-deployment/#manager-daemon-configuration
>
>
> Quoting Ranjan Ghosh:
>
>> Hi all,
>>
>> I think there's an error in the documentation:
>>
>> https://docs.ceph.com/en/quincy/install/manual-deployment/
>>
>> I'm currently trying the manual deployment because ceph-deploy
>> unfortunately doesn't seem to exist anymore, and under step 19 it says
>> you should run "sudo ceph -s". That doesn't seem to work. I guess this
>> is because the manager service isn't running yet, right?
>>
>> Interestingly, the screenshot under step 19 says "mgr: mon-node1(active)".
>> If you follow the documentation step by step, there's no mention of the
>> manager node up until that point.
>>
>> Thank you / BR
>> Ranjan
>>
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-14 Thread Guillaume Abrioux
The ceph-volume failure seems valid. I need to investigate.

thanks

On Wed, 14 Sept 2022 at 11:12, Ilya Dryomov  wrote:

> On Tue, Sep 13, 2022 at 10:03 PM Yuri Weinstein 
> wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/57472#note-1
> > Release Notes - https://github.com/ceph/ceph/pull/48072
> >
> > Seeking approvals for:
> >
> > rados - Neha, Travis, Ernesto, Adam
> > rgw - Casey
> > fs - Venky
> > orch - Adam
> > rbd - Ilya, Deepika
>
> rbd approved.
>
> > krbd - missing packages, Adam Kr is looking into it
>
> It seems like a transient issue to me, I would just reschedule.
>
> Thanks,
>
> Ilya
>
>

-- 

*Guillaume Abrioux*
Senior Software Engineer
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Manual deployment, documentation error?

2022-09-14 Thread Eugen Block

Hi,


I'm currently trying the manual deployment because ceph-deploy
unfortunately doesn't seem to exist anymore, and under step 19 it says
you should run "sudo ceph -s". That doesn't seem to work. I guess this
is because the manager service isn't running yet, right?


ceph-deploy was deprecated quite some time ago; if you want to use a
deployment tool, try cephadm [1]. The command 'ceph -s' does not depend
on the mgr but on the mon service. So if that doesn't work, you need to
check the mon logs and see if the mon service is up and running.
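
A minimal sketch of that check on the monitor host (assuming a
package-based install with the default cluster name, and a mon id equal
to the short hostname):

# Check whether the mon daemon is running and look at its recent log output
sudo systemctl status ceph-mon@$(hostname -s)
sudo journalctl -u ceph-mon@$(hostname -s) --since "1 hour ago"
# The plain log file is an alternative if journald is not collecting the logs
sudo tail -n 100 /var/log/ceph/ceph-mon.$(hostname -s).log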



Interestingly, the screenshot under step 19 says "mgr: mon-node1(active)".
If you follow the documentation step by step, there's no mention of the
manager node up until that point.


Right after the screenshot you mentioned, there's a section for the mgr
service [2].
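
For completeness, a rough sketch of what that section boils down to (the
daemon name mon-node1 comes from the screenshot and is only a placeholder;
please verify the exact caps and paths against the linked page):

# Create a key for a mgr daemon named mon-node1
sudo ceph auth get-or-create mgr.mon-node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
# Store the keyring where ceph-mgr expects it
sudo mkdir -p /var/lib/ceph/mgr/ceph-mon-node1
sudo ceph auth get mgr.mon-node1 -o /var/lib/ceph/mgr/ceph-mon-node1/keyring
sudo chown -R ceph:ceph /var/lib/ceph/mgr/ceph-mon-node1
# Start the daemon
sudo systemctl enable --now ceph-mgr@mon-node1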


Regards,
Eugen

[1] https://docs.ceph.com/en/quincy/cephadm/install/
[2]  
https://docs.ceph.com/en/quincy/install/manual-deployment/#manager-daemon-configuration


Quoting Ranjan Ghosh:


Hi all,

I think there's an error in the documentation:

https://docs.ceph.com/en/quincy/install/manual-deployment/

I'm currently trying the manual deployment because ceph-deploy
unfortunately doesn't seem to exist anymore, and under step 19 it says
you should run "sudo ceph -s". That doesn't seem to work. I guess this
is because the manager service isn't running yet, right?

Interestingly, the screenshot under step 19 says "mgr: mon-node1(active)".
If you follow the documentation step by step, there's no mention of the
manager node up until that point.

Thank you / BR
Ranjan

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io




___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Manual deployment, documentation error?

2022-09-14 Thread Ranjan Ghosh
Hi all,

I think there's an error in the documentation:

https://docs.ceph.com/en/quincy/install/manual-deployment/

I'm currently trying the manual deployment because ceph-deploy
unfortunately doesn't seem to exist anymore, and under step 19 it says
you should run "sudo ceph -s". That doesn't seem to work. I guess this
is because the manager service isn't running yet, right?

Interestingly, the screenshot under step 19 says "mgr: mon-node1(active)".
If you follow the documentation step by step, there's no mention of the
manager node up until that point.

Thank you / BR
Ranjan

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread Janne Johansson
On Wed, 14 Sep 2022 at 11:08, gagan tiwari wrote:
> Yes, to start with we only have one HP server with DAS, which I am planning
> to set up Ceph on. We can add one more server later.
>
> But I think you are correct. I will use ZFS file systems on it and NFS export
> all the data to all clients. So, please advise whether I should use RAID6
> with ZFS / NFS or not.
>

I already did in my first reply:

"A smaller point is that for both zfs and ceph, it is not advisable to
first raid the separate drives and then present them to the
filesystem/network, but rather give zfs/ceph each individual disk to
handle it at a higher level."

> And yes, the clients have a local 1T SSD. How can I set up local caching on
> the NFS clients?

Check out the "fsc" option for NFSv4+ clients
(https://man7.org/linux/man-pages/man5/nfs.5.html) and the cachefilesd
program (https://github.com/jnsnow/cachefilesd/blob/master/howto.txt).
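
A minimal sketch of the client side (the package name, export and mount
point below are placeholders; adjust for your distro and server):

# Install and start the cache daemon that backs the "fsc" mount option
sudo dnf install cachefilesd          # RockyLinux; use apt on Debian/Ubuntu
sudo systemctl enable --now cachefilesd
# Mount the NFSv4 export with client-side caching enabled
sudo mount -t nfs4 -o fsc,vers=4.2 nfs-server:/export/data /mnt/data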


-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: quincy v17.2.4 QE Validation status

2022-09-14 Thread Ilya Dryomov
On Tue, Sep 13, 2022 at 10:03 PM Yuri Weinstein  wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/57472#note-1
> Release Notes - https://github.com/ceph/ceph/pull/48072
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs - Venky
> orch - Adam
> rbd - Ilya, Deepika

rbd approved.

> krbd - missing packages, Adam Kr is looking into it

It seems like a transient issue to me, I would just reschedule.

Thanks,

Ilya
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.10 Cephfs with CTDB, Samba running on Ubuntu

2022-09-14 Thread Rafael Diaz Maurin

Hello,

We recently built a similar configuration here, with a clustered Samba/CTDB
on top of CephFS (under Pacific) via LXC containers (Rocky Linux) under
Proxmox (7.2), for 35,000 users authenticated against an Active Directory.


It's used for personal homedirs and shared directories.
The LXC Proxmox Samba containers mount the CephFS automatically.
We expose the CephFS snapshots on the clients (Windows or Linux).
The management of the groups is delegated via grouper.
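
For anyone wanting to reproduce this, a minimal sketch of the CephFS mount
on a Samba node (the monitor names, the client name "samba" and the paths
are placeholders; the Samba/CTDB configuration itself is not shown):

# Kernel CephFS mount; snapshots then show up under .snap inside the mounted tree
sudo mount -t ceph mon1,mon2,mon3:/ /srv/cephfs -o name=samba,secretfile=/etc/ceph/samba.secret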


Regards,
--
Rafael


On 06/09/2022 at 19:44, Marco Pizzolo wrote:

Hello Everyone,

We are looking at clustering Samba with CTDB to have highly available
access to CephFS for clients.

I wanted to see how others have implemented this, and what their experiences have been so far.

I would welcome all feedback and, of course, if you happen to have any
documentation on what you did so that we can test it out the same way,
that would be fantastic.

Many thanks.
Marco
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io



--
Rafael Diaz Maurin
DSI de l'Université de Rennes 1
Pôle Gestion des Infrastructures, Équipe Systèmes

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread gagan tiwari
Yes, to start with we only have one HP server with DAS, which I am planning
to set up Ceph on. We can add one more server later.

But I think you are correct. I will use ZFS file systems on it and NFS
export all the data to all clients. So, please advise whether I should use
RAID6 with ZFS / NFS or not.

And yes, the clients have a local 1T SSD. How can I set up local caching on
the NFS clients?

Thanks,
Gagan

On Wed, Sep 14, 2022 at 2:20 PM Janne Johansson  wrote:

> On Wed, 14 Sep 2022 at 10:14, gagan tiwari wrote:
> >
> > Sorry, I meant SSD (solid-state disks).
>
>
> >> > We have a HP storage server with 12 SDD of 5T each and have set-up
> hardware
> >> > RAID6 on these disks.
> >>
> >> You have only one single machine?
> >> If so, run zfs on it and export storage as NFS.
>
> The disk type doesn't really change this part, which we still await
> the answer to.
>
> --
> May the most significant bit of your life be positive.
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread Janne Johansson
On Wed, 14 Sep 2022 at 10:14, gagan tiwari wrote:
>
> Sorry, I meant SSD (solid-state disks).


>> > We have a HP storage server with 12 SDD of 5T each and have set-up hardware
>> > RAID6 on these disks.
>>
>> You have only one single machine?
>> If so, run zfs on it and export storage as NFS.

The disk type doesn't really change this part, which we still await
the answer to.

-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Increasing number of unscrubbed PGs

2022-09-14 Thread Burkhard Linke

Hi,

On 9/13/22 16:33, Wesley Dillingham wrote:
what does "ceph pg ls scrubbing" show? Do you have PGs that have been 
stuck in a scrubbing state for a long period of time (many 
hours,days,weeks etc). This will show in the "SINCE" column.



The deep scrubs have been running for a few minutes to about 2 hours,
which seems to be OK (PGs in the large data pool have a size of ~290 GB).


The only suspicious values are run times of several hours for the cephfs
metadata and primary data pools (the latter only used for the xattr entries,
no actual data stored). But those are on SSD/NVMe storage and, according to
the timestamps, have been scrubbed within the last few days.



Is it possible to get a full list of all affected PGs? 'ceph health 
detail' only displays 50 entries.
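
(For the record, one way to get past the 50-entry limit is to dump the PG
stats and filter them yourself; a rough sketch, assuming jq is available
and that this release keeps the stats under pg_map.pg_stats in the JSON
dump; adjust the field path and the cutoff date as needed:)

# List PGs whose last deep scrub is older than the chosen cutoff date
ceph pg dump --format json 2>/dev/null | jq -r \
  '.pg_map.pg_stats[] | select(.last_deep_scrub_stamp < "2022-09-01") | "\(.pgid) \(.last_deep_scrub_stamp)"'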



Best regards,

Burkhard Linke


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread gagan tiwari
Sorry, I meant SSD (solid-state disks).

Thanks,
Gagan

On Wed, Sep 14, 2022 at 12:49 PM Janne Johansson 
wrote:

> On Wed, 14 Sep 2022 at 08:54, gagan tiwari wrote:
> > Hi Guys,
> > I am new to Ceph and storage. We have a requirement of
> > managing around 40T of data which will be accessed by around 100 clients
> > all running RockyLinux9.
> >
> > We have a HP storage server with 12 SDD of 5T each and have set-up
> hardware
> > RAID6 on these disks.
>
> You have only one single machine?
> If so, run zfs on it and export storage as NFS.
>
> >  HP storage server has 64G RAM and 18 cores.
> >
> > So, please advise how I should go about setting up Ceph on it to have
> best
> > read performance. We need fastest read performance.
>
> With NFSv4.x you can have local caching in the NFS client, which might
> help a lot for read performance if those 100 clients also have local drives.
>
> The reason I am not advocating ceph in this case is that ceph is built
> to have many servers feed data to many clients (or many processes
> doing separate reads), while you seem to have a "single-server" setup. In
> this case, the overhead of the ceph protocol will lower the
> performance compared to "simpler" solutions like NFS, which are not
> designed to scale in the way ceph is.
>
> A smaller point is that for both zfs and ceph, it is not advisable to
> first raid the separate drives and then present them to the
> filesystem/network, but rather give zfs/ceph each individual disk to
> handle it at a higher level. But compared to the "do I have one server or
> many servers to serve file IO" question, it is a small thing.
>
> --
> May the most significant bit of your life be positive.
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread Janne Johansson
On Wed, 14 Sep 2022 at 08:54, gagan tiwari wrote:
> Hi Guys,
> I am new to Ceph and storage. We have a requirement of
> managing around 40T of data which will be accessed by around 100 clients
> all running RockyLinux9.
>
> We have a HP storage server with 12 SDD of 5T each and have set-up hardware
> RAID6 on these disks.

You have only one single machine?
If so, run zfs on it and export storage as NFS.

>  HP storage server has 64G RAM and 18 cores.
>
> So, please advise how I should go about setting up Ceph on it to have best
> read performance. We need fastest read performance.

With NFSv4.x you can have local caching in the NFS client, which might
help a lot for read performance if those 100 clients also have local drives.

The reason I am not advocating ceph in this case is that ceph is built
to have many servers feed data to many clients (or many processes
doing separate reads), while you seem to have a "single-server" setup. In
this case, the overhead of the ceph protocol will lower the
performance compared to "simpler" solutions like NFS, which are not
designed to scale in the way ceph is.

A smaller point is that for both zfs and ceph, it is not advisable to
first raid the separate drives and then present them to the
filesystem/network, but rather give zfs/ceph each individual disk to
handle it at a higher level. But compared to the "do I have one server or
many servers to serve file IO" question, it is a small thing.
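
A rough sketch of what that could look like for the box described above
(the device names and the export network are placeholders; raidz2 gives
the same two-disk redundancy as RAID6 while letting ZFS manage each disk):

# Build a raidz2 pool straight from the 12 raw disks instead of a hardware RAID6 volume
sudo zpool create -o ashift=12 tank raidz2 /dev/sd[b-m]
# Create a dataset and export it over NFS directly from ZFS
sudo zfs create tank/data
sudo zfs set sharenfs='rw=@10.0.0.0/24' tank/data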

-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph deployment best practice

2022-09-14 Thread Jarett
Did you mean SSD? 12 x 5TB solid-state disks? Or is that “Spinning Disk
Drive?” Do you have any SSDs/NVMe you can use?

From: gagan tiwari
Sent: Wednesday, September 14, 2022 1:54 AM
To: ceph-users@ceph.io
Subject: [ceph-users] ceph deployment best practice

Hi Guys,
    I am new to Ceph and storage. We have a requirement of
managing around 40T of data which will be accessed by around 100 clients
all running RockyLinux9.

We have a HP storage server with 12 SDD of 5T each and have set-up hardware
RAID6 on these disks.  HP storage server has 64G RAM and 18 cores.

So, please advise how I should go about setting up Ceph on it to have best
read performance. We need fastest read performance.

Thanks,
Gagan
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io