On Tuesday, December 9, 2025 11:45:43 AM Eastern Standard Time Michael Sudnick
via ceph-users wrote:
> Sorry for the wall of text. I'm getting an error when trying to access SMB
> shares from a Linux client, or even when listing shares as an authenticated
> user (an anonymous user can list shares). Tests are performed with:
>
> podman run --rm -it quay.io/samba.org/samba-client:latest
>
> # smbclient -N -L 10.0.150.77
> Anonymous login successful
>
> Sharename Type Comment
> --------- ---- -------
> IPC$ IPC IPC Service (Samba 4.23.2)
> media Disk
> SMB1 disabled -- no workgroup available
>
> However, when I specify a user, I get the following error from the client
> with this command:
> # smbclient -d 10 -U media%media -L 10.0.150.77
>
> gensec_update_send: ntlmssp[0x55a593b774c0]: subreq: 0x55a593b72b80
> gensec_update_send: spnego[0x55a593b75800]: subreq: 0x55a593b8b1e0
> gensec_update_done: ntlmssp[0x55a593b774c0]: NT_STATUS_OK
> tevent_req[0x55a593b72b80/../../auth/ntlmssp/ntlmssp.c:189]: state[2] error[0 (0x0)] state[struct gensec_ntlmssp_update_state (0x55a593b72d60)] timer[(nil)] finish[../../auth/ntlmssp/ntlmssp.c:231]
> gensec_update_done: spnego[0x55a593b75800]: NT_STATUS_MORE_PROCESSING_REQUIRED
> tevent_req[0x55a593b8b1e0/../../auth/gensec/spnego.c:1614]: state[2] error[0 (0x0)] state[struct gensec_spnego_update_state (0x55a593b8b3c0)] timer[(nil)] finish[../../auth/gensec/spnego.c:2109]
> SPNEGO login failed: The attempted logon is invalid. This is either due to a bad username or authentication information.
> session setup failed: NT_STATUS_LOGON_FAILURE
> Freeing parametrics:
>
> My cluster and share definitions are as follows. It looks like a few
> remnants of various attempts at getting it working are still present:
>
> # ceph smb show
> {
> "resources": [
> {
> "resource_type": "ceph.smb.cluster",
> "cluster_id": "smb",
> "auth_mode": "user",
> "intent": "present",
> "user_group_settings": [
> {
> "source_type": "resource",
> "ref": "smbeskkuhxm"
> }
> ],
> "placement": {
> "count": 5
> },
> "clustering": "always",
> "public_addrs": [
> {
> "address": "10.0.150.77/16"
> }
> ]
> },
> {
> "resource_type": "ceph.smb.share",
> "cluster_id": "smb",
> "share_id": "media",
> "intent": "present",
> "name": "media",
> "readonly": false,
> "browseable": true,
> "cephfs": {
> "volume": "cephfs",
> "path": "/",
> "subvolumegroup": "smb",
> "subvolume": "media",
> "provider": "samba-vfs"
> },
> "login_control": [
> {
> "name": "media",
> "category": "user",
> "access": "admin"
> }
> ]
> },
> {
> "resource_type": "ceph.smb.usersgroups",
> "users_groups_id": "smbeskkuhxm",
> "intent": "present",
> "values": {
> "users": [
> {
> "name": "media",
> "password": "media"
> }
> ],
> "groups": []
> },
> "linked_to_cluster": "smb"
> }
> ]
> }
>
> -Michael Sudnick
Thank you for trying the SMB support out! Also, thanks for providing the JSON
so I know what your configuration generally looks like.
I adapted the JSON to one of my own clusters (changed the IPs and placement)
and deployed it. I was able to connect to the share on my cluster:
smbclient -U 'media%media' //192.168.76.202/media
Try "help" to get a list of possible commands.
smb: \>
So I'm not entirely sure what has happened on your cluster.
If I had reproduced the error, I would first have checked whether running
smbclient from inside the smb container image produced the same result, and
then whether the users were created correctly inside the container. Some
example commands:
(on a ceph cluster node)
# cephadm enter -i smb smbclient -U 'media%media' //localhost/media -c ls
# cephadm enter -i smb getent passwd media
# cephadm enter -i smb pdbedit -L
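In case it helps with the commands above and with log digging, here's a rough
sketch of how I'd find the running smb daemons and pull their logs. The daemon
name below is made up; substitute whatever your cluster actually reports:

(on a ceph cluster node)
# ceph orch ps | grep smb                      # find the smb daemon names and hosts
# cephadm logs --name smb.smb.host1.abcdefgh   # hypothetical daemon name, use one reported above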
You could also try to redeploy with smbd logging cranked way up and see if
anything interesting appears in the logs. For now you can add:

{"custom_smb_global_options": {"log level": "10",
 "_allow_customization": "i-take-responsibility-for-all-samba-configuration-errors"}}

to the cluster resource JSON.
(We're going to make enabling debug logging easier in future versions FWIW)
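For reference, here's a rough sketch of what the edited cluster resource could
look like and how I'd feed it back in. This is adapted from the JSON you
posted, so double-check it against your actual config before applying:

{
  "resource_type": "ceph.smb.cluster",
  "cluster_id": "smb",
  "auth_mode": "user",
  "intent": "present",
  "user_group_settings": [
    {"source_type": "resource", "ref": "smbeskkuhxm"}
  ],
  "placement": {"count": 5},
  "clustering": "always",
  "public_addrs": [{"address": "10.0.150.77/16"}],
  "custom_smb_global_options": {
    "log level": "10",
    "_allow_customization": "i-take-responsibility-for-all-samba-configuration-errors"
  }
}

Saved to, say, cluster.json and applied with something like:

# ceph smb apply -i cluster.json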
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]