Re: [Gluster-users] Gluster problems permission denied LOOKUP () /etc/samba/private/msg.sock

2018-10-05 Thread Diego Remolina
Hi,

Thanks for the reply!

This was set up a few years ago and was working OK, even when failing
back to this server. However, we had not failed over to this server since
the latest samba upgrades, so I am not sure whether the new samba and
ctdb packages include a change that is creating the issue.

samba-libs-4.7.1-9.el7_5.x86_64
samba-client-libs-4.7.1-9.el7_5.x86_64
samba-common-tools-4.7.1-9.el7_5.x86_64
samba-common-4.7.1-9.el7_5.noarch
samba-common-libs-4.7.1-9.el7_5.x86_64
samba-vfs-glusterfs-4.7.1-9.el7_5.x86_64
samba-4.7.1-9.el7_5.x86_64

Our approach may not be the right way to do it, so I am going to
investigate your suggestion and find out whether it works for us. I do
need your help with answers to some questions below.

A bit of explanation of the current setup: both servers, ysmha01 and
ysmha02, are joined to AD using sssd. We are not using winbindd at all.

For each server we created a machine account in AD, and we also created
a computer account for the "shared" host name, so we have these 3
computer objects in AD:
ysmha01 10.0.0.6
ysmha02 10.0.0.7
ysmserver 10.0.0.1 (this ip is handled by ctdb)
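
For reference, the floating address lives in ctdb's public addresses
file; a minimal sketch, assuming a /24 netmask and an interface name of
em1 (both made up):

# /etc/ctdb/public_addresses
10.0.0.1/24 em1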

We are not controlling smb with ctdb (we manage it manually).

Both ysmha01 and ysmha02 were joined to AD using: realm join domain -v
--unattended

Then we modified the sssd.conf file as follows:

http://termbin.com/wulh

We then restarted sssd, and looking up users and groups works fine.
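
For reference, a minimal sketch of the relevant sssd.conf pieces (the
domain name is a stand-in and this is from memory; the full file is in
the termbin link above):

[sssd]
domains = example.com
services = nss, pam

[domain/example.com]
id_provider = ad
access_provider = ad
# use the uidNumber/gidNumber attributes populated in AD rather than
# sssd's automatic ID mapping
ldap_id_mapping = False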

We populate uidNumbers and gidNumbers for all users and groups in AD, so
the permissions work.
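
A quick sanity check that the POSIX attributes come through (the user
name is made up; exact output depends on sssd's name formatting):

$ id jdoe
uid=1051(jdoe) gid=513(domain users) groups=513(domain users)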

Then we configured samba to join the domain using the ysmserver machine
account with a password only (not a keytab). To keep the samba
information available to both servers, we used the configuration:

private dir = /export/etc/samba/private
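
In context, a minimal sketch of the relevant [global] settings (hedged
and from memory; the full config is in the termbin link below):

[global]
    netbios name = ysmserver
    security = ADS
    # shared location on the gluster volume so both nodes see the same
    # secrets.tdb and related files
    private dir = /export/etc/samba/private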

Since this is an unconventional setup, could you explain the process of
using sssd together with joining the machine to the AD domain for samba?
I am not quite sure I understand how to do that after having used sssd
first. On the occasions when I set ysmha01 and ysmha02 as the netbios
name in smb.conf and then ran net ads join after realm join, it simply
overwrote the keytab and sssd stopped working. This is why we ended up
with the setup above. If you could point me to a good process, including
the smb.conf and how to join the machines to the domain, that would be
appreciated.

This is the current samba config. For the Projects share I had to
disable the glusterfs VFS because I had issues with one specific type of
file, but it would be really nice if I could clean all of this up and get
it working properly with the glusterfs VFS for all shares:

http://termbin.com/2f64
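
For context, a share using the glusterfs VFS normally looks roughly like
this (the share name and paths are stand-ins; the real config is in the
link above):

[projects]
    path = /projects
    vfs objects = glusterfs
    glusterfs:volume = export
    glusterfs:logfile = /var/log/samba/glusterfs-projects.log
    kernel share modes = no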

After replacing the motherboard on ysmha02 and bringing it back up last
night, things seem to be working fine so far, but I still see the gluster
error messages below and want to fix this so everything runs as it
should:

[2018-10-05 13:41:21.279685] I [MSGID: 139001]
[posix-acl.c:269:posix_acl_log_permit_denied] 0-posix-acl-autoload:
client: -, gfid: 5b5bed22-ace0-410d-8623-4f1a31069b81,
req(uid:1058,gid:513,perm:1,ngrps:3), ctx(uid:0,gid:0,in-groups:0,
perm:700,updated-fop:LOOKUP, acl:-) [Permission denied]
[2018-10-05 13:41:21.279758] W [fuse-bridge.c:490:fuse_entry_cbk]
0-glusterfs-fuse: 10521075: LOOKUP() /etc/samba/private/msg.sock/6945
=> -1 (Permission denied)
[2018-10-05 13:41:21.279827] W [fuse-bridge.c:490:fuse_entry_cbk]
0-glusterfs-fuse: 10521076: LOOKUP() /etc/samba/private/msg.sock/6945
=> -1 (Permission denied)

The link you sent is broken, but I think it should be:

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/administration_guide/#sect-SMB_CTDB

Thanks

Diego


On Thu, Oct 4, 2018, 09:16 Poornima Gurusiddaiah 
wrote:

>
>
> On Tue, Oct 2, 2018 at 5:26 PM Diego Remolina  wrote:
>
>> Dear all,
>>
>> I have a two-node setup running on CentOS with gluster version
>> glusterfs-3.10.12-1.el7.x86_64
>>
>> One of my nodes died (motherboard issue). Since I had to stay up, I
>> lowered the quorum to below 50% to make sure I could still run on one
>> server.
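>>
>> For reference, one way to lower the quorum below 50% (a hedged example;
>> this assumes server quorum is what is configured, and the exact value
>> used may differ):
>>
>> gluster volume set all cluster.server-quorum-ratio 49%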
>>
>> The server runs oVirt and 2 VMs on top of a volume called vmstorage. I
>> also had a third node in the peer list, but never configured it as an
>> arbiter, so it just shows up in gluster v status. The server also runs
>> samba as a file server, serving files to Windows machines.
>>
>> The issue is that since starting the server on its own as the samba
>> server, I have been seeing permission denied errors for the "export"
>> volume in /var/log/glusterfs/export.log
>>
>> The errors look like this and repeat over and over:
>>
>> [2018-10-02 11:46:56.327925] I [MSGID: 139001]
>> [posix-acl.c:269:posix_acl_log_permit_denied] 0-posix-acl-autoload:
>> client: -, gfid: 5b5bed22-ace0-410d-8623-4f1a31069b81,
>> req(uid:1051,gid:513,perm:1,ngrps:2),
>> ctx(uid:0,gid:0,in-groups:0,perm:700,updated-fop:LOOKUP, acl:-)
>> [Permission denied]
>> [2018-10-02 11:46:56.328004] W [fuse-bridge.c:490:fuse_entry_cbk]
>> 0-glusterfs-fuse: 20599112: LOOKUP() 

Re: [Gluster-users] sharding in glusterfs

2018-10-05 Thread Krutika Dhananjay
Hi,

Apologies for the late reply; my email filters are messed up and I
missed reading this.

Answers to the questions about the shard algorithm are inline below ...

On Sun, Sep 30, 2018 at 9:54 PM Ashayam Gupta 
wrote:

> Hi Pranith,
>
> Thanks for your reply. It would be helpful if you could assist us with
> the following questions about sharding.
> The gluster version we are using is *glusterfs 4.1.4* on Ubuntu 18.04.1
> LTS.
>
>
>- *Shards-Creation Algo*: We were interested in understanding how
>shards are distributed across bricks and nodes. Is it round-robin or
>some other algorithm, and can we change this mechanism via some config
>file? E.g. if we have 2 nodes, each with 2 bricks, for a total of 4
>(2*2) bricks, how will the shards be distributed? Will the distribution
>always be even? (The volume type in this case is plain.)
>
>-  *Sharding+Distributed-Volume*: Currently we are using a plain volume
>with sharding enabled, and we do not see an even distribution of shards
>across bricks. Can we use sharding with a distributed volume to achieve
>a better, more even distribution of shards? It would be helpful if you
>could suggest the most efficient way of using sharding. Our goal is an
>evenly distributed file system (we have large files, hence sharding),
>and we are not concerned with replication as of now.
>
> I think Raghavendra already answered the two questions above.

>
>- *Shard-Block-Size*: If we change the *features.shard-block-size*
>value from X -> Y after lots of data has been populated, how does this
>affect the existing shards? Are they automatically converted to the new
>size, do we need to run some commands to get this done, or is this
>change even recommended?
>
Existing files will retain their shard-block-size. shard-block-size is a
property of a file that is set at the time the file is created (in the
form of an extended attribute "trusted.glusterfs.shard.block-size") and
remains the same throughout the lifetime of the file.
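
For example, the attribute can be read off the file on the brick backend;
a hedged check (the brick path and file name are made up):

getfattr -n trusted.glusterfs.shard.block-size -e hex \
    /bricks/brick1/images/vm1.img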

If you want the shard-block-size to change across these files, you'll
need to perform either of the two steps below (see the sketch that
follows):

1. move the existing files out of your glusterfs volume to a local fs and
then move them back into the volume, or
2. copy the existing files to temporary filenames on the same volume and
then rename them back to their original names.

In our tests with the VM store workload, we've found a 64MB
shard-block-size to be a good fit for both IO and self-heal performance.
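
A hedged sketch of option 2, plus changing the volume default first (the
volume name and paths are made up):

# the new default applies only to files created after this point
gluster volume set myvol features.shard-block-size 64MB

# rewrite an existing file so it picks up the new size (option 2 above)
cp /mnt/myvol/images/big.img /mnt/myvol/images/big.img.tmp
mv /mnt/myvol/images/big.img.tmp /mnt/myvol/images/big.img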


>- *Rebalance-Shard*: As per the docs, whenever we add a new server/node
>to the existing cluster we need to run the rebalance command. We would
>like to know whether there are any known issues with rebalancing when
>sharding is enabled.
>
We did find some shard/DHT interoperability issues with rebalance in the
past, again in the supported VM storage use case. The good news is that
the problems known to us have been fixed, but their validation is still
pending.
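
For reference, the rebalance itself is started and monitored with the
usual commands (the volume name is a stand-in):

gluster volume rebalance myvol start
gluster volume rebalance myvol status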


> We would highly appreciate it if you could point us to the latest
> sharding docs; we tried to search but could not find anything better
> than this:
> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/shard/
>

The doc is still valid (except for minor changes in the To-Do list at the
bottom). But I agree, the answers to all of the questions you asked above
are well worth documenting. I'll fix this. Thanks for the feedback.

Let us know if you have any more questions or if you run into any
problems. Happy to help.

Also, since yours is a non-VM storage use case, I'd suggest trying shard
on a test cluster first before putting it into production. :)

-Krutika


> Thanks
> Ashayam
>
>
> On Thu, Sep 20, 2018 at 7:47 PM Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>
>>
>>
>> On Wed, Sep 19, 2018 at 11:37 AM Ashayam Gupta <
>> ashayam.gu...@alpha-grep.com> wrote:
>>
>>> Please find our workload details, as requested:
>>>
>>> * Only 1 write mount point as of now
>>> * Read mounts: since we auto-scale our machines, this can be as many
>>> as 300-400 machines during peak times
>>> * Regarding "multiple concurrent reads means that reads will not
>>> happen until the file is completely written to": yes, in our current
>>> scenario we can ensure that this is indeed the case.
>>>
>>> But when you say it only supports a single-writer workload, we would
>>> like to understand the following scenarios with respect to multiple
>>> writers and the current behaviour of glusterfs with sharding:
>>>
>>>- Multiple writers writing to different files
>>>
>> When I say multiple writers, I mean multiple mounts. Since you said
>> earlier that there is only one mount doing all the writes, everything
>> should work as expected.
>>
>>>
>>>- Multiple writers writing to the same file
>>>   - they write to the same file but to different shards of it
>>>   - they write to the same file (no guarantee that they write to
>>>   different shards)
>>>
>>> As long as