[gpfsug-discuss] Protocol limits

2020-12-09 Thread leslie elliott
hi all

we run a large number of shares from CES servers connected to a single
scale cluster
we understand the current supported limit is 1000 SMB shares; we run the
same number of NFS shares

we also understand that using an external CES cluster to increase that
limit is not supported based on the documentation. We use the same
authentication for all shares, and we have additional use cases for sharing
where this pathway would be attractive going forward

so the question becomes: if we need to run 2000 SMB and NFS shares off a
Scale cluster, is there any hardware design we can use to do this whilst
maintaining support
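
For anyone wanting to check their own numbers, the export counts can be
pulled straight from the CES CLI, along these lines (a rough sketch; the
list output includes header lines, so adjust the counting to your release):

# list the SMB exports defined on the CES nodes
mmsmb export list
# list the NFS (Ganesha) exports
mmnfs export list
# rough counts of each
mmsmb export list | wc -l
mmnfs export list | wc -l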

I have submitted a support request to ask if this can be done but thought I
would ask the collective good if this has already been solved

thanks

leslie
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] dependent versus independent filesets

2020-07-07 Thread leslie elliott
as long as you currently do not need more than 1000 on a filesystem

On Wed, 8 Jul 2020 at 04:20, Daniel Kidger  wrote:

> It is worth noting that Independent Filesets are a relatively recent
> addition to Spectrum Scale, compared to Dependent Filesets. They have
> solved some of the limitations of the latter.
>
>
> My view would be to always use Independent Filesets unless there is a
> particular reason to use Dependent ones.
>
> Daniel
>
> _
> *Daniel Kidger Ph.D.*
> IBM Technical Sales Specialist
> Spectrum Scale, Spectrum Discover  and IBM Cloud Object Store
>
> +44-(0)7818 522 266
> daniel.kid...@uk.ibm.com
>
> - Original message -
> From: "Frederick Stock" 
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> To: gpfsug-discuss@spectrumscale.org
> Cc: gpfsug-discuss@spectrumscale.org
> Subject: [EXTERNAL] Re: [gpfsug-discuss] dependent versus independent
> filesets
> Date: Tue, Jul 7, 2020 17:25
>
> One comment about inode preallocation.  There was a time when inode
> creation was performance challenged but in my opinion that is no longer the
> case, unless you have need for file creates to complete at extreme speed.
> In my experience it is the rare customer that requires extremely fast file
> create times so pre-allocation is not truly necessary.  As was noted once
> an inode is allocated it cannot be deallocated.  The more important item is
> the maximum inodes defined for a fileset or file system.  Yes, those do
> need to be monitored so they can be increased if necessary to avoid out of
> space errors.
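>
> For example, something along these lines will show and raise the limits (a
> rough sketch with made-up names fs0/fset1; check the syntax against your
> release):
>
> # show per-fileset inode limits and current allocations
> mmlsfileset fs0 -L
> # raise the maximum (and optionally preallocated) inodes for one fileset
> mmchfileset fs0 fset1 --inode-limit 2000000:500000
> # raise the overall file system inode limit if needed
> mmchfs fs0 --inode-limit 20000000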
>
> Fred
> __
> Fred Stock | IBM Pittsburgh Lab | 720-430-8821
> sto...@us.ibm.com
>
>
>
> - Original message -
> From: "Wahl, Edward" 
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> To: gpfsug main discussion list 
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] dependent versus independent
> filesets
> Date: Tue, Jul 7, 2020 11:59 AM
>
> We also went with independent filesets for both backup (and quota) reasons
> for several years now, and have stuck with this across to 5.x.  However we
> still maintain a small number of dependent filesets for administrative use.
> Being able to mmbackup on many filesets at once can increase your
> parallelization _quite_ nicely!  We create and delete the individual snaps
> before and after each backup, as you may expect.  Just be aware that if you
> do massive numbers of fast snapshot deletes and creates you WILL reach a
> point where you will run into issues due to quiescing compute clients, and
> that certain types of workloads have issues with snapshotting in general.
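>
> As an illustration, one backup cycle for a single fileset looks roughly
> like this (a sketch with made-up names fs0/fset1; adjust paths and options
> to your TSM setup):
>
> # snapshot just the independent fileset
> mmcrsnapshot fs0 backupsnap -j fset1
> # back up the fileset from the snapshot, scoped to its own inode space
> mmbackup /gpfs/fs0/fset1 -S backupsnap --scope inodespace -t incremental
> # drop the snapshot once the backup completes
> mmdelsnapshot fs0 backupsnap -j fset1
>
> Run one of these per fileset in parallel and you get the parallelization
> mentioned above.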
>
> You have to more closely watch what you pre-allocate, and what you have
> left in the common metadata/inode pool.  Once allocated, even if not being
> used, you cannot reduce the inode allocation without removing the fileset
> and re-creating.  (say a fileset user had 5 million inodes and now only
> needs 500,000)
>
> Growth can also be an issue if you do NOT fully pre-allocate each space.
> This can be scary if you are not used to over-subscription in general.  But
> I imagine that most sites have some decent % of oversubscription if they
> use filesets and quotas.
>
> Ed
> OSC
>
> -Original Message-
> From: gpfsug-discuss-boun...@spectrumscale.org <
> gpfsug-discuss-boun...@spectrumscale.org> On Behalf Of Skylar Thompson
> Sent: Tuesday, July 7, 2020 10:00 AM
> To: gpfsug-discuss@spectrumscale.org
> Subject: Re: [gpfsug-discuss] dependent versus independent filesets
>
> We wanted to be able to snapshot and backup filesets separately with
> mmbackup, so went with independent filesets.
>
> On Tue, Jul 07, 2020 at 08:37:46AM -0500, Damir Krstic wrote:
> > We are deploying our new ESS and are considering moving to independent
> > filesets. The snapshot per fileset feature appeals to us.
> >
> > Has anyone considered independent vs. dependent filesets and what was
> > your reasoning to go with one as opposed to the other? Or perhaps you
> > opted to have both on your filesystem, and if, what was the reasoning
> for it?
> >
> > Thank you.
> > Damir
>
> > ___
> > gpfsug-discuss mailing list
> > gpfsug-discuss at spectrumscale.org
> > http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
> --
> -- Skylar Thompson (skyl...@u.washington.edu)
> -- Genome Sciences Department (UW Medicine), System Administrator
> -- Foege Building S046, (206)-685-7354
> -- Pronouns: He/Him/His
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss

[gpfsug-discuss] afmHashVersion

2020-04-04 Thread leslie elliott
I was wondering if there was any more information on the different values
for afmHashVersion

the default value is 2 but if we want to assign an afmGateway to
a fileset we need a value of 5

is there likely to be any performance degradation because of this change

do the home cluster and the cache cluster both have to be set to 5 for the
fileset allocation to gateways

just trying to find a little more information before we try this on a
production system with a large number of AFM independent filesets
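
in case it helps frame the question, the change we are contemplating is
roughly the following (a sketch only, based on my reading of the docs; fs0,
cache01 and gw1 are placeholder names, and I have not confirmed whether a
daemon recycle is needed for the config change to take effect)

# on the cache cluster, move to the newer gateway hashing scheme
mmchconfig afmHashVersion=5
# then pin a given cache fileset to a specific gateway node
mmchfileset fs0 cache01 -p afmGateway=gw1
# confirm which gateway each fileset ended up on
mmafmctl fs0 getstate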

leslie
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] SMB server on GPFS clients and Followsymlinks

2018-05-15 Thread leslie elliott
you might want to read the license details of GPFS before you try to do this :)

pretty sure you need a server license to re-export the files from a GPFS
mount

On 16 May 2018 at 08:00,  wrote:

> Hello All,
>
> Has anyone tried serving SMB exports of GPFS mounts from an SMB server on a
> GPFS client? Is it supported and does it lead to any issues?
> I understand that I will not need a redundant SMB server configuration.
>
> I could use CES, but CES does not support follow-symlinks outside the
> respective SMB export. Follow-symlinks is, however, a hard requirement for
> us to follow links outside GPFS filesystems.
>
> Thanks,
> Lohit
>
>
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] transparent cloud tiering

2017-10-03 Thread leslie elliott
hi

I am trying to change the account for the cloud tier
but am having some problems

any hints would be appreciated

I am not interested in the data, either local or migrated, but I do not seem
to be able to recall it, so I would just like to repurpose the tier with the
new account

I can see in the logs
2017-10-03_15:38:49.226+1000: [W] Snapshot quiesce of SG cloud01 snap -1/0
doing 'mmcrsnapshot :MCST.scan.6' timed out on node . Retrying if
possible.


which is no doubt the reason for the following


mmcloudgateway account delete --cloud-nodeclass  TCTNodeClass --cloud-name
gpfscloud1234
mmcloudgateway: Sending the command to the first successful node starting
with gpfs-dev02
mmcloudgateway: This may take a while...
mmcloudgateway: Error detected on node gpfs-dev02
The return code is 94.  The error is:

MCSTG00084E: Command Failed with following reason: Unable to create
snapshot for file system /gpfs/itscloud01, [Ljava.lang.String;@3353303e
failed with: com.ibm.gpfsconnector.messages.GpfsConnectorException: Command
[/usr/lpp/mmfs/bin/mmcrsnapshot, cloud01, MCST.scan.4] failed with the
following return code: 78..

mmcloudgateway: Sending the command to the next node gpfs-dev04
mmcloudgateway: Error detected on node gpfs-dev04
The return code is 94.  The error is:

MCSTG00084E: Command Failed with following reason: Unable to create
snapshot for file system /gpfs/cloud01, [Ljava.lang.String;@90a887ad failed
with: com.ibm.gpfsconnector.messages.GpfsConnectorException: Command
[/usr/lpp/mmfs/bin/mmcrsnapshot, cloud01, MCST.scan.6] failed with the
following return code: 78..

mmcloudgateway: Command failed. Examine previous error messages to
determine cause.
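
For anyone looking at the same error, the obvious places to look first seem
to be the following (rough notes only, not an official TCT procedure;
cloud01 is the file system name from the log above):

# look for leftover MCST.scan.* snapshots from earlier attempts
mmlssnapshot cloud01
# check for long waiters that could be blocking the snapshot quiesce
mmdiag --waiters
# remove a stale scan snapshot if one is still present
mmdelsnapshot cloud01 MCST.scan.4
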
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] AFM over NFS

2017-07-19 Thread leslie elliott
we are having a problem linking a target to a fileset

we are able to manually connect with NFSv4 to the correct path on an NFS
export down a particular subdirectory path, but when we create a
fileset with this same path as an afmTarget it connects with NFSv3 and
actually connects to the top of the export even though mmafmctl displays
the extended path information

are we able to tell AFM to connect with NFSv4 in any way to work around
this problem
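
for reference, the cache-side view of the target can be checked with
something like the following (a rough sketch; fs0 and fset1 stand in for our
real names):

# show the configured afmTarget and AFM parameters for the fileset
mmlsfileset fs0 fset1 --afm -L
# show the fileset state and which gateway is mounting the target
mmafmctl fs0 getstate -j fset1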

the NFS comes from a closed system, we can not change the configuration on
it to fix the problem on the target

thanks

leslie
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] CES permissions

2017-01-20 Thread leslie elliott
Hi

we have an existing configuration with a home-cache relationship on
linked clusters, and we are running CES on the cache cluster.

When data is copied to an SMB share the AFM target for the cache is
marked dirty and the replication back to the home cluster stops.

both clusters are running 4.2.1

We have seen this behaviour whether the ACLs on the home cluster file
system are NFSv4 only or POSIX and NFSv4

the cache cluster is NFSv4 only so that we can use CES on it for SMB.

We are using uid remapping between the cache and the home

can anyone suggest why the cache is marked dirty and how we can get around
this issue
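
for reference, the state and queue can be checked and flushed from the cache
side with something like this (a rough sketch; fs0 and cache01 are
placeholder names):

# show the fileset state (Active/Dirty) and the queue length
mmafmctl fs0 getstate -j cache01
# ask AFM to push any queued changes back to home
mmafmctl fs0 flushPending -j cache01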

the other thing we would like to do is force group and POSIX file
permissions via Samba, but these are not supported options in the CES
installation of Samba

any help is appreciated

leslie

Leslie Elliott, Infrastructure Support Specialist
Information Technology Services, The University of Queensland
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] HAWC and LROC

2016-11-05 Thread leslie elliott
Hi, I am curious if anyone has run these together on a client and whether
it helped

If we wanted to have these functions out at the client to optimise compute
IO in a couple of special cases

can both exist at the same time on the same nonvolatile hardware or do the
two functions need independent devices

and what would be the process to disestablish them on the clients as the
requirement was satisfied
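
for context, my understanding of how each piece is set up on a client is
roughly the following (a sketch only, from my reading of the docs, with
made-up device and node names, so please correct me if this is wrong)

# LROC: describe the client's local flash in a stanza file (lroc.stanza)
%nsd:
  device=/dev/nvme0n1p1
  nsd=client01_lroc
  servers=client01
  usage=localCache

# then create the NSD from the stanza
mmcrnsd -F lroc.stanza

# HAWC: enable the write cache on the file system (setting 0 disables it)
mmchfs fs0 --write-cache-threshold 64K

so in principle separate partitions of the same NVMe device could back the
two functions with separate NSDs, and backing them out would mean removing
the localCache NSD and setting the threshold back to 0, but that is exactly
the sort of thing I am hoping someone can confirm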

thanks

leslie
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] unified file and object

2016-11-02 Thread leslie elliott
Sorry I don't have an install log

This is a DDN installation, so while I believe they used the Spectrum Scale
toolkit I cannot confirm this


Thanks

Leslie

On Thursday, 3 November 2016, Bill Owen <billo...@us.ibm.com> wrote:

> > now that I have yours for reference I have updated the file and the
> service starts, but I am unsure why it was not provisioned correctly
> initially
> Do you have the log from the original installation? Did you install using
> the spectrumscale install toolkit?
>
> Thanks,
> Bill Owen
> billo...@us.ibm.com
> Spectrum Scale Object Storage
> 520-799-4829
>
>
>
> From: leslie elliott <leslie.james.elli...@gmail.com>
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date: 11/02/2016 03:00 PM
> Subject: Re: [gpfsug-discuss] unified file and object
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> --
>
>
>
> Bill
>
> you are correct about it missing details
>
>
> [root@pren-gs7k-vm4 ~]# cat /etc/swift/object-server-sof.conf
> [DEFAULT]
> devices = /gpfs/pren01/ObjectFileset/o
> log_level = ERROR
>
>
>
> now that I have yours for reference I have updated the file and the
> service starts, but I am unsure why it was not provisioned correctly
> initially
>
> leslie
>
>
> On 3 November 2016 at 00:28, Bill Owen <billo...@us.ibm.com> wrote:
>
>Hi Leslie,
>Can you also send the /etc/swift/object-server-sof.conf file from this
>system?
>
>Here is a sample of the file from my working system - it sounds like
>the config file may not be complete on your system:
>[root@spectrumscale ~]# cat /etc/swift/object-server-sof.conf
>[DEFAULT]
>bind_ip = 127.0.0.1
>bind_port = 6203
>workers = 3
>mount_check = false
>log_name = object-server-sof
>log_level = ERROR
>id_mgmt = unified_mode
>retain_acl = yes
>retain_winattr = yes
>retain_xattr = yes
>retain_owner = yes
>tempfile_prefix = .ibmtmp_
>disable_fallocate = true
>log_statsd_host = localhost
>log_statsd_port = 8125
>log_statsd_default_sample_rate = 1.0
>log_statsd_sample_rate_factor = 1.0
>log_statsd_metric_prefix =
>devices = /gpfs/fs1/object_fileset/o
>
>[pipeline:main]
>pipeline = object-server
>
>[app:object-server]
>use = egg:swiftonfile#object
>disk_chunk_size = 65536
>network_chunk_size = 65536
>
>[object-replicator]
>
>[object-updater]
>
>[object-auditor]
>
>    [object-reconstructor]
>
>
>Bill Owen
> billo...@us.ibm.com
>Spectrum Scale Object Storage
>520-799-4829
>
>
>
>From: leslie elliott <leslie.james.elli...@gmail.com>
>To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
>Date: 10/29/2016 03:53 AM
>Subject: Re: [gpfsug-discuss] unified file and object
>Sent by: gpfsug-discuss-boun...@spectrumscale.org
>--
>
>
>
>Bill
>
>to be clear the file access  I mentioned was in relation to SMB and
>NFS using mmuserauth rather than the unification with the object store
>since it is required as well
>
>but I did try to do this for object as well using the Administration
>and Programming Reference from page 142, was using unified_mode rather than
>local_mode
>
>mmobj config change --ccrfile spectrum-scale-object.conf --section
>capabilities --property file-access-enabled --value true

Re: [gpfsug-discuss] unified file and object

2016-11-02 Thread leslie elliott
Bill

you are correct about it missing details


[root@pren-gs7k-vm4 ~]# cat /etc/swift/object-server-sof.conf
[DEFAULT]
devices = /gpfs/pren01/ObjectFileset/o
log_level = ERROR



now that I have yours for reference I have updated the file and the service
starts, but I am unsure why it was not provisioned correctly initially
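
for anyone else hitting this, the piece that was missing was everything
after [DEFAULT]; what I added is essentially the pipeline and app sections
from your sample (quoted below in your reply), i.e. at minimum something
like:

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swiftonfile#object
disk_chunk_size = 65536
network_chunk_size = 65536

[object-replicator]

[object-updater]

[object-auditor]

[object-reconstructor]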

leslie


On 3 November 2016 at 00:28, Bill Owen <billo...@us.ibm.com> wrote:

> Hi Leslie,
> Can you also send the /etc/swift/object-server-sof.conf file from this
> system?
>
> Here is a sample of the file from my working system - it sounds like the
> config file may not be complete on your system:
> [root@spectrumscale ~]# cat /etc/swift/object-server-sof.conf
> [DEFAULT]
> bind_ip = 127.0.0.1
> bind_port = 6203
> workers = 3
> mount_check = false
> log_name = object-server-sof
> log_level = ERROR
> id_mgmt = unified_mode
> retain_acl = yes
> retain_winattr = yes
> retain_xattr = yes
> retain_owner = yes
> tempfile_prefix = .ibmtmp_
> disable_fallocate = true
> log_statsd_host = localhost
> log_statsd_port = 8125
> log_statsd_default_sample_rate = 1.0
> log_statsd_sample_rate_factor = 1.0
> log_statsd_metric_prefix =
> devices = /gpfs/fs1/object_fileset/o
>
> [pipeline:main]
> pipeline = object-server
>
> [app:object-server]
> use = egg:swiftonfile#object
> disk_chunk_size = 65536
> network_chunk_size = 65536
>
> [object-replicator]
>
> [object-updater]
>
> [object-auditor]
>
> [object-reconstructor]
>
>
> Bill Owen
> billo...@us.ibm.com
> Spectrum Scale Object Storage
> 520-799-4829
>
>
> [image: Inactive hide details for leslie elliott ---10/29/2016 03:53:48
> AM---Bill to be clear the file access I mentioned was in relat]leslie
> elliott ---10/29/2016 03:53:48 AM---Bill to be clear the file access I
> mentioned was in relation to SMB and NFS
>
> From: leslie elliott <leslie.james.elli...@gmail.com>
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date: 10/29/2016 03:53 AM
> Subject: Re: [gpfsug-discuss] unified file and object
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> --
>
>
>
> Bill
>
> to be clear the file access  I mentioned was in relation to SMB and NFS
> using mmuserauth rather than the unification with the object store since it
> is required as well
>
> but I did try to do this for object as well using the Administration and
> Programming Reference from page 142, was using unified_mode rather than
> local_mode
>
> mmobj config change --ccrfile spectrum-scale-object.conf --section
> capabilities --property file-access-enabled --value true
>
> the mmuserauth failed as you are aware, we have created test accounts
> without spaces in the DN and were successful with this step, so eagerly
> await a fix to be able to use the correct accounts
>
> mmobj config change --ccrfile object-server-sof.conf --section DEFAULT
> --property id_mgmt --value unified_mode
> mmobj config change --ccrfile object-server-sof.conf --section DEFAULT
> --property ad_domain --value DOMAIN
>
>
> we have successfully tested object stores on this cluster with simple auth
>
>
> the output you asked for is as follows
>
> [root@pren-gs7k-vm4 ~]# cat /etc/swift/object-server-sof.conf
> [DEFAULT]
> devices = /gpfs/pren01/ObjectFileset/o
> log_level = ERROR
>
>
> [root@pren-gs7k-vm4 ~]# systemctl -l status openstack-swift-object-sof
> ● openstack-swift-object-sof.service - OpenStack Object Storage (swift) -
> Object Server
>Loaded: loaded (/usr/lib/systemd/system/openstack-swift-object-sof.service;
> disabled; vendor preset: disabled)
>Active: failed (Result: exit-code) since Sat 2016-10-29 10:30:22 UTC;
> 27s ago
>   Process: 8086 ExecStart=/usr/bin/swift-object-server-sof
> /etc/swift/object-server-sof.conf (code=exited, status=1/FAILURE)
>  Main PID: 8086 (code=exited, status=1/FAILURE)
>
> Oct 29 10:30:22 pren-gs7k-vm4 systemd[1]: Started OpenStack Object Storage
> (swift) - Object Server.
> Oct 29 10:30:22 pren-gs7k-vm4 systemd[1]: Starting OpenStack Object
> Storage (swift) - Object Server...
> Oct 29 10:30:22 pren-gs7k-vm4 swift-object-server-sof[8086]: Error trying
> to load config from /etc/swift/object-server-sof.conf: No section
> 'object-server' (prefixed by 'app' or 'application' or 'composite' or
> 'composit' or 'pipeline' or 'filter-app') found in config
> /etc/swift/object-server-sof.conf
> Oct 29 10:30:22 pren-gs7k-vm4 systemd[1]: openstack-swift-object-sof.service:
> main process exited, code=exited, status=1/FAILURE
> Oct 29 10:30:22 pren-gs7k-vm4 systemd[1]: Unit 
> openstack-swift-object-sof.service
> entered failed state.
> Oct 29 10:30:22 pren-gs7k-vm4 systemd[1]: openstack-swift-object-sof.service
> failed.

Re: [gpfsug-discuss] unified file and object

2016-10-29 Thread leslie elliott
Bill

to be clear the file access  I mentioned was in relation to SMB and NFS
using mmuserauth rather than the unification with the object store since it
is required as well

but I did try to do this for object as well using the Administration and
Programming Reference from page 142, was using unified_mode rather than
local_mode

mmobj config change --ccrfile spectrum-scale-object.conf --section
capabilities --property file-access-enabled --value true

the mmuserauth failed as you are aware, we have created test accounts
without spaces in the DN and were successful with this step, so eagerly
await a fix to be able to use the correct accounts

mmobj config change --ccrfile object-server-sof.conf --section DEFAULT
--property id_mgmt --value unified_mode
mmobj config change --ccrfile object-server-sof.conf --section DEFAULT
--property ad_domain --value DOMAIN


we have successfully tested object stores on this cluster with simple auth


the output you asked for is as follows

[root@pren-gs7k-vm4 ~]# cat /etc/swift/object-server-sof.conf
[DEFAULT]
devices = /gpfs/pren01/ObjectFileset/o
log_level = ERROR


[root@pren-gs7k-vm4 ~]# systemctl -l status openstack-swift-object-sof
● openstack-swift-object-sof.service - OpenStack Object Storage (swift) -
Object Server
   Loaded: loaded
(/usr/lib/systemd/system/openstack-swift-object-sof.service; disabled;
vendor preset: disabled)
   Active: failed (Result: exit-code) since Sat 2016-10-29 10:30:22 UTC;
27s ago
  Process: 8086 ExecStart=/usr/bin/swift-object-server-sof
/etc/swift/object-server-sof.conf (code=exited, status=1/FAILURE)
 Main PID: 8086 (code=exited, status=1/FAILURE)

Oct 29 10:30:22 pren-gs7k-vm4 systemd[1]: Started OpenStack Object Storage
(swift) - Object Server.
Oct 29 10:30:22 pren-gs7k-vm4 systemd[1]: Starting OpenStack Object Storage
(swift) - Object Server...
Oct 29 10:30:22 pren-gs7k-vm4 swift-object-server-sof[8086]: Error trying
to load config from /etc/swift/object-server-sof.conf: No section
'object-server' (prefixed by 'app' or 'application' or 'composite' or
'composit' or 'pipeline' or 'filter-app') found in config
/etc/swift/object-server-sof.conf
Oct 29 10:30:22 pren-gs7k-vm4 systemd[1]:
openstack-swift-object-sof.service: main process exited, code=exited,
status=1/FAILURE
Oct 29 10:30:22 pren-gs7k-vm4 systemd[1]: Unit
openstack-swift-object-sof.service entered failed state.
Oct 29 10:30:22 pren-gs7k-vm4 systemd[1]:
openstack-swift-object-sof.service failed.




I am happy to work with you on a short call to debug this problem


thanks

leslie



On 29 October 2016 at 00:37, Bill Owen  wrote:

>
> 2. Can you provide more details on how you configured file access? The
> normal procedure is to use "mmobj file-access enable", and this will set up
> the required settings in the config file. Can you send us:
> - the steps used to configure file access
> - the resulting /etc/swift/object-server-sof.conf
> - log files from /var/log/swift or output of "systemctl status
> openstack-swift-object-sof"
>
> We can schedule a short call to help debug if needed.
>
>
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
>
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


[gpfsug-discuss] unified file and object

2016-10-25 Thread leslie elliott
Hi

We are in the process of trying to configure unified file and object
storage in unified_mode and have a few problems

We are running 4.2.1 and do not have any current issues with the file
access protocols or with setting up their authentication

The first issue we have is binding the object service to our Active
Directory. We seem to be hitting a roadblock because the bind DN has spaces
in it: if we enclose the DN in quotes it still fails, and if we escape the
spaces with the appropriate RFC value we can get mmuserauth to complete,
but the lookups from the local keystone then fail when authenticating the
users

The DNs for the swift user and swift admin also contain spaces, so just
quoting them on the command line is not enough to get the mmuserauth
command to complete
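
for concreteness, the shape of the command we are attempting is roughly the
following (a sketch only - the DN, password and server names are made up,
several object-related options are omitted, and the exact flag set should
be checked against the mmuserauth documentation for 4.2.1):

BIND_DN='CN=Swift Service Account,OU=Service Accounts,DC=example,DC=com'
mmuserauth service create --data-access-method object --type ad \
  --servers ad1.example.com \
  --base-dn 'DC=example,DC=com' \
  --user-name "$BIND_DN" --password '********'

single quotes at least get the DN through the shell as one argument; the
failures described above happen beyond that point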

Second problem is

OBJ:openstack-swift-object-sof   is not running

This seems to be due to the config file not having bind_ip and bind_port
values; if these are added, the error changes to complaints about the
pipeline and other settings missing from the config file

This particular issue occurs no matter what the auth type is set to be for
object

Hopefully this makes some sense to someone

Thanks

leslie


Leslie Elliott, Infrastructure Support Specialist,  Faculty Infrastructure
and Applications Support
Information Technology Services, The University of Queensland
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss