On 6/23/21 11:52 PM, Wayne Sawdon wrote:
At a higher level, what are you trying to do? Include some directories and exclude others? Use a
different backup server? What do you need that is not there?
The current situation is that a customer insists on using different management classes on the TS
Call it like this:
/usr/lpp/mmfs/bin/mmbackup --tsm-servers -P
As this is non-trivial, it should be mentioned in the documentation!
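For the record, a complete invocation would look something like the sketch below; the filesystem name, server stanza name and policy path are made-up placeholders, not values from this thread:

```
/usr/lpp/mmfs/bin/mmbackup /gpfs/fs0 --tsm-servers tsmserver1 -P /var/mmfs/etc/mybackup.policy
```

mmbackup takes the filesystem (or fileset path) as its first argument; -P points it at the custom policy file in question.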
Uli
--
Dipl.-Inf. Ulrich Sibiller science + computing ag
System Administration
Hagello
On 6/23/21 1:08 PM, T.A. Yeep wrote:
You can refer to the Administrator Guide > Chapter 30 > Policies for automating file management, or
access via the link below. If you downloaded a PDF, it starts with page 487.
https://www.ibm.com/docs/en/spectrum-scale/5.1.0?topic=management-policy-rules
Hallo,
mmbackup offers -P to specify your own policy. Unfortunately I cannot find documentation on what
that policy has to look like.
I mean, if I grab the policy generated automatically by mmbackup it looks like
this:
-
/* Auto-gener
On 6/2/21 4:12 PM, IBM Spectrum Scale wrote:
The data and metadata replications are 2 on both source and destination
filesystems, so from:
$ mmrepquota -j srcfilesys | grep fileset
srcfileset FILESET 800 800 800 0 none | 863 0 0 0
On 6/2/21 1:09 PM, Jonathan Buzzard wrote:
My rsync is using -AHS, so this should not be relevant here.
I wonder have you done more than one rsync? If so are you using --delete?
If not and the source fileset has changed then you will be accumulating
files at the destination and it would explai
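To make the point concrete, a hedged sketch of an rsync invocation that both preserves ACLs/hard links/sparseness and removes files that were deleted at the source (paths are placeholders; trailing slashes matter):

```
rsync -aAHS --delete --numeric-ids /src/fileset/ /dst/fileset/
```

Without --delete, repeated runs only ever add and update files, so the destination can grow past the source over time.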
On 6/1/21 6:08 PM, Kumaran Rajaram wrote:
If I'm not mistaken, even with SS5-created filesystems a 1 MiB FS block size
implies 32 KiB sub-blocks (32 sub-blocks per block).
Just to add: The /srcfilesys seemed to have been created with GPFS version 4.x
which supports only 32 sub-blocks per block.
-T
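The 4.x relation mentioned above (32 sub-blocks per block) can be sanity-checked with trivial arithmetic; this is just the math, not GPFS output:

```python
# GPFS 4.x supports only 32 sub-blocks per block, so the
# sub-block size is simply block size divided by 32.
block_size = 1024 * 1024           # 1 MiB filesystem block, in bytes
subblocks_per_block = 32
subblock_size = block_size // subblocks_per_block
print(subblock_size // 1024)       # sub-block size in KiB → 32
```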
Hi,
I experience some strangeness that I fail to understand completely. I have a fileset that got copied
(rsynced) from one cluster to another. The reported size (mmrepquota) of the source filesystem is
800G (and due to data and metadata replication being set to 2 this effectively means 400G).
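The size accounting above can be written out explicitly (numbers taken from the text; mmrepquota reports usage including replication):

```python
# With data and metadata replication set to 2, the reported
# quota usage is twice the logical data actually stored.
reported_gib = 800
replication_factor = 2
logical_gib = reported_gib / replication_factor
print(logical_gib)  # → 400.0
```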
WEIGHT(Length(PATH_NAME))
WHERE PATH_NAME LIKE '/mypath/%' AND MISC_ATTRIBUTES LIKE '%D%'
Thanks, I am aware of that but it will not really help with my speed concerns.
Uli
--
Dipl.-Inf. Ulrich Sibiller science + computing ag
System Administration
s being deleted :-)
;-) Yeah, that's why I did not do the mv in the first place ;-)
Thanks,
Uli
--
Dipl.-Inf. Ulrich Sibiller science + computing ag
System Administration
Hagellocher Weg 73
Hotline +49 7071 9457 681 7207
Hello *,
I have to delete a subtree of about ~50 million files in thousands of subdirs, ~14TB of data.
Running a recursive rm is very slow so I setup a simple policy file:
RULE 'delstuff' DELETE
DIRECTORIES_PLUS
WHERE PATH_NAME LIKE '/mypath/%'
This kinda works but is not really fas
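A rule like this is normally driven through mmapplypolicy, which parallelizes the directory scan across nodes; a sketch follows, where the node list, thread count and bucket size are assumptions to tune for your cluster, not recommendations:

```
mmapplypolicy /mypath -P delete.pol -N node1,node2 -m 24 -B 1000 -I yes -L 1
```

-N spreads the scan and the deletes over several nodes, -m and -B control per-node parallelism and batch size, and -I yes actually executes the rule (use -I test first to see what would match).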
On 22.03.21 10:54, Jan-Frode Myklebust wrote:
No, all copying between filesets requires a full data copy. No simple rename.
This might be worthy of an RFE, as it's a bit unexpected and could potentially
work more efficiently.
Yes, you are right. So please vote here:
http://www.ibm.com/develo
On 22.03.21 13:24, Simon Thompson wrote:
You could maybe create the new file-set, link in a different place, copy the
data …
Then at some point, unlink, relink and resync. Still some user access, but you are potentially
reducing the time to do the copy.
Yes, but this does not help if a fil
to schedule a downtime for the dirs in
question?
I mean, is there a way to transparently move data to an independent fileset at
the same path?
Kind regards,
Ulrich Sibiller
--
Science + Computing AG
Vorstandsvorsitzender/Chairman of the board of management:
Dr. Martin Matzke
Vorstand/Board of
On 8/28/20 11:43 AM, Philipp Helo Rehs wrote:
root 38212 100 0.0 35544 5752 ? R 11:32 9:40
/usr/lpp/mmfs/bin/tsgskkm store --cert
/var/mmfs/ssl/stage/tmpKeyData.mmremote.38169.cert --priv
/var/mmfs/ssl/stage/tmpKeyData.mmremote.38169.priv --out
/var/mmfs/ssl/stage/tmpKeyData.m
Am 28.04.20 um 13:38 schrieb Jonathan Buzzard:
Yuck, and double yuck. There are many things you can say about systemd
(and I have a choice few) but one of them is that it makes this sort of
hackery obsolete. At least that is one of its goals.
A systemd way to do it would be via one or more helper
Am 28.04.20 um 13:55 schrieb Hannappel, Juergen:
a gpfs.mount target should be automatically created at boot by the
systemd-fstab-generator from the fstab entry, so there is no need for hackery like
ismountet.txt...
A generic gpfs.mount target does not seem to exist on my system. There are only specific
Am 28.04.20 um 15:57 schrieb Skylar Thompson:
>> Have you looked at the mmaddcallback command and specifically the file
>> system mount callbacks?
> We use callbacks successfully to ensure Linux auditd rules are only loaded
> after GPFS is mounted. It was easy to setup, and there's very fine-grai
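A minimal callback registration along those lines could look like this; the script path is a placeholder, and the exact event and parameter names should be checked against the mmaddcallback man page:

```
mmaddcallback fsMounted --command /usr/local/sbin/on-gpfs-mount.sh \
    --event mount --parms "%fsName"
```

The registered script then runs on every mount event and receives the filesystem name as an argument, so it can gate work on a specific filesystem becoming available.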
Hi,
when the gpfs systemd service returns from startup, the filesystems are usually not yet mounted. So
having another service depend on gpfs is not feasible if you require the filesystem(s).
Therefore we have added a script to the systemd gpfs service that waits for all local gpfs
filesystems
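One way to sketch such a wait without editing the shipped unit is a systemd drop-in; the file path, the /gpfs mount point and the 300 s timeout below are all assumptions to adapt:

```ini
# /etc/systemd/system/gpfs.service.d/wait-mounts.conf  (hypothetical drop-in)
[Service]
# Do not report the service as started until the GPFS mount point
# appears in /proc/mounts; give up after 300 seconds.
ExecStartPost=/bin/sh -c 'for i in $(seq 1 300); do grep -q " /gpfs " /proc/mounts && exit 0; sleep 1; done; exit 1'
```

Since ExecStartPost failure fails the unit, services ordered After=gpfs.service then only start once the filesystem is really there.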
On 2/3/20 11:02 AM, Billich Heinrich Rainer (ID SD) wrote:
Thank you. I wonder if there is any ESS version which deploys FW860.70 for ppc64le. The Readme for
5.3.5 lists FW860.60 again, same as 5.3.4?
I did the upgrade to 5.3.5 last week and gssinstallcheck now reports
860.70:
[...]
I
On 1/29/20 2:05 PM, Billich Heinrich Rainer (ID SD) wrote:
Hello,
Can I change the times at which the GUI runs HW_INVENTORY and related tasks?
We frequently get messages like
gui_refresh_task_failed GUI WARNING 12 hours ago The following GUI
refresh task(s) failed:
On 09.01.20 22:19, Popescu, Razvan wrote:
Thanks,
I’ll set tonight’s run with that debug flag.
I have not tested this myself, but if you enable auditlogging this should create
corresponding logs.
Uli
--
Science + Computing AG
Vorstandsvorsitzender/Chairman of the board of management:
Dr. Martin
On 12.08.19 15:38, Marc A Kaplan wrote:
My Admin guide says:
The loss percentage and period are set via the configuration
variables *fileHeatLossPercent* and *fileHeatPeriodMinutes*. By default, the file access temperature
is not
tracked. To use access temperature in policy, the tracking must f
Hello,
I am having difficulties with Spectrum Scale's fileheat feature on Spectrum
Scale 5.0.2/5.0.3:
The config has it activated:
# mmlsconfig | grep fileHeat
fileHeatPeriodMinutes 720
Now every time I look at the files using mmapplypolicy I only see 0 for the fileheat. I have both
tried rea
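For reference, a minimal file-heat setup plus a policy fragment that surfaces the value is sketched below; the list name is arbitrary and the exact EXTERNAL LIST form should be checked against the policy documentation:

```
# enable heat tracking (shell, as root)
mmchconfig fileHeatPeriodMinutes=720,fileHeatLossPercent=10

/* policy fragment: list every file together with its heat */
RULE 'hotdef' EXTERNAL LIST 'hot' EXEC ''
RULE 'listall' LIST 'hot' SHOW('heat=' || VARCHAR(FILE_HEAT))
```

Run via mmapplypolicy with -I defer and -f to write the list to files instead of invoking an external script. Note that heat only accumulates on file reads after the config change, so freshly set configs show 0 until files are actually accessed.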
On 14.12.2018 15:45, Jeno Cram wrote:
Are you using Extended attributes on the directories in question?
No. What's the background of your question?
Kind regards,
Uli
--
Science + Computing AG
Vorstandsvorsitzender/Chairman of the board of management:
Dr. Martin Matzke
Vorstand/Board of Manag
ppen if the netgroup caching code reports that "client1" is
NOT a member of "netgroup1".
I have also opened a support case at IBM for this.
@Malahal: Looks like you have written the netgroup caching code, feel free to ask for further
details if required.
Kind regards,
Ulri