Re: [gpfsug-discuss] RDMA write error IBV_WC_RETRY_EXC_ERR

2021-07-12 Thread Yaron Daniel
Hi

I had this error in a similar mixed environment - can the new servers run OFED v4.9.x?
In parallel, please open a case with Mellanox, since it might also be a 
firmware/driver issue with OFED, or an HCA which is not supported with OFED 
5.x.
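
A quick way to compare the RDMA stacks on the old and new servers (a minimal 
sketch; run on each NSD server/client, the grep patterns are examples only):

   ofed_info -s                                   # installed MLNX_OFED version string
   ibstat | grep -E 'CA type|Firmware version'    # HCA model and firmware level
   rpm -qa | grep -i ofed                         # as already shown below for gpfs01/gpfs08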

 
Regards
 


 
 
Yaron Daniel
Lab Services Consultant – Storage and Cloud
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com
Webex: https://ibm.webex.com/meet/yard



From:   "Iban Cabrillo" 
To: "gpfsug-discuss" 
Date:   07/12/2021 05:25 PM
Subject:[EXTERNAL] Re: [gpfsug-discuss] RDMA write error 
IBV_WC_RETRY_EXC_ERR
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi, 

   old servers:

   [root@gpfs01 ~]# rpm -qa| grep ofed
   ofed-scripts-4.3-OFED.4.3.1.0.1.x86_64
   mlnxofed-docs-4.3-1.0.1.0.noarch

   and newest servers:

   [root@gpfs08 ~]# rpm -qa| grep ofed
   ofed-scripts-5.0-OFED.5.0.2.1.8.x86_64
   mlnxofed-docs-5.0-2.1.8.0.noarch

regards, I
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss 





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] RDMA write error IBV_WC_RETRY_EXC_ERR

2021-07-11 Thread Yaron Daniel
Hi

Did u upgrade OFED version in some of the servers to v5.x ?

 
Regards
 


 
 
Yaron Daniel
Lab Services Consultant – Storage and Cloud
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com
Webex: https://ibm.webex.com/meet/yard



From:   "Wahl, Edward" 
To: "gpfsug main discussion list" 
Date:   07/09/2021 10:21 PM
Subject:[EXTERNAL] Re: [gpfsug-discuss] RDMA write error 
IBV_WC_RETRY_EXC_ERR
Sent by:gpfsug-discuss-boun...@spectrumscale.org



>-E- Link: ib2s5/U1/P6<-->node152/U1/P1 - Unexpected actual link speed 10

This looks like a bad cable (or port).   Try re-seating the cable on 
both ends, or replacing it, to get back to the full link speed.
Re-run ibdiagnet to confirm, or use something like 'ibportstate' to check 
it.
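
A minimal sketch of that check (the switch LID and port number below are 
placeholders standing in for the values from the ibdiagnet output):

   ibportstate <switch_lid> 6 query | grep -E 'LinkWidthActive|LinkSpeedActive'
   perfquery <switch_lid> 6     # error counters; re-run later to see whether they grow
   ibdiagnet                    # repeat the fabric-wide check after re-seating the cable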

Ed Wahl
OSC


-Original Message-
From: gpfsug-discuss-boun...@spectrumscale.org 
 On Behalf Of Iban Cabrillo
Sent: Friday, July 9, 2021 11:57 AM
To: gpfsug-discuss 
Subject: Re: [gpfsug-discuss] RDMA write error IBV_WC_RETRY_EXC_ERR

Thanks both of you, for your fast answer,

   I just restarted the server with the biggest waiters, and it seems that 
everything is working now.

   Using the ibdiag command I see these errors:

   -E- lid=0x0380 dev=4115 gpfs03/U1/P1
Performance Monitor counter : Value 
port_xmit_discard   : 65535  (overflow)
-E- lid=0x0ed0 dev=4115 gpfs01/U2/P1
Performance Monitor counter : Value 
port_xmit_discard   : 65535  (overflow)
..
-E- Link: ib2s5/U1/P6<-->node152/U1/P1 - Unexpected actual link speed 10 
(enable_speed1="2.5 or 5 or 10 or FDR10", enable_speed2="2.5 or 5 or 10 or 
FDR10" therefore final speed should be FDR10)

Regards, I
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss 





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Spectrum Scale pagepool size with RDMA

2020-07-23 Thread Yaron Daniel
Hi


What is the output for:
#mmlsconfig |grep -i verbs 
#ibstat
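
A couple of additional checks that are often useful alongside those two (a 
sketch; run on the RDMA-enabled nodes):

#mmfsadm test verbs status          # is verbs RDMA actually started?
#mmdiag --config | grep -i verbs    # verbs settings the daemon is running with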

 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com
Webex: https://ibm.webex.com/meet/yard



From:   Prasad Surampudi 
To: "gpfsug-discuss@spectrumscale.org" 

Date:   07/23/2020 03:34 AM
Subject:[EXTERNAL] [gpfsug-discuss] Spectrum Scale pagepool size 
with RDMA
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi,

We have an ESS cluster with two CES nodes. The pagepool is set to 128 GB 
(real memory is 256 GB) on both the ESS NSD servers and the CES nodes. 
Occasionally we see the mmfsd process memory usage reach 90% on the NSD 
servers and CES nodes and stay there until GPFS is recycled. I have a 
couple of questions in this scenario:

1. What are the general recommendations for pagepool size on nodes with RDMA 
enabled? The IBM Knowledge Center page on RDMA tuning says "If the GPFS 
pagepool is set to 32 GB, then the mapping of the RDMA for this pagepool 
must be at least 64 GB." So, does this mean that the pagepool can't be 
more than half of real memory with RDMA enabled? Also, is this the reason 
why mmfsd memory usage exceeds the pagepool size and spikes to almost 90%?
2. If we don't want to see high mmfsd process memory usage on the NSD/CES 
nodes, should we decrease the pagepool size?
3. Can we tune the log_num_mtt parameter to limit the memory usage? Currently 
it's set to 0 on both the NSD (ppc64le) and CES (x86_64) nodes.
4. We also see messages like "Verbs RDMA disabled for xx.xx.xx.xx due to no 
matching port found". Any idea what this message indicates? I don't see any 
"Verbs RDMA enabled" message after these warning messages. Does it get 
enabled automatically? 
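
A minimal sketch of commands for inspecting the values in question while 
investigating this (the node class and size are placeholders; pagepool 
changes normally require a daemon restart to take effect):

# mmlsconfig pagepool                        # current pagepool setting
# mmlsconfig verbsRdma                       # is RDMA enabled?
# mmdiag --memory                            # breakdown of mmfsd memory use on this node
# mmchconfig pagepool=96G -N cesNodes        # example change for one node class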

Prasad Surampudi|Sr. Systems Engineer|ATS Group, LLC


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=3V12EzdqYBk1P235cOvncsD-pOXNf5e5vPp85RnNhP8=XxlITEUK0nSjIyiu9XY1DEbYiVzVbp5XHcvQPfFJ2NY=
 





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Encryption - checking key server health (SKLM)

2020-02-19 Thread Yaron Daniel
Hi

Also, in case you configure 3 SKLM servers (1 primary, 2 slaves): if the 
primary is not responding, you will see messages about this in the logs.
[The log screenshots attached to the original message are not included in 
this archive.]
 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com
Webex: https://ibm.webex.com/meet/yard



From:   "Felipe Knop" 
To: gpfsug-discuss@spectrumscale.org
Cc: gpfsug-discuss@spectrumscale.org
Date:   20/02/2020 00:08
Subject:[EXTERNAL] Re: [gpfsug-discuss] Encryption - checking key 
server health (SKLM)
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Bob,
 
Scale does not yet have a tool to perform a health-check on a key server, 
or an independent mechanism to retrieve keys.
 
One can use a command such as 'mmkeyserv key show' to retrieve the list of 
keys from a given SKLM server (and use that to determine whether the key 
server is responsive), but being able to retrieve a list of keys does not 
necessarily mean being able to retrieve the actual keys, as the latter 
goes through the KMIP port/protocol, and the former uses the REST 
port/API:
 
# mmkeyserv key show --server 192.168.105.146 --server-pwd 
/tmp/configKeyServ_pid11403914_keyServPass --tenant sklm3Tenant
KEY-ad4f3a9-01397ebf-601b-41fb-89bf-6c4ac333290b
KEY-ad4f3a9-019465da-edc8-49d4-b183-80ae89635cbc
KEY-ad4f3a9-0509893d-cf2a-40d3-8f79-67a444ff14d5
KEY-ad4f3a9-08d514af-ebb2-4d72-aa5c-8df46fe4c282
KEY-ad4f3a9-0d3487cb-a674-44ab-a7d0-1f68e86e2fc9
[...]
 
Having a tool that can retrieve keys independently from mmfsd would be a 
useful capability to have. Could you submit an RFE to request such a 
function?
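
In the meantime, a minimal cron-able sketch built around the 'mmkeyserv key 
show' command above (the server address, password file, tenant name, and 
alert address are placeholders from the example; as noted, this only proves 
the REST path, not the KMIP path that mmfsd uses to fetch the actual keys):

#!/bin/bash
SERVER=192.168.105.146                 # placeholder key server address
PWFILE=/root/sklm_server_password      # placeholder server password file
TENANT=sklm3Tenant                     # placeholder tenant name
if ! mmkeyserv key show --server "$SERVER" --server-pwd "$PWFILE" \
      --tenant "$TENANT" > /dev/null 2>&1; then
    echo "SKLM server $SERVER is not answering key list requests" | \
        mail -s "SKLM health check failed" storage-admins@example.com
fi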
 
Thanks,
 
  Felipe
 

Felipe Knop k...@us.ibm.com
GPFS Development and Security
IBM Systems
IBM Building 008
2455 South Rd, Poughkeepsie, NY 12601
(845) 433-9314 T/L 293-9314
 
 
 
- Original message -
From: "Oesterlin, Robert" 
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list 
Cc:
Subject: [EXTERNAL] [gpfsug-discuss] Encryption - checking key server 
health (SKLM)
Date: Wed, Feb 19, 2020 11:35 AM
 
I’m looking for a way to check the status/health of the encryption key 
servers from the client side - detecting if the key server is unavailable 
or can’t serve a key. I ran into a situation recently where the server was 
answering HTTP requests on the port but wasn’t returning the key. I can’t 
seem to find a way to check if the server will actually return a key.
 
Any ideas?
 
 
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
 
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss 
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=ARpfta6x0GFP8yy67RAuT4SMBrRHROGRUwCOSPVDEF8=aMBH47I25734lVmyzTZBiPd6a1ELRuurxoFCTf6Ij_Y=
 





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Question about Policies

2019-12-27 Thread Yaron Daniel
Hi

You can create a list of the directories which were not modified in the 
last 30 days in an output file, and then a second script can move those 
directories to the new location that you want.
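
A minimal sketch of that approach (the file system path, list name, and 
output prefix are placeholders; run mmapplypolicy with '-I test' first to 
see what would be selected, and DIRECTORIES_PLUS makes the rule also match 
directory entries):

# Create the policy file:
cat > /tmp/policy_old30.txt <<'EOF'
RULE 'ext'  EXTERNAL LIST 'old30' EXEC ''
RULE 'find' LIST 'old30' DIRECTORIES_PLUS
     WHERE (CURRENT_TIMESTAMP - MODIFICATION_TIME) > INTERVAL '30' DAYS
EOF

# Generate the candidate list only (nothing is moved yet):
mmapplypolicy /gpfs/fs1 -P /tmp/policy_old30.txt -I defer -f /tmp/old30
# The second script then reads /tmp/old30.list.old30 and moves the listed
# entries (e.g. with mv or rsync) to the target folder.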


 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com
Webex: https://ibm.webex.com/meet/yard



From:   Kevin Doyle 
To: "gpfsug-discuss@spectrumscale.org" 

Date:   27/12/2019 13:45
Subject:[EXTERNAL] [gpfsug-discuss] Question about Policies
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi
 
I work for Cancer Research UK MI in Manchester, UK. I am new to GPFS and 
have been tasked with creating a policy that will move files older than 30 
days to a new folder within the same pool. There are a lot of files, so a 
policy-based move will be faster.
I have read about the MIGRATE rule, but it takes a source and destination 
pool and we only have one pool. Will it work if I define the same source 
and destination pool?
 
Many thanks
Kevin
 
 
Kevin Doyle | Linux Administrator, Scientific Computing
Cancer Research UK, Manchester Institute
The University of Manchester
Room 13G40, Alderley Park, Macclesfield SK10 4TG
Mobile:  07554 223480
Email: kevin.do...@manchester.ac.uk
 

 
 
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=Wg3EAA9O8sH3c_zHS2h8miVpSosqtXulMRqXMRwSMe0=TdemXXkFD1mjpxNFg7Y_DYYPpJXZk7BmQcW9hWQDLso=
 





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] mmvdisk - how to see which recovery groups are managed by mmvdisk?

2019-11-05 Thread Yaron Daniel
Hi Steven

Can you please try to run mmlsnsd, and let me know if you can change the 
NSD name before/after configuring them with mmvdisk.


 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com
Webex: https://ibm.webex.com/meet/yard



From:   "Steven Daniels" 
To: gpfsug main discussion list 
Date:   04/11/2019 21:44
Subject:[EXTERNAL] Re: [gpfsug-discuss] mmvdisk - how to see which 
recovery groups are managed by mmvdisk?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



the vdiskset names can be renamed - 
https://www.ibm.com/support/knowledgecenter/en/SSYSP8_5.3.1/com.ibm.spectrum.scale.raid.v5r01.adm.doc/bl8adm_mmvdisk_vdiskset.htm
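
A sketch of such a rename, per the Knowledge Center page above (the vdisk 
set names are placeholders, and the exact option names are an assumption - 
verify them against the mmvdisk man page for your release):

# mmvdisk vdiskset rename --vdisk-set RG001VS001 --new-name fs1data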


Steven A. Daniels
Cross-brand Client Architect
Senior Certified IT Specialist
National Programs
Fax and Voice: 3038101229
sadan...@us.ibm.com
http://www.ibm.com




From:    "Yaron Daniel" 
To:gpfsug main discussion list 
Date:11/04/2019 12:21 PM
Subject:[EXTERNAL] Re: [gpfsug-discuss] mmvdisk - how to see which 
recovery groups aremanaged by mmvdisk?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi

From what I know, there is no control over the NSD names - they get 
automatically generated by mmvdisk.



Regards



 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com
Webex: https://ibm.webex.com/meet/yard



From:"Billich  Heinrich Rainer (ID SD)" 

To:gpfsug main discussion list 
Date:04/11/2019 19:20
Subject:[EXTERNAL] [gpfsug-discuss] mmvdisk - how to see which 
recovery groups aremanaged by mmvdisk?
Sent by:gpfsug-discuss-boun...@spectrumscale.org

Hello,
 
I try to get acquainted with mmvdisk: can I decide on the names of 
vdisks/NSDs which mmvdisk creates? Tools like mmdf still show NSD devices, 
not vdisk sets, hence proper naming helps.  RG001VS001 isn’t always what 
I would choose.
 
Of course I can just not use mmvdisk where possible, but some 
recovery groups already got migrated, so I have to. And I admit I like the 
output of ‘mmvdisk recoverygroup list --declustered-array’, which gives a 
nice summary of total/free disk space.
 
Cheers,
 
Heiner


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=EzTElY1rKKm1TuasJlFc6kJOA5qcwp0FPH71ofd8CsA=7Tbbpw_RbqYWzzoaTVvo7V_11FP9ytTQRHw_TWgC24Q=
 





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] mmvdisk - how to see which recovery groups are managed by mmvdisk?

2019-11-04 Thread Yaron Daniel
Hi

From what I know, there is no control over the NSD names - they get 
automatically generated by mmvdisk.


 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com
Webex: https://ibm.webex.com/meet/yard



From:   "Billich  Heinrich Rainer (ID SD)" 
To: gpfsug main discussion list 
Date:   04/11/2019 19:20
Subject:[EXTERNAL] [gpfsug-discuss] mmvdisk - how to see which 
recovery groups are managed by mmvdisk?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hello,
 
I try to get acquainted with mmvdisk: can I decide on the names of 
vdisks/NSDs which mmvdisk creates? Tools like mmdf still show NSD devices, 
not vdisk sets, hence proper naming helps.  RG001VS001 isn’t always what 
I would choose.
 
Of course I can just not use mmvdisk where possible, but some 
recovery groups already got migrated, so I have to. And I admit I like the 
output of ‘mmvdisk recoverygroup list --declustered-array’, which gives a 
nice summary of total/free disk space.
 
Cheers,
 
Heiner
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=Mg9Yxn6axX4-iDSDZNIhF58cDheSv41MfR8uVXS_q58=bwXBF_0oFwkRREwv9IUvhXGgbQtjhEnJR5Xma7_XFIU=
 





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] default owner and group for POSIX ACLs

2019-10-15 Thread Yaron Daniel
Hi

In case you want to review the POSIX permissions with ls -l, please put 
the relevant permissions on the SMB share, and add CREATOR OWNER & 
CREATOR GROUP.
Then ls -l will show you the owner + group + everyone permissions.


 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com
Webex: https://ibm.webex.com/meet/yard



From:   Jonathan Buzzard 
To: "gpfsug-discuss@spectrumscale.org" 

Date:   15/10/2019 23:34
Subject:[EXTERNAL] Re: [gpfsug-discuss] default owner and group 
for POSIX ACLs
Sent by:gpfsug-discuss-boun...@spectrumscale.org



On 15/10/2019 17:15, Paul Ward wrote:

[SNIP]

>> ...I am not sure why you need POSIX ACL's if you are running Linux...
>  From what I have recently read...
> 
https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_admnfsaclg.htm

> "Linux does not allow a file system to be NFS V4 exported unless it 
supports POSIX ACLs."
> 

Only if you are using the inbuilt kernel NFS server, which IMHO is awful 
from a management perspective. That is you have zero visibility into 
what the hell it is doing when it all goes pear shaped unless you break 
out dtrace. I am not sure that using  dtrace on a production service to 
find out what is going on is "best practice". It also in my experience 
stops you cleanly shutting down most of the time. The sooner it gets 
removed from the kernel the better IMHO.

If you are using protocol nodes, which is the only supported option as 
far as I am aware, then that does not apply. I would imagine if you are 
rolling your own Ganesha NFS server it won't matter either.

Checking the code of the FSAL in Ganesha shows functions for converting 
between GPFS ACL's and the ACL format as used by Ganesha. My 
understanding was one of the drivers for using Ganesha as an NFS server 
with GPFS was you can write a FSAL to do just that, in the same way as 
on Samba you load the vfs_gpfs module, unless you are into self 
flagellation I guess.


JAB.

-- 
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=b8w1GtIuT4M2ayhd-sZvIeIGVRrqM7QoXlh1KVj4Zq4=huFx7k3Vx10aZ-7AVq1HSVo825JPWVdFaEu3G3Dh-78=
 






___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Migrating billions of files?

2019-03-06 Thread Yaron Daniel
Hi

You can also use Aspera today, which will replicate GPFS extended attributes.

Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and 
Sharing Files Globally
http://www.redbooks.ibm.com/redpieces/abstracts/redp5527.html?Open


 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   Simon Thompson 
To: gpfsug main discussion list 
Date:   03/06/2019 11:08 AM
Subject:Re: [gpfsug-discuss] Migrating billions of files?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



AFM doesn’t work well if you have dependent filesets though .. which we 
did for quota purposes.
 
Simon
 
From:  on behalf of 
"y...@il.ibm.com" 
Reply-To: "gpfsug-discuss@spectrumscale.org" 

Date: Wednesday, 6 March 2019 at 09:01
To: "gpfsug-discuss@spectrumscale.org" 
Subject: Re: [gpfsug-discuss] Migrating billions of files?
 
Hi

What permissions do you have? Do you have only POSIX, or also SMB attributes?

If only posix attributes you can do the following:

- rsync (which will work on different filesets/directories in parallel)
- AFM (but in case you need a rollback, it will be problematic) 

 
Regards
 



 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:"Oesterlin, Robert" 
To:gpfsug main discussion list 
Date:03/05/2019 11:57 PM
Subject:[gpfsug-discuss] Migrating billions of files?
Sent by:gpfsug-discuss-boun...@spectrumscale.org

 
I’m looking at migrating 3-4 billion files, maybe 3 PB of data, between GPFS 
clusters. Most of the files are small - 60% are 8K or less. Ideally I’d like 
to copy at least 15-20M files per day - ideally 50M.
 
Any thoughts on how achievable this is? Or what to use? Either with AFM, 
mpifileutils, rsync.. other? Many of these files would be in 4k inodes. 
Destination is ESS.
 
 
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
 ___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=B2e9s5aGSXZvMOkd4ZPk_EIjfTloX7O_ExWsyR0RGP8=wwIfs_8RrX5Z7mGp2Mehj5z7z2yUhr0r-vO7TMyNUeE=





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] suggestions for copying one GPFS file system into another

2019-03-06 Thread Yaron Daniel
Hi

You can also use Aspera today, which will replicate GPFS extended attributes.

Integration of IBM Aspera Sync with IBM Spectrum Scale: Protecting and 
Sharing Files Globally
http://www.redbooks.ibm.com/redpieces/abstracts/redp5527.html?Open

I used in the past the arsync that was used for SONAS - I think this is now the 
 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   Simon Thompson 
To: gpfsug main discussion list 
Date:   03/05/2019 11:39 PM
Subject:Re: [gpfsug-discuss] suggestions for copying one GPFS file 
system into another
Sent by:gpfsug-discuss-boun...@spectrumscale.org



DDN also have a paid-for product for moving data (DataFlow). We 
found out about it after we did a massive data migration...

I can't comment on it other than being aware of it. Sure your local DDN 
sales person can help.

But if only IBM supported some sort of restripe to new block size, we 
wouldn't have to do this mass migration :-P

Simon 

From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Simon Thompson 
[s.j.thomp...@bham.ac.uk]
Sent: 05 March 2019 16:38
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] suggestions for copying one GPFS file 
system into another

I wrote a patch to mpifileutils which will copy gpfs attributes, but when 
we played with it with rsync, something was obviously still different 
about the attrs from each, so use with care.

Simon

From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Ratliff, John 
[jdrat...@iu.edu]
Sent: 05 March 2019 16:21
To: gpfsug-discuss@spectrumscale.org
Subject: [gpfsug-discuss] suggestions for copying one GPFS file system 
into another

We use a GPFS file system for our computing clusters and we’re working on 
moving to a new SAN.

We originally tried AFM, but it didn’t seem to work very well. We tried to 
do a prefetch on a test policy scan of 100 million files, and after 24 
hours it hadn’t pre-fetched anything. It wasn’t clear what was happening. 
Some smaller tests succeeded, but the NFSv4 ACLs did not seem to be 
transferred.

Since then we started using rsync with the GPFS attrs patch. We have over 
600 million files and 700 TB. I split up the rsync tasks with lists of 
files generated by the policy engine and we transferred the original data 
in about 2 weeks. Now we’re working on final synchronization. I’d like to 
use one of the delete options to remove files that were sync’d earlier and 
then deleted. This can’t be combined with the files-from option, so it’s 
harder to break up the rsync tasks. Some of the directories I’m running 
this against have 30-150 million files each. This can take quite some time 
with a single rsync process.

I’m also wondering if any of my rsync options are unnecessary. I was using 
avHAXS and numeric-ids. I’m thinking the A (acls) and X (xattrs) might be 
unnecessary with GPFS->GPFS. We’re only using NFSv4 GPFS ACLs. I don’t 
know if GPFS uses any xattrs that rsync would sync or not. Removing those 
two options removed several system calls, which should make it much 
faster, but I want to make sure I’m syncing correctly. Also, it seems 
there is a problem with the GPFS patch on rsync where it will always give 
an error trying to get GPFS attributes on a symlink, which means it 
doesn’t sync any symlinks when using that option. So you can rsync 
symlinks or GPFS attrs, but not both at the same time. This has led to me 
running two rsyncs, one to get all files and one to get all attributes.

Thanks for any ideas or suggestions.
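
A minimal sketch of the split-and-sync pattern described above (the paths, 
list file, chunk count, and parallelism are placeholders; the delete pass is 
done per directory because rsync's --delete cannot be combined with 
--files-from):

# 1. Copy pass, driven by a policy-generated list of paths relative to the source:
split -n l/16 /tmp/migrate.list /tmp/migrate.chunk.
for f in /tmp/migrate.chunk.*; do
    rsync -aHS --numeric-ids --files-from="$f" /gpfs/oldfs/ /gpfs/newfs/ &
done
wait

# 2. Separate per-directory delete pass, to remove files deleted at the
#    source since the first pass:
for d in /gpfs/oldfs/*/; do
    rsync -aHS --numeric-ids --delete "$d" "/gpfs/newfs/$(basename "$d")/" &
done
wait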

John Ratliff | Pervasive Technology Institute | UITS | Research Storage ? 
Indiana University | 
https://urldefense.proofpoint.com/v2/url?u=http-3A__pti.iu.edu=DwIF-g=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=Yz-c0LCo_QGBe4pgbJEr_zzSX4Q1ttDOaHYmcfLln5U=gNzUpbvNUfVteTqZ3zpzpbC4M1lQiopyrIfr46h4Okc=


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwIF-g=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=Yz-c0LCo_QGBe4pgbJEr_zzSX4Q1ttDOaHYmcfLln5U=pG-g3zRAtaMwcmwoabY4dvuI1j3jbLk-uGHZ6nz6TlU=

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwIF-g=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=Yz-c0LCo_QGBe4pgbJEr_zzSX4Q1ttDOaHYmcfLln5U=pG-g3zRAtaMwcmwoabY4dvuI1j3jbLk-uGHZ6nz6

Re: [gpfsug-discuss] Migrating billions of files?

2019-03-06 Thread Yaron Daniel
Hi

What permissions do you have? Do you have only POSIX, or also SMB attributes?

If only posix attributes you can do the following:

- rsync (which will work on different filesets/directories in parallel)
- AFM (but in case you need a rollback, it will be problematic) 

 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   "Oesterlin, Robert" 
To: gpfsug main discussion list 
Date:   03/05/2019 11:57 PM
Subject:[gpfsug-discuss] Migrating billions of files?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



I’m looking at migrating 3-4 billion files, maybe 3 PB of data, between GPFS 
clusters. Most of the files are small - 60% are 8K or less. Ideally I’d like 
to copy at least 15-20M files per day - ideally 50M.
 
Any thoughts on how achievable this is? Or what to use? Either with AFM, 
mpifileutils, rsync.. other? Many of these files would be in 4k inodes. 
Destination is ESS.
 
 
Bob Oesterlin
Sr Principal Storage Engineer, Nuance
 ___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=uXadyLeBnskK8mq-S8OjwY-ESxuNxXme9Akj9QaQBiE=UdKoJNySkr8itrQaRD9XMkVjBGnVaU8XnyxuKCldX-8=





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Same file opened by many nodes / processes

2018-07-21 Thread Yaron Daniel
Hi

Do you run mmbackup on a snapshot, which is read-only?
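
For reference, a minimal sketch of driving the backup from a snapshot (the 
file system and snapshot names are placeholders; mmbackup's -S option 
selects the snapshot to scan):

# mmcrsnapshot gpfs01 backupsnap
# mmbackup /gpfs/gpfs01 -S backupsnap -t incremental
# mmdelsnapshot gpfs01 backupsnap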

 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   Peter Childs 
To: "gpfsug-discuss@spectrumscale.org" 

Date:   07/10/2018 05:51 PM
Subject:[gpfsug-discuss] Same file opened by many nodes / 
processes
Sent by:gpfsug-discuss-boun...@spectrumscale.org



We have a situation where the same file is being read by around 5000
"jobs". This is an array job in UGE with a tc set, so the file in
question is being opened by about 100 processes/jobs at the same time.

It's a ~200GB file, so copying the file locally first is not an easy
answer, and these jobs are causing issues with mmbackup scanning the
file system, in that the scan is taking 3 hours instead of the normal
40-60 minutes.

This is read only access to the file, I don't know the specifics about
the job.

It looks like the metanode is moving around a fair amount (given what I
can see from mmfsadm saferdump file)

I'm wondering if there is anything we can do to improve things or
that can be tuned within GPFS. I don't think we have an issue with
token management, but would increasing maxFilesToCache on our token
manager node help, say?

Is there anything else I should look at, to try and attempt to allow
GPFS to share this file better.

Thanks in advance

Peter Childs

-- 
Peter Childs
ITS Research Storage
Queen Mary, University of London
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss






___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] High I/O wait times

2018-07-08 Thread Yaron Daniel
Hi

Clear all counters on the FC switches and see which ports have errors.

For brocade run :

slotstatsclear
statsclear
porterrshow

For cisco run:
 
clear counters all

There might be a bad GBIC/cable/storage GBIC which can affect the 
performance; if there is something like that, you can see which ports' 
errors grow over time.
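
A minimal sketch of watching for growing counters on a Brocade switch (the 
switch login and interval are placeholders; it simply diffs two successive 
'porterrshow' outputs):

SWITCH=admin@fcswitch01                        # placeholder switch login
ssh "$SWITCH" porterrshow > /tmp/porterr.before
sleep 600
ssh "$SWITCH" porterrshow > /tmp/porterr.after
diff /tmp/porterr.before /tmp/porterr.after    # ports whose error counters moved
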
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   Jonathan Buzzard 
To: gpfsug-discuss@spectrumscale.org
Date:   07/07/2018 11:43 AM
Subject:Re: [gpfsug-discuss] High I/O wait times
Sent by:gpfsug-discuss-boun...@spectrumscale.org



On 07/07/18 01:28, Buterbaugh, Kevin L wrote:

[SNIP]

> 
> So, to try to rule out everything but the storage array we replaced the 
> FC cables going from the SAN switches to the array, plugging the new 
> cables into different ports on the SAN switches.  Then we repeated the 
> dd tests from a different NSD server, which both eliminated the NSD 
> server and its FC cables as a potential cause - and saw results 
> virtually identical to the previous test.  Therefore, we feel pretty 
> confident that it is the storage array and have let the vendor know all 
> of this.

I was not thinking of doing anything quite as drastic as replacing 
stuff, more look into the logs on the switches in the FC network and 
examine them for packet errors. The above testing didn't eliminate bad 
optics in the storage array itself for example, though it does appear to 
be the storage arrays themselves. Sounds like they could do with a power 
cycle...

JAB.

-- 
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwIGaQ=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=TM-kJsvzTX9cq_xmR5ITHclBCfO4FDvZ3ZxyugfJCfQ=Ass164qVEhb9fC4_VCmzfZeYd_BLOv9cZsfkrzqi8pM=






___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] How to get rid of very old mmhealth events

2018-07-01 Thread Yaron Daniel
Hi

There was an issue with the Scale 5.x GUI error - 
ib_rdma_nic_unrecognized(mlx5_0/2).

Check if you have the patch:

[root@gssio1 ~]#  diff /usr/lpp/mmfs/lib/mmsysmon/NetworkService.py /tmp/NetworkService.py
229c229,230
< recognizedNICs = set(re.findall(r"verbsConnectPorts\[\d+\] +: (\w+/\d+)/\d+\n", mmfsadm))
---
> #recognizedNICs = set(re.findall(r"verbsConnectPorts\[\d+\] +: (\w+/\d+)/\d+\n", mmfsadm))
>  recognizedNICs = set(re.findall(r"verbsConnectPorts\[\d+\] +: (\w+/\d+)/\d+/\d+\n", mmfsadm))

Then restart the monitor with: mmsysmoncontrol restart
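
After the restart, a quick re-check (a sketch):

# mmhealth node show NETWORK      # should return to HEALTHY once the stale event clears
# mmhealth node eventlog | tail   # recent monitor events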

Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   "Andrew Beattie" 
To: gpfsug-discuss@spectrumscale.org
Date:   06/28/2018 11:16 AM
Subject:Re: [gpfsug-discuss] How to get rid of very old mmhealth 
events
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Do you know if there is actually a cable plugged into port 2?
 
The system will work fine as long as there is network connectivity, but 
you may have an issue with redundancy or loss of bandwidth if you do not 
have every port cabled and configured correctly.
 
Regards
Andrew Beattie
Software Defined Storage  - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
 
 
- Original message -
From: "Dorigo Alvise (PSI)" 
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: "gpfsug-discuss@spectrumscale.org" 
Cc:
Subject: [gpfsug-discuss] How to get rid of very old mmhealth events
Date: Thu, Jun 28, 2018 6:08 PM
 
Dear experts,
I've a GL2 IBM system running Spectrum Scale v4.2.3-6 (RHEL 7.3).
The system is working properly but I get a DEGRADED status report for the 
NETWORK running the command mmhealth:
 
[root@sf-gssio1 ~]# mmhealth node show

Node name:  sf-gssio1.psi.ch
Node status:DEGRADED
Status Change:  23 min. ago

Component   StatusStatus Change Reasons
---
GPFSHEALTHY   22 min. ago   -
NETWORK DEGRADED  145 days ago ib_rdma_link_down(mlx5_0/2), 
ib_rdma_nic_down(mlx5_0/2), ib_rdma_nic_unrecognized(mlx5_0/2)
[...]
 
This event is clearly an outlier because the network, verbs and IB are 
correctly working:
 
[root@sf-gssio1 ~]# mmfsadm test verbs status
VERBS RDMA status: started
 
[root@sf-gssio1 ~]# mmlsconfig verbsPorts|grep gssio1
verbsPorts mlx5_0/1 [sf-ems1,sf-gssio1,sf-gssio2]
 
[root@sf-gssio1 ~]# mmdiag --config|grep verbsPorts
 ! verbsPorts mlx5_0/1
 
[root@sf-gssio1 ~]# ibstat  mlx5_0
CA 'mlx5_0'
CA type: MT4113
Number of ports: 2
Firmware version: 10.16.1020
Hardware version: 0
Node GUID: 0xec0d9a03002b5db0
System image GUID: 0xec0d9a03002b5db0
Port 1:
State: Active
Physical state: LinkUp
Rate: 56
Base lid: 42
LMC: 0
SM lid: 1
Capability mask: 0x26516848
Port GUID: 0xec0d9a03002b5db0
Link layer: InfiniBand
Port 2:
State: Down
Physical state: Disabled
Rate: 10
Base lid: 65535
LMC: 0
SM lid: 0
Capability mask: 0x26516848
Port GUID: 0xec0d9a03002b5db8
Link layer: InfiniBand
 
That event has been there for 145 days and it didn't go away after a daemon 
restart (mmshutdown/mmstartup).
My question is: how can I get rid of this event and restore the mmhealth 
output to HEALTHY? This is important because I have Nagios sensors that 
periodically parse the "mmhealth -Y ..." output and at the moment I have to 
disable their email notification (which is not good if some real bad event 
happens).
 
Thanks,
 
  Alvise
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
 
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] gpfs client cluster, lost quorum, ccr issues

2018-07-01 Thread Yaron Daniel
Hi

Just check:

1) getenforce - SELinux status
2) check whether the firewall is active - iptables -L
3) do you have ping to the hosts reported in mmlscluster? Is /etc/hosts valid? 
Is DNS valid?
 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   "Uwe Falke" 
To: ren...@slac.stanford.edu, gpfsug main discussion list 

Cc: gpfsug-discuss-boun...@spectrumscale.org
Date:   06/28/2018 10:45 AM
Subject:Re: [gpfsug-discuss] gpfs client cluster, lost quorum, ccr 
issues
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Just some ideas what to try.
when you attempted mmdelnode, was that node still active with the IP 
address known in the cluster? If so, shut it down and try again.
Mind the restrictions of mmdelnode though (can't delete NSD servers).

Try to fake one of the currently missing cluster nodes, or restore the old 

system backup to the reinstalled server, if available, or temporarily 
install  gpfs SW there and copy over the GPFS config stuff from a node 
still active (/var/mmfs/), configure the admin and daemon IFs of the faked 

node on that machine, then try to start the cluster and see if it comes up 

with quorum, if it does  then go ahead and cleanly de-configure what's 
needed to remove that node from the cluster gracefully. Once you reach 
quorum with the remaining nodes you are in safe area.


 
Mit freundlichen Grüßen / Kind regards

 
Dr. Uwe Falke
 
IT Specialist
High Performance Computing Services / Integrated Technology Services / 
Data Center Services
---
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefa...@de.ibm.com
---
IBM Deutschland Business & Technology Services GmbH / Geschäftsführung: 
Thomas Wolter, Sven Schooß
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, 
HRB 17122 




From:   Renata Maria Dart 
To: Simon Thompson 
Cc: gpfsug main discussion list 
Date:   27/06/2018 21:30
Subject:Re: [gpfsug-discuss] gpfs client cluster, lost quorum, ccr 

issues
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi Simon, yes I ran

mmsdrrestore -p 

and that helped to create the /var/mmfs/ccr directory which was
missing.  But it didn't create a ccr.nodes file, so I ended up scp'ng
that over by hand which I hope was the right thing to do.  The one
host that is no longer in service is still in that ccr.nodes file and
when I try to mmdelnode it I get:

root@ocio-gpu03 renata]# mmdelnode -N dhcp-os-129-164.slac.stanford.edu
mmdelnode: Unable to obtain the GPFS configuration file lock.
mmdelnode: GPFS was unable to obtain a lock from node 
dhcp-os-129-164.slac.stanford.edu.
mmdelnode: Command failed. Examine previous error messages to determine 
cause.

despite the fact that it doesn't respond to ping.  The mmstartup on
the newly reinstalled node fails as in my initial email.  I should
mention that the two "working" nodes are running 4.2.3.4.  The person
who reinstalled the node that won't start up put on 4.2.3.8.  I didn't
think that was the cause of this problem though and thought I would
try to get the cluster talking again before upgrading the rest of the
nodes or degrading the reinstalled one.

Thanks,
Renata




On Wed, 27 Jun 2018, Simon Thompson wrote:

>Have you tried running mmsdrestore in the reinstalled node to reads to 
the cluster and then try and startup gpfs on it?
>
>
https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.3/com.ibm.spectrum.scale.v4r23.doc/bl1pdg_mmsdrrest.htm


>
>Simon
>
>From: gpfsug-discuss-boun...@spectrumscale.org 
[gpfsug-discuss-boun...@spectrumscale.org] on behalf of Renata Maria Dart 
[ren...@slac.stanford.edu]
>Sent: 27 June 2018 19:09
>To: gpfsug-discuss@spectrumscale.org
>Subject: [gpfsug-discuss] gpfs client cluster, lost quorum, ccr issues
>
>Hi, we have a client cluster of 4 nodes with 3 quorum nodes.  One of the
>quorum nodes is no longer in service and the other was reinstalled with
>a newer OS, both without informing the gpfs admins.  Gpfs is still
>"working" on the two remaining nodes, that is, they continue to have 
access
>to the gpfs data on the remote clusters.  But, I can no longer get
>any gpfs commands to work.  On one of the 2 nodes that are still serving 
data,
>
>root@ocio-gpu01 ~]# mmlscluster
>get file failed: Not enough CCR quorum nodes available (err 

Re: [gpfsug-discuss] GPFS Windows Mount

2018-06-20 Thread Yaron Daniel
Also, what do mmdiag --network and mmgetstate -a show?

 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   "Yaron Daniel" 
To: gpfsug main discussion list 
Date:   06/20/2018 06:31 PM
Subject:Re: [gpfsug-discuss] GPFS Windows Mount
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi

Which Windows OS level, which GPFS FS level, and what Cygwin version?

 
Regards
 


 
 
Yaron Daniel
Storage Architect – IL Lab Services (Storage)
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:Michael Holliday 
To:"gpfsug-discuss@spectrumscale.org" 

Date:06/20/2018 05:49 PM
Subject:[gpfsug-discuss] GPFS Windows Mount
Sent by:gpfsug-discuss-boun...@spectrumscale.org


Hi All,
 
We’ve been trying to get the Windows system to mount GPFS.  We’ve set the 
drive letter on the file system, and we can get the system added to the 
GPFS cluster and showing as active.
 
When we try to mount the file system, the system just sits and does 
nothing - GPFS shows no errors or issues, and there are no problems in the log 
files. The firewalls are stopped and as far as we can tell it should work.
 
Does anyone have any experience with the GPFS Windows client that may help 
us?
 
Michael
 
 
Michael Holliday RITTech MBCS
Senior HPC & Research Data Systems Engineer | eMedLab Operations Team
Scientific Computing | IT | The Francis Crick Institute
1, Midland Road | London | NW1 1AT | United Kingdom
Tel: 0203 796 3167
 

The Francis Crick Institute Limited is a registered charity in England and 
Wales no. 1140062 and a company registered in England and Wales no. 
06885462, with its registered office at 1 Midland Road London NW1 1AT
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] SMB quotas query

2018-05-15 Thread Yaron Daniel
Hi

So - you want to get a quota report per fileset - right?
We use this parameter when we want to monitor the NFS exports with df; I 
think this should also affect the SMB filesets.

Can you try to enable it and see if it works?
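
A minimal sketch of checking and enabling it (the file system name is a 
placeholder):

# mmlsfs gpfs01 --filesetdf     # show the current setting
# mmchfs gpfs01 --filesetdf     # report df values per fileset (quota based)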


 
Regards
 


 
 
Yaron Daniel
Storage Architect
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   "Sobey, Richard A" <r.so...@imperial.ac.uk>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   05/15/2018 11:11 AM
Subject:Re: [gpfsug-discuss] SMB quotas query
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi Yaron
 
It?s currently set to no.
 
Thanks
Richard
 
From: gpfsug-discuss-boun...@spectrumscale.org 
<gpfsug-discuss-boun...@spectrumscale.org> On Behalf Of Yaron Daniel
Sent: 14 May 2018 22:27
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] SMB quotas query
 
Hi

What is the output of mmlsfs - do you have --filesetdf enabled?


 
Regards
 



 
 
Yaron Daniel
Storage Architect
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:    Jonathan Buzzard <jonathan.buzz...@strath.ac.uk>
To:    gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:05/14/2018 03:22 PM
Subject:Re: [gpfsug-discuss] SMB quotas query
Sent by:gpfsug-discuss-boun...@spectrumscale.org




On Mon, 2018-05-14 at 11:54 +, Sobey, Richard A wrote:
> Thanks Jonathan. What I failed to mention in my OP was that MacOS
> clients DO report the correct size of each mounted folder. Not sure
> how that changes anything except to reinforce the idea that it's
> Windows at fault.
> 

In which case I would try using the dfree option in the smb.conf and
then having it call a shell script that writes its inputs to a log file,
and see if there are any differences between macOS and Windows.
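
A minimal sketch of that debug hook (the paths are placeholders; the script 
must still print the total and available 1K-block counts that Samba expects 
from a dfree command):

   # smb.conf, in the share or [global] section:
   #    dfree command = /usr/local/bin/dfree_debug.sh

   # /usr/local/bin/dfree_debug.sh
   #!/bin/bash
   # Log the directory argument Samba passes in, then return real df numbers
   # as "total_blocks available_blocks".
   echo "$(date '+%F %T') args=$*" >> /var/log/samba/dfree.log
   df -k "$1" | awk 'NR==2 {print $2" "$4}'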

If they are the same you could fall back to my old hack and investigate
what the changes where to vfs_gpfs. If they are different then the
assumptions that vfs_gpfs is making are obviously incorrect.

Finally you should test it against an actual Windows server. From
memory if you have a quota it reports the quota size as the disk size.


JAB.

-- 
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] SMB quotas query

2018-05-14 Thread Yaron Daniel
Hi

What is the output of mmlsfs - do you have --filesetdf enabled?


 
Regards
 


 
 
Yaron Daniel
Storage Architect
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   Jonathan Buzzard <jonathan.buzz...@strath.ac.uk>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   05/14/2018 03:22 PM
Subject:Re: [gpfsug-discuss] SMB quotas query
Sent by:gpfsug-discuss-boun...@spectrumscale.org



On Mon, 2018-05-14 at 11:54 +, Sobey, Richard A wrote:
> Thanks Jonathan. What I failed to mention in my OP was that MacOS
> clients DO report the correct size of each mounted folder. Not sure
> how that changes anything except to reinforce the idea that it's
> Windows at fault.
> 

In which case I would try using the dfree option in the smb.conf and
then having it call a shell script that writes its inputs to a log file,
and see if there are any differences between macOS and Windows.

If they are the same you could fall back to my old hack and investigate
what the changes where to vfs_gpfs. If they are different then the
assumptions that vfs_gpfs is making are obviously incorrect.

Finally you should test it against an actual Windows server. From
memory if you have a quota it reports the quota size as the disk size.


JAB.

-- 
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss






___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Node list error

2018-05-10 Thread Yaron Daniel
Hi

Just to verify - there is no firewalld running, and SELinux is not enforcing?


 
Regards
 


 
 
Yaron Daniel
Storage Architect
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   Bryan Banister <bbanis...@jumptrading.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   05/08/2018 11:51 PM
Subject:Re: [gpfsug-discuss] Node list error
Sent by:gpfsug-discuss-boun...@spectrumscale.org



What does `mmlsnodeclass -N ` give you?
-B
 
From: gpfsug-discuss-boun...@spectrumscale.org 
[mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Buterbaugh, 
Kevin L
Sent: Tuesday, May 08, 2018 1:24 PM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: [gpfsug-discuss] Node list error
 
Note: External Email

Hi All, 
 
I can open a PMR for this if necessary, but does anyone know offhand what 
the following messages mean:
 
2018-05-08_12:16:39.567-0500: [I] Calling user exit script 
mmNodeRoleChange: event ccrFileChange, Async command 
/usr/lpp/mmfs/bin/mmsysmonc.
2018-05-08_12:16:39.719-0500: [I] Calling user exit script GUI_CCR_CHANGE: 
event ccrFileChange, Async command 
/usr/lpp/mmfs/gui/callbacks/global/ccrChangedCallback_421.sh.
2018-05-08_12:16:46.325-0500: [E] Node list error. Can not find all nodes 
in list 
1,1415,1515,1517,1519,1569,1571,1572,1573,1574,1575,1576,1577,1578,1579,1580,1581,1582,1583,1584,1585,1586,1587,1588,1589,1590,1591,1592,1783,1784,1786,1787,1788,1789,1793,1794,1795,1796,1797,1798,1799,1800,1801,1802,1803,1804,1805,1806,1807,1808,1809,1810,1812,1813,1815,1816,1817,1818,1819,1820,1821,1822,1823,1824,1825,1826,1827,1828,1829,1830,1831,1832,1833,1834,1835,1836,1837,1838,1839,1840,1841,1842,1843,1844,1888,1889,1908,1909,1910,1911,1912,1913,1914,1915,1916,1917,1918,1919,1920,1921,1922,1923,1924,1925,1926,1927,1928,1929,1930,1931,1932,1933,1934,1935,1936,1937,1938,1939,1940,1941,1942,1943,1966,2,2223,2235,2399,2400,2401,2402,2403,2404,2405,2407,2408,2409,2410,2411,2413,2414,2415,2416,2418,2419,2420,2421,2423,2424,2425,2426,2427,2428,2429,2430,2432,2436,2437,2438,2439,2440,2441,2442,2443,2444,2445,2446,2447,2448,2449,2450,2451,2452,2453,2454,2455,2456,2457,2458,2459,2460,2461,2462,2463,2464,2465,2466,2467,2468,2469,2470,2471,2472,2473,2474,2475,2476,2477,2478,2479,2480,2481,2482,2483,2520,2521,2522,2523,2524,2525,2526,2527,2528,2529,2530,2531,2532,2533,2534,2535,2536,2537,2538,2539,2540,2541,2542,2543,2544,2545,2546,2547,2548,2549,2550,2551,2552,2553,2554,2555,2556,2557,2558,2559,2560,2561,2562,2563,2564,2565,2566,2567,2568,2569,2570,2571,2572,2573,2574,2575,2604,2605,2607,2608,2609,2611,2612,2613,2614,2615,2616,2617,2618,2619,2620,2621,2622,2623,2624,2625,2626,2627,2628,2629,2630,2631,2632,2634,2635,2636,2637,2638,2640,2641,2642,2643,2650,2651,2652,2653,2654,2656,2657,2658,2660,2661,2662,2663,2664,2665,2666,2667,2668,2669,2670,2671,2672,2673,2674,2675,2676,2677,2679,2680,2681,2682,2683,2684,2685,2686,2687,2688,2689,2690,2691,2692,2693,2694,2695,2696,2697,2698,2699,2700,2702,2703,2704,2705,2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2723,2724,2725,2726,2727,2728,2729,2730,2740,2741,2742,2743,2744,2745,2746,2754,2796,2797,2799,2800,2801,2802,2804,2805,2807,2808,2809,2812,2814,2815,2816,2817,2818,2819,2820,2823,
2018-05-08_12:16:46.340-0500: [E] Read Callback err 2. No user exit event 
is registered
 
This is GPFS 4.2.3-8.  We have not done any addition or deletion of nodes 
and have not had a bunch of nodes go offline, either.  Thanks…
 
Kevin
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and 
Education
kevin.buterba...@vanderbilt.edu - (615)875-9633
 
 
 


Note: This email is for the confidential use of the named addressee(s) 
only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any 
review, dissemination or copying of this email is strictly prohibited, and 
to please notify the sender immediately and destroy this email and any 
attachments. Email transmission cannot be guaranteed to be secure or 
error-free. The Company, therefore, does not make any guarantees as to the 
completeness or accuracy of this email or any attachments. This email is 
for informational purposes only and does not constitute a recommendation, 
offer, request or solicitation of any kind to buy, sell, subscribe, redeem 
or perform any type of transaction of a financial product.
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://

Re: [gpfsug-discuss] CES NFS export

2018-05-06 Thread Yaron Daniel
Hi

If you want to use NFSv3, define only NFSv3 on the export.
In case you work with NFSv4, you should have "DOMAIN\user" consistently all 
the way through - this way you will not get any user mismatch errors, or 
permissions showing up as nobody.
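
A minimal sketch of lining up the NFSv4 ID-mapping domain on both ends (the 
domain name is a placeholder; the CES-side attribute appears under the 
"Idmapd Configuration" section of 'mmnfs config list' shown below, and the 
exact 'mmnfs config change' attribute name is an assumption to verify for 
your release):

   # On the CES/protocol nodes:
   #   mmnfs config change IDMAPD_DOMAIN=virtual1.com

   # On each NFSv4 client, /etc/idmapd.conf must carry the same domain:
   #   [General]
   #   Domain = virtual1.com
   # then restart the mapping service, e.g.: systemctl restart nfs-idmapd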


 
Regards
 


 
 
Yaron Daniel
Storage Architect
IBM Global Markets, Systems HW Sales
94 Em Ha'Moshavot Rd, Petach Tiqva, 49527, Israel
Phone: +972-3-916-5672 | Fax: +972-3-916-5672 | Mobile: +972-52-8395593
e-mail: y...@il.ibm.com



From:   Jagga Soorma <jagg...@gmail.com>
To: gpfsug-discuss@spectrumscale.org
Date:   05/07/2018 06:05 AM
Subject:Re: [gpfsug-discuss] CES NFS export
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Looks like this is due to nfs v4 and idmapd domain not being
configured correctly.  I am going to test further and reach out if
more assistance is needed.

Thanks!

On Sun, May 6, 2018 at 6:35 PM, Jagga Soorma <jagg...@gmail.com> wrote:
> Hi Guys,
>
> We are new to GPFS and have a few clients that will be mounting GPFS
> via nfs.  We have configured the exports but all user/group
> permissions are showing up as nobody.  The gateway/protocol nodes can
> query the uid/gid's via centrify without any issues as well as the
> clients and the perms look good on a client that natively accesses the
> gpfs filesystem.  Is there some specific config that we might be
> missing?
>
> --
> # mmnfs export list --nfsdefs /gpfs/datafs1
> Path  Delegations Clients
> Access_Type Protocols Transports Squash Anonymous_uid
> Anonymous_gid SecType PrivilegedPort DefaultDelegations Manage_Gids
> NFS_Commit
> 
---
> /gpfs/datafs1 NONE{nodenames} RW  3,4   TCP
> ROOT_SQUASH-2-2SYS FALSE  NONE
>   TRUEFALSE
> /gpfs/datafs1 NONE{nodenames}   RW  3,4
> TCPNO_ROOT_SQUASH -2-2SYS FALSE
>   NONE   TRUEFALSE
> /gpfs/datafs1 NONE   {nodenames}  RW  3,4   TCP
> ROOT_SQUASH-2-2SYS FALSE
> NONE   TRUEFALSE
> --
>
> On the nfs clients I see this though:
>
> --
> # ls -l
> total 0
> drwxrwxr-t 3 nobody nobody 4096 Mar 20 09:19 dir1
> drwxr-xr-x 4 nobody nobody 4096 Feb  9 17:57 dir2
> --
>
> Here is our mmnfs config:
>
> --
> # mmnfs config list
>
> NFS Ganesha Configuration:
> ==
> NFS_PROTOCOLS: 3,4
> NFS_PORT: 2049
> MNT_PORT: 0
> NLM_PORT: 0
> RQUOTA_PORT: 0
> NB_WORKER: 256
> LEASE_LIFETIME: 60
> DOMAINNAME: VIRTUAL1.COM
> DELEGATIONS: Disabled
> ==
>
> STATD Configuration
> ==
> STATD_PORT: 0
> ==
>
> CacheInode Configuration
> ==
> ENTRIES_HWMARK: 150
> ==
>
> Export Defaults
> ==
> ACCESS_TYPE: NONE
> PROTOCOLS: 3,4
> TRANSPORTS: TCP
> ANONYMOUS_UID: -2
> ANONYMOUS_GID: -2
> SECTYPE: SYS
> PRIVILEGEDPORT: FALSE
> MANAGE_GIDS: TRUE
> SQUASH: ROOT_SQUASH
> NFS_COMMIT: FALSE
> ==
>
> Log Configuration
> ==
> LOG_LEVEL: EVENT
> ==
>
> Idmapd Configuration
> ==
> LOCAL-REALMS: LOCALDOMAIN
> DOMAIN: LOCALDOMAIN
> ==
> --
>
> Thanks!
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss






___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Experiences with Synchronous Replication, Stretch Clusters, and Spectrum Scale <= 4.2

2018-04-07 Thread Yaron Daniel
HI

We have a few customers that have 2 sites (Active/Active using SS 
replication) + a 3rd site as a quorum tie-breaker node.

1) Spectrum Scale 4.2.3.x
2) Lenovo x3650-M4, connected via FC to SVC (Flash900 as external storage)
3) We run all tests before delivering the system to customer production.

Main items to take into account (see the stanza sketch below):
1) What is the latency you have between the 2 main sites?
2) What network bandwidth is there between the 2 sites?
3) What is the latency to the 3rd site from each site?
4) Which protocols are planned to be used? Do you have layer 2 between the 2 
sites, or layer 3?
5) Do you plan to use a dedicated network for the GPFS daemon?
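
As a minimal sketch of how such a layout is usually expressed (all NSD, 
device and server names below are invented for illustration):

   # NSD stanzas (mmcrnsd input): one failure group per production site,
   # plus a descOnly tiebreaker disk at the 3rd site
   %nsd: nsd=site1_d01 device=/dev/mapper/lun01 servers=s1nsd1,s1nsd2 usage=dataAndMetadata failureGroup=1
   %nsd: nsd=site2_d01 device=/dev/mapper/lun11 servers=s2nsd1,s2nsd2 usage=dataAndMetadata failureGroup=2
   %nsd: nsd=site3_tb  device=/dev/sdb          servers=s3quorum      usage=descOnly        failureGroup=3

   # file system keeping one data and one metadata copy per site
   mmcrfs fs1 -F nsd.stanza -m 2 -M 2 -r 2 -R 2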
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Storage Architect
 Petach Tiqva, 49527
IBM Global Markets, Systems HW Sales
 Israel
 
 
 
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

   



From:   "Howard, Stewart Jameson" <sjhow...@iu.edu>
To: "gpfsug-discuss@spectrumscale.org" 
<gpfsug-discuss@spectrumscale.org>
Date:   04/06/2018 06:24 PM
Subject:[gpfsug-discuss] Experiences with Synchronous Replication, 
Stretch Clusters, and Spectrum Scale <= 4.2
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi All,



I was wondering what experiences the user group has had with stretch
4.x clusters.  Specifically, we're interested in:

1)  What SS version are you running?

2)  What hardware are you running it on?

3)  What has been your experience with testing of site-failover
scenarios (e.g., full power loss at one site, interruption of the
inter-site link)?

Thanks so much for your help!

Stewart
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwIGaQ=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=yYIveWTR3gNyhJ9KsrodpWApBlpQ29Oi858MuE0Nzsw=V42UYnHtEYVK3LvH6i930tzte1qp0sWmiY6Pp1Ep3kg=






___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Underlying LUN mirroring NSD impact

2018-03-16 Thread Yaron Daniel
Hi

You can have few options:

1) Active/Active GPFS sites - with synchronous replication of the storage - 
take into account the latency you have.
2) Active/Standby GPFS sites - with asynchronous replication of the storage.

All info can be found at :

https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.1/com.ibm.spectrum.scale.v4r21.doc/bl1adv_continous_replication_SSdata.htm
Synchronous mirroring with GPFS replication
In a configuration utilizing GPFS® replication, a single GPFS cluster is 
defined over three geographically-separate sites consisting of two 
production sites and a third tiebreaker site. One or more file systems are 
created, mounted, and accessed concurrently from the two active production 
sites. 
Synchronous mirroring utilizing storage based replication
This topic describes synchronous mirroring utilizing storage-based 
replication.
Point In Time Copy of IBM Spectrum Scale data
Most storage systems provides functionality to make a point-in-time copy 
of data as an online backup mechanism. This function provides an 
instantaneous copy of the original data on the target disk, while the 
actual copy of data takes place asynchronously and is fully transparent to 
the user. 
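
For the storage-based (active/standby) variant, a very rough outline of the 
moving parts - treat it as a sketch to be checked against the Knowledge 
Center procedure, not a runbook (file system and node-file names are 
invented):

   # production cluster: keep the file system definition in sync with the
   # recovery cluster whenever it changes
   mmfsctl fs1 syncFSconfig -n recovery.nodes

   # for a planned switch, quiesce writes so the mirror is consistent
   mmfsctl fs1 suspend-write
   #   ... wait for the storage replication to drain ...
   mmfsctl fs1 resume

   # recovery cluster: once the mirrored LUNs are promoted to read/write,
   # rediscover the NSDs and mount
   mmnsddiscover -a
   mmmount fs1 -a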


 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Storage Architect
 Petach Tiqva, 49527
IBM Global Markets, Systems HW Sales
 Israel
 
 
 
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

   



From:   Mark Bush <mark.b...@siriuscom.com>
To: "gpfsug-discuss@spectrumscale.org" 
<gpfsug-discuss@spectrumscale.org>
Date:   03/14/2018 10:10 PM
Subject:[gpfsug-discuss] Underlying LUN mirroring NSD impact
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Is it possible (albeit not advisable) to mirror LUNs that are NSDs to 
another storage array in another site basically for DR purposes?  Once 
it's mirrored to a new cluster elsewhere what would be the step to get the 
filesystem back up and running.  I know that AFM-DR is meant for this but 
in this case my client only has Standard edition and has mirroring 
software purchased with the underlying disk array.
 
Is this even doable? 
 
 
Mark
 
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=c9HNr6pLit8n4hQKpcYyyRg9ZnITpo_2OiEx6hbukYA=qFgC1ebi1SJvnCRlc92cI4hZqZYpK7EneZ0Sati5s5E=





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] hdisk suspend / stop (Buterbaugh, Kevin L)

2018-02-09 Thread Yaron Daniel
Hi

Just make sure you have a backup, just in case ...


 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Storage architect
 Petach Tiqva, 49527
IBM Global Markets, Systems HW Sales
 Israel
 
 
 
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

   



From:   "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   02/08/2018 09:49 PM
Subject:Re: [gpfsug-discuss] hdisk suspend / stop (Buterbaugh, 
Kevin L)
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi again all, 

It sounds like doing the "mmchconfig unmountOnDiskFail=meta -i" suggested 
by Steve and Bob followed by using mmchdisk to stop the disks temporarily 
is the way we need to go.  We will, as an aside, also run a mmapplypolicy 
first to pull any files users have started accessing again back to the 
"regular" pool before doing any of this.

Given that this is our "capacity" pool and files have to have an atime > 
90 days to get migrated there in the 1st place I think this is reasonable. 
 Especially since users will get an I/O error if they happen to try to 
access one of those NSDs during the brief maintenance window.

As to naming and shaming the vendor - I'm not going to do that at this 
point in time.  We've been using their stuff for well over a decade at 
this point and have had a generally positive experience with them.  In 
fact, I have spoken with them via phone since my original post today and 
they have clarified that the problem with the mismatched firmware is only 
an issue because we are a major version off of what is current due to us 
choosing to not have a downtime and therefore not having done any firmware 
upgrades in well over 18 months.

Thanks, all...

Kevin

On Feb 8, 2018, at 11:17 AM, Steve Xiao <sx...@us.ibm.com> wrote:

You can change the cluster configuration to online unmount the file system 
when there is error accessing metadata.   This can be done run the 
following command:
   mmchconfig unmountOnDiskFail=meta -i 

After this configuration change, you should be able to stop all 5 NSDs 
with the mmchdisk stop command.  While these NSDs are in the down state, any 
user I/O to files residing on these disks will fail, but your file system 
should stay mounted and usable.

Steve Y. Xiao
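
Pulling the suggestions in this thread together, a hedged outline of the 
maintenance window (the file system name, policy file and NSD names are 
placeholders - check every step against the man pages before using it):

   # 1) move any recently re-accessed files off the capacity pool
   mmapplypolicy gpfs0 -P premigrate.pol -N managerNodes

   # 2) keep the file system mounted if only those disks go down
   mmchconfig unmountOnDiskFail=meta -i

   # 3) stop the five NSDs on the affected array
   mmchdisk gpfs0 stop -d "nsd31;nsd32;nsd33;nsd34;nsd35"

   #    ... controller replacement / firmware upgrade ...

   # 4) bring the disks back and verify their state
   mmchdisk gpfs0 start -d "nsd31;nsd32;nsd33;nsd34;nsd35"
   mmlsdisk gpfs0 -e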

> Date: Thu, 8 Feb 2018 15:59:44 +
> From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Subject: [gpfsug-discuss] mmchdisk suspend / stop
> Message-ID: <8dca682d-9850-4c03-8930-ea6c68b41...@vanderbilt.edu>
> Content-Type: text/plain; charset="utf-8"
> 
> Hi All,
> 
> We are in a bit of a difficult situation right now with one of our 
> non-IBM hardware vendors (I know, I know, I KNOW - buy IBM hardware!
> ) and are looking for some advice on how to deal with this 
> unfortunate situation.
> 
> We have a non-IBM FC storage array with dual-"redundant" 
> controllers.  One of those controllers is dead and the vendor is 
> sending us a replacement.  However, the replacement controller will 
> have mis-matched firmware with the surviving controller and - long 
> story short - the vendor says there is no way to resolve that 
> without taking the storage array down for firmware upgrades. 
> Needless to say there's more to that story than what I've included 
> here, but I won't bore everyone with unnecessary details.
> 
> The storage array has 5 NSDs on it, but fortunately enough they are 
> part of our "capacity" pool - i.e. the only way a file lands here is
> if an mmapplypolicy scan moved it there because the *access* time is
> greater than 90 days.  Filesystem data replication is set to one.
> 
> So - what I was wondering if I could do is to use mmchdisk to either
> suspend or (preferably) stop those NSDs, do the firmware upgrade, 
> and resume the NSDs?  The problem I see is that suspend doesn't stop
> I/O, it only prevents the allocation of new blocks - so, in theory, 
> if a user suddenly decided to start using a file they hadn't needed 
> for 3 months then I've got a problem.  Stopping all I/O to the disks
> is what I really want to do.  However, according to the mmchdisk man
> page stop cannot be used on a filesystem with replication set to one.
> 
> There's over 250 TB of data on those 5 NSDs, so restriping off of 
> them or setting replication to two are not options.
> 
> It is very unlikely that anyone would try to access a file on those 
> NSDs during the hour or so I'd need to do the firmware upgrades, but
> how would GPFS itself react to those (suspended) disks going away 
> for a while?  I'm thinking I could be OK if there was just a way to 
> actually stop them 

Re: [gpfsug-discuss] storage-based replication for Spectrum Scale

2018-01-25 Thread Yaron Daniel
Hi

You can do remote mount between GPFS clusters.
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.0/com.ibm.spectrum.scale.v5r00.doc/bl1adv_admmcch.htm

Where is your daemon communication network? IP, or IP over IB?
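
A hedged outline of the cross-cluster (remote mount) setup that link 
describes - cluster names, contact nodes and file system names here are all 
placeholders:

   # on the cluster that owns the file system
   mmauth genkey new
   mmauth update . -l AUTHONLY
   mmauth add access.example.com -k access_id_rsa.pub
   mmauth grant access.example.com -f fs1 -a rw

   # on the cluster that wants to mount it
   mmauth genkey new
   mmremotecluster add owner.example.com -n owner1,owner2 -k owner_id_rsa.pub
   mmremotefs add rfs1 -f fs1 -C owner.example.com -T /gpfs/rfs1
   mmmount rfs1 -a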


 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
 
 
 
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

   



From:   John Hearns <john.hea...@asml.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   01/25/2018 11:53 AM
Subject:Re: [gpfsug-discuss] storage-based replication for 
Spectrum Scale
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Jan Frode, thankyou for that link.
 
I have a general question regarding remote GPFS filesystems.
If we have two clusters, in separate locations on separate Infiniband 
fabrics,
we can set up a remote relationship between filesystems.
 
As Valdis discusses, what happens if the IP link between the clusters goes 
down or is unstable?
Can nodes in one cluster vote out nodes in the other cluster?
 
 
 
From: gpfsug-discuss-boun...@spectrumscale.org [
mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Jan-Frode 
Myklebust
Sent: Wednesday, January 24, 2018 8:08 AM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] storage-based replication for Spectrum Scale
 
 
Have you seen 
https://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adv.doc/bl1adv_dr.htm
 
? Seems to cover what you?re looking for..
 
 
  -jf
 
ons. 24. jan. 2018 kl. 07:33 skrev Harold Morales <hmora...@optimizeit.co
>:
Thanks for answering.
 
Essentially, the idea being explored is to replicate LUNs between 
identical storage hardware (HP 3PAR volumes) on both sites. There is 
an IP connection between the storage boxes but not between the servers on 
both sites; there is a dark fiber connecting both sites. Here they don't 
want to explore the idea of a Scale-based replication.
 
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
-- The information contained in this communication and any attachments is 
confidential and may be privileged, and is for the sole use of the 
intended recipient(s). Any unauthorized review, use, disclosure or 
distribution is prohibited. Unless explicitly stated otherwise in the body 
of this communication or the attachment thereto (if any), the information 
is provided on an AS-IS basis without any express or implied warranties or 
liabilities. To the extent you are relying on this information, you are 
doing so at your own risk. If you are not the intended recipient, please 
notify the sender immediately by replying to this message and destroy all 
copies of this message and any attachments. Neither the sender nor the 
company/group of companies he or she represents shall be liable for the 
proper and complete transmission of the information contained in this 
communication, or for any delay in its receipt. 
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=BMyrARi4hlfjG-EugDznaiyWSqErGF5FyFpQQ-o4ScU=R0w70yvIjZaXpcs4P2mGJNQYBSNlUp3aZcCNks-sveU=





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] mmrestripefs "No space left on device"

2017-11-02 Thread Yaron Daniel
Hi

Please check the mmdf output to see that the metadata disks are not full, or 
whether you have an inode issue.

In case you have independent filesets, please run: mmlsfileset <Device> -L 
-i to get the inode status of each fileset.
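
For example (using the file system name from the message below; the options 
come from the mmdf/mmlsfileset man pages - verify them on your release):

   mmdf gsfs0 -m             # metadata disk usage only
   mmdf gsfs0 -F             # file system inode counts
   mmlsfileset gsfs0 -L -i   # per-fileset inode allocation and usage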

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
 
 
 
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 





From:   John Hanks <griz...@gmail.com>
To: gpfsug <gpfsug-discuss@spectrumscale.org>
Date:   11/02/2017 12:54 AM
Subject:[gpfsug-discuss] mmrestripefs "No space left on device"
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi all,

I'm trying to do a restripe after setting some nsds to metadataOnly and I 
keep running into this error:

Scanning user file metadata ...
   0.01 % complete on Wed Nov  1 15:36:01 2017  ( 40960 inodes with 
total 531689 MB data processed)
Error processing user file metadata. 
Check file '/var/mmfs/tmp/gsfs0.pit.interestingInodes.12888779708' on 
scg-gs0 for inodes with broken disk addresses or failures.
mmrestripefs: Command failed. Examine previous error messages to determine 
cause.

The file it points to says:

This inode list was generated in the Parallel Inode Traverse on Wed Nov  1 
15:36:06 2017
INODE_NUMBER DUMMY_INFO SNAPSHOT_ID ISGLOBAL_SNAPSHOT INDEPENDENT_FSETID 
MEMO(INODE_FLAGS FILE_TYPE [ERROR])
 535040:00   1 0  
illreplicated REGULAR_FILE RESERVED Error: 28 No space left on device


/var on the node I am running this on has > 128 GB free, all the NSDs have 
plenty of free space, the filesystem being restriped has plenty of free 
space and if I watch the node while running this no filesystem on it even 
starts to get full. Could someone tell me where mmrestripefs is attempting 
to write and/or how to point it at a different location?

Thanks,

jbh___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=Bn1XE9uK2a9CZQ8qKnJE3Q=WTfQpWOsmp-BdHZ0PWDbaYsxq-5Q1ZH26IyfrBRe3_c=SJg8NrUXWEpaxDhqECkwkbJ71jtxjLZz5jX7FxmYMBk=





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] mmapplypolicy didn't migrate everything it should have - why not?

2017-04-19 Thread Yaron Daniel
Hi

Maybe the temporary file lists are filling up the file system they are built on.

Try to monitor the file system where the temporary file lists are created.
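
If that turns out to be the case, a hedged example of pointing those work 
files somewhere roomier (paths and the node list are placeholders; the 
option meanings are from the mmapplypolicy man page):

   # -s : node-local work directory for temporary files (default /tmp)
   # -g : shared global work directory used by all helper nodes
   mmapplypolicy fs1 -P list.pol -N helperNodes \
       -s /scratch/policytmp -g /gpfs/fs1/.policytmp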

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   Bryan Banister <bbanis...@jumptrading.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   04/19/2017 07:19 PM
Subject:Re: [gpfsug-discuss] mmapplypolicy didn't migrate 
everything it shouldhave - why not?
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hey Marc,
 
I'm having some issues where a simple ILM list policy never completes, but 
I have yet to open a PMR or enable additional logging.  But I was 
wondering if there are known reasons that this would not complete, such as 
when there is a symbolic link that creates a loop within the directory 
structure or something simple like that.
 
Do you know of any cases like this, Marc, that I should try to find in my 
file systems?
 
Thanks in advance!
-Bryan
 
From: gpfsug-discuss-boun...@spectrumscale.org [
mailto:gpfsug-discuss-boun...@spectrumscale.org] On Behalf Of Marc A 
Kaplan
Sent: Wednesday, April 19, 2017 9:37 AM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] mmapplypolicy didn't migrate everything it 
should have - why not?
 
Well  I'm glad we followed Mr. S. Holmes dictum which I'll paraphrase... 
eliminate the impossible and what remains, even if it seems improbable, 
must hold.

BTW - you may want to look at  mmclone.  Personally, I find the doc and 
terminology confusing, but mmclone was designed to efficiently store 
copies and near-copies of large (virtual machine) images.  Uses 
copy-on-write strategy, similar to GPFS snapshots, but at a file by file 
granularity.

BBTW - we fixed directories - they can now be huge (up to about 2^30 
files) and automagically, efficiently grow and shrink in size.  Also small 
directories can be stored efficiently in the inode.  The last major 
improvement was just a few years ago.  Before that they could be huge, but 
would never shrink. 



Note: This email is for the confidential use of the named addressee(s) 
only and may contain proprietary, confidential or privileged information. 
If you are not the intended recipient, you are hereby notified that any 
review, dissemination or copying of this email is strictly prohibited, and 
to please notify the sender immediately and destroy this email and any 
attachments. Email transmission cannot be guaranteed to be secure or 
error-free. The Company, therefore, does not make any guarantees as to the 
completeness or accuracy of this email or any attachments. This email is 
for informational purposes only and does not constitute a recommendation, 
offer, request or solicitation of any kind to buy, sell, subscribe, redeem 
or perform any type of transaction of a financial product.
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] SMB and AD authentication

2017-02-27 Thread Yaron Daniel
Hi

Can you show the share config plus an ls -l on the share fileset/directory 
from the protocol nodes?
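
For example, something along these lines gathered on one of the protocol 
nodes (the share path and user below are only placeholders):

   mmsmb export list
   mmuserauth service check --data-access-method file
   id 'SIRIUS\mark.bush'              # does winbind resolve the user at all?
   ls -ld /gpfs/<share fileset path>
   mmgetacl /gpfs/<share fileset path>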

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   "mark.b...@siriuscom.com" <mark.b...@siriuscom.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   02/27/2017 09:41 PM
Subject:[gpfsug-discuss] SMB and AD authentication
Sent by:gpfsug-discuss-boun...@spectrumscale.org



For some reason, I just can't seem to get this to work.  I have configured 
my protocol nodes to authenticate to AD using the following 
 
mmuserauth service create --type ad --data-access-method file --servers 
192.168.88.3 --user-name administrator --netbios-name scale --idmap-role 
master --password * --idmap-range-size 100 --idmap-range 
1000-2 --enable-nfs-kerberos --unixmap-domains 
'sirius(1-2)'
 
 
All goes well, I see the nodes in AD and all of the wbinfo commands show 
good (id Sirius\\administrator doesn't work though), but when I try to 
mount an SMB share (after doing all the necessary mmsmb export stuff) I 
get permission denied.  I'm curious if I missed a step (followed the docs 
pretty much to the letter).  I'm trying Administrator, mark.bush, and a 
dummy aduser I created.  None seem to gain access to the share. 
 
Protocol gurus help!  Any ideas are appreciated.
 
 

Mark R. Bush| Storage Architect
Mobile: 210-237-8415 
Twitter: @bushmr | LinkedIn: /markreedbush
10100 Reunion Place, Suite 500, San Antonio, TX 78216
www.siriuscom.com |mark.b...@siriuscom.com 
 
This message (including any attachments) is intended only for the use of 
the individual or entity to which it is addressed and may contain 
information that is non-public, proprietary, privileged, confidential, and 
exempt from disclosure under applicable law. If you are not the intended 
recipient, you are hereby notified that any use, dissemination, 
distribution, or copying of this communication is strictly prohibited. 
This message may be viewed by parties at Sirius Computer Solutions other 
than those named in the message header. This message does not contain an 
official representation of Sirius Computer Solutions. If you have received 
this communication in error, notify Sirius Computer Solutions immediately 
and (i) destroy this message if a facsimile or (ii) delete this message 
immediately if this is an electronic communication. Thank you. 
Sirius Computer Solutions ___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] 200 filesets and AFM

2017-02-20 Thread Yaron Daniel
Hi

Split the rsync up at the directory level so you can run parallel rsync 
sessions - this way you maximize the network usage.
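
A hedged sketch of that idea (the paths and the parallelism level are made 
up; test on a small tree first):

   # one rsync per top-level directory, eight at a time
   cd /gpfs/oldfs
   ls -d -- */ | xargs -P 8 -I{} rsync -aHAX --numeric-ids "{}" /gpfs/newfs/"{}"

Note that -A/-X only carry POSIX ACLs and xattrs, and top-level plain files 
would still need one extra rsync pass.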

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   "mark.b...@siriuscom.com" <mark.b...@siriuscom.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   02/20/2017 06:54 PM
Subject:Re: [gpfsug-discuss] 200 filesets and AFM
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Regular rsync apparently takes one week to sync up.  I'm just the 
messenger getting more info from my client soon. 
 
From: <gpfsug-discuss-boun...@spectrumscale.org> on behalf of Yaron Daniel 
<y...@il.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: Monday, February 20, 2017 at 10:05 AM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] 200 filesets and AFM
 
Hi

Which protocols used to access data ? GPFS + NFS ?
If yes, you  can use standard rsync.

  
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services- Team Leader 
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 
 
 



From:"mark.b...@siriuscom.com" <mark.b...@siriuscom.com>
To:gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:02/20/2017 05:56 PM
Subject:Re: [gpfsug-discuss] 200 filesets and AFM
Sent by:gpfsug-discuss-boun...@spectrumscale.org




Not sure.  It's a 3.5 based cluster currently.
 
From: <gpfsug-discuss-boun...@spectrumscale.org> on behalf of Yaron Daniel 
<y...@il.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: Monday, February 20, 2017 at 9:47 AM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] 200 filesets and AFM
 
Hi

Which ACLs you have in your FS ? 

Do u have NFSv4 Acls - which use NFS + Windows Acls ?

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services- Team Leader 
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 
 
 



From:"Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>
To:gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:02/20/2017 05:41 PM
Subject:Re: [gpfsug-discuss] 200 filesets and AFM
Sent by:gpfsug-discuss-boun...@spectrumscale.org





Hi Mark, 

Are you referring to this?

http://www.spectrumscale.org/pipermail/gpfsug-discuss/2012-October/000169.html


It's not magical, but it's pretty good!  ;-)  Seriously, we use it any 
time we want to move stuff around in our GPFS filesystems.

Kevin

On Feb 20, 2017, at 9:35 AM, Mark.Bush@siriuscom.com wrote:

I have a client that has around 200 filesets (must be a good reason for 
it) and they need to migrate data but it's really looking like this might 
bring AFM to its knees.  At one point, I had heard of some magical version 
of RSYNC that IBM developed that could do something like this.  Anyone 
have any details on such a tool and is it available.  Or is there some 
other way I might do this?




Mark R. Bush| Storage Architect
Mobile: 210-237-8415 
Twitter: @bushmr| LinkedIn: /markreedbush
10100 Reunion Place, Suite 500, San Antonio, TX 78216
www.siriuscom.com|mark.b...@siriuscom.com


--
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and 
Education
kevin.buterba...@vanderbilt.edu- (615)875-9633


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


This message (including any attachments) is intended only for the use of 
the individual or entity to which it is addressed and may contain 
information that is non-public, proprietary, privileged, confidential, and 
exempt from disclosure under applicable law. If you are not the intended 
recipient, you are hereby notified that any use, dissemination, 
distribution, or copying of this communication is strictly prohibited. 
This message may be viewed by parties at Sirius Computer Solutions other 
than those named in the message header. This message does not contain an 
official representation of Sirius Computer Solutions. If you have received 
this communication in error, notify Sirius Computer Solutions immediately 
and (i) destroy this

Re: [gpfsug-discuss] 200 filesets and AFM

2017-02-20 Thread Yaron Daniel
Hi

Which ACLs you have in your FS ? 

Do u have NFSv4 Acls - which use NFS + Windows Acls ?

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   02/20/2017 05:41 PM
Subject:Re: [gpfsug-discuss] 200 filesets and AFM
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi Mark, 

Are you referring to this?

http://www.spectrumscale.org/pipermail/gpfsug-discuss/2012-October/000169.html

It's not magical, but it's pretty good!  ;-)  Seriously, we use it any 
time we want to move stuff around in our GPFS filesystems.

Kevin

On Feb 20, 2017, at 9:35 AM, mark.b...@siriuscom.com wrote:

I have a client that has around 200 filesets (must be a good reason for 
it) and they need to migrate data but it's really looking like this might 
bring AFM to its knees.  At one point, I had heard of some magical version 
of RSYNC that IBM developed that could do something like this.  Anyone 
have any details on such a tool and is it available.  Or is there some 
other way I might do this?
 
 
 

Mark R. Bush| Storage Architect
Mobile: 210-237-8415 
Twitter: @bushmr | LinkedIn: /markreedbush
10100 Reunion Place, Suite 500, San Antonio, TX 78216
www.siriuscom.com |mark.b...@siriuscom.com 
 

--
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and 
Education
kevin.buterba...@vanderbilt.edu - (615)875-9633


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] replication and no failure groups

2017-01-09 Thread Yaron Daniel
Hi

So - are you able to have GPFS replication for the metadata failure groups?

I can see that you have 3 failure groups for data (-1, 2012, 2034); how many 
storage subsystems do you have?


 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   "J. Eric Wonderley" <eric.wonder...@vt.edu>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   01/09/2017 10:48 PM
Subject:Re: [gpfsug-discuss] replication and no failure groups
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi Yaron:

This is the filesystem:

[root@cl005 net]# mmlsdisk work
disk driver   sector failure holds
holdsstorage
name type   size   group metadata data  status
availability pool
  -- ---  - - 
 
nsd_a_7  nsd 512  -1 No   Yes   ready 
up   system   
nsd_b_7  nsd 512  -1 No   Yes   ready 
up   system   
nsd_c_7  nsd 512  -1 No   Yes   ready 
up   system   
nsd_d_7  nsd 512  -1 No   Yes   ready 
up   system   
nsd_a_8  nsd 512  -1 No   Yes   ready 
up   system   
nsd_b_8  nsd 512  -1 No   Yes   ready 
up   system   
nsd_c_8  nsd 512  -1 No   Yes   ready 
up   system   
nsd_d_8  nsd 512  -1 No   Yes   ready 
up   system   
nsd_a_9  nsd 512  -1 No   Yes   ready 
up   system   
nsd_b_9  nsd 512  -1 No   Yes   ready 
up   system   
nsd_c_9  nsd 512  -1 No   Yes   ready 
up   system   
nsd_d_9  nsd 512  -1 No   Yes   ready 
up   system   
nsd_a_10 nsd 512  -1 No   Yes   ready 
up   system   
nsd_b_10 nsd 512  -1 No   Yes   ready 
up   system   
nsd_c_10 nsd 512  -1 No   Yes   ready 
up   system   
nsd_d_10 nsd 512  -1 No   Yes   ready 
up   system   
nsd_a_11 nsd 512  -1 No   Yes   ready 
up   system   
nsd_b_11 nsd 512  -1 No   Yes   ready 
up   system   
nsd_c_11 nsd 512  -1 No   Yes   ready 
up   system   
nsd_d_11 nsd 512  -1 No   Yes   ready 
up   system   
nsd_a_12 nsd 512  -1 No   Yes   ready 
up   system   
nsd_b_12 nsd 512  -1 No   Yes   ready 
up   system   
nsd_c_12 nsd 512  -1 No   Yes   ready 
up   system   
nsd_d_12 nsd 512  -1 No   Yes   ready 
up   system   
work_md_pf1_1 nsd 512 200 Yes  Noready 
up   system   
jbf1z1   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf2z1   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf3z1   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf4z1   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf5z1   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf6z1   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf7z1   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf8z1   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf1z2   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf2z2   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf3z2   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf4z2   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf5z2   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf6z2   nsd40962012 No   Yes   ready 
up   sas_ssd4T
jbf7z2   nsd40962012 No   Y

Re: [gpfsug-discuss] replication and no failure groups

2017-01-09 Thread Yaron Daniel
Hi

1) Yes, in case you have only 1 failure group, replication will not work.

2) Do you have 2 storage systems?  When using GPFS replication, write 
performance stays the same - but read can be double, since it reads from 2 
storage systems.

Hope this helps - what are you trying to achieve? Can you share your 
environment setup? (A small sketch follows below.)
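
A hedged sketch of replicating just one pool, assuming the file system was 
created with a maximum data replication of 2 (-R 2); all NSD, device, 
server and pool names are invented:

   # give each storage subsystem its own failure group for that pool
   %nsd: nsd=array1_d1 device=/dev/dm-10 servers=nsd1,nsd2 usage=dataOnly pool=repl_pool failureGroup=101
   %nsd: nsd=array2_d1 device=/dev/dm-20 servers=nsd3,nsd4 usage=dataOnly pool=repl_pool failureGroup=102

   # then request two data copies only for files placed in that pool
   RULE 'repl2' SET POOL 'repl_pool' REPLICATE(2)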

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   Brian Marshall <mimar...@vt.edu>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   01/09/2017 10:17 PM
Subject:[gpfsug-discuss] replication and no failure groups
Sent by:gpfsug-discuss-boun...@spectrumscale.org



All,

If I have a filesystem with replication set to 2 and 1 failure group:

1) I assume replication won't actually happen, correct?

2) Will this impact performance i.e cut write performance in half even 
though it really only keeps 1 copy?

End goal - I would like a single storage pool within the filesystem to be 
replicated without affecting the performance of all other pools(which only 
have a single failure group)

Thanks,
Brian Marshall
VT - ARC___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] AFM Migration Issue

2017-01-09 Thread Yaron Daniel
Hi

Do you have NFSv4 ACLs?
Try asking IBM support for the SONAS rsync in order to migrate the 
data.

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   Jan-Frode Myklebust <janfr...@tanso.net>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   01/09/2017 05:30 PM
Subject:Re: [gpfsug-discuss] AFM Migration Issue
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Untested, and I have no idea if it will work on the number of files and 
directories you have, but maybe you can fix it by rsyncing just the 
directories?


rsync -av --dry-run --include='*/' --exclude='*' source/ destination/



-jf
man. 9. jan. 2017 kl. 16.09 skrev <paul.tomlin...@awe.co.uk>:
Hi All,

We have just completed the first data move from our old cluster to the new 
one using AFM Local Update as per the guide, however we have noticed that 
all date stamps on the directories have the date they were created on(e.g. 
9th Jan 2017) , not the date from the old system (e.g. 14th April 2007), 
whereas all the files have the correct dates.

Has anyone else seen this issue as we now have to convert all the 
directory dates to their original dates !




The information in this email and in any attachment(s) is
commercial in confidence. If you are not the named addressee(s)
or
if you receive this email in error then any distribution, copying or
use of this communication or the information in it is strictly
prohibited. Please notify us immediately by email at
admin.internet(at)awe.co.uk, and then delete this message from
your computer. While attachments are virus checked, AWE plc
does not accept any liability in respect of any virus which is not
detected.

AWE Plc
Registered in England and Wales
Registration No 02763902
AWE, Aldermaston, Reading, RG7 4PR

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Using AFM to migrate files. (Peter Childs) (Peter Childs) - URL encoding for pathnames

2016-10-24 Thread Yaron Daniel
Hi

Maybe it is also worth checking whether there are any orphan files in the NEW fs?

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   Loic Tortay <tor...@cc.in2p3.fr>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   10/24/2016 07:50 PM
Subject:Re: [gpfsug-discuss] Using AFM to migrate files. (Peter 
Childs) (Peter Childs) - URL encoding for pathnames
Sent by:gpfsug-discuss-boun...@spectrumscale.org



On 10/24/2016 11:44 AM, Venkateswara R Puvvada wrote:
> 
> mmafmctl prefetch expects an encoded list file, and it is not documented 
> correctly.  Issues like memory leak, file descriptor leak, and fileset 
> going into Unmounted state were fixed in later releases (4.2.1/4.2.2). 
All 
> your points are correct with respect to AFM migration. There is manual 
> intervention required. Also prefetch does not give list of files which 
> were failed during data read. Users need to run policy to find all 
> uncached files today.
> 
Hello,
For the record, I have completed today my AFM migration of a filesystem
with 100 million files. Users are now accessing the new filesystem.

After disabling user access and a last "prefetch", the AFM filesets were
converted to independent filesets.
Less than 600 files were then found to be different between the "home"
and the "cache" filesystems with a metadata comparison (I just copied
the files from the old filesystem to the new one).
I have compared the MD5 of a few thousand randomly selected files and
found no differences between the "home" and the "cache" filesystems.
I expect the users to let us know if they find something different (they
have been instructed to do so). We'll keep the "home" filesystem around
for some time, just in case there is a problem.

Maybe something else that should be mentionned in the documentation is
what to do with the ".ptrash" directories after the AFM filesets have
been converted. I removed them since they contained files that had
clearly been deleted by the users.


Loïc.
-- 
|   Loïc Tortay <tor...@cc.in2p3.fr> - IN2P3 Computing Centre  |
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Ubuntu client

2016-09-20 Thread Yaron Daniel
Hi

Check that kernel symbols are installed too
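
On Ubuntu that usually means something like the following (package names 
are an assumption - adjust them to your kernel), then rebuilding the 
portability layer:

   apt-get install linux-headers-$(uname -r) build-essential
   /usr/lpp/mmfs/bin/mmbuildgpl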

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   Stef Coene <stef.co...@docum.org>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   09/20/2016 08:43 PM
Subject:[gpfsug-discuss] Ubuntu client
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Hi,

I just installed 4.2.1 on 2 RHEL 7.2 servers without any issue.
But I also need 2 clients on Ubuntu 14.04.
I installed the GPFS client on the Ubuntu server and used mmbuildgpl to 
build the required kernel modules.
ssh keys are exchanged between GPFS servers and the client.

But I can't add the node:
[root@gpfs01 ~]# mmaddnode -N client1
Tue Sep 20 19:40:09 CEST 2016: mmaddnode: Processing node client1
mmremote: The CCR environment could not be initialized on node client1.
mmaddnode: The CCR environment could not be initialized on node client1.
mmaddnode: mmaddnode quitting.  None of the specified nodes are valid.
mmaddnode: Command failed. Examine previous error messages to determine 
cause.

I don't see any error in /var/mmfs on client and server.

What can I try to debug this error?


Stef
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss





___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Migrate 3.5 to 4.2 and LTFSEE toSpectrumArchive

2016-08-17 Thread Yaron Daniel
So - the procedure you are asking about is related to Samba.
Please check the Red Hat site for the Samba upgrade process - you will need 
to back up the tdb files and restore them (see the sketch below).

But pay attention that the Samba IDs will remain the same after moving to 
CES - please review the Authentication section.
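
A hedged illustration of the backup step with the stock Samba tooling (tdb 
locations vary by distribution and release - verify where yours live before 
copying anything):

   # make verified copies of the id-mapping databases on the old server
   tdbbackup -s .bak /var/lib/samba/*.tdb /var/lib/samba/private/*.tdb

   # or dump the winbind id mappings to a flat text file for later restore
   net idmap dump /var/lib/samba/winbindd_idmap.tdb > idmap-dump.txt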

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   Shaun Anderson <sander...@convergeone.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   08/18/2016 04:52 AM
Subject:Re: [gpfsug-discuss] Migrate 3.5 to 4.2 and LTFSEE to 
SpectrumArchive
Sent by:gpfsug-discuss-boun...@spectrumscale.org



​We are currently running samba on the 3.5 node, but wanting to migrate 
everything into using CES once we get everything up to 4.2.


SHAUN ANDERSON
STORAGE ARCHITECT
O 208.577.2112
M 214.263.7014



From: gpfsug-discuss-boun...@spectrumscale.org 
<gpfsug-discuss-boun...@spectrumscale.org> on behalf of Yaron Daniel 
<y...@il.ibm.com>
Sent: Wednesday, August 17, 2016 5:11 PM
To: gpfsug main discussion list
Subject: Re: [gpfsug-discuss] Migrate 3.5 to 4.2 and LTFSEE to Spectrum 
Archive 
 
Hi

Do u use CES protocols nodes ? Or Samba on each of the Server ?

  
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services- Team Leader 
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:Shaun Anderson <sander...@convergeone.com>
To:"gpfsug-discuss@spectrumscale.org" 
<gpfsug-discuss@spectrumscale.org>
Date:08/18/2016 12:11 AM
Subject:[gpfsug-discuss] Migrate 3.5 to 4.2 and LTFSEE to Spectrum 
Archive
Sent by:gpfsug-discuss-boun...@spectrumscale.org



​I am in process of migrating from 3.5 to 4.2 and LTFSEE to Spectrum 
Archive.
 
1 node cluster (currently) connected to V3700 storage and TS4500 backend. 
We have upgraded their 2nd node to 4.2 and have successfully tested 
joining the domain, created smb shares, and validated their ability to 
access and control permissions on those shares. 
They are using .tdb backend for id mapping on their current server. 
 
I'm looking to discuss with someone the best method of migrating those tdb 
databases to the second server, or understand how Spectrum Scale does id 
mapping and where it stores that information.

Any hints would be greatly appreciated.
 
Regards,
 
SHAUN ANDERSON
STORAGE ARCHITECT
O208.577.2112
M214.263.7014

   
  


NOTICE:  This email message and any attachments here to may contain 
confidential
information.  Any unauthorized review, use, disclosure, or distribution of 
such
information is prohibited.  If you are not the intended recipient, please
contact
the sender by reply email and destroy the original message and all copies 
of it.___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss



NOTICE:  This email message and any attachments here to may contain 
confidential
information.  Any unauthorized review, use, disclosure, or distribution of 
such
information is prohibited.  If you are not the intended recipient, please
contact
the sender by reply email and destroy the original message and all copies 
of it.___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Migrate 3.5 to 4.2 and LTFSEE to Spectrum Archive

2016-08-17 Thread Yaron Daniel
Hi

Do you use CES protocol nodes? Or Samba on each of the servers?

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   Shaun Anderson <sander...@convergeone.com>
To: "gpfsug-discuss@spectrumscale.org" 
<gpfsug-discuss@spectrumscale.org>
Date:   08/18/2016 12:11 AM
Subject:[gpfsug-discuss] Migrate 3.5 to 4.2 and LTFSEE to Spectrum 
Archive
Sent by:gpfsug-discuss-boun...@spectrumscale.org



​I am in process of migrating from 3.5 to 4.2 and LTFSEE to Spectrum 
Archive.
 
1 node cluster (currently) connected to V3700 storage and TS4500 backend. 
We have upgraded their 2nd node to 4.2 and have successfully tested 
joining the domain, created smb shares, and validated their ability to 
access and control permissions on those shares. 
They are using .tdb backend for id mapping on their current server. 
 
I'm looking to discuss with someone the best method of migrating those tdb 
databases to the second server, or understand how Spectrum Scale does id 
mapping and where it stores that information.

Any hints would be greatly appreciated.
 
Regards,
 
SHAUN ANDERSON
STORAGE ARCHITECT
O 208.577.2112
M 214.263.7014

 
 


NOTICE:  This email message and any attachments here to may contain 
confidential
information.  Any unauthorized review, use, disclosure, or distribution of 
such
information is prohibited.  If you are not the intended recipient, please
contact
the sender by reply email and destroy the original message and all copies 
of it.___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss




___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] NDS in Two Site scenario

2016-07-20 Thread Yaron Daniel
HI

You must remember the following:

The network VLAN should be the same between the 2 main sites - otherwise the 
CES IP failover will not work...

You can define:
Site1 - 2 x NSD servers + quorum
Site2 - 2 x NSD servers + quorum

GPFS file system replication is defined with failure groups. (Latency must 
be very low in order to have good write performance.)

Site3 - 1 x quorum node + a local disk as tie-breaker disk (descOnly).

Hope this helps - a small sketch follows below.
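
A hedged sketch of that quorum layout (node and device names are invented):

   # one quorum node per production site plus the tiebreaker node at site 3
   mmchnode --quorum -N s1nsd1,s2nsd1,s3quorum

   # the site-3 disk holds only a file system descriptor copy
   %nsd: nsd=site3_desc device=/dev/sdb servers=s3quorum usage=descOnly failureGroup=3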

 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 



From:   "Ken Hill" <k...@us.ibm.com>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   07/21/2016 03:02 AM
Subject:Re: [gpfsug-discuss] NDS in Two Site scenario
Sent by:gpfsug-discuss-boun...@spectrumscale.org



Yes - it is a cluster.

The sites should NOT be further than a MAN - or Campus network. If you're 
looking to do this over a large distance - it would be best to choose 
another GPFS solution (Multi-Cluster, AFM, etc).

Regards,

Ken Hill
Technical Sales Specialist | Software Defined Solution Sales
IBM Systems


Phone:1-540-207-7270
E-mail: k...@us.ibm.com
  

2300 Dulles Station Blvd
Herndon, VA 20171-6133
United States







From:"mark.b...@siriuscom.com" <mark.b...@siriuscom.com>
To:gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:07/20/2016 07:33 PM
Subject:Re: [gpfsug-discuss] NDS in Two Site scenario
Sent by:gpfsug-discuss-boun...@spectrumscale.org



So in this scenario Ken, can server3 see any disks in site1? 
 
From: <gpfsug-discuss-boun...@spectrumscale.org> on behalf of Ken Hill 
<k...@us.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: Wednesday, July 20, 2016 at 4:15 PM
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Subject: Re: [gpfsug-discuss] NDS in Two Site scenario
 

 Site1                   Site2
 Server1 (quorum 1)      Server3 (quorum 2)
 Server2                 Server4

 SiteX
 Server5 (quorum 3)




You need to set up another site (or server) that is at least power 
isolated (if not completely infrastructure isolated) from Site1 or Site2. 
You would then set up a quorum node at that site | location. This ensures 
you can still access your data even if one of your sites goes down.

You can further isolate failure by increasing quorum (odd numbers).

The way quorum works is: The majority of the quorum nodes need to be up to 
survive an outage.

- With 3 quorum nodes you can have 1 quorum node failures and continue 
filesystem operations.
- With 5 quorum nodes you can have 2 quorum node failures and continue 
filesystem operations.
- With 7 quorum nodes you can have 3 quorum node failures and continue 
filesystem operations.
- etc

Please see 
http://www.ibm.com/support/knowledgecenter/en/STXKQY_4.2.0/ibmspectrumscale42_content.html?view=kc
for more information about quorum and tiebreaker disks.

Ken Hill
Technical Sales Specialist | Software Defined Solution Sales
IBM Systems 


Phone:1-540-207-7270
E-mail: k...@us.ibm.com
  

2300 Dulles Station Blvd
Herndon, VA 20171-6133
United States







From:"mark.b...@siriuscom.com" <mark.b...@siriuscom.com>
To:gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:07/20/2016 04:47 PM
Subject:[gpfsug-discuss] NDS in Two Site scenario
Sent by:gpfsug-discuss-boun...@spectrumscale.org





For some reason this concept is a round peg that doesn't fit the square 
hole inside my brain.  Can someone please explain the best practice to 
setting up two sites same cluster?  I get that I would likely have two NDS 
nodes in site 1 and two NDS nodes in site two.  What I don't understand 
are the failure scenarios and what would happen if I lose one or worse a 
whole site goes down.  Do I solve this by having scale replication set to 
2 for all my files?  I mean a single site I think I get it's when there 
are two datacenters and I don't want two clusters typically.



Mark R. Bush| Solutions Architect
Mobile: 210.237.8415 | mark.b...@siriuscom.com
Sirius Computer Solutions | www.siriuscom.com
10100 Reunion Place, Suite 500, San Antonio, TX 78216 
 
This message (including any attachments) is intended only for the use of 
the individual or entity to which it is addressed and may contain 
information that is non-public, proprietary, privileged, confidential, and 
exempt from disclosure under applicable law. If you are not the intended 
recipient, you are hereby notified that any 

Re: [gpfsug-discuss] GPFS(snapshot, backup) vs. GPFS(backup scripts) vs. TSM(backup)

2016-03-09 Thread Yaron Daniel
Hi

Did you use mmbackup with TSM?

https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/com.ibm.spectrum.scale.v4r2.adm.doc/bl1adm_mmbackup.htm

Please also review this :

http://files.gpfsug.org/presentations/2015/SBENDER-GPFS_UG_UK_2015-05-20.pdf
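
If not, a hedged example of what an mmbackup run typically looks like (the 
file system, node class and TSM server names are placeholders):

   # incremental backup of the file system, fanned out over several client nodes
   mmbackup /gpfs/fs0 -t incremental -N backupNodes --tsm-servers TSMSERVER1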


 
Regards
 


 
 
Yaron Daniel
 94 Em Ha'Moshavot Rd

Server, Storage and Data Services - Team Leader  
 Petach Tiqva, 49527
Global Technology Services
 Israel
Phone:
+972-3-916-5672
 
 
Fax:
+972-3-916-5672
 
 
Mobile:
+972-52-8395593
 
 
e-mail:
y...@il.ibm.com
 
 
IBM Israel
 
 
 
 

 

gpfsug-discuss-boun...@spectrumscale.org wrote on 03/09/2016 09:56:13 PM:

> From: Jaime Pinto <pi...@scinet.utoronto.ca>
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Date: 03/09/2016 09:56 PM
> Subject: [gpfsug-discuss] GPFS(snapshot, backup) vs. GPFS(backup 
> scripts) vs. TSM(backup)
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> 
> Here is another area where I've been reading material from several 
> sources for years, and in fact trying one solution over the other from 
> time-to-time in a test environment. However, to date I have not been 
> able to find a one-piece-document where all these different IBM 
> alternatives for backup are discussed at length, with the pos and cons 
> well explained, along with the how-to's.
> 
> I'm currently using TSM(built-in backup client), and over the years I 
> developed a set of tricks to rely on disk based volumes as 
> intermediate cache, and multiple backup client nodes, to split the 
> load and substantially improve the performance of the backup compared 
> to when I first deployed this solution. However I suspect it could 
> still be improved further if I was to apply tools from the GPFS side 
> of the equation.
> 
> I would appreciate any comments/pointers.
> 
> Thanks
> Jaime
> 
> 
> 
> 
> 
> ---
> Jaime Pinto
> SciNet HPC Consortium  - Compute/Calcul Canada
> www.scinet.utoronto.ca - www.computecanada.org
> University of Toronto
> 256 McCaul Street, Room 235
> Toronto, ON, M5T1W5
> P: 416-978-2755
> C: 416-505-1477
> 
> 
> This message was sent using IMP at SciNet Consortium, University of 
Toronto.
> 
> 
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> 


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss