Re: [lustre-discuss] Lustre version 2.14 support for CentOS 7

2021-05-11 Thread Nathaniel Clark via lustre-discuss

If you've installed the zfs rpms:

zfs, kmod-zfs, kmod-zfs-devel, kmod-zfs-devel-$(uname -r), libnvpair3, 
libuutil3, libzfs4, libzfs4-devel, libzpool4


You should be able to configure lustre via:

sh autogen.sh

./configure --with-zfs

make rpms

Autoconf will pick up the current kernel and all the installed ZFS
paths without you having to specify them.
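
For example, if you built ZFS locally, an install along these lines should
cover that list (versions and paths here are illustrative; adjust to your
build output):

# run from the directory holding the freshly built ZFS rpms
yum install -y zfs-2.0.4*.rpm kmod-zfs-*.rpm kmod-zfs-devel-*.rpm \
    libnvpair3-*.rpm libuutil3-*.rpm libzfs4-*.rpm \
    libzfs4-devel-*.rpm libzpool4-*.rpm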


Sincerely,

Nathaniel Clark

On 5/6/21 12:47 PM, Hugo R Hernandez via lustre-discuss wrote:

Morning Lustre Community!

Has anyone experienced any issues when trying to build Lustre 2.14 with
ZFS 2.0.4 on CentOS 7.9 running either of these two kernels:
3.10.0-1160.6.1.el7 (tested during release) and
3.10.0-1160.25.1.el7 (latest)?  Is there any 'special recipe' you
need to follow for a proper Lustre build from source?  When we built
for 2.10+ we used to use these flags when configuring:


--with-zfs
--with-zfs-obj
--with-spl
--with-spl-obj

but ZFS 2.0.x now includes SPL as part of it, whereas previously they
were separate packages.
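
For comparison, here is what I assume the two invocations should look like
(the paths are placeholders, not our real ones):

# 2.10-era configure, SPL still a separate package:
./configure --with-zfs=/path/to/zfs --with-zfs-obj=/path/to/zfs-obj \
            --with-spl=/path/to/spl --with-spl-obj=/path/to/spl-obj

# ZFS 2.0.x, SPL merged in, so the SPL flags drop out:
./configure --with-zfs=/path/to/zfs --with-zfs-obj=/path/to/zfs-obj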


Any hint on how to address this problem?  Help is greatly appreciated!

Thanks,
-Hugo



On Fri, Apr 30, 2021 at 12:51 PM Hugo R Hernandez <hdezm...@gmail.com> wrote:


Peter, I have been trying to get 2.14 ready with ZFS 2.0.4 on
CentOS 7.9 but I have encountered a couple of issues.  This is
what I have been doing:

ZFS:
Install dependency packages
git clone https://github.com/openzfs/zfs.git
git checkout remotes/origin/zfs-2.0-release
./autogen.sh
./configure
make && make rpms
install libzfs4, zfs-2.0.4, zfs-dkms (have also tried installing
kmod-zfs*, libuutil3, libnvpair3, libzpool4)
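
(Since this mixes dkms and kmod packages, I assume a check like the
following shows which module stack actually ended up installed -- mixing
the two seems like a plausible source of header-path confusion:

dkms status | grep zfs
rpm -qa 'zfs*' 'kmod-zfs*' 'libzfs*'
modinfo zfs | head -n 3
)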

Lustre:
git clone git://git.whamcloud.com/fs/lustre-release.git
git checkout remotes/origin/b2_14
./autogen.sh
./configure --enable-ldiskfs --with-zfs --enable-quota \
    --enable-utils --enable-gss --enable-snmp \
    --with-zfs-obj=/var/lib/dkms/zfs/2.0.4/3.10.0-1160.24.1.el7.x86_64/x86_64
make    <<< here the build breaks

I get an error like this:

fatal error: sys/byteorder.h: No such file or directory
  #include <sys/byteorder.h>
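
sys/byteorder.h ships with ZFS's libspl headers, so I assume something
like this would show where (if anywhere) they actually landed -- the
paths are illustrative:

find /usr/include /usr/src /var/lib/dkms/zfs -name byteorder.h 2>/dev/null
rpm -ql libzfs4-devel 2>/dev/null | grep byteorder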

I'm wondering if the problem is that I'm doing this on an updated
CentOS 7.9 host running kernel 3.10.0-1160.24.1.el7 instead of the one
used for testing during the release cycle, 3.10.0-1160.6.1.el7.  Is
there something I'm missing or doing wrong here?  Should I be able to
compile and then build RPMs, i.e. using the now-available
3.10.0-1160.25.1.el7, so we can have happy security folks by running
the latest kernel?

Please advise.  Thanks in advance!

-- 
Hugo R Hernandez

"If your efforts were met with indifference, don't lose heart: the sun
puts on a wonderful show every morning while most people are still
asleep."
- Brazilian anonymous


On Thu, Apr 15, 2021 at 12:01 PM Peter Jones <pjo...@whamcloud.com> wrote:

Hugo

2.14 will likely build and work against CentOS 7.9 even though
that was not the primary kernel it was tested against.

Peter

From: lustre-discuss <lustre-discuss-boun...@lists.lustre.org> on behalf
of Hugo R Hernandez via lustre-discuss <lustre-discuss@lists.lustre.org>
Reply-To: Hugo R Hernandez <hdezm...@gmail.com>
Date: Thursday, April 15, 2021 at 8:50 AM
To: "lustre-discuss@lists.lustre.org" <lustre-discuss@lists.lustre.org>
Subject: [lustre-discuss] Lustre version 2.14 support for CentOS 7

Hello there!  We have been planning to upgrade Lustre from 2.10+ to
2.14, but we found that it supports only RHEL 8.3, SLES 15 SP2, and
Ubuntu 20.04.  What about RHEL/CentOS 7?

https://downloads.whamcloud.com/public/lustre/lustre-2.14.0/

We can see that release 2.13 supports RHEL 7.7 (servers and clients)
and 2.12.9 supports RHEL 7.9.  Part of this upgrade is motivated by a
planned OST-to-DoM migration, which appears to be possible only as of
2.13.  We want to use DoM to relieve the metadata pressure caused by
tons of small files.  At this point we want to verify whether 2.13 or
2.14 will eventually support CentOS 7.9.
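
For the record, the kind of migration we have in mind is something like
the following -- my assumption of the 2.13+ syntax, with a placeholder
layout and path:

# put the first 1 MiB of a small file on the MDT, the rest on one OST
lfs migrate -E 1M -L mdt -E -1 -c 1 /mnt/lustre/path/to/smallfile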

Thanks in advance!

-Hugo


-- 


"Se seus esforços foram vistos com indeferença, não desanime
que o sol faze um espectacolo maravilhoso todas as manhãs
enquanto a maioria das pessoas ainda estão dormindo"

- Anónimo brasileiro



Re: [lustre-discuss] Lemur Lustre - make rpm fails

2019-12-10 Thread Nathaniel Clark
Can you open a ticket for this on
https://github.com/whamcloud/lemur/issues
and possibly
https://jira.whamcloud.com/projects/LMR

You could also try:
$ make local-rpm

which will avoid the Docker stack and just build on the local machine
(beware: it sudo's to install rpm build dependencies).
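
For reference, the full non-Docker path would be roughly this, assuming
the build dependencies resolve on CentOS 7:

$ git clone https://github.com/whamcloud/lemur.git
$ cd lemur
$ make local-rpm    # sudo-installs rpm build deps, then builds locally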


-- 
Nathaniel Clark 
Senior Engineer
Whamcloud / DDN

On Mon, 2019-12-09 at 15:04 -0800, Pinkesh Valdria wrote:
> I am trying to install Lemur on CentOS 7.6 (7.6.1810) to integrate
> with object storage, but the install fails.  I used the instructions
> on the page below.  I already had the Lustre client (2.12.3)
> installed on the machine, so I started with the Lemur steps.
>  
> https://wiki.whamcloud.com/display/PUB/HPDD+HSM+Agent+and+Data+Movers+%28Lemur%29+Getting+Started+Guide
>  
>  
> Steps followed:
>  
> git clone https://github.com/whamcloud/lemur.git
> cd lemur
> git checkout master
> service docker start
> make rpm
>  
> [root@lustre-client-4 lemur]# make rpm
> make -C packaging/docker
> make[1]: Entering directory `/root/lemur/packaging/docker'
> make[2]: Entering directory `/root/lemur/packaging/docker/go-el7'
> Building go-el7/1.13.5-1.fc32 for 1.13.5-1.fc32
> docker build -t go-el7:1.13.5-1.fc32 -t go-el7:latest --build-
> arg=go_version=1.13.5-1.fc32 --build-arg=go_macros_version=3.0.8-
> 4.fc31  .
> Sending build context to Docker daemon 4.608 kB
> Step 1/9 : FROM centos:7
> ---> 5e35e350aded
> Step 2/9 : MAINTAINER Robert Read 
> ---> Using cache
> ---> 4be0d7fa27a2
> Step 3/9 : RUN yum install -y @development golang pcre-devel glibc-
> static which
> ---> Using cache
> ---> ac83254f37f7
> Step 4/9 : RUN mkdir -p /go/src /go/bin && chmod -R 777 /go
> ---> Using cache
> ---> fdbb4d031716
> Step 5/9 : ENV GOPATH /go PATH $GOPATH/bin:$PATH
> ---> Using cache
> ---> 216c5484727e
> Step 6/9 : RUN go get github.com/tools/godep && cp /go/bin/godep
> /usr/local/bin
> ---> Running in aed86ac3eb87
>  
> /bin/sh: go: command not found
> The command '/bin/sh -c go get github.com/tools/godep && cp
> /go/bin/godep /usr/local/bin' returned a non-zero code: 127
> make[2]: *** [go-el7/1.13.5-1.fc32] Error 127
> make[2]: Leaving directory `/root/lemur/packaging/docker/go-el7'
> make[1]: *** [go-el7] Error 2
> make[1]: Leaving directory `/root/lemur/packaging/docker'
> make: *** [docker] Error 2
> [root@lustre-client-4 lemur]#
>  
> Is this repo for Lemur the most updated version?
>  
>  
> [root@lustre-client-4 lemur]# lfs --version
> lfs 2.12.3
> [root@lustre-client-4 lemur]#
>  
>  
>  
>  
>  
>  


Re: [lustre-discuss] lfs find

2019-05-03 Thread Nathaniel Clark
That would mean the union of `lfs find --pool HDD /mnt/lustre` and `lfs
find ! --pool HDD /mnt/lustre` would NOT be all files.  I agree that a
semantic for finding files that have no elements in a pool would be
useful, but I think using a not operator where it's not the true inverse
would be surprising.

Can I suggest something like `lfs find --outside-pool HDD`?
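
To make that concrete, a quick check along these lines (assuming a mount
at /mnt/lustre) shows whether the two predicates together cover everything:

lfs find /mnt/lustre | sort > all
lfs find --pool HDD /mnt/lustre | sort > in_hdd
lfs find ! --pool HDD /mnt/lustre | sort > not_in_hdd
sort -mu in_hdd not_in_hdd > union
comm -23 all union    # any files matched by neither predicate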

On 4/26/19 11:35 AM, Vitaly Fertman wrote:
> Hi,
>
> During a discussion of a bug in lfs find, an improvement idea came up;
> it is well described by Andreas below, and this thread is to discuss
> which options may need this functionality.
>
>
>
>> On 26 Apr 2019, at 03:41, Andreas Dilger  wrote:
>>
>>   lfs find ! --pool HDD ...
>>
>> should IMHO find files that do not have any instantiated components in pool 
>> HDD, rather than files that have any component not on HDD.
>>
>> That said, I could imagine that we may need to make some parameters more
>> flexible, like adding "--pool =" to allow specifying all components on
>> the specified pool, and possibly "+" to specify "at least one component"
>> (which would be the same as without "+" but may be more clear to some
>> users)?
>>
>> A similar situation arose with "-mode" for regular find (any vs. all bits) 
>> that took a while to sort out, so we should learn from what they did and get 
>> it right.
> —
> Vitaly Fertman

-- 
Nathaniel Clark
Senior Software Engineer
Whamcloud / DDN



Re: [lustre-discuss] stat

2019-05-01 Thread Nathaniel Clark
You should look at the IOC_MDC_GETFILEINFO ioctl.  "Example" usage can 
be found in
lustre-release/lustre/utils/liblustreapi.c::get_lmd_info_fd().

There currently isn't a convenient liblustre API call for it.  It 
doesn't poll the OSTs, it just grabs info from the MDT, so it will do 
fewer RPCs.
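
For anyone curious, a rough way to compare RPC counts from the shell (the
exact stats file names vary a bit between releases, and the path here is
illustrative):

lctl set_param mdc.*.stats=clear osc.*.stats=clear
stat /mnt/lustre/somefile
lctl get_param mdc.*.stats osc.*.stats | grep -i getattr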

On 4/24/19 12:05 PM, Harms, Kevin wrote:
> Does Lustre provide an optimized stat("filename", ...) that requires
> fewer RPCs than fd = open("filename", ...); fstat(fd); ?  If so, are
> there any descriptions of this optimization?
>
> thanks,
> kevin

-- 
Nathaniel Clark
Senior Software Engineer
Whamcloud / DDN



Re: [lustre-discuss] Upgrade guide 2.4.2.7

2019-04-02 Thread Nathaniel Clark
That upgrade guide, published here:
https://whamcloud.github.io/Online-Help/docs/Upgrade_Guide/Upgrade_EE-2.4-el6_to_LU-LTS-el7.html
is currently the definitive guide for EE 2.4 to IML 4 upgrades.

--

Nathaniel Clark
Senior Software Engineer
Whamcloud / DDN


On 3/14/19 11:20 AM, John McCulloch wrote:
I’m researching upgrade procedures for:
(Intel) IML 2.3.0.0 to IML 4.0.9.0
Lustre 2.5.0 to 2.12.0

Has anyone tested this upgrade guide recently?

https://github.com/whamcloud/Online-Help/blob/master/docs/Upgrade_Guide/Upgrade_EE-2.4-el6_to_LU-LTS-el7.md

Regards,
John McCulloch
Integration Engineer
PCPC Direct, Ltd.






Re: [lustre-discuss] problem with resource-agents rpm

2018-09-04 Thread Nathaniel Clark
For CentOS, resource-agents is available in the base package repository
(for 7.4 and 7.5).  Can you double-check which repositories are enabled?
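
Something like this should confirm it and pull in the dependency,
assuming the stock CentOS repos:

yum repolist enabled
yum install -y resource-agents
yum localinstall -y lustre-resource-agents-2.10.5-1.el7.x86_64.rpm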


--

Nathaniel Clark
Senior Software Engineer
Whamcloud


On Thu, 2018-08-30 at 12:05 -0700, Riccardo Veraldi wrote:

Lustre 2.10.5


seems like lustre-resource-agents has a dependency problem


yum localinstall -y lustre-resource-agents-2.10.5-1.el7.x86_64.rpm
Loaded plugins: langpacks
Examining lustre-resource-agents-2.10.5-1.el7.x86_64.rpm: lustre-resource-agents-2.10.5-1.el7.x86_64
Marking lustre-resource-agents-2.10.5-1.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package lustre-resource-agents.x86_64 0:2.10.5-1.el7 will be installed
--> Processing Dependency: resource-agents for package: lustre-resource-agents-2.10.5-1.el7.x86_64
--> Finished Dependency Resolution
Error: Package: lustre-resource-agents-2.10.5-1.el7.x86_64 (/lustre-resource-agents-2.10.5-1.el7.x86_64)
           Requires: resource-agents
 You could try using --skip-broken to work around the problem


Anyone had this issue?

thanks

Rick







Re: [lustre-discuss] ZFS based OSTs need advice

2018-06-26 Thread Nathaniel Clark


Zeeshan,

I assume you mean that your storage is active-active and cross-connected
between your OSSes, and you want both OSSes to be able to present any of
the 4 OSTs.

Each OST should be its own zpool; this will let each OST be imported and
failed over between OSSes independently.
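
A minimal sketch of that layout, with made-up device, pool, and node names
(the raidz2 arrangement is only an example):

# on whichever OSS currently owns OST0:
zpool create ost0pool raidz2 sda sdb sdc sdd sde sdf
mkfs.lustre --ost --backfstype=zfs --fsname=testfs --index=0 \
    --mgsnode=mgs@tcp --servicenode=oss1@tcp --servicenode=oss2@tcp \
    ost0pool/ost0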

There's a guide on how to set up Pacemaker 1.1 (the default for el7) to
do failover with ZFS and Lustre 2.10+:
https://wiki.whamcloud.com/display/PUB/Using+Pacemaker+1.1+with+a+Lustre+File+System

--
Nathaniel Clark

On Tue, 2018-06-26 at 16:02 +0300, Zeeshan Ali Shah wrote:
> We have 2 OSS with 4 OSTs shared.  Each OST has 90 disks, so 360 disks
> total.
> I am in the phase of installing the 2 OSS as active/active, but since
> ZFS pools can only be imported on a single OSS host, how do I achieve
> active/active HA?  From what I read, for active/active both HA hosts
> should have access to the same sets of disks/volumes.
>
> Any advice?
>
> /Zeeshan