Hi,
when trying to update clients from 2.9 to 2.10.0 (on CentOS-7) I
received the following:
"Package lustre-client is obsoleted by lustre, trying to install
lustre-2.10.0-1.el7.x86_64 instead"
and then the update failed (my guess is because zfs-related
packages are missing).
related to the virtual machines?
Just curious here.
Thanks,
/jon
On 02/16/2017 04:55 PM, Mohr Jr, Richard Frank (Rick Mohr) wrote:
On Feb 16, 2017, at 9:56 AM, Jon Tegner wrote:
I have three (physical) machines, and each one has a virtual machine on it
(KVM). On one of the virtual machines
going from one to two files). Guess my "hypothesis" was wrong!
/jon
Hi,
I have been playing around with Lustre on virtual servers (mainly with
the purpose of gaining some experience).
I have three (physical) machines, and each one has a virtual machine on
it (KVM). On one of the virtual machines there is an MDS and on two of
them there are OSSes installed
st looks like it is working to me.
Doug
On Feb 7, 2017, at 2:13 AM, Jon Tegner wrote:
Probably doing something wrong here, but I tried to test only READING
with the following:
#!/bin/bash
export LST_SESSION=$$
lst new_session read
lst add_group servers 10.0.12.12@o2ib
lst add_group readers 10.0.12.11@o2ib
lst add_batch bulk_read
lst add_test --batch bulk_read --concurrency 12 --f
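The script above cuts off mid-option. As a sketch only, a complete read-only session between the same two nodes might look like the following; the --from/--to groups and the brw read parameters follow the lst syntax in the Lustre manual, while the 1M transfer size and 30-second run length are illustrative choices, not values from the thread:

```shell
#!/bin/bash
# Sketch of a complete lnet_selftest read-only run between two nodes.
# Must run on a node with the lustre-tests utilities loaded.
command -v lst >/dev/null 2>&1 || { echo "lst not found; run on a Lustre test node"; exit 0; }
export LST_SESSION=$$
lst new_session read
lst add_group servers 10.0.12.12@o2ib
lst add_group readers 10.0.12.11@o2ib
lst add_batch bulk_read
# brw read: the readers group pulls bulk data from the servers group.
lst add_test --batch bulk_read --concurrency 12 --from readers --to servers \
    brw read size=1M
lst run bulk_read
lst stat servers readers &
sleep 30          # illustrative run length
kill %1
lst end_session
```

Watching the bandwidth reported by lst stat while varying size and --concurrency is the usual way to find where the fabric saturates.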
bad"? What to
compare them with?
Thanks!
/jon
On 02/05/2017 08:55 PM, Jeff Johnson wrote:
Without seeing your entire command it is hard to say for sure but I would make
sure your concurrency option is set to 8 for starters.
--Jeff
Sent from my iPhone
On Feb 5, 2017, at 11:30, Jon
Hi,
I'm trying to use lnet selftest to evaluate network performance on a
test setup (only two machines). Using e.g., iperf or Netpipe I've
managed to demonstrate the bandwidth of the underlying 10 Gbits/s
network (and typically you reach the expected bandwidth as the packet
size increases).
Regarding clients and OSS on same physical server. Seems to me the
problem is not (directly) related to the amount of memory on the
machine, but instead to different applications "competing" for the memory?
Could this possibly be resolved by running lustre in a virtual machine?
Or would there
Thanks a lot!
Did find the "--disable-server" option, and that seemed to work (sort of);
however, I had other issues when building the (client) RPMs.
Do you have any general advice considering that we want CentOS-7.2 on
the clients? We have happily been using 2.5.3 for quite some time now
(6.5 on bot
Hi,
I want to build RPMs for 2.5.3 on CentOS-7.2 (the other option would be
to use 2.8.0, but since we don't need the new features in 2.8.0, and since
I've heard 2.5.3 is more stable, I opted to go for 2.5.3).
Found the link
wiki.hpdd.intel.com/pages/viewpage.action?pageId=8126821
describing t
Hi,
I have brought up a test system using
2.8.0-3.10.0_327.3.1.el7.x86_64_g96792ba
I can mount the system over TCP, but when I try to do so over InfiniBand
I get errors of the type:
Can't accept conn from 10.0.51.1@o2ib, queue depth too large: 128 (<=8
wanted)
Can't accept conn from 10.0.
This is great!
Will start working with this new version as soon as possible.
At this point I have a few questions:
the lustre server kernel is based on
kernel-3.10.0-327.3.1.el7.x86_64.rpm
surely this means that the clients should be on the same version of the
kernel (i.e., 3.1, but the stan
Thanks! Much appreciated!
Was quite stressed when I noticed the server was down (data is backed
up, but still). Our servers are managed/provisioned by kickstart and
saltstack - so it should be easy to bring up new ones with the same
configuration.
Thanks again,
/jon
On 03/11/2016 07:05 AM,
Hi,
yesterday I had an incident where the system disk of one of my servers
(MDT/MGS) went down, but the raid could be rebuilt and the system went
up again.
However, in the event of a complete failure of the system disk (assuming
all relevant "lustre disks" are still intact) is there a clear
http://git.whamcloud.com/fs/lustre-release.git/shortlog/refs/heads/b2_8 if
you are interested.
Peter
On 2/22/16, 12:26 AM, "lustre-discuss on behalf of Jon Tegner"
wrote:
Hi,
any news on when 2.8.0 will be released?
http://wiki.lustre.org/Release_2.8.0#Current_Schedule (is this the
relevant place to check?) states Feb 15.
Regards,
/jon
Hi,
don't think there will be a 2.7.1, but 2.8.0 is scheduled for release in
a few weeks:
http://wiki.lustre.org/Release_2.8.0
Regards,
/jon
On 01/21/2016 03:43 PM, Pardo Diaz, Alfonso wrote:
Hi,
We want to upgrade our Lustre environment from Centos 6.5 with Lustre
2.5.2 to Centos 7 with
Hi,
Where do you find the 2.7.x-releases? I thought fixes were only released
for the Intel maintenance version?
Regards,
/jon
On 12/04/2015 11:43 AM, jerome.be...@inserm.fr wrote:
Hello Ray,
One consideration first : You try the 2.7 version which is not the
production one (aka 2.5). From
Please keep in mind that 6.0, 6.1, 6.2, 6.3, 6.4 and 6.5 no longer get any
updates, nor any security fixes.
Best regards.
Thomas.
At 11:50 on Friday, 2 October 2015, Jon Tegner wrote:
I think this happens when you do a default upgrade. To prevent it I
modified the files in /etc/yum.repos.d to explicitly point to "6.5".
Originally they are linked to "6" (or a variable with that value).
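The pinning described above can be sketched as a substitution of the $releasever variable in the repo definitions. The repo file below is a minimal illustrative example, not Jon's actual configuration, and the vault.centos.org URL is an assumption (old point releases move to the vault mirrors):

```shell
#!/bin/sh
# Sketch: pin CentOS repos to release 6.5 by replacing $releasever.
# Uses a temp file for safety; on a real system the files live in
# /etc/yum.repos.d/ and should be backed up first.
set -e
repo=$(mktemp)
cat > "$repo" <<'EOF'
[base]
name=CentOS-$releasever - Base
baseurl=http://vault.centos.org/$releasever/os/$basearch/
gpgcheck=1
EOF
# Replace every occurrence of the $releasever variable with "6.5".
sed -i 's/\$releasever/6.5/g' "$repo"
grep 'baseurl=' "$repo"
```

After the substitution, `yum update` resolves against the fixed 6.5 tree instead of following the distribution to the latest 6.x release.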
Regards,
/jon
On 10/02/2015 11:21 AM, Thomas Lorenzen wrote:
Hi'
The support matrix state
Hi,
we have some new hardware, on which we want to bring up Lustre (as of
today we have two Lustre systems, 2.5.3, running on CentOS-6.5). Without
checking we installed CentOS-7.1 on them, but when looking for the
server packages we realized that there are none for RHEL7 systems.
So it seems
it is not a UID/GID issue.
From: lustre-discuss [lustre-discuss-boun...@lists.lustre.org] on behalf of
Martin Hecht [he...@hlrs.de]
Sent: Thursday, May 28, 2015 7:19 AM
To: Jon Tegner
Cc: Lustre discussion
Subject: Re: [lustre-discuss] Cannot remove file
Hi Jon,
it might be an
in single quotes):
unlink './-?'
If this doesn't work, you could try to stop lustre and mount the MDT as
ldiskfs and remove the entries on that level.
lfsck is supposed to fix this online, too, but it doesn't work in 2.5 if
I recall correctly.
best regards,
Martin
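The ldiskfs approach Martin describes can be sketched as follows. The device path, mount point, and the path under ROOT/ are all hypothetical placeholders; this must only be attempted with Lustre stopped and with a backup in hand:

```shell
#!/bin/sh
# Sketch: remove a corrupt directory entry by mounting the MDT as ldiskfs.
# DO NOT run blindly: device and paths below are illustrative assumptions.
mdt_dev=/dev/sdb1            # assumption: your MDT block device
mnt=/mnt/mdt-ldiskfs
umount /mnt/mdt              # stop Lustre on the MDT first
mount -t ldiskfs "$mdt_dev" "$mnt"
# Client-visible namespace entries live under ROOT/ on the MDT.
# Quote the name so the shell passes the leading dash literally.
unlink "$mnt/ROOT/path/to/-?"
umount "$mnt"
```

Mounting as ldiskfs exposes the MDT's on-disk namespace directly, which is why an entry that Lustre itself refuses to unlink can be removed at this level.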
On 05/28/2015 12:46 P
Hi,
I have a few files which are listed (ls -l) with:
"-? ? ? ? ??"
I have tried to remove them, both with "rm" and with "unlink", but
neither of these works (unlink: cannot unlink `file': Invalid argument).
This really doesn't bother me, except for an error
Hi,
have activated quota on the system (2.5.3), and we have noticed that it
seems to be off by quite a bit, i.e., even if
lfs quota -u $USER /home indicates that there is space available,
the quota prevents files from being written to disk.
We suspect that it can have to do with the fac
On Wednesday, November 26, 2014, Jon Tegner <teg...@foi.se> wrote:
Hi!
I recently got some help regarding removing an OSS/OST from the
file system. Last thing I did was to permanently remove it with
(on the MDS):
lctl conf_param ost_name.osc.active=0
T
Thanks!
On 11/27/2014 02:53 AM, Dilger, Andreas wrote:
This set_param is just the temporary (lost at unmount or reboot)
version of the conf_param originally run. If the conf_param is not
persistent across a client reboot, then that is a bug.
Presumably "ost_name" in the conf_param invocation w
sion do you have, 2.5.3 or 1.8.x?
Alex.
On Nov 26, 2014, at 6:59 AM, Jon Tegner <teg...@foi.se> wrote:
Hi!
I recently got some help regarding removing an OSS/OST from the file
system. Last thing I did was to permanently remove it with (on the MDS):
lctl conf_param ost
Hi!
I recently got some help regarding removing an OSS/OST from the file
system. Last thing I did was to permanently remove it with (on the MDS):
lctl conf_param ost_name.osc.active=0
This all seems to be working, and on the clients, the command
lctl dl
indicates the OSS/OST is inactive. H
Hi,
I successfully (I think) removed an OST (thanks for advice!) with command:
lctl conf_param ost_name.osc.active=0
Now, when I do lfs df it is marked as inactive device, and when I check
quota with
lfs quota -u USER -vh /home
I get the output "quotactl ost3 failed."
Is there something o
Hi!
Short question (using lustre 2.5.3). Have added a new OST and about to
remove a faulty one. After deactivating it I would like to migrate the
data from the faulty one, in chapter 14 the command:
lfs find --obd ost_name /mount/point | lfs_migrate -y
is given, and in chapter 19:
lfs fin
Thanks!
If I understand correctly this is taken care of by
tar czvf {backup file}.tgz --xattrs --sparse .
(which should work on CentOS-6.5)?
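Yes, and the round trip can be sketched end to end. The paths below are throwaway temp directories for illustration; on a real MDT backup you would run tar against the device mounted as ldiskfs:

```shell
#!/bin/sh
# Sketch: file-level backup/restore round trip with extended attributes
# (--xattrs, needed for MDT backups) and sparse-file handling (--sparse).
set -e
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/file1"
# Create a sparse file: a ~1 MiB hole followed by a single byte of data.
dd if=/dev/zero of="$src/sparse" bs=1 count=1 seek=1048576 2>/dev/null
# Archive: --xattrs stores extended attributes, --sparse stores holes
# efficiently instead of as literal zeros.
tar czf "$dst/backup.tgz" --xattrs --sparse -C "$src" .
mkdir "$dst/restore"
tar xzf "$dst/backup.tgz" --xattrs -C "$dst/restore"
cmp "$src/file1" "$dst/restore/file1" && echo "restore OK"
```

Note that --xattrs only preserves the attributes tar is allowed to read; for an MDT backup, run tar as root so the trusted.* attributes that Lustre relies on are included.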
On 10/21/2014 10:16 AM, Dilger, Andreas wrote:
filesystems, you should also backup and restore the xattrs on all the
files as is needed for the MDT file
Hi again,
We are running lustre 2.5.3 on a small system, consisting of one
combined MGS/MDT and four combined OSS/OSTs. One of the OSS/OSTs has
faulty hardware and needs to be replaced. The procedure I plan on using
is the following.
1. Deactivate the faulty OSS.
2. Make a file-level backup
Hi,
rather new to lustre, and read the following in the manual:
"For better performance, we recommend that you create RAID sets with 4
or 8 data disks plus one or two parity disks."
We were experimenting with OSTs with 6 disks, and tried RAID sets of both
4 and 5 disks (RAID 5). While testing
failed OST, I believe you would
reformat the OST and then follow 14.8.5 (in the latest manual).
Dr. Brett Lee, Solutions Architect
High Performance Data Division, Intel
+1.303.625.3595
-Original Message-
From: lustre-discuss-boun...@lists.lustre.org [mailto:lustre-discuss-
boun...@
Hi,
I'm new to Lustre, so please excuse what are probably some stupid questions.
I have set up a small test system, consisting of
* 1 MGS/MDT
* 2 OSS/OSTs
* 6 clients on infiniband and one on gigabit.
I have verified the scaling effect (increased performance with two OSTs
compared to one). I fu