Re: [OpenAFS] Update squeeze openafs-fileserver to squeeze-backports

2013-10-04 Thread Torbjörn Moa
Hi,

I don't know if this is at all related, but I vaguely recall having had
a similar problem. It turned out that the cell name string in
/etc/openafs/ThisCell, as made by the initial installation config,
didn't have a line termination, which made the upgrade config script
choke when trying to read it. It was easily fixed by just adding a
newline at the end of the line.
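
A quick way to check for, and repair, a missing final newline (just a
sketch; it assumes the file exists and you have root) would be something
like:

if [ -n "$(tail -c 1 /etc/openafs/ThisCell)" ]; then
    echo >> /etc/openafs/ThisCell
fi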

But then again, this may not be your problem at all, although the
symptoms are quite similar.

Best wishes,

   Torbjörn

On 10/03/2013 05:24 PM, Jean-Marc Choulet wrote:
 Hello,
 
 We want to upgrade openafs-fileserver to squeeze-backports and we get
 these errors:
 
 LANG=C apt-get -t squeeze-backports install openafs-client
 Reading package lists... Done
 Building dependency tree
 Reading state information... Done
 The following packages were automatically installed and are no longer
 required:
   bison flex
 Use 'apt-get autoremove' to remove them.
 Suggested packages:
   openafs-doc
 The following packages will be upgraded:
   openafs-client
 1 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
 1 not fully installed or removed.
 Need to get 0 B/3843 kB of archives.
 After this operation, 717 kB of additional disk space will be used.
 Reading changelogs... Done
 Preconfiguring packages ...
 openafs-client failed to preconfigure, with exit status 1
 (Reading database ... 38192 files and directories currently installed.)
 Preparing to replace openafs-client 1.4.12.1+dfsg-4+squeeze2 (using
 .../openafs-client_1.6.1-3+deb7u1~bpo60+1_amd64.deb) ...
 Unpacking replacement openafs-client ...
 Processing triggers for man-db ...
 Setting up openafs-client (1.6.1-3+deb7u1~bpo60+1) ...
 Installing new version of config file /etc/init.d/openafs-client ...
 dpkg: error processing openafs-client (--configure):
  subprocess installed post-installation script returned error exit status 1
 configured to not write apport reports
 Errors were encountered while processing:
  openafs-client
 E: Sub-process /usr/bin/dpkg returned an error code (1)
 root@fs01:~#
 
 Do you have any ideas?
 
 Thanks,
 
 Jean-Marc
 ___
 OpenAFS-info mailing list
 OpenAFS-info@openafs.org
 https://lists.openafs.org/mailman/listinfo/openafs-info

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


RE: [OpenAFS] Re: [ Openafs : cache on zfs ]

2013-10-04 Thread milek


 -Original Message-
 From: openafs-info-ad...@openafs.org [mailto:openafs-info-
 ad...@openafs.org] On Behalf Of Andrew Deason
 Sent: 03 October 2013 19:17
 To: openafs-info@openafs.org
 Subject: [OpenAFS] Re: [ Openafs : cache on zfs ]
 
 On Thu, 3 Oct 2013 19:34:27 +0200
 nicolas prochazka prochazka.nico...@gmail.com wrote:
 
  Hello again ,
  after some tests to use zfs as afs cache, linux kernel tells :
  BUG : soft lockup - CPU0 stuck for 23s ! [ afs_cachetrim:2908]
 
  Any ideas are welcome,
 
 It seems pretty likely from your other message that using zfsonlinux
 for the openafs client cache is not going to work at all until someone
 takes the time to add support for it. Just use another filesystem.
 
 Even on other platforms, ZFS has some characteristics that make it not
 ideal for a cache, and in the past I've recommended using something
 else when it's easy to do so (e.g. UFS on Solaris, even if it's just on
 a zvol). At least on Solaris, ZFS does some somewhat unique things with
 space allocations that have created some semi-unavoidable problems for
 the cache manager.
 
 I can understand if someone is using ZFS for their root fs, and they
 don't want to make a separate fs just for the cache, but if you're
 making an fs just for the cache or something, it doesn't make a lot of
 sense.

We've been running the AFS cache on ZFS directly for some time now, with no
issues. The trick is to set the AFS cache size somewhat smaller than the
amount of space presented by the file system (so essentially we slightly
over-provision). On top of that we enable ZFS compression on the cache file
system, so in reality we over-provision even more. All works fine.

Given that an AFS cache tends to be small (several GBs; is anyone using a
considerably larger cache?), a small amount of over-provisioning is not an
issue.

For example, create ZFS file system for AFS cache as:

# zfs create -o atime=off -o quota=8G -o reservation=8G -o compression=lzjb \
    -o recordsize=8K -o mountpoint=/afscache rpool/afscache

And in cacheinfo the specified cache size is: 6710886
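
For reference, the full cacheinfo line (format mountpoint:cachedir:size in
1 kB blocks, so this is roughly 6.4 GB; paths as in the zfs create example
above) would then look something like:

/afs:/afscache:6710886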

This is less than 2 GB over-provisioned (assuming nothing will compress), but
with modern OS disk sizes it doesn't matter at all and seems to be a safe
enough margin. The compression is not for saving disk space; it's more about
doing less I/O in some cases (depending on the data).


--
Robert Milkowski
http://milek.blogspot.com


___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] ZFS-on-Linux on production fileservers?

2013-10-04 Thread Jeff Blaine
[ For those running ext3/ext4, a question further down for you as ]
[ well!   ]

We're still a 100% Solaris + ZFS file server shop. We're EOLing
our Sun SPARC hardware (with tears in our eyes) this year.

Before we spend a significant amount of time evaluating this, I
figured I'd ask first. Any brief response would be greatly appre-
ciated. The generously longer the better :)

* Are you using ZFS-on-Linux in production for file servers?
* If not, and you looked into it, what stopped you?
* If you are, how is it working out for you?

ext3/ext4 people: What is your fsck strategy?
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] ZFS-on-Linux on production fileservers?

2013-10-04 Thread Dan Van Der Ster

On Oct 4, 2013, at 4:31 PM, Jeff Blaine jbla...@kickflop.net wrote:

 * If not, and you looked into it, what stopped you?

We stopped because of the memory fragmentation issue. ZFS will use ~twice the
ARC limit you set, and (in my experience) if you don't set it wisely (e.g.
20-25% of total memory), your server will lock up solid.
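
On ZFS-on-Linux that cap is the zfs_arc_max module parameter; as a sketch
(the value is in bytes, here 8 GiB, i.e. roughly 25% of a 32 GB box):

echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf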

Cheers, Dan
CERN IT
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] [ Openafs : cache on zfs ]

2013-10-04 Thread Dirk Heinrichs
On Thursday, 03 October 2013, 19:34:27, nicolas prochazka wrote:

 after some tests to use zfs as afs cache,
 linux kernel tells :
 BUG : soft lockup - CPU0 stuck for 23s ! [ afs_cachetrim:2908]
 
 Any ideas are welcome,

You could put the cache on a normal Linux FS inside a ZVOL. See 
http://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/
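
For example (a sketch; the pool name rpool, the sizes and the paths are just
placeholders):

# zfs create -V 8G rpool/afscache
# mkfs.ext4 /dev/zvol/rpool/afscache
# mount /dev/zvol/rpool/afscache /var/cache/openafs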

HTH...

Dirk
-- 
Dirk Heinrichs dirk.heinri...@altum.de
Tel: +49 (0)2471 209385 | Mobil: +49 (0)176 34473913
GPG Public Key C2E467BB | Jabber: dirk.heinri...@altum.de




Re: [OpenAFS] ZFS-on-Linux on production fileservers?

2013-10-04 Thread Dirk Heinrichs
On Friday, 04 October 2013, 10:31:47, Jeff Blaine wrote:

 We're still a 100% Solaris + ZFS file server shop. We're EOLing
 our Sun SPARC hardware (with tears in our eyes) this year.
 
 Before we spend a significant amount of time evaluating this, I
 figured I'd ask first. Any brief response would be greatly appre-
 ciated. The generously longer the better :)
 
 * Are you using ZFS-on-Linux in production for file servers?
 * If not, and you looked into it, what stopped you?
 * If you are, how is it working out for you?

A couple of weeks ago, I tried to install a _desktop_ system on ZFSonLinux. 
Can't remember the exact reason, but I quickly decided to stick with a native 
Linux FS.

OTOH, I run my own small home cell on an Arm box (Guruplug) using btrfs (both 
vicepXX and client cache). If it must be ZFS, would FreeBSD be an option?

Bye...

Dirk
-- 
Dirk Heinrichs dirk.heinri...@altum.de
Tel: +49 (0)2471 209385 | Mobil: +49 (0)176 34473913
GPG Public Key C2E467BB | Jabber: dirk.heinri...@altum.de




RE: [OpenAFS] ZFS-on-Linux on production fileservers?

2013-10-04 Thread milek


 -Original Message-
 From: openafs-info-ad...@openafs.org [mailto:openafs-info-
 ad...@openafs.org] On Behalf Of Jeff Blaine
 Sent: 04 October 2013 15:32
 To: OpenAFS
 Subject: [OpenAFS] ZFS-on-Linux on production fileservers?
 
 [ For those running ext3/ext4, a question further down for you as ]
 [ well!   ]
 
 We're still a 100% Solaris + ZFS file server shop. We're EOLing our Sun
 SPARC hardware (with tears in our eyes) this year.
 

Why not ZFS on Solaris 11 x86? You can run it on non-Oracle hardware if you
prefer. We have a pretty large installation on top of Solaris 11 x86 + ZFS,
both on Oracle and third-party x86 servers.
See my presentation about it last year.


--
Robert Milkowski
http://milek.blogspot.com



___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] ZFS-on-Linux on production fileservers?

2013-10-04 Thread Dirk Heinrichs
On Friday, 04 October 2013, 16:51:28, mi...@task.gda.pl wrote:

 See my presentation about it last year.

Link?

Bye...

Dirk
-- 
Dirk Heinrichs dirk.heinri...@altum.de
Tel: +49 (0)2471 209385 | Mobil: +49 (0)176 34473913
GPG Public Key C2E467BB | Jabber: dirk.heinri...@altum.de




Re: [OpenAFS] ZFS-on-Linux on production fileservers?

2013-10-04 Thread Stephan Wiesand

On Oct 4, 2013, at 18:08, Dirk Heinrichs wrote:

 On Friday, 04 October 2013, 16:51:28, mi...@task.gda.pl wrote:
 
 See my presentation about it last year.
 
 Link?

http://conferences.inf.ed.ac.uk/eakc2012/

-- 
Stephan Wiesand
DESY -DV-
Platanenallee 6
15738 Zeuthen, Germany

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


RE: [OpenAFS] ZFS-on-Linux on production fileservers?

2013-10-04 Thread milek


 -Original Message-
 From: openafs-info-ad...@openafs.org [mailto:openafs-info-
 ad...@openafs.org] On Behalf Of Dirk Heinrichs
 Sent: 04 October 2013 17:08
 To: openafs-info@openafs.org
 Subject: Re: [OpenAFS] ZFS-on-Linux on production fileservers?
 
 On Friday, 04 October 2013, 16:51:28, mi...@task.gda.pl wrote:
 
  See my presentation about it last year.
 
 Link?


http://conferences.inf.ed.ac.uk/eakc2012/slides/AFS_on_Solaris_ZFS.pdf

-- 
Robert Milkowski
http://milek.blogspot.com


___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] ZFS-on-Linux on production fileservers?

2013-10-04 Thread Dirk Heinrichs
On Friday, 04 October 2013, 17:18:24, mi...@task.gda.pl wrote:

 http://conferences.inf.ed.ac.uk/eakc2012/slides/AFS_on_Solaris_ZFS.pdf

Thanks a lot.

Bye...

Dirk
-- 
Dirk Heinrichs dirk.heinri...@altum.de
Tel: +49 (0)2471 209385 | Mobil: +49 (0)176 34473913
GPG Public Key C2E467BB | Jabber: dirk.heinri...@altum.de




Re: [OpenAFS] Update squeeze openafs-fileserver to squeeze-backports

2013-10-04 Thread Russ Allbery
Torbjörn Moa m...@fysik.su.se writes:

 I don't know if this is at all related, but I vaguely recall having had
 a similar problem. It turned out that the cell name string in
 /etc/openafs/ThisCell, as made by the initial installation config,
 didn't have a line termination, which made the upgrade config script
 choke when trying to read it. It was easily fixed by just adding a
 newline at the end of the line.

 But then again, this may not be your problem at all, although the
 symptoms are quite similar.

Oh!  It's failing in the *.config script!  Of course.  That's why the -x
output goes away; that script is being run by debconf and it's a separate
script that doesn't have -x.
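
Incidentally, one way to watch what that preconfigure step is doing is to
turn up debconf's own debugging, along these lines (illustrative, not a
guaranteed recipe):

DEBCONF_DEBUG=developer LANG=C apt-get -t squeeze-backports install openafs-client

Here is the relevant part of the .config script: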

# Configure the client cell.  Default to the current ThisCell file and,
# failing that, the lowercased local domain name, if available.  Ignore errors
# on read, since it may fail if there's no newline in the file.
if [ -r /etc/openafs/ThisCell ] ; then
    read cell < /etc/openafs/ThisCell
    db_set openafs-client/thiscell $cell
fi

Well, the comment indicates that I knew about this problem at some point,
but I see no sign of actually doing what the comment says it should be
doing.  On the other hand, I can't figure out why that read would fail
when there's no trailing newline (it doesn't for me with either bash or
dash).

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] ZFS-on-Linux on production fileservers?

2013-10-04 Thread James E. Dobson


FWIW: I'm using SmartOS (USB device boot) to boot a smart machine (aka 
a zone) with OpenAFS. Giving up a large amount of space for boot
devices on my fileservers seems rather stupid. This has been in 
production for several months now.


Using Debian + zfsonlinux with some success for other systems.

Jed Dobson

Dartmouth College
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] Re: Update squeeze openafs-fileserver to squeeze-backports

2013-10-04 Thread Andrew Deason
On Fri, 04 Oct 2013 09:30:04 -0700
Russ Allbery r...@stanford.edu wrote:

 Well, the comment indicates that I knew about this problem at some
 point, but I see no sign of actually doing what the comment says it
 should be doing.  On the other hand, I can't figure out why that read
 would fail when there's no trailing newline (it doesn't for me with
 either bash or dash).

Looks like it does to me (bash):

$ printf 'foo\n' > foo.file ; echo $?
0
$ read foo < foo.file ; echo $?
0
$ printf foo > foo.file ; echo $?
0
$ read foo < foo.file ; echo $?
1

-- 
Andrew Deason
adea...@sinenomine.net

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Re: Update squeeze openafs-fileserver to squeeze-backports

2013-10-04 Thread Russ Allbery
Andrew Deason adea...@sinenomine.net writes:
 Russ Allbery r...@stanford.edu wrote:

 Well, the comment indicates that I knew about this problem at some
 point, but I see no sign of actually doing what the comment says it
 should be doing.  On the other hand, I can't figure out why that read
 would fail when there's no trailing newline (it doesn't for me with
 either bash or dash).

 Looks like it does to me (bash):

 $ printf 'foo\n' > foo.file ; echo $?
 0
 $ read foo < foo.file ; echo $?
 0
 $ printf foo > foo.file ; echo $?
 0
 $ read foo < foo.file ; echo $?
 1

Oh, I see.  It reads the file and sets the variable, but then exits with a
non-zero status.  So adding || true will fix this.  I will do that for the
next version of the package.
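
That is, the snippet would become something like this (a sketch of the
intended change, not the exact packaging diff):

if [ -r /etc/openafs/ThisCell ] ; then
    read cell < /etc/openafs/ThisCell || true
    db_set openafs-client/thiscell $cell
fi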

-- 
Russ Allbery (r...@stanford.edu) http://www.eyrie.org/~eagle/
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] ZFS-on-Linux on production fileservers?

2013-10-04 Thread Harald Barth

 * Are you using ZFS-on-Linux in production for file servers?

Yes.

 * If not, and you looked into it, what stopped you?

For a long time there was fear and doubt, but the (lack of) quality of
HW-RAID solutions and the hassle of Linux SW-RAID convinced us that it
could not be worse with ZFS.

 * If you are, how is it working out for you?

It does.

The are-the-zpools-OK reporting could be more convenient, and the zpool
status output is not compatible with the old one. But compared to the
problems we had before, that's a non-issue.
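
For what it's worth, the minimal is-everything-OK check is easy enough to
script around; the all-clear case looks like:

# zpool status -x
all pools are healthy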

Don't be surprised if raidz needs CPU power to calculate the checksums;
you need to get the balance between I/O and CPU cores right for the
raidz level you want.

 ext3/ext4 people: What is your fsck strategy?

Before that we used XFS on HW- and SW-RAID. We had no problems with the
XFS part of it. However, we always felt that the possible maximum log
sizes were stuck in the 1990s.

Harald.
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


[OpenAFS] not enough space in target directory

2013-10-04 Thread Christian

All,

we are seeing some weird issues with the Windows client (1.7.26, but we had
also seen this with previous 1.7 versions). Often, when attempting to
write data, my users get a popup box complaining about insufficient 
space in the target directory. In those cases, writing the data to the 
RW path (.cell.name) instead works just fine. Note that the volumes 
which are being accessed in those cases do NOT have RO replicas, just 
some of the volumes from which they are mounted. Write access just fails 
intermittently when accessed through a path which contains OTHER 
replicated volumes.


So, for example, say that the volume users containing the mount points 
for the individual user volumes is replicated. Then write access to 
/afs/our.cell/users/joe.user will fail intermittently, while writing to 
/afs/.our.cell/users/joe.user always works. We use dynroot and SRV records.


I have read the debugging instructions, but I am a little unsure about 
how we should proceed here. What should I do? Try fs trace?


Thanks,

Christian
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] not enough space in target directory

2013-10-04 Thread Jeffrey Altman
File a bug report with Microsoft if the problem is experienced when
using the Explorer shell or applications relying upon the shell API for
file access.

This is a known bug in the Explorer shell, and Microsoft has been working
on it for more than six months.  As with all Windows bugs, a fix is
prioritized based upon the number of complaints received from paying
support customers.

Jeffrey Altman

On 10/4/2013 6:36 PM, Christian wrote:
 All,
 
 we are seeing some weird issues with the Windows client (1.7.26, but we had
 also seen this with previous 1.7 versions). Often, when attempting to
 write data, my users get a popup box complaining about insufficient
 space in the target directory. In those cases, writing the data to the
 RW path (.cell.name) instead works just fine. Note that the volumes
 which are being accessed in those cases do NOT have RO replicas, just
 some of the volumes from which they are mounted. Write access just fails
 intermittently when accessed through a path which contains OTHER
 replicated volumes.
 
 So, for example, say that the volume users containing the mount points
 for the individual user volumes is replicated. Then write access to
 /afs/our.cell/users/joe.user will fail intermittently, while writing to
 /afs/.our.cell/users/joe.user always works. We use dynroot and SRV records.
 
 I have read the debugging instructions, but I am a little unsure about
 how we should proceed here. What should I do? Try fs trace?
 
 Thanks,
 
 Christian
 ___
 OpenAFS-info mailing list
 OpenAFS-info@openafs.org
 https://lists.openafs.org/mailman/listinfo/openafs-info


