Damon Atkins wrote:
NFSv4 has a concept of a root of the overall exported filesystem
(Pseudofilesystem).
In Linux terms this is FileHandle 0, set by exporting with fsid=0.
That would explain why someone said a Linux NFSv4 client automounts an
exported filesystem under another exported filesystem.
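For illustration, a minimal sketch of what that looks like in a Linux
/etc/exports, with hypothetical paths (fsid=0 marks the NFSv4 pseudo-root;
crossmnt/nohide let clients see the nested exports):
/export       *(rw,fsid=0,crossmnt)
/export/foo1  *(rw,nohide)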
Stefan Walk wrote:
On Debian Linux (lenny), the NFSv4 client automatically mounts subshares,
but the NFSv3 client doesn't (may not be right in all cases, just my
experience).
NFSv3 has no way to do this in the protocol, so this is as
designed (and the same as current Solaris/OpenSolaris).
Rob
Brandon High wrote:
If you mount server:/nfs on another host, it will not include
server:/nfs/foo1 or server:/nfs/foo2. Some nfs clients (notably
Solaris's) will attempt to mount the foo1 & foo2 datasets automatically,
so it looks like you've exported everything under server:/nfs. Linux
clients…
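To make that concrete, a hedged sketch with hypothetical server and mount
names (Solaris client syntax):
client# mount -F nfs server:/nfs /mnt
client# ls /mnt/foo1    # NFSv4: populated via mirror mounts; NFSv3: empty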
Chris Dunbar wrote:
Let's say I create the ZFS file system tank/nfs and
share that over NFS. Then I create the ZFS file systems tank/nfs/foo1 and
tank/nfs/foo2. I want to manage snapshots independently for foo1 and foo2,
but I would like to be able to access both from the single NFS share for
tank/nfs…
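A sketch of the layout being described, with a hypothetical snapshot name;
each child dataset keeps its own snapshots:
# zfs create -o sharenfs=on tank/nfs
# zfs create tank/nfs/foo1
# zfs create tank/nfs/foo2
# zfs snapshot tank/nfs/foo1@daily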
Ian Collins wrote:
On 03/11/10 05:42 AM, Andrew Daugherity wrote:
I've found that when using hostnames in the sharenfs line, I had to use
the FQDN; the short hostname did not work, even though both client and
server were in the same DNS domain and that domain is in the search
path, and nsswitch.conf…
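For reference, a sketch of the two forms, with a hypothetical client name;
in Andrew's experience only the first worked:
# zfs set sharenfs=rw=client.example.com tank/nfs    # FQDN: works
# zfs set sharenfs=rw=client tank/nfs                # short name: did not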
Frank Cusack wrote:
I thought with NFS4 *on solaris* that clients would follow the zfs
filesystem hierarchy and mount sub-filesystems. That doesn't seem
to be happening and I can't find any documentation on it (either way).
Did I only dream up this feature or does it actually exist? I am
using…
Andrew Daugherity wrote:
if I invoke bart via truss, I see it calls statvfs() and fails. Way to keep up
with the times, Sun!
% file /bin/truss /bin/amd64/truss
/bin/truss: ELF 32-bit LSB executable 80386 Version 1 [FPU],
dynamically linked, not stripped, no debugging information available…
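For anyone reproducing this, a hedged sketch of the kind of invocation
involved (the target directory is hypothetical):
% truss -t statvfs,statvfs64 bart create -R /tank/test > /dev/null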
David Dyer-Bennet wrote:
And I haven't been able to make incremental replication send/receive work.
Supposed to be working on that, but now I'm having trouble getting a
VirtualBox install that works (my real NAS is physical, but I'm using
virtual systems to test things).
I've had good success…
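For reference, the shape of an incremental replication cycle, with
hypothetical pool, host and snapshot names:
# zfs snapshot tank/data@one
# zfs send tank/data@one | ssh nas zfs recv -F backup/data
# zfs snapshot tank/data@two
# zfs send -i tank/data@one tank/data@two | ssh nas zfs recv backup/data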
Roland Mainz wrote:
Ok... does that mean that I have to create a ZFS filesystem to actually
test ([1]) an application which modifies ZFS/NFSv4 ACLs or are there any
other options?
By all means, test with ZFS. But it's easy to do that:
# mkfile 64m /zpool.file
# zpool create test /zpool.file
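Continuing that sketch, ACL experiments can then run against the
file-backed pool (file and user names hypothetical):
# touch /test/file
# chmod A+user:webservd:read_data:allow /test/file
# ls -v /test/file    # lists the NFSv4-style ACL entries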
Gregory Skelton wrote:
What are you using for a client? What version of NFS?
We're using Red Hat Enterprise Linux (CentOS) 5.3 for the clients, with
NFSv3.
You should try NFSv4 - Linux NFSv4 support came in with this
"mirror mount" support.
If it's possible, I'd still like to mount the base…
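On a client that old (RHEL/CentOS 5.x), the NFSv4 mount syntax would be
roughly the following, with a hypothetical server name; note the path is
relative to the server's NFSv4 root:
# mount -t nfs4 server:/ /mnt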
Gregory Skelton wrote:
Yes, that is exactly what's happening to us. I've tried to "share" the
zfs inside the other zfs. Like so, but I'm still seeing an empty directory.
What are you using for a client? What version of NFS?
NFSv4 in Solaris Nevada build 77 and later, or any OpenSolaris
version…
James Lever wrote:
I had help trying to create a crash dump, but nothing we tried would
cause the system to panic. 0>eip;:c;:c and other weird magic I don't
fully grok.
I can't help with your ZFS issue, but to get a reasonable crash
dump in circumstances like these, you should be able to…
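For what it's worth, a sketch of the usual Solaris tools for this,
assuming a dump device is configured:
# dumpadm        # verify the dump device and savecore directory
# savecore -L    # capture a live crash dump without panicking
# reboot -d      # or force a panic and take a dump on the way down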
Harry Putnam wrote:
I think this has probably been discussed here.. but I'm getting
confused about how to determine actual disk usage of zfs filesystems.
Here is an example:
$ du -sb callisto
46744 callisto
$ du -sb callisto/.zfs/snapshot
86076 callisto/.zfs/snapshot
Two questions th…
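As an aside, the ZFS-native accounting usually answers this more directly
than du; a sketch with a hypothetical dataset name (the usedby* properties
exist only on newer builds):
$ zfs list -o name,used,refer tank/callisto
$ zfs get usedbysnapshots tank/callisto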
dick hoogendijk wrote:
Sorry Uwe, but the answer is yes. Assuming that your hardware is in
order. I've read quite some msgs from you here recently and all of them
make me think you're no fan of zfs at all. Why don't you quit using it
and focus a little more on installing SunStudio…
I would really…
Michael Shadle wrote:
I'm going to try to move one of my disks off my rpool tomorrow (since
it's a mirror) to a different controller.
According to what I've heard before, ZFS should automagically
recognize this new location and have no problem, right?
Or do I need to do some sort of detach/etc.
Harry Putnam wrote:
There is a comment in those directions about installing an SMB PAM
module:
6. Install the SMB PAM module
Add the below line to the end of /etc/pam.conf:
other password required pam_smb_passwd.so.1 nowarn
Do you know what that is?
It's part of the Solaris…
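If this is for the in-kernel CIFS service, my understanding is that each
user then has to re-set their password so the SMB-style hash gets stored;
'alice' here is a hypothetical user:
# passwd alice    # pam_smb_passwd captures the SMB hash on password change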
Aaron wrote:
> Ok, thanks for the info - I was really pulling my hair out over this. Would
> you know if sharing over nfs via zfs would fare any better?
I am *quite* happy with the Mac NFS client, and use it against ZFS
files all the time. It's worth the time to make sure you're using
the same n…
Aaron wrote:
> I have set up a fileserver using zfs and am able to see the share from my mac.
> I am able to create/write to the share as well as read. I've ensured that I
> have the same user and uid on both the server (opensolaris snv101b) as well
> as the mac. The root folder of the share…
Frank Cusack wrote:
> just installed s10_u6 with a root pool. i'm blown away. so now i want
> to attach my external storage via firewire.
I was able to use this cheap thing with good initial results:
http://www.newegg.com/Product/Product.aspx?Item=N82E16815124002
However, I ran into a frequent…
Mark Wiederspahn wrote:
> I'm exporting/importing a zpool from a sun 4200 running Solaris 10 10/08
> s10x_u6wos_07b X86
> to a t2000 running Solaris 10 10/08 s10s_u6wos_07b SPARC. Neither one is yet
> patched,
> but I didn't see anything obvious on sunsolve for recent updates.
>
> The filesystem…
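For context, the ZFS on-disk format is endian-neutral, so the move itself
is just an export and import; 'tank' is a hypothetical pool name:
x4200# zpool export tank
t2000# zpool import tank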
Miles Nordin wrote:
> sounds
> like they are not good enough though, because unless this broken
> router that Robert and Darren saw was doing NAT, yeah, it should not
> have touched the TCP/UDP checksum.
I believe we proved that the problem bit flips were such
that the TCP checksum was the same, so…
Miles Nordin wrote:
> There are checksums in the ethernet FCS, checksums in IP headers,
> checksums in UDP headers (which are sometimes ignored), and checksums
> in TCP (which are not ignored). There might be an RPC layer checksum,
> too, not sure.
>
> Different arguments can be made against each…
Bob Friesenhahn wrote:
> On Wed, 1 Oct 2008, Ahmed Kamal wrote:
>> So, I guess this makes them equal. How about a new "reliable NFS" protocol,
>> that computes the hashes on the client side, sends it over the wire to be
>> written remotely on the zfs storage node?!
>
> Modern NFS runs over a TCP…
Ahmed Kamal wrote:
> BTW, for everyone saying zfs is more reliable because it's closer to the
> application than a netapp, well at least in my case it isn't. The
> solaris box will be NFS sharing and the apps will be running on remote
> Linux boxes. So, I guess this makes them equal. How about…
Brandon High wrote:
> On Mon, Jun 9, 2008 at 3:14 PM, Andy Lubel <[EMAIL PROTECTED]> wrote:
>> Tried this today and although things appear to function correctly, the
>> performance seems to be steadily degrading. Am I getting burnt by
>> double-caching? If so, what is the best way to work around f…
Andy Lubel wrote:
> I've got a real doozie.. We recently implemented a b89 as zfs/nfs/
> cifs server. The NFS client is HP-UX (11.23).
>
> What's happening is when our dba edits a file on the nfs mount with
> vi, it will not save.
>
> I removed vi from the mix by doing 'touch /nfs/file1' t…
Bob Friesenhahn wrote:
> I can't speak from a Mac-centric view, but for my purposes NFS in
> Leopard works well. The automounter in Leopard is a perfect clone of
> the Solaris automounter, and may be based on OpenSolaris code.
It is based on osol code. The implementor worked a long time at Sun…
[EMAIL PROTECTED] wrote:
>
>>> I made the mistake of umount -f /net/x4500/export/mail, even when autofs
>>> was disabled, and now all I get is I/O Errors.
>>>
>>> Is it always this sensitive?
>> "umount -f" is a power tool with no guard. If you had local
>> apps using the filesystem, they would…
Jorgen Lundman wrote:
>
>> SXCE is coming out _very_ soon. But all of your clients need
>> to support NFSv4 mount point crossing to make full use of it,
>> unless the automounter works out well enough.
>>
>
> Ahh, that's a shame.. Automounter works sufficiently at the moment, but
> it does not…
Jorgen Lundman wrote:
>> NFSv4 will let the client cross mount points transparently;
>> this is implemented in Nevada build 77, and in Linux and AIX.
>
> Looks like I have 70b only. Wonder what the chances are of another
> release coming out in the 2 month trial period.
>
> Does only the x4500 n…
Jorgen Lundman wrote:
> Software we use are the usual. Postfix with dovecot, apache with
> double-hash, https with TLS/SNI, LDAP for provisioning, pure-ftpd, DLZ,
> freeradius. No local config changes needed for any setup, just ldap and
> netapp.
I meant your client operating systems, actually.
Jorgen Lundman wrote:
> *** NFS Option
>
> Start:
>
> Since we need quota per user, I need to create a file-system of
> size=$quota for each user.
>
> But NFS will not let you cross mount-point/file-systems so mounting just
> "/export/mail/" means I will not see any directory below that.
NFS…
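A sketch of the per-user layout being described, with hypothetical names;
each user gets a dataset whose quota is their mail quota:
# zfs create tank/export/mail
# zfs create -o quota=500m tank/export/mail/user1
# zfs create -o quota=500m tank/export/mail/user2
# zfs set sharenfs=on tank/export/mail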
Ian Collins wrote:
> I have a build 62 system with a zone that NFS mounts a ZFS filesystem.
>
> From the zone, I keep seeing issues with .nfs files remaining in
> otherwise empty directories preventing their deletion. The files appear
> to be immediately replaced when they are deleted.
[EMAIL PROTECTED] wrote:
>
>> For NFSv2/v3, there's no easy answers. Some have experimented
>> with executable automounter maps that build a list of filesystems
>> on the fly, but ick. At some point, some of the global namespace
>> ideas we kick around may benefit NFSv2/v3 as well.
>
>
> The q…
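For the curious, an executable autofs map is just an executable script the
automounter runs with the lookup key as its argument, expecting a map entry
on stdout; a minimal sketch with hypothetical paths:
#!/bin/ksh
# /etc/auto_home_exec: emit a map entry for the requested key
key="$1"
echo "server:/export/home/${key}"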
Alec Muffett wrote:
>> But
>> finally, and this is the critical problem, each user's home
>> directory is now a separate NFS share.
>>
>> At first look that final point doesn't seem to be much of a worry
>> until you look at the implications that brings. To cope with a
>> distributed system…
Robert Olinski wrote:
> I have a customer who is running into bug 6538387. This is a problem
> with HP-UX clients accessing NFS mounts which are on a ZFS file
> system. This has to do with ZFS using nanosecond times and the HP
> client does not use this amount of precision. This is not an…
Albert Chin wrote:
Well, there is no data on the file server as this is an initial copy,
Sorry Albert, I should have noticed that from your e-mail :-(
I think the bigger problem is the NFS performance penalty so we'll go
lurk somewhere else to find out what the problem is.
Is this with Solaris…
Albert Chin wrote:
Why can't the NFS performance match that of SSH?
One big reason is that the sending CPU has to do all the comparisons to
compute the list of files to be sent - it has to fetch the attributes
from both local and remote and compare timestamps. With ssh, local
processes at each…
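To make the comparison concrete, a hedged sketch of the two modes with
hypothetical paths:
$ rsync -a /data/ server:/tank/data/       # ssh: remote rsync stats files locally
$ rsync -a /data/ /net/server/tank/data/   # NFS: every stat is a wire round trip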
mike wrote:
this is exactly the kind of feedback i was hoping for.
i'm wondering if some people consider firewire to be better in opensolaris?
I've written some about a 4-drive Firewire-attached box based on the
Oxford 911 chipset, and I've had I/O grind to a halt in the face of
media errors…
Tom Buskey wrote:
How well does ZFS work on removable media? In a RAID configuration? Are there
issues with matching device names to disks?
I've had a zpool with four 250 GB IDE drives in three places recently:
- in an external 4-bay Firewire case, attached to a Sparc box
- inside a dual-Opteron…
Constantin Gonzalez Schmitz wrote:
2. After going through the zfs-bootification, Solaris complains on reboot that
/etc/dfs/sharetab is missing. Somehow this seems to have fallen through
the cracks of the find command. Well, touching /etc/dfs/sharetab just fixes
the issue.
This is…
Nicolas Williams wrote:
Also, why not just punt to NDMP?
While I like NDMP, the protocol is just a transport for blobs of data
in vendor-specific data formats. We could put a ufsdump or star or
'zfs send' bag-o-bits in there, and call it ours. So it's a part of
a solution, but not a complete solution…
Jens Elkner wrote:
The only problem I encountered with this approach was with pkgmk:
If e.g. /develop/lnf/i386 is not mounted when it runs, pkgmk doesn't
trigger an automount and thinks the target FS has a size of 0 bytes - no
space available. However, a short cd /develop/lnf/i386 ; cd -
before…
Anthony Scarpino wrote:
[EMAIL PROTECTED] wrote:
Anthony J. Scarpino wrote:
I'm not sure if this is an nfs/autofs problem or zfs problem... But
I'll try here first...
On our server, I've got a zfs directory called "cube/builds/izick/".
In this directory I have a number of mountpoints to other…
Anthony J. Scarpino wrote:
I'm not sure if this is an nfs/autofs problem or zfs problem... But I'll try
here first...
On our server, I've got a zfs directory called "cube/builds/izick/". In this
directory I have a number of mountpoints to other zfs file systems.. The problem
happens when we…
Richard Elling wrote:
Peter Eriksson wrote:
ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work, same
with rsync.
ufsrestore obviously won't work on ZFS.
ufsrestore works fine; it only reads from a 'ufsdump' format medium and
writes through generic filesystem APIs. I did some…
Bill Sommerfeld wrote:
On Fri, 2007-03-30 at 06:12 -0600, Robert Thurlow wrote:
Last night, after moving the
drives, I started a scrub. It's still running. At 20 hours, I
was up to 57.75%, and had 14.5 hours left.
Do you have any cron jobs which are creating periodic snapshots?
No
Hi folks,
In some prior posts, I've talked about trying to get four IDE drives in
a Firewire case working. Yesterday, I bailed out due to hangs, filed
6539587, and moved the drives inside my Opteron box, hanging off one
of these:
http://www.newegg.com/Product/Product.asp?Item=N82E16816124001
N…
Jürgen Keil wrote:
I still haven't got any "warm and fuzzy" responses
yet solidifying ZFS in combination with Firewire or USB enclosures.
I was unable to use zfs (that is "zpool create" or "mkfs -F ufs") on
firewire devices, because scsa1394 would hang the system as
soon as multiple concurrent…
Darren Reed wrote:
Using Solaris 10, Update 2
I've just rebooted my desktop and I have discovered that a ZFS
filesystem appears to have gone missing.
The filesystem in question was called "biscuit/home" and should
have been modified to have its mountpoint set to /export/home.
Is there…
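For reference, the usual first checks, using the dataset name from the post:
# zfs list -r biscuit                       # is biscuit/home still there?
# zfs get mountpoint,mounted biscuit/home   # what does it think its mountpoint is?
# zfs set mountpoint=/export/home biscuit/home
# zfs mount biscuit/home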
I have this external Firewire box with 4 IDE drives in it, attached to
a Sunblade 2500. I've built the following pool on them:
banff[1]% zpool status
  pool: pond
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pond        ONLINE       0 …
Hi all,
My disk resources are all getting full again, so it must be time to
buy more storage :-) I'm using ZFS at home, and it's worked great
on the concat of a 74 GB IDE and a 74 GB SATA drive, especially with
redundant meta-data. That's puny compared to some of the external
storage bricks I see…
Eric Enright wrote:
Samba does not currently support ZFS ACLs.
Yes, but this just means you can't get/set your ACLs from a CIFS
client. ACLs will be enforced just fine once set locally on the
server; you may also be able to get/set them from an NFS client.
You may know this, but I know some a…
Robert Petkus wrote:
When using sharenfs, do I really need to NFS export the parent zfs
filesystem *and* all of its children? For example, if I have
/zfshome
/zfshome/user1
/zfshome/user1+n
it seems to me like I need to mount each of these exported filesystems
individually on the NFS client. T…
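For what it's worth, sharenfs is inherited, so one setting on the parent
shares all the children too; the client still has to reach each one (or use
NFSv4 mirror mounts). A sketch with a hypothetical dataset name:
# zfs set sharenfs=on tank/zfshome    # user1 ... user1+n inherit and are shared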