I hit this myself. I could be wrong, but from memory I think the paths are OK
if you're a normal user; it's just root that's messed up.
On Wed, Jan 28, 2009 at 11:07 PM, Christine Tran
wrote:
> What is wrong with this?
>
> # chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache
> chmod: invalid mode: `A+user:webservd:add_file/write_data/execute:allow'
> Try `chmod --help' for more information.
>
Never mind. /usr/
What is wrong with this?
# chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache
chmod: invalid mode: `A+user:webservd:add_file/write_data/execute:allow'
Try `chmod --help' for more information.
This works in a zone and works on S10u5, but does not work on OpenSolaris 2008.11.
CT
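A likely explanation (my assumption, not stated in this excerpt): on OpenSolaris
2008.11 the GNU utilities in /usr/gnu/bin come first in the default PATH, and GNU
chmod does not understand NFSv4 ACL modes. Invoking the native chmod explicitly
should work:
# /usr/bin/chmod -R A+user:webservd:add_file/write_data/execute:allow /var/apache
# /usr/bin/ls -Vd /var/apache (to verify the ACL entry was added)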
Yes. I have disconnected the bad disk and booted with nothing in the slot, and
also with a known-good replacement disk on the same SATA port. It doesn't change
anything.
Running 2008.11 on the box and 2008.11 snv_101b_rc2 on the LiveCD. I'll give it
a shot booting from the latest build and see if
On Wed, Jan 28, 2009 at 19:04, Chris Kirby wrote:
> On Jan 28, 2009, at 11:49 AM, Will Murnane wrote:
>
>>
>> (on the client workstation)
>> wil...@chasca:~$ dd if=/dev/urandom of=bigfile
>> dd: closing output file `bigfile': Disk quota exceeded
>> wil...@chasca:~$ rm bigfile
>> rm: cannot remove
On Wed, Jan 28, 2009 at 5:18 PM, Nathan Kroenert
wrote:
> As a side note, I had a look for anything that looked like a CR for zfs
> destroy / undestroy and could not find one.
>
> Anyone interested in me submitting an RFE to have something like a
>
>zfs undestroy pool/fs
Heh, this questi
On Jan 28, 2009, at 16:39, Miles Nordin wrote:
> Oxford 911 seems to describe a brand of chips, not a specific chip,
> but it's been a good brand, and it's a very old brand for firewire.
As an added bonus this chipset allows "multiple logins", so it can be
used to experiment with things like Oracle
On Jan 28, 2009, at 11:49 AM, Will Murnane wrote:
>
> (on the client workstation)
> wil...@chasca:~$ dd if=/dev/urandom of=bigfile
> dd: closing output file `bigfile': Disk quota exceeded
> wil...@chasca:~$ rm bigfile
> rm: cannot remove `bigfile': Disk quota exceeded
Will,
I filed a CR on th
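For anyone hitting the same thing, a frequently suggested workaround (my note,
not from this message, and it may not free space if snapshots still hold the
blocks) is to truncate the file first, since truncation needs no new space:
$ cat /dev/null > bigfile (truncate the file in place)
$ rm bigfile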
On 1/28/2009 12:16 PM, Nicolas Williams wrote:
On Wed, Jan 28, 2009 at 09:07:06AM -0800, Frank Cusack wrote:
On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn
wrote:
On Tue, 27 Jan 2009, Frank Cusack wrote:
i was wondering if you have a zfs filesystem that mounts in a su
Orvar,
In an existing RAIDZ configuration, you would add the cache device like
this:
# zpool add pool-name cache device-name
Currently, cache devices are only supported in the OpenSolaris and SXCE
releases.
The important thing is determining whether the cache device would
improve your workload'
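For illustration, with a hypothetical pool name and cache device (not taken
from this thread):
# zpool add tank cache c2t0d0
# zpool status tank (the device appears under a separate "cache" section)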
On Wed, 28 Jan 2009, Richard Elling wrote:
> Orvar Korvar wrote:
>
>> I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a
>> similar vein? Would it be easy to do?
>
> Yes.
>
To be specific, you use the 'cache' argument to zpool, as in:
zpool create <...> cache
Regard
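A sketch of what that looks like at pool-creation time, with made-up device
names:
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 cache c2t0d0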
> The means to specify this is "sndradm -nE ...",
> where 'E' means equal enabled (no initial full synchronization is performed).
Got it. Nothing on the disk, nothing to replicate (yet).
>The manner in which SNDR can guarantee that two or more volumes are
>write-order consistent, as they are replicated, is to place them in the
>same I/O consistency
On Wed, Jan 28, 2009 at 02:11:54PM -0800, bdebel...@intelesyscorp.com wrote:
> Recovering Destroyed ZFS Storage Pools.
> You can use the zpool import -D command to recover a storage pool that has
> been destroyed.
> http://docs.sun.com/app/docs/doc/819-5461/gcfhw?a=view
But the OP destroyed a dat
He's not trying to recover a pool - Just a filesystem...
:)
bdebel...@intelesyscorp.com wrote:
> Recovering Destroyed ZFS Storage Pools.
> You can use the zpool import -D command to recover a storage pool that has
> been destroyed.
> http://docs.sun.com/app/docs/doc/819-5461/gcfhw?a=view
Also, my experience with a very small ARC is that performance will stink.
ZFS is an advanced filesystem that IMO makes some assumptions about the
capability and capacity of current hardware. If you don't give it what it's
expecting, your results may be equally unexpected.
If you are keen to test
I'm no authority, but I believe it's gone.
Some of the others on the list might have some funky thoughts, but I would
suggest that if you have already done any other I/Os to the disk, you have
likely rolled past the point of no return.
Anyone else care to comment?
As a side note, I had a
Recovering Destroyed ZFS Storage Pools.
You can use the zpool import -D command to recover a storage pool that has been
destroyed.
http://docs.sun.com/app/docs/doc/819-5461/gcfhw?a=view
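For reference, the sequence from that doc looks like this (the pool name is
hypothetical):
# zpool import -D (lists destroyed pools that are still importable)
# zpool import -D tank (re-imports the destroyed pool)
Note that this applies to destroyed pools; a dataset destroyed inside a live
pool is a different situation.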
While replacing a raidz1 of four 500GB drives with four 1.5TB drives, I ran
into an interesting issue on the third one. The process was to remove the old
drive, put the new drive in, and let it rebuild.
The problem was the third drive I put in had a hardware fault. That caused
both
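For context, the usual per-disk step in that process (device name invented)
would be:
# zpool replace tank c1t3d0 (after physically swapping the disk in the same slot)
# zpool status tank (watch the resilver complete before moving to the next disk)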
> "fc" == Frank Cusack writes:
fc> say you have pool1/data which mounts on /data and pool2/foo
fc> which mounts on /data/subdir/foo,
From the rest of the thread I guess the mounts aren't reordered across
pool boundaries, but I have this problem even for mount-ordering
within the sam
BJ Quinn wrote:
> I have two servers set up, with two drives each. The OS is stored
> on one drive, and the data on the second drive. I have SNDR
> replication set up between the two servers for the data drive only.
>
> I'm running out of space on my data drive, and I'd like to do a
> simp
> "ap" == Alan Perry writes:
ap> the firewire drive that you want to use or, more precisely,
ap> the 1394-to-ATA (or SATA) bridge
for me Oxford 911 worked well, and PL-3507 crashed daily and needed a
reboot of the case to come back. Prolific released new firmware, but
it didn't help
bash-3.00# uname -a
SunOS opf-01 5.10 Generic_13-01 sun4v sparc SUNW,T5140
It has a dual-port SAS HBA connected to a dual-controller ST2530. The storage
is connected to two T5140s. I tried exporting the pool to the other node and
tried destroying it there, without any luck.
thanks
ramesh
I have two servers set up, with two drives each. The OS is stored on one
drive, and the data on the second drive. I have SNDR replication set up
between the two servers for the data drive only.
I'm running out of space on my data drive, and I'd like to do a simple "zpool
attach" command to ad
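As a syntax reminder (device names below are invented): "zpool attach" mirrors
an existing device rather than adding raw capacity, so the usual way to grow a
single-disk pool this way is to attach a larger disk, let it resilver, and then
detach the smaller one.
# zpool attach datapool c1t1d0 c2t1d0
# zpool status datapool (wait for the resilver to complete)
# zpool detach datapool c1t1d0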
Orvar Korvar wrote:
> I understand Fishworks has an L2ARC cache, which, as I have understood it, is
> an SSD drive used as a cache?
>
Fishworks is an engineering team, I hear they have many L2ARCs in
their lab :-) Yes, the Sun Storage 7000 series systems can have
read-optimized SSDs for use as L2ARC
OK, how about a 4-port PCIe USB 2.0 card that works?
Hi,
I just said zfs destroy pool/fs, but meant to say zfs destroy
pool/junk. Is 'fs' really gone?
thx
jake
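One way to double-check exactly what got destroyed (my suggestion, not from
this message) is the pool's command history:
# zpool history pool (shows the zfs destroy commands that were actually run)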
On 28 Jan 2009, at 19:40, BJ Quinn wrote:
>>> What about when I pop in the drive to be resilvered, but right
>>> before I add it back to the mirror, will Solaris get upset that I
>>> have two drives both with the same pool name?
>> No, you have to do a manual import.
>
> What you mean is that
>> What about when I pop in the drive to be resilvered, but right before I add
>> it back to the mirror, will Solaris get upset that I have two drives both
>> with the same pool name?
>No, you have to do a manual import.
What you mean is that if Solaris/ZFS detects a drive with an identical pool
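If both disks do end up visible under the same pool name, the import can be
disambiguated by the numeric pool ID and the pool renamed on the way in (the ID
and names here are only illustrative):
# zpool import (lists importable pools along with their numeric IDs)
# zpool import 1234567890123456789 datapool2 (imports that specific pool under a new name)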
Thanks a lot Ethan - that helped!
-Garima
Ethan Quach wrote:
> You've got to import the pool first:
>
># zpool import (to see the names of pools available to import)
>
> The name of the pool is likely "rpool", so
>
># zpool import -f rpool
>
>
> Then you mount your root dataset via zfs,
You've got to import the pool first:
# zpool import (to see the names of pools available to import)
The name of the pool is likely "rpool", so
# zpool import -f rpool
Then you mount your root dataset via zfs, or use the
beadm(1M) command to mount it:
# beadm list (to see the b
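Putting those steps together from a LiveCD shell (the BE name below is a
placeholder; use whatever beadm list reports):
# zpool import -f rpool
# beadm list (note the name of the boot environment you want)
# beadm mount opensolaris /mnt (mounts that BE's root dataset on /mnt)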
Can anyone help me figure this out:
I am a new user of ZFS, and recently installed 2008.11 with ZFS.
Unfortunately I messed up the system and had to boot using the LiveCD.
On legacy systems, it was possible to get to the boot prompt, mount the disk
containing "/" on /mnt, and then fr
I understand Fishworks has an L2ARC cache, which, as I have understood it, is an
SSD drive used as a cache?
I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a similar
vein? Would it be easy to do? What would be the impact? Has anyone tried this?
On Wed, Jan 28, 2009 at 09:07:06AM -0800, Frank Cusack wrote:
> On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn
> wrote:
> > On Tue, 27 Jan 2009, Frank Cusack wrote:
> >> i was wondering if you have a zfs filesystem that mounts in a subdir
> >> in another zfs filesystem, is there any problem
On Wed, Jan 28, 2009 at 09:32:23AM -0800, Frank Cusack wrote:
> On January 28, 2009 9:24:21 AM -0800 Richard Elling
> wrote:
> > Frank Cusack wrote:
> >> i was wondering if you have a zfs filesystem that mounts in a subdir
> >> in another zfs filesystem, is there any problem with zfs finding
> >>
We have been using ZFS for user home directories for a good while now.
When we discovered the problem with full filesystems not allowing
deletes over NFS, we became very anxious to fix this; our users fill
their quotas on a fairly regular basis, so it's important that they
have a simple recourse t
Rob Brown wrote:
> Afternoon,
>
> In order to test my storage I want to stop the caching effect of the
> ARC on a ZFS filesystem. I can do something similar on UFS by mounting
> it with the directio flag.
No, not really the same concept, which is why Roch wrote
http://blogs.sun.com/roch/entry/zfs_and_dir
On January 28, 2009 9:24:21 AM -0800 Richard Elling
wrote:
> Frank Cusack wrote:
>> i was wondering if you have a zfs filesystem that mounts in a subdir
>> in another zfs filesystem, is there any problem with zfs finding
>> them in the wrong order and then failing to mount correctly?
>>
>> say yo
Peter van Gemert wrote:
> I have a need to create a pool that only concatenates the LUNs assigned to it.
> The default for a pool is stripe and other possibilities are mirror, raidz
> and raidz2.
>
> Is there any way I can create concat pools. Main reason is that the
> underlying LUNs are alread
Frank Cusack wrote:
> i was wondering if you have a zfs filesystem that mounts in a subdir
> in another zfs filesystem, is there any problem with zfs finding
> them in the wrong order and then failing to mount correctly?
>
> say you have pool1/data which mounts on /data and pool2/foo which
> mounts
On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn
wrote:
> On Tue, 27 Jan 2009, Frank Cusack wrote:
>
>> i was wondering if you have a zfs filesystem that mounts in a subdir
>> in another zfs filesystem, is there any problem with zfs finding
>> them in the wrong order and then failing to mount
Hi,
I have the following setup that worked fine for a couple of months.
(root disk)
- zfs rootpool (build 100)
(on 2 mirrored data disks:)
- datapool/export
- datapool/export/home
- datapool/export/fotos
- datapool/export/fotos/2008
Now, when I tried to live upgrade from build 100 to 106, things got m
On Wed, 28 Jan 2009, Peter van Gemert wrote:
> I have a need to create a pool that only concatenates the LUNs
> assigned to it. The default for a pool is stripe and other
> possibilities are mirror, raidz and raidz2.
Zfs does "concatenate" vdevs, and load-shares the writes across vdevs.
If each
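A sketch of what the default layout looks like when each LUN is its own
top-level vdev (device names invented); ZFS dynamically load-shares new writes
across them rather than filling one and then the next:
# zpool create tank c2t0d0 c2t1d0 c2t2d0
# zpool add tank c2t3d0 (later additions become additional top-level vdevs)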
On Wed, Jan 28, 2009 at 07:37, Peter van Gemert wrote:
> Is there any way I can create concat pools.
Not that I'm aware of. However, pools that are not redundant at the
zpool level (i.e., not using mirror or raidz{,2}) are prone to becoming
irrevocably faulted; creating non-redundant pools, even on
"intell
On Tue, 27 Jan 2009, Frank Cusack wrote:
> i was wondering if you have a zfs filesystem that mounts in a subdir
> in another zfs filesystem, is there any problem with zfs finding
> them in the wrong order and then failing to mount correctly?
I have not encountered that problem here and I do have
# zpool status -xv
all pools are healthy
Ben
> What does 'zpool status -xv' show?
>
> On Tue, Jan 27, 2009 at 8:01 AM, Ben Miller
> wrote:
> > I forgot the pool that's having problems was recreated recently so it's
> > already at zfs version 3. I just did a 'zfs upgrade -a' for another pool, bu
Afternoon,
In order to test my storage I want to stop the caching effect of the ARC on
a ZFS filesystem. I can do something similar on UFS by mounting it with the
directio flag. I saw the following two options on a nevada box, which
presumably control it:
primarycache
secondarycache
But I'm running Sola
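On builds that do have those properties, the usual way to keep a dataset's file
data out of the ARC for benchmarking (the dataset name is hypothetical) is:
# zfs set primarycache=metadata tank/bench (cache only metadata, not file data)
# zfs set primarycache=none tank/bench (cache nothing for this dataset)
# zfs get primarycache,secondarycache tank/bench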
I have a need to create a pool that only concatenates the LUNs assigned to it.
The default for a pool is stripe, and other possibilities are mirror, raidz and
raidz2.
Is there any way I can create concat pools? The main reason is that the
underlying LUNs are already striped and we do not want to st
Which firewire card? Any firewire card that is OHCI compliant, which is almost
any add-on firewire card that you would buy new these days.
The bigger question is the firewire drive that you want to use or, more
precisely, the 1394-to-ATA (or SATA) bridge used by the drive. Some work
better th