Re: [zfs-discuss] lucreate error: Cannot determine the physical

2008-04-08 Thread Roman Morokutti
> Support will become available in the build 89 or 90
> time frame, at the same time that zfs as a root file
> system is supported.

I greatly appreciate this; there is nothing more to do
than wait for ZFS to become capable of live upgrade.

Roman
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Device naming weirdness -- possible bug report?

2008-04-08 Thread Luke Scharf
*Platform:*

* OpenSolaris snv79 on an older beige-box Intel x86
* Apple XRaid disk box, with 7 JBOD disks
* LSI FC controller -
  http://www.lsi.com/storage_home/products_home/host_bus_adapters/fibre_channel_hbas/lsi7404eplc/index.html?remote=1&locale=EN

*Description:*
When a drive is yanked, this happy pool:

datapool   ONLINE   0 0 0
  raidz1   ONLINE   0 0 0
c9t6000393214EEd0  ONLINE   0 0 0
c9t6000393214EEd1  ONLINE   0 0 0
c9t6000393214EEd2  ONLINE   0 0 0
c9t6000393214EEd3  ONLINE   0 0 0
c9t6000393214EEd4  ONLINE   0 0 0
c9t6000393214EEd5  ONLINE   0 0 0
c9t6000393214EEd6  ONLINE   0 0 0
  


Turns into this unhappy pool that cannot reflect reality:

datapool   DEGRADED 0 0 0
  raidz1   DEGRADED 0 0 0
c9t6000393214EEd0  ONLINE   0 0 0
c9t6000393214EEd1  ONLINE   0 0 0
c9t6000393214EEd2  ONLINE   0 0 0
c9t6000393214EEd3  ONLINE   0 0 0
c9t6000393214EEd4  ONLINE   0 0 0
c9t6000393214EEd6  FAULTED  0 0 0  corrupted data
c9t6000393214EEd6  ONLINE   0 0 0
  

Note that c9t6000393214EEd6, impossibly, appears _*TWICE*_ in the list!

After replacing the disk with a mostly-blank disk (with some leftover 
zfs headers on it from another experiment), I'm unable to offline or 
replace c9t6000393214EEd5, or generally do anything that would 
bring the array out of the degraded state.

If I export/import the pool, it looks like this:

NAME   STATE READ WRITE CKSUM
datapool   DEGRADED 0 0 0
  raidz1   DEGRADED 0 0 0
c9t6000393214EEd0  ONLINE   0 0 0
c9t6000393214EEd1  ONLINE   0 0 0
c9t6000393214EEd2  ONLINE   0 0 0
c9t6000393214EEd3  ONLINE   0 0 0
c9t6000393214EEd4  ONLINE   0 0 0
6898074116173351320    FAULTED  0 0 0  was /dev/dsk/c9t6000393214EEd6s0
c9t6000393214EEd6  ONLINE   0 0 0

errors: No known data errors
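
For reference, the kind of thing I've been trying against the phantom entry, 
without success so far (a sketch: the GUID comes from the import output above, 
the replacement device name is a guess, and device names are abbreviated as in 
the listings):

# replace the faulted entry, addressed by its numeric GUID, with the new disk
zpool replace datapool 6898074116173351320 c9t6000393214EEd5
# or try to take the phantom entry offline first
zpool offline datapool 6898074116173351320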
  


*Some thoughts:*

* Has anyone else seen this?
* Having a device in the raidz list twice is clearly a problem!
* Being able to change the device list by exporting/importing
  (without plugging/unplugging any hardware) is clearly a problem, too!
* Might the LSI driver or the XRaid re-order the d[0-9] devices when
  one of them goes away?
* We're thinking of various other ways to expose this problem: a
  newer version of OpenSolaris (b85, probably), and blanking
  drives-used-in-other-experiments more aggressively.


Thanks,
-Luke

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Downgrade zpool version?

2008-04-08 Thread David Loose
> Another approach might be to stick with Solaris on the server, and
> run netatalk instead of SAMBA (or, you
> know your macs can speak NFS ;>).

> I also built mt-daapd on Solaris (just for fun) and iTunes can see that
> shared library - however this wasn't much use to me as I still want to
> use iTunes to manage/populate the library.

> Alternatively you could run Banshee or mt-daapd on the Solaris box and
> just rely on iTunes sharing. =P
> 
> Seriously, NFS is a totally reasonable way to go.

Thanks for the replies. These are all good solutions that I've considered in 
the past. Having one Mac manage all of my media is my ideal solution: it lets 
me sync all of the iPods in the house from one place and spares me the pain of 
merging music from several computers into one central library. That said, my 
ideal solution seems to be a no-go at this point, so it looks like I'll have 
to settle on one of these suggestions anyway.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs filesystem metadata checksum

2008-04-08 Thread asa
Hello all. I am looking to verify my zfs backups in the most minimal way 
possible, i.e. without having to md5 the whole volume.

Is there a way to get a checksum for a snapshot and compare it against another 
zfs volume containing all the same blocks, to verify that they hold the same 
information? Even after I destroy the snapshot on the source?

kind of like:

zfs create tank/myfs
dd if=/dev/urandom bs=128k count=1000 of=/tank/myfs/TESTFILE
zfs snapshot tank/myfs@snap1
zfs send tank/myfs@snap1 | zfs recv tank/myfs_BACKUP

zfs destroy tank/myfs@snap1

zfs snapshot tank/myfs@snap2


someCheckSumVodooFunc(tank/myfs)
someCheckSumVodooFunc(tank/myfs_BACKUP)

Is there some zdb hackery that results in a metadata checksum usable 
in this scenario?
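
The only fallback I can think of is to keep a common snapshot on both sides and 
checksum the send streams (a sketch, assuming digest(1) is available; I don't 
know whether re-sending a received snapshot is guaranteed to produce a 
byte-identical stream, and it still reads all the data, which is exactly what 
I'd like to avoid):

# keep tank/myfs@snap1 on both source and backup instead of destroying it
zfs send tank/myfs@snap1 | digest -a md5
zfs send tank/myfs_BACKUP@snap1 | digest -a md5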

Thank you all!

Asa
zfs worshiper
Berkeley, CA
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ls -lt for links slower than for regular files

2008-04-08 Thread Bob Friesenhahn
On Tue, 8 Apr 2008, [EMAIL PROTECTED] wrote:
> a few seconds and the links list in, perhaps, 60 seconds.  Is there a
> difference in what ls has to do when listing links versus listing regular 
> files
> in ZFS that would cause a slowdown?

Since you specified '-t', the links have to be "dereferenced" (the file 
each one refers to must be found), which means opening the directory to 
see whether the file exists and what its properties are.  With 50K+ files, 
opening the directory and finding the file takes tangible time.  If there 
are multiple directories in the symbolic link's path, those directories 
need to be opened as well.  Symbolic links are not free.

More RAM may help if it results in keeping the directory data hot in 
the cache.

If the links were hard links rather than symbolic links, performance 
would be similar to that of regular files (since a hard link simply is 
another name for a regular file).
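
A quick way to see the extra work is to count the system calls each listing 
makes (a sketch; the directory paths are placeholders):

# truss -c prints a per-syscall count summary to stderr
truss -c ls -lt /path/to/symlink-dir > /dev/null
truss -c ls -lt /path/to/regular-dir > /dev/null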

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ls -lt for links slower than for regular files

2008-04-08 Thread ap60
Hi...

System Config:
 2 Intel 3 GHz 5160 dual-core CPUs
 10 SATA 750 GB disks running as a ZFS RAIDZ2 pool
 8 GB Memory
 SunOS 5.11 snv_79a on a separate UFS mirror
 ~150 read I/Os/second, ~300 write I/Os/second
on the ZFS pool when busy
 ARC size ~2 GB
 No separate ARC or ZIL cache

I have a couple of large directories, ~58,000 files, where one contains all 
regular files while the other contains all links pointing back to the regular 
files.  On a busy ZFS filesystem, when I do an "ls -lat" on the regular file 
directory, it returns within a few minutes or less, whereas when I do the same 
thing on the directory of links, it can take from 15 minutes to over an hour. 
When I stop our data collection application, the regular files then list within 
a few seconds and the links list in, perhaps, 60 seconds.  Is there a 
difference in what ls has to do when listing links versus listing regular files 
in ZFS that would cause a slowdown?

Thanks...

  Art

Arthur A. Person
Research Assistant, System Administrator
Penn State Department of Meteorology
email:  [EMAIL PROTECTED], phone:  814-863-1563
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Algorithm for expanding RAID-Z

2008-04-08 Thread Adam Leventhal
After hearing many vehement requests for expanding RAID-Z vdevs, Matt Ahrens
and I sat down a few weeks ago to figure out a mechanism that would work.
While Sun isn't committing resources to implementing a solution, I've written
up our ideas here:

  http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z

I'd encourage anyone interested in getting involved with ZFS development to
take a look.

Adam

-- 
Adam Leventhal, Fishworks            http://blogs.sun.com/ahl
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How many ZFS pools is it sensible to use on a single server?

2008-04-08 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 04/08/2008 11:22:53 AM:

>  In our environment, the politically and administratively simplest
> approach to managing our storage is to give each separate group at
> least one ZFS pool of their own (into which they will put their various
> filesystems). This could lead to a proliferation of ZFS pools on our
> fileservers (my current guess is at least 50 pools and perhaps up to
> several hundred), which leaves us wondering how well ZFS handles this
> many pools.
>
>  So: is ZFS happy with, say, 200 pools on a single server? Are there any
> issues (slow startup, say, or peculiar IO performance) that we'll run
> into? Has anyone done this in production? If there are issues, is there
> any sense of what the recommended largest number of pools per server is?
>

Chris,

  Well, I have done testing with filesystems and not as much with
pools -- I believe the core design premise for ZFS is that administrators
would use few pools and many filesystems.  I would think that Sun would
recommend that you make one large pool (or a few) and divvy out filesystems
with reservations to the groups (to which they can add sub-filesystems).
As far as ZFS filesystems are concerned, my testing has shown that the mount
time and I/O overhead for multiple filesystems scale pretty linearly --
timing 10 mounts translates pretty well to 100 and 1000.  After you hit
some level (depending on processor and memory), the mount time, I/O, and
write/read batching spike up pretty heavily.  This is one of the reasons I
take a strong stance against the recommendation that people use
reservations and filesystems as user/group quotas (ignoring that the
functionality is by no means at parity).
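
A minimal sketch of that layout (pool devices, group names, and sizes are made up):

# one big pool; one filesystem per group with a guaranteed floor and a cap
zpool create tank mirror c1t0d0 c2t0d0 mirror c1t1d0 c2t1d0
zfs create tank/groupA
zfs set reservation=500G tank/groupA
zfs set quota=1T tank/groupA
# the group then creates its own children, e.g. tank/groupA/projects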

-Wade



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Administration

2008-04-08 Thread Aaron Epps
Oh, one more thing

 - a tool to schedule the deletion of snapshots (Keep the past 14 Daily, 4 
Weekly, 6 Monthly, etc.)
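
Today that means a hand-rolled script along these lines (a sketch; the 
filesystem name, the "@daily-" naming convention, and the retention count are 
all made up):

#!/bin/sh
# keep only the newest 14 daily snapshots under tank/home; destroy the rest
FS=tank/home
KEEP=14
zfs list -H -t snapshot -o name -s creation -r "$FS" | grep "@daily-" |
    nawk -v keep=$KEEP '{ s[NR] = $0 } END { for (i = 1; i <= NR - keep; i++) print s[i] }' |
    while read snap; do
        zfs destroy "$snap"
    done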
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Administration

2008-04-08 Thread Aaron Epps
A couple of things that are lacking from the web-based ZFS administration 
interface and that would be nice to have...

 - a tool to schedule backups (Snapshot this filesystem every 2 hours)
 - a tool to schedule scrubs (Scrub this pool once every week)
 - a tool to configure notifications (Email the SysAdmin if a Disk Dies/etc.)

I realize these things can be done via the command line and cron, or by rolling 
your own script, but if we want to make ZFS accessible to the masses, I think 
these features would be beneficial.
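
For example, the cron-based version of the first two might look like this 
(a sketch; the dataset name and the schedule are just examples):

# root's crontab: snapshot tank/home every two hours, scrub the pool every Sunday at 03:00
0 0,2,4,6,8,10,12,14,16,18,20,22 * * * /usr/sbin/zfs snapshot tank/home@`date +\%Y\%m\%d-\%H\%M`
0 3 * * 0 /usr/sbin/zpool scrub tank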

Has there been any discussion about these? In comparing Sun's ZFS 
Administration Tool to NetApp's, these things seem to be missing.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] How many ZFS pools is it sensible to use on a single server?

2008-04-08 Thread Chris Siebenmann
 In our environment, the politically and administratively simplest
approach to managing our storage is to give each separate group at
least one ZFS pool of their own (into which they will put their various
filesystems). This could lead to a proliferation of ZFS pools on our
fileservers (my current guess is at least 50 pools and perhaps up to
several hundred), which leaves us wondering how well ZFS handles this
many pools.

 So: is ZFS happy with, say, 200 pools on a single server? Are there any
issues (slow startup, say, or peculiar IO performance) that we'll run
into? Has anyone done this in production? If there are issues, is there
any sense of what the recommended largest number of pools per server is?

 Thanks in advance.

- cks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance of one single 'cp'

2008-04-08 Thread Thomas Maier-Komor
Bob Friesenhahn schrieb:
> On my drive array (capable of 260MB/second single-process writes and 
> 450MB/second single-process reads) 'zfs iostat' reports a read rate of 
> about 59MB/second and a write rate of about 59MB/second when executing 
> 'cp -r' on a directory containing thousands of 8MB files.  This seems 
> very similar to the performance you are seeing.
> 
> The system indicators (other than disk I/O) are almost flatlined at 
> zero while the copy is going on.
> 
> It seems that a multi-threaded 'cp' could be much faster.
> 
> With GNU xargs, find, and cpio, I think that it is possible to cobble 
> together a much faster copy since GNU xargs supports --max-procs and 
> --max-args arguments to allow executing commands concurrently with 
> different sets of files.
> 
> Bob


That's the reason I wrote a binary patch (preloadable shared object) for
cp, tar, and friends. You might want to take a look at it...
Here: http://www.maier-komor.de/mtwrite.html

- Thomas

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] lucreate error: Cannot determine the physical

2008-04-08 Thread Lori Alt

It's true that liveupgrade doesn't support zfs yet.  That
support will become available in the build 89 or 90
time frame, at the same time that zfs as a root file system
is supported.

Lori

Ether.pt wrote:
> Hi,
>
> Where was this taken from? From Live Upgrade??? As far as I know, Live Upgrade 
> works only with UFS. At the time of my first install I chose UFS precisely so 
> that I could do Live Upgrade.
>
> What you have there is something I agree with, but NOT for Live Upgrade; rather, 
> it applies to intensive operations under ZFS: lots of clones, changing the pool 
> structure and so on ...
>
> Best regards
> Ether.pt
>
>   
>> "ZFS is ideally suited to making “clone and
>> modify” fast, easy, and space-efficient. Both
>> “clone and modify” tools will work much better
>> if your root file system is ZFS. (The new install
>> tool will require it for some features.)"
>>
>> Roman
>> 
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] lucreate error: Cannot determine the physical

2008-04-08 Thread Ether.pt
Hi,

Where was this taken from? From Live Upgrade??? As far as I know, Live Upgrade 
works only with UFS. At the time of my first install I chose UFS precisely so 
that I could do Live Upgrade.

What you have there is something I agree with, but NOT for Live Upgrade; rather, 
it applies to intensive operations under ZFS: lots of clones, changing the pool 
structure and so on ...

Best regards
Ether.pt

> 
> "ZFS is ideally suited to making “clone and
> modify” fast, easy, and space-efficient. Both
> “clone and modify” tools will work much better
> if your root file system is ZFS. (The new install
> tool will require it for some features.)"
> 
> Roman
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] lucreate error: Cannot determine the physical

2008-04-08 Thread Roman Morokutti
> I didn't think that we had live upgrade support for
> zfs root filesystem yet.
> 

Original quote from Lori Alt:

"ZFS is ideally suited to making “clone and
modify” fast, easy, and space-efficient. Both
“clone and modify” tools will work much better
if your root file system is ZFS. (The new install
tool will require it for some features.)"

Roman
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] lucreate error: Cannot determine the physical boot device ...

2008-04-08 Thread Roman Morokutti
I also found that a very similar problem is described in Bug ID 6442921.

lubootdev reported:

# /etc/lib/lu/lubootdev -b
/dev/dsk/c0d0p0

Using this info for -C I got the following:

# lucreate -C /dev/dsk/c0d0p0 -n B85
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name .
Current boot environment is named .
Creating initial configuration for primary boot environment .
INFORMATION: Unable to determine size or capacity of slice 
.
ERROR: Unable to determine major and minor device numbers for root device 
.
INFORMATION: Unable to determine size or capacity of slice <>.
ERROR: Internal Configuration File  exists but has no contents.
ERROR: The file  specified by the <-f> option is not a valid ICF 
file.
ERROR: Cannot update boot environment configuration file with the current BE 
 information.
ERROR: Cannot create configuration for primary boot environment.

Roman
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Performance of one single 'cp'

2008-04-08 Thread Bob Friesenhahn
On my drive array (capable of 260MB/second single-process writes and 
450MB/second single-process reads) 'zfs iostat' reports a read rate of 
about 59MB/second and a write rate of about 59MB/second when executing 
'cp -r' on a directory containing thousands of 8MB files.  This seems 
very similar to the performance you are seeing.

The system indicators (other than disk I/O) are almost flatlined at 
zero while the copy is going on.

It seems that a multi-threaded 'cp' could be much faster.

With GNU xargs, find, and cpio, I think that it is possible to cobble 
together a much faster copy since GNU xargs supports --max-procs and 
--max-args arguments to allow executing commands concurrently with 
different sets of files.
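
A rough sketch of that idea (untested; it assumes the GNU versions of find and 
xargs are installed, e.g. as gfind/gxargs, and that file names contain no 
embedded newlines):

# copy /src/dir to /dest/dir with up to 8 concurrent cpio pass-mode processes
cd /src/dir &&
gfind . -depth -print0 |
    gxargs -0 --max-args=500 --max-procs=8 \
        sh -c 'printf "%s\n" "$@" | cpio -pdum /dest/dir' sh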

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Downgrade zpool version?

2008-04-08 Thread Albert Lee

On Mon, 2008-04-07 at 20:21 -0600, Keith Bierman wrote:
> On Apr 7, 2008, at 1:46 PM, David Loose wrote:
> >  my Solaris samba shares never really played well with iTunes.
> >
> >
> Another approach might be to stick with Solaris on the server, and  
> run netatalk  instead of SAMBA (or, you  
> know your macs can speak NFS ;>).

Alternatively you could run Banshee or mt-daapd on the Solaris box and
just rely on iTunes sharing. =P

Seriously, NFS is a totally reasonable way to go.

-Albert

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] lucreate error: Cannot determine the physical boot device ...

2008-04-08 Thread Terry Smith

Roman

I didn't think that we had live upgrade support for zfs root filesystem yet.

T

Roman Morokutti wrote:
> # lucreate -n B85
> Analyzing system configuration.
> Hi,
> 
> after typing 
> 
>   # lucreate -n B85
> 
> I get the following error:
> 
> No name for current boot environment.
> INFORMATION: The current boot environment is not named - assigning name .
> Current boot environment is named .
> Creating initial configuration for primary boot environment .
> ERROR: Unable to determine major and minor device numbers for root device 
> .
> ERROR: Cannot determine the physical boot device for the current boot 
> environment .
> Use the <-C> command line option to specify the physical boot device for the 
> current boot environment .
> ERROR: Cannot create configuration for primary boot environment.
> 
> 
> I tried to use the -C option like:
> 
> lucreate -C c0d0s0 -n B85 but also without 
> success and got this:
> 
> # lucreate -C c0d0s0 -n B85
> ERROR: No such file or directory: cannot stat 
> ERROR: cannot use  as a boot device because it is not a block device
> Usage: lucreate -n BE_name [ -A BE_description ] [ -c BE_name ]
> [ -C ( boot_device | - ) ] [ -f exclude_list-file [ -f ... ] ] [ -I ]
> [ -l error_log-file ] [ -M slice_list-file [ -M ... ] ]
> [ -m mountPoint:devicePath:fsOptions [ -m ... ] ] [ -o out_file ]
> [ -s ( - | source_BE_name ) ] [ -x exclude_dir/file [ -x ... ] ] [ -X 
> ]
> [ -y include_dir/file [ -y ... ] ] [ -Y include_list-file [ -Y ... ] ]
> [ -z filter_list-file ]
> 
> 
> Could someone please tell me how to use lucreate?
> 
> Roman
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-08 Thread Chris Siebenmann
| Is it really true that as the guy on the above link states (Please
| read the link, sorry) when one iSCSI mirror goes off line, the
| initiator system will panic?  Or even worse, not boot its self cleanly
| after such a panic?  How could this be?  Anyone else with experience
| with iSCSI based ZFS mirrors?

 Our experience with Solaris 10U4 and iSCSI targets is that Solaris only
panics if the pool fails entirely (eg, you lose both/all mirrors in a
mirrored vdev). The fix for this is in current OpenSolaris builds, and
we have been told by our Sun support people that it will (only) appear
in Solaris 10 U6, apparently scheduled for sometime around fall.

 My experience is that Solaris will normally recover after the panic and
reboot, although failed ZFS pools will be completely inaccessible as you'd
expect. However, there are two gotchas:

* under at least some circumstances, a completely inaccessible iSCSI
  target (as you might get with, eg, a switch failure) will stall booting
  for a significant length of time (tens of minutes, depending on how many
  iSCSI disks you have on it).

* if a ZFS pool's storage is present but unwritable for some reason,
  Solaris 10 U4 will panic the moment it tries to bring the pool up;
  you will wind up stuck in a perpetual 'boot, panic, reboot, ...'
  cycle until you forcibly remove the storage entirely somehow.

The second issue is presumably fixed as part of the general fix of 'ZFS
panics on pool failure', although we haven't tested it explicitly. I
don't know if the first issue is fixed in current Nevada builds.

- cks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] lucreate error: Cannot determine the physical boot device ...

2008-04-08 Thread Roman Morokutti
# lucreate -n B85
Analyzing system configuration.
Hi,

after typing 

  # lucreate -n B85

I get the following error:

No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name .
Current boot environment is named .
Creating initial configuration for primary boot environment .
ERROR: Unable to determine major and minor device numbers for root device 
.
ERROR: Cannot determine the physical boot device for the current boot 
environment .
Use the <-C> command line option to specify the physical boot device for the 
current boot environment .
ERROR: Cannot create configuration for primary boot environment.


I tried to use the -C option like:

lucreate -C c0d0s0 -n B85 but also without 
success and got this:

# lucreate -C c0d0s0 -n B85
ERROR: No such file or directory: cannot stat 
ERROR: cannot use  as a boot device because it is not a block device
Usage: lucreate -n BE_name [ -A BE_description ] [ -c BE_name ]
[ -C ( boot_device | - ) ] [ -f exclude_list-file [ -f ... ] ] [ -I ]
[ -l error_log-file ] [ -M slice_list-file [ -M ... ] ]
[ -m mountPoint:devicePath:fsOptions [ -m ... ] ] [ -o out_file ]
[ -s ( - | source_BE_name ) ] [ -x exclude_dir/file [ -x ... ] ] [ -X ]
[ -y include_dir/file [ -y ... ] ] [ -Y include_list-file [ -Y ... ] ]
[ -z filter_list-file ]


Could someone please tell me how to use lucreate?

Roman
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS volume export to USB-2 or Firewire?

2008-04-08 Thread Bob Friesenhahn
Currently it is easy to share a ZFS volume as an iSCSI target.  Has 
there been any thought toward adding the ability to share a ZFS volume 
via USB-2 or Firewire to a directly attached client?
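
For comparison, the iSCSI case is just a property on a volume today (a sketch; 
the volume name and size are made up):

# create a 100 GB ZFS volume and expose it as an iSCSI target
zfs create -V 100g tank/exportvol
zfs set shareiscsi=on tank/exportvol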

There is a substantial market for storage products which act like a 
USB-2 or Firewire "drive".  Some of these offer some form of RAID. 
It seems to me that ZFS with a server capability to appear as several 
USB-2 or Firewire drives (or eSATA) may be appealing for larger RAIDs 
of several terabytes.

Is anyone aware of an application which can usefully share a ZFS 
volume (essentially a file) in this way?

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Performance of one single 'cp'

2008-04-08 Thread Henrik Hjort
Hi!

I just want to check with the community to see if this is normal.

I have used an X4500 with 500 GB disks and I'm not impressed by the copy 
performance. I can run several jobs in parallel and get close to 400 MB/s, but 
I need better performance from a single copy.  I have tried to be "EVIL" as 
well, but without success.

Tests done with:
Solaris 10 U4
Solaris 10 U5 (B10)
Nevada B86

*Setup*

# zpool status
 pool: datapool
state: ONLINE
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datapool    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c4t2d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c4t4d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t5d0  ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c4t6d0  ONLINE       0     0     0
            c5t6d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            c7t6d0  ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0

*Result* - Around 50-60 MB/s read

parsing profile for config: copyfiles
Running 
/tmp/temp165-231.*.*.COM-zfs-readtest-Apr_8_2008-09h_09m_07s/copyfiles/thisrun.f
FileBench Version 1.2.2
 5109: 0.005: CopyFiles Version 2.3 personality successfully loaded
 5109: 0.005: Creating/pre-allocating files and filesets
 5109: 0.069: Fileset destfiles: 1 files, avg dir = 20, avg depth = 3.1, 
mbytes=156
 5109: 3.922: Removed any existing fileset destfiles in 4 seconds
 5109: 3.952: Creating fileset destfiles...
 5109: 3.952: Preallocated 0 of 1 of fileset destfiles in 1 seconds
 5109: 4.039: Fileset bigfileset: 1 files, avg dir = 20, avg depth = 3.1, 
mbytes=158
 5109: 4.071: Removed any existing fileset bigfileset in 1 seconds
 5109: 4.098: Creating fileset bigfileset...
 5109: 117.245: Preallocated 1 of 1 of fileset bigfileset in 114 seconds
 5109: 117.245: waiting for fileset pre-allocation to finish
 5109: 117.245: Running '/opt/filebench/scripts/fs_flush zfs /export/transcoded'
'zpool export datapool'
'zpool import datapool'
 5109: 127.338: Change dir to 
/tmp/temp165-231.*.*.COM-zfs-readtest-Apr_8_2008-09h_09m_07s/copyfiles
 5109: 127.339: Starting 1 filereader instances
 5287: 128.348: Starting 16 filereaderthread threads
 5109: 131.358: Running...
 5109: 134.378: Run took 3 seconds...
 5109: 134.378: Per-Operation Breakdown
closefile2   3312ops/s   0.0mb/s  0.0ms/op3us/op-cpu
closefile1   3312ops/s   0.0mb/s  0.0ms/op4us/op-cpu
writefile2   

Re: [zfs-discuss] Downgrade zpool version?

2008-04-08 Thread Darren J Moffat
Keith Bierman wrote:
> On Apr 7, 2008, at 1:46 PM, David Loose wrote:
>>  my Solaris samba shares never really played well with iTunes.
>>
>>
> Another approach might be to stick with Solaris on the server, and  
> run netatalk  instead of SAMBA (or, you  
> know your macs can speak NFS ;>).

My iTunes and iPhoto libraries are served from Solaris to MacOS over NFS 
and it works just fine.  Sadly NFSv3 only but that is a MacOS X issue.
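
The ZFS side of that is just the sharenfs property (a sketch; the dataset name 
is an example):

zfs set sharenfs=on tank/media    # or e.g. sharenfs=rw=mac1:mac2 to restrict clients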

I also built mt-daapd on Solaris (just for fun) and iTunes can see that 
shared library - however this wasn't much use to me as I still want to 
use iTunes to manage/populate the library.

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs with SAN / cluster problem

2008-04-08 Thread Christophe Rolland
Hi

I have a SAN disk visible on two nodes (from the global zone or a zone).
On the first node, I can create a pool using "zpool create x1 sandisk".
If I try to reuse this disk on the first node, I get a "vdev in use" warning.
If I try to create a pool on the second node using the same disk, "zpool create 
x2 sandisk", it works fine, without any warning, before leading to obvious problems.

I am using Solaris 10 U4.
Did anyone encounter the same problem on OpenSolaris or S10?
What could I be missing?
This happens regardless of what the NOINUSE_CHECK variable is set to.
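
A manual check that can be run on the second node before creating the pool is 
to dump any existing vdev labels with zdb (a sketch; the device path is an 
example):

# non-empty label output means the disk already belongs to a pool somewhere
zdb -l /dev/rdsk/c2t0d0s0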

thanks a lot
christophe
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss