Re: [zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-29 Thread Joe Little
On Thu, May 29, 2008 at 8:59 PM, Joe Little <[EMAIL PROTECTED]> wrote:
> On Thu, May 29, 2008 at 7:25 PM, Jeb Campbell <[EMAIL PROTECTED]> wrote:
>> Meant to add that zpool import -f pool doesn't work b/c of the missing log 
>> vdev.
>>
>> All the other disks are there and show up with "zpool import", but it won't 
>> import.
>>
>> Is there any way a util could clear the log device vdev from the remaining
>> raidz2 devices?
>>
>> Then I could import just a standard raidz2 pool.
>>
>> I really love zfs (and had recently upgraded to 6 disks in raidz2), but this 
>> is *really* gonna hurt to lose all this stuff (yeah, the work stuff is 
>> backed up, but I have/had tons of personal stuff on there).
>>
>> I definitely would prefer to just sit tight, and see if there is any way to 
>> get this going (read only would be fine).
>>

More to the point, does it report any permanent errors when you try? Again, I
was able to import mine after reassigning the log device so that ZFS thinks
it's there. I got to this point:

[EMAIL PROTECTED]:~# zpool status -v
  pool: data
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0    24
          raidz1    ONLINE       0     0    24
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
        logs        ONLINE       0     0    24
          c3t1d0    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

data/home:<0x0>

Yes, because of the "error" I can no longer have any mounts created at
import, but a manual "zfs mount data/proj" (or any other filesystem
except data/home) is still possible. Again, I think you will want to use
"-o ro" as an option to that mount command so the system doesn't go
bonkers. Check my blog for more info on resetting the log device for a
"zpool replace" action -- which itself puts you in the more troubling
position of possibly having corruption from the resilver, but at least
in my case it let me mount the remaining filesystems read-only.
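
Roughly, the sequence that got me this far looks like (just a sketch --
these are my pool/dataset names, and data/home is the one that stays
broken):

  # zpool import -f data          (only succeeds once ZFS believes the log vdev is present)
  # zfs mount -o ro data/proj     (repeat for each surviving filesystem except data/home)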

>
> You can mount all those filesystems, and then zfs send/recv them off
> to another box. It sucks, but as of now, there is no re-importing of
> the pool UNTIL the log can be removed. Sadly, I think that log removal
> will at least require importation of the pool in question first. For
> some reason you already can't import your pool.
>
> In my case, I was running B70 and could import the pool still, but
> just degraded. I think that once you are at a higher rev (I don't know
> exactly which build, but it includes B82 and B85), you won't be able to
> import it anymore when the log fails.
>
>
>> Jeb
>>
>>
>>
>


Re: [zfs-discuss] ZFS Project Hardware

2008-05-29 Thread Mathew P
I've had a RAIDZ/ZFS File Server since Update 2, so I thought I'd share my 
setup.

Opteron FX-51 (2.3Ghz, Socket 939)
Asus SK8N
4x 512MB EBB Unbuffered DDR1 Memory
2x Skymaster PCI-X 4 Port SATA (based on SI3114 chipset), currently deployed
in 2x PCI slots on the motherboard.
1x Intel 10/100 NIC PCI.
8x 320GB Western Digital SATA Drives

As you can see, I'm sharing the PCI bus for both controllers and my NIC, so
the speed isn't very fast (10MB/s) from a Windows XP client through Samba.

I'm considering changing the motherboard to an Asus K8N-LR, which would let me
use both PCI-X slots and put a dedicated Intel Gigabit NIC in the PCIe slot.
That should dramatically speed things up.
 
 


Re: [zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-29 Thread Joe Little
On Thu, May 29, 2008 at 7:25 PM, Jeb Campbell <[EMAIL PROTECTED]> wrote:
> Meant to add that zpool import -f pool doesn't work b/c of the missing log 
> vdev.
>
> All the other disks are there and show up with "zpool import", but it won't 
> import.
>
> Is there any way a util could clear the log device vdev from the remaining
> raidz2 devices?
>
> Then I could import just a standard raidz2 pool.
>
> I really love zfs (and had recently upgraded to 6 disks in raidz2), but this 
> is *really* gonna hurt to lose all this stuff (yeah, the work stuff is backed 
> up, but I have/had tons of personal stuff on there).
>
> I definitely would prefer to just sit tight, and see if there is any way to 
> get this going (read only would be fine).
>

You can mount all those filesystems, and then zfs send/recv them off
to another box. It sucks, but as of now, there is no re-importing of
the pool UNTIL the log can be removed. Sadly, I think that log removal
will at least require importation of the pool in question first. For
some reason you already can't import your pool.
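
Something along these lines, as a sketch (the target host and pool names
"otherbox" and "backup" are just placeholders):

  # zfs snapshot data/proj@evacuate
  # zfs send data/proj@evacuate | ssh otherbox zfs receive backup/proj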

In my case, I was running B70 and could import the pool still, but
just degraded. I think that once you are at a higher rev (I don't know
exactly which build, but it includes B82 and B85), you won't be able to
import it anymore when the log fails.


> Jeb
>
>
>


Re: [zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-29 Thread Jeb Campbell
Meant to add that zpool import -f pool doesn't work b/c of the missing log vdev.

All the other disks are there and show up with "zpool import", but it won't 
import.

Is there any way a util could clear the log device vdev from the remaining 
raidz2 devices?

Then I could import just a standard raidz2 pool.

I really love zfs (and had recently upgraded to 6 disks in raidz2), but this is 
*really* gonna hurt to lose all this stuff (yeah, the work stuff is backed up, 
but I have/had tons of personal stuff on there).

I definitely would prefer to just sit tight, and see if there is any way to get 
this going (read only would be fine).

Jeb
 
 


[zfs-discuss] slog failure ... *ANY* way to recover?

2008-05-29 Thread Jeb Campbell
Wow -- I had seen Joe Little's blog about i-ram and slog and was using it that 
way, and mine just failed also.

I'm not at the point he is with data loss, but I've booted the OpenSolaris
2008.05 live CD and I can see all the disks (minus the log) in a zpool import.

Please, please, please tell me there is a way to recover from this...

Jeb
 
 


Re: [zfs-discuss] >1TB ZFS thin provisioned partition prevents Opensolaris from booting.

2008-05-29 Thread James C. McPherson
Tano wrote:
> Not sure where to put this but I am cc'ing the ZFS - discussion board.
> 
> I was successful in creating iscsi shares using zfs set shareiscsi=on
> with 2 thin provisioned partitions of 1TB each (zfs create -s -V 1tb
> idrive/d1). Access to the shares with an iscsi initiator was successful,
> all was smooth, until the reboot.
> 
> Upon reboot, the console reports the following errors.
> 
> WARNING: /scsi_vhci/[EMAIL PROTECTED] (sd9): disk has 3221225472 blocks,
> which is too large for a 32-bit kernel
> WARNING: /iscsi/[EMAIL PROTECTED],0 (sd10): disk has 3221225472 blocks,
> which is too large for a 32-bit kernel
> 
> 
> And it continues to do this on the other partition I had created.
> 
> Ultimately coreadm:default fails badly and the server is stuck at 
> svc.startd[7]: Lost repository event due to disconnection.
> 
> I am on a Poweredge 2650 with 2xXeon Processors @2.8GHZ 1.5 GB Ram 
> Running Opensolaris 2008.05
> 
> Any ideas, or is a ZFS partition greater than 1TB not possible on a 32-bit
> kernel? Do I have to move to 64-bit Solaris?


You'll have to move to 64-bit Solaris. There's only so much
that you can do when you're running in 32-bit mode.
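
For what it's worth, a quick check of what the box is actually running
(a generic check, nothing specific to your setup):

  # isainfo -kv
  32-bit i386 kernel modules       (what a 32-bit kernel reports)

As long as that says 32-bit, those >1TB LUs will keep tripping that
warning.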


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


[zfs-discuss] Space used by the snapshot

2008-05-29 Thread Silvio Armando Davi
Hi,

I created a mirrored pool with 1GB of space. Then I created a file system in
that pool and put a 300MB file (file1) in that file system. After that, I
created a snapshot of the file system. With the zfs list command the space
used by the snapshot is 0 (zero). That's ok.

After that I copied the 300MB file to another file (file2) in the same file
system. Listing the files in the file system I can see the two files, and
listing the files in the snapshot I can see only the first file. That's ok
too, but zfs list now shows that the snapshot uses 23.5KB of space.

I suppose that copying file1 changed the atime of its inode, and for this
reason the inode of file1 needed to be copied into the snapshot, using the
snapshot's space. I tried setting atime to off, but the snapshot still uses
23.5KB of space after the copy of the file.

Does anyone know the reason the snapshot uses that 23.5KB of space?
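
For reference, the sequence was roughly this (a sketch; the pool,
filesystem and file names here are illustrative, not the exact ones I
used):

  # zpool create tank mirror c1t0d0 c1t1d0
  # zfs create tank/fs
  # cp /some/300mb-file /tank/fs/file1
  # zfs snapshot tank/fs@snap1
  # zfs list                          (snapshot shows 0 used)
  # cp /tank/fs/file1 /tank/fs/file2
  # zfs list                          (snapshot now shows 23.5K used)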

Thanks,

Silvio
 
 


Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Jonathan Hogg
On 29 May 2008, at 17:52, Chris Siebenmann wrote:

> The first issue alone makes 'zfs send' completely unsuitable for the
> purposes that we currently use ufsdump. I don't believe that we've lost
> a complete filesystem in years, but we restore accidentally deleted
> files all the time. (And snapshots are not the answer, as it is common
> that a user doesn't notice the problem until well after the fact.)
>
> ('zfs send' to live disks is not the answer, because we cannot afford
> the space, heat, power, disks, enclosures, and servers to spin as many
> disks as we have tape space, especially if we want the fault isolation
> that separate tapes give us. most especially if we have to build a
> second, physically separate machine room in another building to put the
> backups in.)

However, the original poster did say they were wanting to back up to  
another disk and said they wanted something lightweight/cheap/easy.  
zfs send/receive would seem to fit the bill in that case. Let's answer  
the question rather than getting into an argument about whether zfs  
send/receive is suitable for an enterprise archival solution.

Using snapshots is a useful practice as it costs fairly little in  
terms of disk space and provides immediate access to fairly recent,  
accidentally deleted files. If one is using snapshots, sending the  
streams to the backup pool is a simple procedure. One can then keep as  
many snapshots on the backup pool as necessary to provide the amount  
of history required. All of the files are kept in identical form on  
the backup pool for easy browsing when something needs to be restored.  
In the event of catastrophic failure of the primary pool, one can quickly  
move the backup disk to the primary system and import it as the new  
primary pool.
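
A minimal sketch of what I mean (the pool and dataset names are purely
illustrative):

  # zfs snapshot tank/home@2008-05-29
  # zfs send -i tank/home@2008-05-28 tank/home@2008-05-29 | \
        zfs receive backup/home

Run something like that from cron, keep as many snapshots on the backup
pool as you need, and prune the older ones on the primary.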

It's a bit-perfect incremental backup strategy that requires no  
additional tools.

Jonathan



[zfs-discuss] >1TB ZFS thin provisioned partition prevents Opensolaris from booting.

2008-05-29 Thread Tano
Not sure where to put this but I am cc'ing the ZFS - discussion board.

I was successful in creating iscsi shares using zfs set shareiscsi=on with 2 
thin provisioned partitions of 1TB each (zfs create -s -V 1tb idrive/d1). 
Access to the shares with an iscsi initiator was successful, all was smooth, 
until the reboot.

Upon reboot, the console reports the following errors.

WARNING: /scsi_vhci/[EMAIL PROTECTED] (sd9): 
disk has 3221225472 blocks, which is too large for a 32-bit kernel
WARNING: /iscsi/[EMAIL PROTECTED],0 (sd10):
   disk has 3221225472 blocks, which is too large for a 32-bit kernel

And it continues to do this on the other partition I had created.

Ultimately coreadm:default fails badly
and the server is stuck at
svc.startd[7]: Lost repository event due to disconnection.

I am on a Poweredge 2650 with 2xXeon Processors @2.8GHZ
1.5 GB Ram
Running Opensolaris 2008.05 

Any ideas, or is a ZFS partition greater than 1TB not possible on a 32-bit
kernel? Do I have to move to 64-bit Solaris?
 
 


Re: [zfs-discuss] Liveupgrade snv_77 with a ZFS root to snv_89

2008-05-29 Thread Albert Lee
On Thu, 2008-05-29 at 07:07 -0700, Jim Klimov wrote:
> We have a test machine installed with a ZFS root (snv_77/x86 and 
> "rootpol/rootfs" with grub support).
> 
> Recently tried to update it to snv_89 which (in Flag Days list) claimed more 
> support for ZFS boot roots, but the installer disk didn't find any previously 
> installed operating system to upgrade.
> 
> Then we tried to install SUNWlu* packages from snv_89 disk onto snv_77 
> system. It worked in terms of package updates, but lucreate fails:
> 
> # lucreate -n snv_89
> ERROR: The system must be rebooted after applying required patches.
> Please reboot and try again.
> 
> Apparently we rebooted a lot and it did not help...
> 
> How can we upgrade the system?
> 
> In particular, how does LU do it? :)
> 
> Now working on an idea to update all existing packages in the cloned root, 
> using pkgrm/pkgadd -R. Updating "only some" packages didn't help much 
> (kernel, zfs, libs).
> 
> A backup plan is to move the ZFS root back to UFS, update and move it back. 
> Probably would work, but not an elegant job ;)
> 
> Suggestions welcome, maybe we'll try out some of them and report ;)


The LU support for ZFS root is part of a set of updates to the installer
that are not available until snv_90. There is a hack to do an offline
upgrade from DVD/CD (zfs_ttinstall), if you can't wait.

-Albert





Re: [zfs-discuss] any 64-bit mini-itx successes

2008-05-29 Thread Benjamin Ellison
Ok, all the bits and pieces I (thought I) needed became available and I have 
ordered & assembled my "mini" NAS.  Once I get the pictures off my camera (when 
will the Eye-Fi support CF!?)  I will do up a blog entry on the construction & 
update this thread.

My biggest problem so far is that there is no driver support for the onboard 
network interfaces (Broadcom BCM5787M) in OpenSolaris 2008.05.  The bge driver 
isn't working, but I just saw a couple of posts that the bcem driver might work, 
so we'll give it a whirl tonight.  I really hope it works, because I really, 
*really* want to have this thing utilizing ZFS.

Hardware list, for those interested...
(from newegg):
OCZ 2GB 200-Pin DDR2 SO-DIMM DDR2 667 (PC2 5400) Laptop Memory
SAMSUNG Spinpoint M Series HM080GC 80GB 5400 RPM ATA-6 Notebook Hard Drive
KINAMAX ADP-IDE23 Laptop 2.5" to Desktop 3.5" IDE Hard Drive Adapter Converter
4 x SAMSUNG Spinpoint F1 HD753LJ 750GB 7200 RPM SATA 3.0Gb/s Hard Drives
(from Logic Supply):
IEI KINO-690S1 AMD Turion 64 Mini-ITX Mainboard
CoolerMaster EPN-41CSS-01 - Socket 479, Socket M, Socket P
Panasonic SR-8178-B Slimline Tray-loading CD/DVD-ROM
Slimline CD to 40 pin IDE adapter (NOTE: I ended up with two at some point... I 
think the case I ordered may have come with one).
Chenbro ES34069 Mini-ITX Home Server/NAS Chassis
(from CompuVest):
2.0GHz AMD Turion 64 X2 Mobile TL-60 FSB 1600MHz 2x512KB S1

Things that I still probably could use:
Low profile SATA cables
Long, round IDE cable

See this for cost breakdown, if interested:
http://spreadsheets.google.com/pub?key=pblAtpLs7JXRv1q3YBevVZQ 

I'll let everyone know if the driver thing works out, and when I get my pics 
online.  The Chenbro case is pretty cool, even if it does seem spendy -- 
although things start to even out once you take into account the 
included/built-in power supply, fans, and SATA II backplane/hot-swap bays.

--Ben
 
 


Re: [zfs-discuss] ZFS with raidz

2008-05-29 Thread Marcelo Leal
Hello...
 If I have understood well, you will have a host with EMC RAID5 disks. Is that
right?
 You pay a lot of money for EMC disks, and I think it is not a good idea to
have another layer of *any* RAID on top of it. If you have EMC RAID5 (e.g.
Symmetrix), you don't need software RAID as well...
 ZFS was designed to provide a RAID solution for cheap disks! I think that is
not your case, and anything that is "too much" is not good. It just generates
complexity and loops... :)
 I think ZFS can "trust" the EMC box here...
 
 Leal.
 
 


Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Ralf Bertling
Hi list,
I'd recommend using zfs send/receive with a secondary machine that
keeps the received filesystems in a backup pool.
This gives you the advantage of being able to scrub your backups.

I'd like to add another question: is there a way to efficiently
replicate a complete zfs pool, including all filesystems and snapshots?
Since it is currently impossible to change the vdev structure of a
pool, the "easiest workaround" would be:
1. create the new pool.
2. create all the filesystems on the new pool.
3. send all snapshots from the old pool and receive them in the new
pool (roughly sketched below).
If there was a way to do this for a whole pool, or at least a full
filesystem including history, this could be done with relative ease
by borrowing some cheap disk space.
Any ideas?
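
The per-filesystem version of step 3 would look roughly like this (a
sketch with made-up names; repeat per filesystem, oldest snapshot first):

  # zfs send oldpool/fs@snap1 | zfs receive newpool/fs
  # zfs send -i oldpool/fs@snap1 oldpool/fs@snap2 | zfs receive newpool/fs

Scripting that over the output of "zfs list -H -o name -t snapshot" is
doable, but it is exactly the kind of thing a whole-pool option would
make unnecessary.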

ralf
--- this mail is made from 100% recycled electrons
On 29.05.2008 at 17:40, [EMAIL PROTECTED] wrote:

> Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore
> To: Darren J Moffat <[EMAIL PROTECTED]>
> Cc: zfs-discuss@opensolaris.org
> Message-ID: <[EMAIL PROTECTED]>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
>>>
>>
>> I very strongly disagree.  The closest ZFS equivalent to ufsdump is
>> 'zfs send'.  'zfs send' like ufsdump has intimate awareness of the
>> actual on-disk layout and is an integrated part of the filesystem
>> implementation.
>>
>> star is a userland archiver.
>>
>
> The man page for zfs states the following for send:
>
>  The format of the stream is evolving. No backwards  compati-
>  bility  is  guaranteed.  You may not be able to receive your
>  streams on future versions of ZFS.
>
> I think this should be taken into account when considering 'zfs send'
> for backup purposes...
>
> - Thomas



Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Richard Elling
Chris Siebenmann wrote:
> | I very strongly disagree.  The closest ZFS equivalent to ufsdump is
> | 'zfs send'. 'zfs send' like ufsdump has intimate awareness of the
> | actual on-disk layout and is an integrated part of the filesystem
> | implementation.
>
>  I must strongly disagree in turn, at least for Solaris 10. 'zfs send'
> suffers from three significant defects:
>
> - you cannot selectively restore files from a 'zfs send' archive;
>   restoring is an all or nothing affair.
>
> - incrementals can only be generated relative to a snapshot, which
>   means that doing incrementals may require you to use up significant
>   amounts of disk space.
>
> - it is currently explicitly documented as not being forward or backwards
>   compatible. (I understand that this is not really the case and that this
>   change of heart will be officially documented at some point; I hope that
>   people will forgive me for not basing a backup strategy on word of future
>   changes.)
>
>  The first issue alone makes 'zfs send' completely unsuitable for the
> purposes that we currently use ufsdump. I don't believe that we've lost
> a complete filesystem in years, but we restore accidentally deleted
> files all the time. (And snapshots are not the answer, as it is common
> that a user doesn't notice the problem until well after the fact.)
>
> ('zfs send' to live disks is not the answer, because we cannot afford
> the space, heat, power, disks, enclosures, and servers to spin as many
> disks as we have tape space, especially if we want the fault isolation
> that separate tapes give us. most especially if we have to build a
> second, physically separate machine room in another building to put the
> backups in.)
>   

It does depend on your requirements.  I use ZFS send/receive to save my
stuff to (multiple) USB drives.  One is stored onsite in a fire safe and the other
is stored offsite.  There is no requirement that the target device is 
spinning
except when you are copying.  By using this method, I can follow the
declining price of disks over time: by the time I have 500 GBytes of
pictures, a 1TByte disk will cost $70.

I have also sent snapshots to DVDs, but in truth tape will be easier because
it can store much more.  Contrary to popular belief, tapes are still the 
best
long-term storage medium.  The commercial backup products work with
ZFS without needing to use the send/receive interfaces.
 -- richard



Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Richard Elling
Jonathan Hogg wrote:
> On 29 May 2008, at 15:51, Thomas Maier-Komor wrote:
>
>   
>>> I very strongly disagree.  The closest ZFS equivalent to ufsdump is
>>> 'zfs send'.  'zfs send' like ufsdump has intimate awareness of the
>>> actual on-disk layout and is an integrated part of the filesystem
>>> implementation.
>>>
>>> star is a userland archiver.
>>>
>>>   
>> The man page for zfs states the following for send:
>>
>>  The format of the stream is evolving. No backwards  compati-
>>  bility  is  guaranteed.  You may not be able to receive your
>>  streams on future versions of ZFS.
>> 

To date, there has been one incompatibility jump, required to fix a
bug.  For details, see:
http://www.opensolaris.org/os/community/on/flag-days/pages/2008042301/

>> I think this should be taken into account when considering 'zfs send'
>> for backup purposes...
>> 
>
> Presumably, if one is backing up to another disk, one could zfs  
> receive to a pool on that disk. That way you get simple file-based  
> access, full history (although it could be collapsed by deleting older  
> snapshots as necessary), and no worries about stream format changes.
>
>   

You can also implement different policies.  For example, the backup
file system may use compression with gzip-9 while the primary uses
no compression for better interactive performance.
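
As a sketch (the "backup" pool and "tank/home" dataset names are just
placeholders):

  # zfs set compression=gzip-9 backup
  # zfs send tank/home@today | zfs receive backup/home

The received blocks go through the normal write path on the target, so
they should pick up the backup pool's compression setting.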
 -- richard



Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Chris Siebenmann
| I very strongly disagree.  The closest ZFS equivalent to ufsdump is
| 'zfs send'. 'zfs send' like ufsdump has intimate awareness of the
| actual on-disk layout and is an integrated part of the filesystem
| implementation.

 I must strongly disagree in turn, at least for Solaris 10. 'zfs send'
suffers from three significant defects:

- you cannot selectively restore files from a 'zfs send' archive;
  restoring is an all or nothing affair.

- incrementals can only be generated relative to a snapshot, which
  means that doing incrementals may require you to use up significant
  amounts of disk space.

- it is currently explicitly documented as not being forward or backwards
  compatible. (I understand that this is not really the case and that this
  change of heart will be officially documented at some point; I hope that
  people will forgive me for not basing a backup strategy on word of future
  changes.)

 The first issue alone makes 'zfs send' completely unsuitable for the
purposes that we currently use ufsdump. I don't believe that we've lost
a complete filesystem in years, but we restore accidentally deleted
files all the time. (And snapshots are not the answer, as it is common
that a user doesn't notice the problem until well after the fact.)

('zfs send' to live disks is not the answer, because we cannot afford
the space, heat, power, disks, enclosures, and servers to spin as many
disks as we have tape space, especially if we want the fault isolation
that separate tapes give us. most especially if we have to build a
second, physically separate machine room in another building to put the
backups in.)

- cks


Re: [zfs-discuss] ZFS Project Hardware

2008-05-29 Thread Brian Hechinger
On Wed, May 28, 2008 at 12:01:36PM -0400, Bill McGonigle wrote:
> On May 28, 2008, at 05:11, James Andrewartha wrote:
> 
> That's not a huge price difference when building a server - thanks
> for the pointer.  Are there any 'gotchas' the list can offer when
> using a SAS card with SATA drives?   I've been told that SATA drives
> can have a lower MTBF than SAS drives (by a guy working QA for
> BigDriveCo), but ZFS helps keep the I in RAID.

I'm running 3 (used to be 4, but I repurposed that drive) 500GB Seagate
SATA disks on an LSI SAS3080X in a RAIDZ1 pool in my Ultra80 and it's
been working great.  The only 'gotcha' that I can think of is the loss
of the ability to run more than one drive per channel, but I guess I can
live with that. :)

I got my SAS3080X for, uhm, let's see, including shipping and the SAS to
4 cable SATA breakout cable, it was less than $100 off of ebay, probably
closer to $80.

I don't know prices on the PCIe version of those cards on ebay though.
Probably more expensive as everyone wants PCIe these days.

-brian
-- 
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)


Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Jonathan Hogg
On 29 May 2008, at 15:51, Thomas Maier-Komor wrote:

>> I very strongly disagree.  The closest ZFS equivalent to ufsdump is
>> 'zfs send'.  'zfs send' like ufsdump has intimate awareness of the
>> actual on-disk layout and is an integrated part of the filesystem
>> implementation.
>>
>> star is a userland archiver.
>>
>
> The man page for zfs states the following for send:
>
>  The format of the stream is evolving. No backwards  compati-
>  bility  is  guaranteed.  You may not be able to receive your
>  streams on future versions of ZFS.
>
> I think this should be taken into account when considering 'zfs send'
> for backup purposes...

Presumably, if one is backing up to another disk, one could zfs  
receive to a pool on that disk. That way you get simple file-based  
access, full history (although it could be collapsed by deleting older  
snapshots as necessary), and no worries about stream format changes.

Jonathan


Re: [zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-05-29 Thread Andy Lubel


On May 29, 2008, at 9:52 AM, Jim Klimov wrote:

I've installed SXDE (snv_89) and found that the web console only  
listens on https://localhost:6789/ now, and the module for ZFS admin  
doesn't work.


It works out of the box without any special mojo.  In order to get
the webconsole to listen on something other than localhost, did you do
this?


# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
# svcadm disable svc:/system/webconsole
# svcadm enable svc:/system/webconsole
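
To double-check afterwards, something like:

  # netstat -an | grep 6789

should show the console listening on *.6789 rather than just
127.0.0.1.6789 (a generic check, nothing SMC-specific).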

-Andy





When I open the link, the left frame lists a stacktrace (below) and  
the right frame is plain empty. Any suggestions?


I tried substituting different SUNWzfsgr and SUNWzfsgu packages from  
older Solarises (x86/sparc, snv_77/84/89, sol10u3/u4), and directly  
substituting the zfs.jar file, but these actions resulted in either  
the same error or crash-and-restart of SMC Webserver.


I didn't yet try installing an older SUNWmco* packages (a 10u4  
system with SMC 3.0.2 works ok), I'm not sure it's a good idea ;)


The system has JDK 1.6.0_06 by default, maybe that's the culprit? I
tried setting it to JDK 1.5.0_15 and the zfs web-module refused to start
and register itself...



===
Application Error
com.iplanet.jato.NavigationException: Exception encountered during forward
Root cause = [java.lang.IllegalArgumentException: No enum const class
com.sun.zfs.common.model.AclInheritProperty$AclInherit.restricted]

Notes for application developers:

   * To prevent users from seeing this error message, override the
onUncaughtException() method in the module servlet and take action
specific to the application
   * To see a stack trace from this error, see the source for this page

Generated Thu May 29 17:39:50 MSD 2008
===

In fact, the traces in the logs are quite long (several screenfuls)  
and nearly the same; this one starts as:

===
com.iplanet.jato.NavigationException: Exception encountered during forward
Root cause = [java.lang.IllegalArgumentException: No enum const class
com.sun.zfs.common.model.AclInheritProperty$AclInherit.restricted]
   at com.iplanet.jato.view.ViewBeanBase.forward(ViewBeanBase.java:380)
   at com.iplanet.jato.view.ViewBeanBase.forwardTo(ViewBeanBase.java:261)
   at com.iplanet.jato.ApplicationServletBase.dispatchRequest(ApplicationServletBase.java:981)
   at com.iplanet.jato.ApplicationServletBase.processRequest(ApplicationServletBase.java:615)
   at com.iplanet.jato.ApplicationServletBase.doGet(ApplicationServletBase.java:459)
   at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
...
===




Re: [zfs-discuss] ZFS with raidz

2008-05-29 Thread David M Singer
"The important thing is to protect your data. You have lots of options here,
so we'd need to know more precisely what the other requirements are before
we could give better advice.
-- richard"

Please let me come in with a parallel need, the answer to which should 
contribute to this thread.

-Physical details:
3-drive (plus DVD) box with Micro-ATX board, 1 on-board controller and the 
option for one raid card.  Actual board, CPU and Memory yet-to-be-spec'd, but 
we'll throw in whatever the "hardware-compatible" Micro-ATX board can handle.
-Software details:
OpenSolaris 2008-05, ZFS+PostgreSQL+Python.
-Mission:
ZFS box is to watch a Windoze box (or a MAC box) on which new files are being 
created and old ones changed, plus many deletions (animation system).
-Objectives:
(a) make periodic snapshots of animator's box (actual copies of files) onto ZFS 
box, and
(b) Write metadata into the PostgreSQL database to record event changes 
happening to key files.
-Design concept:
Integrate ZFS+SQL+Python into a rules-based backup device that notifies a third 
party elsewhere in the world about project progress (or lack thereof), and 
forwards key files and the SQL metadata (via internet) to some host ZFS box 
elsewhere.
-Observations:
(a) The local and the host ZFS boxes are not expected to contain the same 
images; indeed, many local ZFS boxes will be distributed, and one host ZFS box 
will be the ultimate repository of "completed" works.
(b) High Performance is not an overriding consideration because this box 
"serves" only two users (the watched box on the local network and the host down 
the internet pipe).

Question that relates to the on-going thread:
What configuration of ZFS and the hardware would serve "reliable and cheap"?

David Singer
 
 


Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Thomas Maier-Komor
Darren J Moffat wrote:
> Joerg Schilling wrote:
>> "Poulos, Joe" <[EMAIL PROTECTED]> wrote:
>>
>>> Is there a  ZFS equivalent of ufsdump and ufsrestore? 
>>>
>>>  
>>>
>>>  Will creating a tar file work with ZFS? We are trying to backup a
>>> ZFS file system to a separate disk, and would like to take advantage of
>>> something like ufsdump rather than using expensive backup software.
>> The closest equivalent to ufsdump and ufsrestore is "star".
> 
> I very strongly disagree.  The closest ZFS equivalent to ufsdump is 'zfs 
> send'.  'zfs send' like ufsdump has intimate awareness of the
> actual on-disk layout and is an integrated part of the filesystem 
> implementation.
> 
> star is a userland archiver.
> 

The man page for zfs states the following for send:

  The format of the stream is evolving. No backwards  compati-
  bility  is  guaranteed.  You may not be able to receive your
  streams on future versions of ZFS.

I think this should be taken into account when considering 'zfs send' 
for backup purposes...

- Thomas


Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Mark Shellenbaum
Joerg Schilling wrote:
> Darren J Moffat <[EMAIL PROTECTED]> wrote:
> 
>>> The closest equivalent to ufsdump and ufsrestore is "star".
>> I very strongly disagree.  The closest ZFS equivalent to ufsdump is 'zfs 
>> send'.  'zfs send' like ufsdump has intimate awareness of the
>> actual on-disk layout and is an integrated part of the filesystem 
>> implementation.
> 
> I strongly disagree. Like ufsdump, star creates archives that allow
> file-based access. This does not work with zfs send.
> 

But star doesn't support Solaris extended attributes and ZFS ACLs.  This
means you *may* lose critical data if you use star.  Whereas zfs send
preserves everything.

   -Mark


Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Joerg Schilling
Darren J Moffat <[EMAIL PROTECTED]> wrote:

> > The closest equivalent to ufsdump and ufsrestore is "star".
>
> I very strongly disagree.  The closest ZFS equivalent to ufsdump is 'zfs 
> send'.  'zfs send' like ufsdump has intimate awareness of the
> actual on-disk layout and is an integrated part of the filesystem 
> implementation.

I strongly disagree. Like ufsdump, star creates archives that allow
file-based access. This does not work with zfs send.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


[zfs-discuss] Liveupgrade snv_77 with a ZFS root to snv_89

2008-05-29 Thread Jim Klimov
We have a test machine installed with a ZFS root (snv_77/x86 and 
"rootpol/rootfs" with grub support).

Recently tried to update it to snv_89 which (in Flag Days list) claimed more 
support for ZFS boot roots, but the installer disk didn't find any previously 
installed operating system to upgrade.

Then we tried to install SUNWlu* packages from snv_89 disk onto snv_77 system. 
It worked in terms of package updates, but lucreate fails:

# lucreate -n snv_89
ERROR: The system must be rebooted after applying required patches.
Please reboot and try again.

Apparently we rebooted a lot and it did not help...

How can we upgrade the system?

In particular, how does LU do it? :)

Now working on an idea to update all existing packages in the cloned root, 
using pkgrm/pkgadd -R. Updating "only some" packages didn't help much (kernel, 
zfs, libs).

A backup plan is to move the ZFS root back to UFS, update and move it back. 
Probably would work, but not an elegant job ;)

Suggestions welcome, maybe we'll try out some of them and report ;)
 
 


Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Darren J Moffat
Joerg Schilling wrote:
> "Poulos, Joe" <[EMAIL PROTECTED]> wrote:
> 
>> Is there a  ZFS equivalent of ufsdump and ufsrestore? 
>>
>>  
>>
>>  Will creating a tar file work with ZFS? We are trying to backup a
>> ZFS file system to a separate disk, and would like to take advantage of
>> something like ufsdump rather than using expensive backup software.
> 
> The closest equivalent to ufsdump and ufsrestore is "star".

I very strongly disagree.  The closest ZFS equivalent to ufsdump is 'zfs 
send'.  'zfs send' like ufsdump has intimate awareness of the
actual on-disk layout and is an integrated part of the filesystem 
implementation.

star is a userland archiver.
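
To make the analogy concrete, a rough sketch (the paths and names here
are only examples):

  # ufsdump 0uf /backup/home.dump /export/home        (UFS)

  # zfs snapshot tank/home@dump0                      (ZFS)
  # zfs send tank/home@dump0 > /backup/home.zfssnap

with incrementals done via 'zfs send -i' against an earlier snapshot,
much as ufsdump uses dump levels.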

-- 
Darren J Moffat


[zfs-discuss] SMC Webconsole 3.1 and ZFS Administration 1.0 - stacktraces in snv_b89

2008-05-29 Thread Jim Klimov
I've installed SXDE (snv_89) and found that the web console only listens on 
https://localhost:6789/ now, and the module for ZFS admin doesn't work.

When I open the link, the left frame lists a stacktrace (below) and the right 
frame is plain empty. Any suggestions?

I tried substituting different SUNWzfsgr and SUNWzfsgu packages from older 
Solarises (x86/sparc, snv_77/84/89, sol10u3/u4), and directly substituting the 
zfs.jar file, but these actions resulted in either the same error or 
crash-and-restart of SMC Webserver.

I didn't yet try installing an older SUNWmco* packages (a 10u4 system with SMC 
3.0.2 works ok), I'm not sure it's a good idea ;)

The system has JDK 1.6.0_06 by default, maybe that's the culprit? I tried
setting it to JDK 1.5.0_15 and the zfs web-module refused to start and
register itself...

===
Application Error
com.iplanet.jato.NavigationException: Exception encountered during forward
Root cause = [java.lang.IllegalArgumentException: No enum const class 
com.sun.zfs.common.model.AclInheritProperty$AclInherit.restricted]
Notes for application developers:

* To prevent users from seeing this error message, override the 
onUncaughtException() method in the module servlet and take action specific to 
the application
* To see a stack trace from this error, see the source for this page

Generated Thu May 29 17:39:50 MSD 2008
===

In fact, the traces in the logs are quite long (several screenfuls) and nearly 
the same; this one starts as:
===
com.iplanet.jato.NavigationException: Exception encountered during forward
Root cause = [java.lang.IllegalArgumentException: No enum const class 
com.sun.zfs.common.model.AclInheritProperty$AclInherit.restricted]
at com.iplanet.jato.view.ViewBeanBase.forward(ViewBeanBase.java:380)
at com.iplanet.jato.view.ViewBeanBase.forwardTo(ViewBeanBase.java:261)
at 
com.iplanet.jato.ApplicationServletBase.dispatchRequest(ApplicationServletBase.java:981)
at 
com.iplanet.jato.ApplicationServletBase.processRequest(ApplicationServletBase.java:615)
at 
com.iplanet.jato.ApplicationServletBase.doGet(ApplicationServletBase.java:459)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:690)
...
===
 
 


Re: [zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Joerg Schilling
"Poulos, Joe" <[EMAIL PROTECTED]> wrote:

> Is there a  ZFS equivalent of ufsdump and ufsrestore? 
>
>  
>
>  Will creating a tar file work with ZFS? We are trying to backup a
> ZFS file system to a separate disk, and would like to take advantage of
> something like ufsdump rather than using expensive backup software.

The closest equivalent to ufsdump and ufsrestore is "star".

Star includes the ability to do true incremental dumps/restores using the same
basic method as ufsdump/ufsrestore do. Star just uses a portable tar/POSIX.1-2001
based archive to store the results. See "man star" and search for the sections:

INCREMENTAL BACKUPS
BACKUP SCHEDULES
INCREMENTAL RESTORES
SYNCHRONIZING FILESYSTEMS



Source:

ftp://ftp.berlios.de/pub/star/

A binary package for star-1.5-final is on Blastwave.org


Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


[zfs-discuss] zfs equivalent of ufsdump and ufsrestore

2008-05-29 Thread Poulos, Joe
Is there a  ZFS equivalent of ufsdump and ufsrestore? 

 

 Will creating a tar file work with ZFS? We are trying to back up a
ZFS file system to a separate disk, and would like to take advantage of
something like ufsdump rather than using expensive backup software.

 

Thanks for any suggestions.

 

Joe


