[zfs-discuss] zpool kernel panics.

2007-12-09 Thread Edward Irvine
Hi Folks,

I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris  
10 280R (SPARC) server.

The message I get on panic is this:

panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment  
(offset=423713792 size=1024)

This seems to come about when the zpool is being used or being  
scrubbed - about twice a day at the moment. After the reboot, the  
scrub seems to have been forgotten about - I can't get a zpool scrub  
to complete.

Any suggestions very much appreciated...

--- snip ---

$ zpool status zpool1
   pool: zpool1
  state: ONLINE
  scrub: none requested
config:

         NAME                                     STATE     READ WRITE CKSUM
         zpool1                                   ONLINE       0     0     0
           c7t600C0FF00B44BCE6BB00d0s2            ONLINE       0     0     0
           c7t600C0FF00B44BCE6BB01d0s2            ONLINE       0     0     0
           c7t600C0FF00B44BCE6BB02d0s0            ONLINE       0     0     0
           c7t600C0FF00B0BD10ACD00d0s3            ONLINE       0     0     0
           c7t600C0FF00B03D27D7100d0s0            ONLINE       0     0     0


errors: No known data errors

$ uname -a
SunOS servername 5.10 Generic_120011-14 sun4u sparc SUNW,Sun-Fire-280R

--- snip ---

Eddie




Re: [zfs-discuss] Yager on ZFS

2007-12-09 Thread Selim Daoud
 grand-dad,
 why don't you contribute your immense experience and knowledge to what
is going to be the next and only filesystem in modern operating
systems, instead of spending your time asking for specifics and
treating everyone as ignorant? At least we will remember you in the
afterlife as a major contributor to ZFS's success.

Considering that you have never been considered by anyone until now
(except your dog?)... who has ever heard of you? Have you ever
published anything worth reading? Give us some of your mighty
accomplishments.
Remember, now it's about open sourcing, reducing complexity and
cost... keep the old proprietary things in DEC's drawers and bring us
real ideas.

s-

On Dec 9, 2007 4:32 AM, can you guess? [EMAIL PROTECTED] wrote:
  can you run a database on RMS?

 As well as you could on most Unix file systems.  And you've been able to do 
 so for almost three decades now (whereas features like asynchronous and 
 direct I/O are relative newcomers in the Unix environment).

  I guess it's not suited

 And you guess wrong:  that's what happens when you speak from ignorance 
 rather than from something more substantial.

  we are already trying to get rid of a 15-year-old filesystem called WAFL,

 Whatever for?  Please be specific about exactly what you expect will work 
 better with whatever you're planning to replace it with - and why you expect 
 it to be anywhere nearly as solid.

  and a 10-year-old file system called Centera,

 My, you must have been one of the *very* early adopters, since EMC launched 
 it only 5 1/2 years ago.

  so do you think we are going to consider a 35-year-old filesystem
  now... computer science has made a lot of improvements since

 Well yes, and no.  For example, most Unix platforms are still struggling to 
 match the features which VMS clusters had over two decades ago:  when you 
 start as far behind as Unix did, even continual advances may still not be 
 enough to match such 'old' technology.

 Not that anyone was suggesting that you replace your current environment with 
 RMS:  if it's your data, knock yourself out using whatever you feel like 
 using.  On the other hand, if someone else is entrusting you with *their* 
 data, they might be better off looking for someone with more experience and 
 sense.


 - bill






-- 
--
Blog: http://fakoli.blogspot.com/


Re: [zfs-discuss] Yager on ZFS

2007-12-09 Thread David Dyer-Bennet
can you guess? wrote:
 can you guess? wrote:
 
 can you run a database on RMS?
 
 
  As well as you could on most Unix file systems.
   
 And you've been able to do so for almost three
 decades now (whereas features like asynchronous and
 direct I/O are relative newcomers in the Unix
  environment).

  Funny, I remember trying to help customers move their
 applications from 
 TOPS-20 to VMS, back in the early 1980s, and finding
 that the VMS I/O 
 capabilities were really badly lacking.
 

 Funny how that works:  when you're not familiar with something, you often 
 mistake your own ignorance for actual deficiencies.  Of course, the TOPS-20 
 crowd was extremely unhappy at being forced to migrate at all, and this 
 hardly improved their perception of the situation.

 If you'd like to provide specifics about exactly what was supposedly lacking, 
 it would be possible to evaluate the accuracy of your recollection.
   

I've played this game before, and it's off-topic and too much work to be 
worth it.  Researching exactly when specific features were released into 
VMS RMS from this distance would be a total pain, and then we'd argue 
about which ones were beneficial for which situations, which people 
didn't much agree about then or since.   My experience at the time was 
that RMS was another layer of abstraction and performance loss between 
the application and the OS, and it made it harder to do things and it 
made them slower and it made files less interchangeable between 
applications; but I'm not interested in trying to defend this position 
for weeks based on 25-year-old memories.

-- 
David Dyer-Bennet, [EMAIL PROTECTED]; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



[zfs-discuss] Does ZFS handle a SATA II port multiplier ?

2007-12-09 Thread Lars Tunkrans
Has anyone tried to use ZFS with this type of box? The new thing about this 
one is that it contains a 1x eSATA to 4x SATA port multiplier.


http://www.stardom.com.tw/sohotank%20st5610-4s-sb2.htm

  //Lars
 
 


Re: [zfs-discuss] zpool kernel panics.

2007-12-09 Thread James C. McPherson
Edward Irvine wrote:
 Hi Folks,
 
 I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris  
 10 280R (SPARC) server.
 
 The message I get on panic is this:
 
 panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment  
 (offset=423713792 size=1024)
 
 This seems to come about when the zpool is being used or being  
 scrubbed - about twice a day at the moment. After the reboot, the  
 scrub seems to have been forgotten about - I can't get a zpool scrub  
 to complete.
 
 Any suggestions very much appreciated...

Hi Edward,
You haven't provided enough information to determine where
the issue might lie.

Please provide the output of the following commands, run from
within an mdb session on your crash dump:


::status
::msgbuf
*panicthread::findstack -v
$R
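
For reference, one way to capture that output (assuming savecore saved the
dump as unit 0 under /var/crash/<hostname> - adjust the unit number to match
your most recent dump) is to open it and enter the dcmds above at the > prompt:

# cd /var/crash/`hostname`
# mdb unix.0 vmcore.0
> ::status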


It would also help the ZFS folks if you could provide access
to the crash dump, and the output of showrev -p so that your
current patch levels can be identified.



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


[zfs-discuss] Backup in general (was Does ZFS handle a SATA II ' port multiplier' ?)

2007-12-09 Thread David Dyer-Bennet
Lars Tunkrans wrote:
 Has anyone tried to use ZFS with this type of box? The new thing about this 
 one is that it contains a 1x eSATA to 4x SATA port multiplier.


 http://www.stardom.com.tw/sohotank%20st5610-4s-sb2.htm
   

There won't be a ZFS issue; ZFS talks to any kind of Solaris block 
device, right?  The question is, will Solaris handle this concept, and 
this particular implementation.

I'm interested in the same question.  I'm looking at what to use for 
backup from my Solaris file server.  I've had rather bad experiences 
with external Firewire and USB disks, especially in performance (can't 
be absolutely sure the problem isn't with Windows there, though, or even 
the specific backup software).  So I'm wondering if using the eSATA port 
to connect to an external enclosure with multiple drives in it might be 
a winning strategy.  Two external enclosures, alternate monthly for a 
full backup, say.  I'm tempted to use ZFS on a random selection of disks 
with no redundancy, as a way to keep costs down. This does of course 
multiply the chance of a drive going bad and invalidating a big chunk of 
the backup just when it hurts most.

I've also considered buying two Drobos for this, but as a USB device I 
think of it as painfully slow.  But it would let me stick my spare 
drives into it in random combinations and give me redundant protection 
on my backups.  If I were using a single drive, I'd accept the risk of 
it failing, but when I'm using three or four drives, I'm not so sanguine 
about it.  I could buy two 750GB external drives and just back up to 
those, for a while longer (and then presumably move those drives into 
the server, and get something even bigger for the backup drives; but in 
the long run I don't think it's smart for me to count on always using a 
single drive for each backup).

Tape drives and tapes seem to be just too expensive.  Am I out of date 
here?  What would I need to buy to back up a system that currently has 
about 600GB of data in it, growing a few GB a month on average?  
(Digital photos; not as bad as if I were recording HD video, but still 
pretty bad at about 9MB a shot for the camera originals).

Also, what *software* does one use?  For a full, and for an 
incremental?  One obvious idea is to just cp -a to the drive for a full 
backup.  This leaves each file easily findable and individually 
accessible, which is good.  ZFS can give me a view equivalent to an 
incremental, can't it?  Which I could then copy somewhere suitable?
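
Something like the following is what I have in mind - the pool and dataset 
names are made up here, and the backup pool would live on the external drives:

$ zfs snapshot tank/photos@2007-12
$ zfs send tank/photos@2007-12 | zfs receive backup/photos            # full copy
(a month later)
$ zfs snapshot tank/photos@2008-01
$ zfs send -i 2007-12 tank/photos@2008-01 | zfs receive backup/photos # incremental

The received copy keeps each file individually accessible on the backup pool, 
and the snapshots stay browsable under .zfs/snapshot on the source filesystem.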

-- 
David Dyer-Bennet, [EMAIL PROTECTED]; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Does ZFS handle a SATA II port multiplier ?

2007-12-09 Thread Eric Haycraft
Last I had heard, there was no Solaris support for port multipliers yet, but I 
believe that they plan on supporting it in the future. That said, I think that 
the FreeBSD port fully supports it now, as does the FUSE port on Linux. This isn't 
really a ZFS issue, but more of a driver issue. 

The other thing to mention is that you can pick up an 8-port SATA card for like 
100 bucks. That is what I did; I managed to find an 8-bay external enclosure, 
and connected everything with simple converters that took 8 SATA2 cables down 
to 2 InfiniBand cables. This kept things fairly clean outside of the boxes and 
allowed for full bandwidth to all drives.  The cabling and case set me back 
like 200 bucks, so it came in about the same as a port multiplier setup in 
terms of cost - and I expect that I have higher I/O bandwidth this way.
 
 


Re: [zfs-discuss] Does ZFS handle a SATA II port multiplier ?

2007-12-09 Thread James C. McPherson
Lars Tunkrans wrote:
 Has anyone tried to use ZFS with this type of box? The new thing about this 
 one is that it contains a 1x eSATA to 4x SATA port multiplier.
 
 
 http://www.stardom.com.tw/sohotank%20st5610-4s-sb2.htm

Hi Lars,
we don't currently have support for SATA port multipliers
in the SATA framework. It's on the roadmap, not quite sure
of the timeline though.


I think you'd be better off, at this point, going for one
of their other offerings such as


http://www.stardom.com.tw/sohotank%20st5610.htm or
http://www.stardom.com.tw/sohotank%20st5650-4s-u5.htm


I'm looking at the ST5650, myself.


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] Does ZFS handle a SATA II port multiplier ?

2007-12-09 Thread Anon
Which 8 bay external case did you end up using?
 
 


[zfs-discuss] bug id 6458218

2007-12-09 Thread Vahid Moghaddasi
Hi,
Can anybody explain the reason that a zpool is completely destroyed and must be 
restored from tape after hitting bug 6458218?
Also, why are most of the machines OK while just one so far (but a very 
high-profile one) was hit?
Is this not a matter of if, but when, we get hit... until we upgrade?
Is there any workaround until we upgrade? We have already started.
Thanks,
Thanks,
 
 


Re: [zfs-discuss] Backup in general (was Does ZFS handle a SATA II ' port multiplier' ?)

2007-12-09 Thread Richard Elling
David Dyer-Bennet wrote:

 I'm interested in the same question.  I'm looking at what to use for 
 backup from my Solaris file server.  I've had rather bad experiences 
 with external Firewire and USB disks, especially in performance (can't 
 be absolutely sure the problem isn't with Windows there, though, or even 
 the specific backup software).  So I'm wondering if using the eSATA port 
 to connect to an external enclosure with multiple drives in it might be 
 a winning strategy.  Two external enclosures, alternate monthly for a 
 full backup, say.  I'm tempted to use ZFS on a random selection of disks 
 with no redundancy, as a way to keep costs down. This does of course 
 multiply the chance of a drive going bad and invalidating a big chunk of 
 the backup just when it hurts most.
   

If you care enough to do backups, at least care enough to be
able to restore.  For my home backups, I use portable drives with
copies=2 or 3 and compression enabled.  I don't fool with
incrementals, but many people do.  The failure mode I'm worried
about is decay, as the drives will be off most of the time.  The
copies feature works well for this failure mode.
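
Roughly, the setup looks like this (the disk and pool names below are just
examples):

# zpool create portable c2t0d0
# zfs set copies=2 portable
# zfs set compression=on portable

Note that copies only applies to data written after the property is set, so
it is worth setting it before the first backup lands on the drive.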
 -- richard



Re: [zfs-discuss] Yager on ZFS

2007-12-09 Thread can you guess?
...

 I remember trying to help customers move
 their
  applications from 
  TOPS-20 to VMS, back in the early 1980s, and
 finding
  that the VMS I/O 
  capabilities were really badly lacking.
  
 
  Funny how that works:  when you're not familiar
 with something, you often mistake your own ignorance
 for actual deficiencies.  Of course, the TOPS-20
 crowd was extremely unhappy at being forced to
 migrate at all, and this hardly improved their
 perception of the situation.
 
  If you'd like to provide specifics about exactly
 what was supposedly lacking, it would be possible to
 evaluate the accuracy of your recollection.

 
 I've played this game before, and it's off-topic and
 too much work to be 
 worth it.

In other words, you've got nothing, but you'd like people to believe it's 
something.

The phrase "put up or shut up" comes to mind.

  Researching exactly when specific features
 were released into 
 VMS RMS from this distance would be a total pain,

I wasn't asking for anything like that:  I was simply asking for specific 
examples of the VMS I/O capabilities that you allegedly 'found' were really 
badly lacking in the early 1980s.  Even if the porting efforts you were 
involved in predated the pivotal cancellation of Jupiter in 1983, that was 
still close enough to the VMS cluster release that most VMS development effort 
had turned in that direction (i.e., the single-system VMS I/O subsystem had 
pretty well reached maturity), so there won't be any need to quibble about what 
shipped when.

Surely if you had a sufficiently strong recollection to be willing to make such 
a definitive assertion you can remember *something* specific.

 and
 then we'd argue 
 about which ones were beneficial for which
 situations, which people 
  didn't much agree about then or since.

No, no, no:  you're reading far more generality into this than I ever 
suggested.  I'm not asking you to judge what was useful, and I couldn't care 
less whether you thought the features that VMS had and TOPS lacked were 
valuable:  I'm just asking you to be specific about what VMS I/O capabilities 
you claim were seriously deficient.

   My
 experience at the time was 
 that RMS was another layer of abstraction and
 performance loss between 
 the application and the OS,

Ah - your 'experience'.  So you actually measured RMS's effect on performance, 
rather than just SWAGged that adding a layer that you found unappealing, in a 
product that your customers were angry about having to move to, Must Be A Bad 
Idea?  What was the quantitative result of that measurement, and how was RMS 
configured for the relevant workload?  After all, the extra layer wasn't 
introduced just to give you something to complain about:  it was there to 
provide additional features and configuration flexibility (much of it 
performance-related), as described above.  If you didn't take advantage of 
those facilities, that could be a legitimate *complexity* knock against the 
environment but it's not a legitimate *capability* or *performance* knock 
(rather the opposite, in fact).

 and it made it harder to
 do things

If you were using the RMS API itself rather than accessing RMS through a 
higher-level language that provided simple I/O handling for simple I/O needs, 
that was undoubtedly the case:  as I observed above, that's a price that VMS 
was happy to pay for providing complete control to applications that wanted it. 
 RMS was designed from the start to provide that alternative with the 
understanding that access via higher-level language mechanisms would usually be 
used by those people who didn't need the low-level control that the native RMS 
API provided.

 and it 
 made them slower

That's the second time you've claimed that, so you'll really at least have to 
describe *how* you measured this even if the detailed results of those 
measurements may be lost in the mists of time.

 and it made files less
 interchangeable between 
 applications;

That would have been some trick, given that RMS supported pure byte-stream 
files as well as its many more structured types (and I'm pretty sure that the C 
run-time system took this approach, using RMS direct I/O and doing its own 
deblocking to ensure that some of the more idiomatic C activities like 
single-character reads and writes would not inadvertently perform poorly).  So 
at worst you could have used precisely the same in-file formats that were being 
used in the TOPS-20 environment and achieved the same degree of portability 
(unless you were actually encountering peculiarities in language access rather 
than in RMS itself:  I'm considerably less familiar with that end of the 
environment).

 but I'm not interested in trying to
 defend this position 
 for weeks based on 25-year-old memories.

So far you don't really have much of a position to defend at all:  rather, you 
sound like a lot of the disgruntled TOPS users of that era.  Not that they 
didn't have good reasons to feel disgruntled - but 

Re: [zfs-discuss] Yager on ZFS

2007-12-09 Thread can you guess?
 why don't you put your immense experience and
 knowledge to contribute
 to what is going to be
 the next and only filesystem in modern operating
 systems,

Ah - the pungent aroma of teenage fanboy wafts across the Net.

ZFS is not nearly good enough to become what you suggest above, nor is it 
amenable to some of the changes necessary to make it good enough.  So while I'm 
happy to give people who have some personal reason to care about it pointers on 
how it could be improved, I have no interest in working on it myself.

 instead of
 spending your time asking for specifics

You'll really need to learn to pay a lot more attention to specifics yourself 
if you have any desire to become technically competent when you grow up.

 and
  treating everyone as
 ignorant

I make some effort only to treat the ignorant as ignorant.  It's hardly my 
fault that they are so common around here, but I'd like to think that there's a 
silent majority of more competent individuals in the forum who just look on 
quietly (and perhaps somewhat askance).

It used to be that the ignorant felt motivated to improve themselves, but now 
they seem more inclined to engage in aggressive denial (which may be easier on 
the intellect but seems a less productive use of energy).

- bill
 
 


Re: [zfs-discuss] Backup in general (was Does ZFS handle a SATA II ' port multiplier' ?)

2007-12-09 Thread David Dyer-Bennet
Richard Elling wrote:
 David Dyer-Bennet wrote:

 I'm interested in the same question.  I'm looking at what to use for 
 backup from my Solaris file server.  I've had rather bad experiences 
 with external Firewire and USB disks, especially in performance 
 (can't be absolutely sure the problem isn't with Windows there, 
 though, or even the specific backup software).  So I'm wondering if 
 using the eSATA port to connect to an external enclosure with 
 multiple drives in it might be a winning strategy.  Two external 
 enclosures, alternate monthly for a full backup, say.  I'm tempted to 
 use ZFS on a random selection of disks with no redundancy, as a way 
 to keep costs down. This does of course multiply the chance of a 
 drive going bad and invalidating a big chunk of the backup just when 
 it hurts most.
   

 If you care enough to do backups, at least care enough to be
 able to restore.  For my home backups, I use portable drives with
 copies=2 or 3 and compression enabled.  I don't fool with
 incrementals, but many people do.  The failure mode I'm worried
 about is decay, as the drives will be off most of the time.  The
 copies feature works well for this failure mode.



I am definitely and strongly interested in restoring!  That's why I hate 
my previous backup solutions so much (NTI backup and then Acronis True 
Image); I verified backups and tested restores, and had *FAR* too much 
trouble to be at all comfortable.  The photos and the ebooks are backed 
up eventually (but not always within the month) to good DVDs, and one 
copy is kept off-site, and that's the stuff I'd miss most if it went, 
but I want a good *overall* solution.

The copies thing sounds familiar from discussion here...ah.  Yes, 
that's exactly perfect; it lets me make up a batch of miscellaneous 
spare disks totaling enough space, each one a vdev, put them into one 
pool (no redundancy), but with copies=2 get nearly the redundancy of 
mirroring which would have required matching drives.   At least, if I 
find a solution for connecting that bunch of disks conveniently.  I 
really want one box with easily swappable disks, and one cable.  (And 
then two of them, since of course I need two sets of backup media to 
alternate between.)  And I could update the old full backup to become 
the new one using rsync locally, perhaps much faster than doing a full CP. 
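
Concretely, I'm picturing something along these lines (the device and pool 
names are invented; the real ones would come from format):

# zpool create backup2007a c4t0d0 c5t0d0 c6t0d0    # miscellaneous disks, no redundancy
# zfs set copies=2 backup2007a
# zfs set compression=on backup2007a
# rsync -aH --delete /tank/ /backup2007a/          # refresh the old full backup in place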

So how do I get introduced to SAS, and how does that relate to SATA, and 
where does infiniband come in (I know of that one only in terms of 
huge expensive switches, does it actually apply to home disk setups at 
all?)?  I'm going to start with Wikipedia tonight, and then see what 
people suggest for further information.

-- 
David Dyer-Bennet, [EMAIL PROTECTED]; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Moving ZFS file system to a different system

2007-12-09 Thread Walter Faleiro
Hi Robert,
Thanks, it worked like a charm.

--Walter

On Dec 7, 2007 7:33 AM, Robert Milkowski [EMAIL PROTECTED] wrote:

  Hello Walter,


 Thursday, December 6, 2007, 7:05:54 PM, you wrote:


   

 Hi All,

 We are currently having a hardware issue with our ZFS file server, hence the
 file system is unusable.

 We are planning to move it to a different system.


 The setup on the file server when it was running was


 bash-3.00# zpool status
   pool: store1
  state: ONLINE
  scrub: none requested
 config:

         NAME        STATE     READ WRITE CKSUM
         backup      ONLINE       0     0     0
           c1t2d1    ONLINE       0     0     0
           c1t2d2    ONLINE       0     0     0
           c1t2d3    ONLINE       0     0     0
           c1t2d4    ONLINE       0     0     0
           c1t2d5    ONLINE       0     0     0

 errors: No known data errors

   pool: store2
  state: ONLINE
 status: One or more devices has experienced an unrecoverable error.  An
         attempt was made to correct the error.  Applications are unaffected.
 action: Determine if the device needs to be replaced, and clear the errors
         using 'zpool clear' or replace the device with 'zpool replace'.
    see: http://www.sun.com/msg/ZFS-8000-9P
  scrub: none requested
 config:

         NAME        STATE     READ WRITE CKSUM
         store       ONLINE       0     0     1
           c1t3d0    ONLINE       0     0     0
           c1t3d1    ONLINE       0     0     0
           c1t3d2    ONLINE       0     0     1
           c1t3d3    ONLINE       0     0     0
           c1t3d4    ONLINE       0     0     0

 errors: No known data errors

 The store1 was an external RAID device with a slice configured to boot the
 system plus swap, and the remaining disk space configured for use with ZFS.


 The store2 was a similar external RAID device which had all slices
 configured for use with ZFS.


 Since both are SCSI RAID devices, we are thinking of booting up the former
 using a different Sun box.


 Are there some precautions to be taken to avoid any data loss?


 Thanks,

 --W



 Just make sure the external storage is not connected to both hosts at the
 same time.

 Once you connect it to another host, simply import both pools with -f (as the
 pools weren't cleanly exported, I guess).
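
 Something along these lines, using the pool names from your status output:

 # zpool import              (with no arguments: lists the pools visible on the new host)
 # zpool import -f store1
 # zpool import -f store2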



 Please also notice that you've encountered one uncorrectable error in the
 store2 pool.

 Well, actually, it looks like it was corrected, judging from the message.

 IIRC it's a known bug (it should already have been fixed) - a metadata
 cksum error propagates to the top-level vdev unnecessarily.


 --

 Best regards,

  Robert Milkowski   mailto:[EMAIL PROTECTED]

http://milek.blogspot.com



Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-12-09 Thread Jorgen Lundman


Robert Milkowski wrote:
 Hello Jorgen,
 
 Honestly - I don't think zfs is a good solution to your problem.
 
 What you could try to do however when it comes to x4500 is:
 
 1. Use SVM+UFS+user quotas

I am now trying a 1 TB zfs -V volume with UFS newfs'ed on top of it. This looks 
like a potential solution, at least. It even appears that I am allowed to 
enable compression on the volume.
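
In case it helps anyone else, the sequence I'm testing looks roughly like this 
(the pool and volume names are just my own):

# zfs create -V 1tb tank/ufsvol
# zfs set compression=on tank/ufsvol
# newfs /dev/zvol/rdsk/tank/ufsvol
# mount /dev/zvol/dsk/tank/ufsvol /export/home1

and then the usual UFS user-quota setup (quotas file plus the quota mount 
option) on top of that.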

Thanks




-- 
Jorgen Lundman   | [EMAIL PROTECTED]
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)


[zfs-discuss] Performance writing to USB drive, performance reporting

2007-12-09 Thread David Dyer-Bennet
So I'm doing an rsync between a ZFS filesystem on local SATA disks and 
an empty ZFS filesystem on a drive connected via USB 2.0.   zpool iostat 
is showing me a write bandwidth of about 30M.  That does mean 30MB/sec, 
right?  That's compatible with how long the test took.

I used up 47368826 blocks, du -s says here, or about 24GB.  It took just 
under 14 minutes, so 28MB/s rounded to whole numbers. 
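
(Checking the arithmetic, and assuming du -s is reporting the usual 512-byte 
blocks: 47,368,826 blocks x 512 bytes is roughly 24.3 GB, and 24.3 GB over 
~14 minutes (~840 seconds) works out to about 28-29 MB/s - consistent with 
the zpool iostat figure above.)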

USB 2 is nominally 480 Mbits/s, which is 60Mbytes/sec, and one doesn't 
expect to achieve nominal performance from disks connected via USB.   So 
if I'm getting 28MB/sec, half nominal, is that about the best I can 
expect?  Or is that poor?  Disks themselves can do, what, 50MB/sec to 
80MB/sec (this is definitely not a 10k or 15k rpm drive here)?  The disk 
in this box is an old PATA; would I be likely to notice the difference 
with a modern SATA in a modern USB case?

How much better would Firewire 400 be?  How much does it depend on the 
controller?  My M2n-sli-deluxe motherboard has IEEE 1394, and there are 
some hits on 1394 in syslog during startup, so that looks vaguely 
hopeful.

I'm not unhappy with backing up 24GB in 14 minutes, all in all.

-- 
David Dyer-Bennet, [EMAIL PROTECTED]; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
