Re: [zfs-discuss] Oracle releases Solaris 11 for Sparc and x86 servers

2011-11-10 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.


AFAIK, there is no change in open source policy for Oracle Solaris

On 11/9/2011 10:34 PM, Fred Liu wrote:

... so when will ZFS-related improvements make it to the Solaris
derivatives :D ?


I am also very curious about Oracle's policy on source code. ;-)


Fred


--
Hung-Sheng Tsao Ph D.
Founder & Principal
HopBit GridComputing LLC
cell: 9734950840
http://laotsao.wordpress.com/
http://laotsao.blogspot.com/



Re: [zfs-discuss] Data distribution not even between vdevs

2011-11-10 Thread Edward Ned Harvey
 From: Gregg Wonderly [mailto:gregg...@gmail.com]
 
  There is no automatic way to do it.
 For me, this is a key issue.  If there were an automatic rebalancing
 mechanism, that same mechanism would work perfectly to allow pools to have
 disk sets removed.  It would provide the basic mechanism needed: just moving
 data around to stop using the particular part of the pool that you wanted to
 remove.

Search this list for bp_rewrite.  Many requested features depend on it:
rebalancing, defragmentation, vdev removal, toggling compression or dedup for
existing data, and so on.  It has long been requested by many people, but it
is apparently fundamentally difficult to implement.



Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Jeff Savit
 
 Also, not a good idea for
 performance to partition the disks as you suggest.

Not entirely true.  By default, if you partition the disks, the disk write 
cache gets disabled.  But it's trivial to force-enable it, which solves the 
problem.
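
From memory (the exact menu names may vary by release), forcing the write
cache back on is just a trip through format in expert mode, roughly:

  # format -e
  (select the disk)
  format> cache
  cache> write_cache
  write_cache> enable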



[zfs-discuss] Reversing fdisk changes

2011-11-10 Thread Peter Tribble
I have a Solaris 10 machine that I've been having an interesting time with
today. (Live Upgrade didn't work, stmsboot didn't work, I managed to rebuild
it with jumpstart at about the 10th attempt.)

Anyway, it looks like one of my drives has had its label overwritten by
fdisk:

  pool: disk00
id: 10866402904016234458
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

disk00   UNAVAIL  missing device
  c2t0d1 ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.

And if I look at format, I see the drives [it's a 2530, btw] as

   0. c1t0d0 <DEFAULT cyl 44381 alt 2 hd 255 sec 126>
  /pci@0,0/pci10de,5d@d/pci1000,3150@0/sd@0,0
   4. c2t0d1 <SUN-LCSM100_S-0670-680.00GB>
  /pci@0,0/pci10de,5d@e/pci1000,3150@0/sd@0,1

So it looks like c1t0d0 (which is where I think the other half of the
pool is) has been relabelled by fdisk and has an SMI label on it.

Is there a way to reverse this, and if so, how?

This is annoying, rather than critical: the system is out of service
and I can reconstruct the data if necessary. Although knowing
how to fix this would be generally useful in the future...
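
(I'm guessing the answer involves first checking which ZFS labels are still
intact with something like

  zdb -l /dev/rdsk/c1t0d0s2

and then putting an EFI label back on with format -e, but I'd rather hear
from someone who has actually done it before I start scribbling on labels.)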

Thanks,

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of darkblue
 
 1 * XEON 5606
 1 * supermicro X8DT3-LN4F
 6 * 4G RECC RAM
 22 * WD RE3 1T harddisk
 4 * intel 320 (160G) SSD
 1 * supermicro 846E1-900B chassis

I just want to say, this isn't supported hardware, and although many people 
will say they do this without problem, I've heard just as many people 
(including myself) saying it's unstable that way.

I recommend buying either Oracle hardware, or Nexenta running on whatever 
hardware they recommend.

Definitely DO NOT run the free version of Solaris without updates and expect it 
to be reliable.  But that's a separate issue.  I'm also emphasizing that even 
if you pay for Solaris support on non-Oracle hardware, don't expect it to be 
great.  But maybe it will be.



Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of darkblue
 
 Why would you want your root pool to be on the SSD? Do you expect an
 extremely high I/O rate for the OS disks? Also, not a good idea for
 performance to partition the disks as you suggest.
 
  because having the Solaris OS occupy a whole 1TB disk is a waste,
  and with only 24G of RAM, can it handle such a big cache (160G)?

Putting rpool on the SSD is a waste.  Instead of partitioning the SSD into 
cache & rpool, why not partition the 1TB HDD into something like 100G for 
rpool, and the rest for the main data pool?  That makes sense if you're using 
mirrors instead of raidz.  (I definitely recommend mirrors instead of raidz 
for a system running VMs.)
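
As a rough sketch (slice and device names are only examples, not a tested
recipe): give rpool a ~100G slice s0 on the first pair of 1TB disks at install
time, leave the remainder in s1, then build the data pool from the leftover
slices plus whole disks, e.g.

  zpool create tank mirror c8t2d0s1 c8t3d0s1 mirror c8t4d0 c8t5d0
  zpool add tank cache c9t0d0

where the cache line assumes one of the SSDs is dedicated entirely to L2ARC.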





Re: [zfs-discuss] zfs sync=disabled property

2011-11-10 Thread Bob Friesenhahn

On Wed, 9 Nov 2011, Tomas Forsman wrote:


At all times, if there's a server crash, ZFS will come back along at next
boot or mount, and the filesystem will be in a consistent state, that was
indeed a valid state which the filesystem actually passed through at some
moment in time.  So as long as all the applications you're running can
accept the possibility of going back in time as much as 30 sec, following
an ungraceful ZFS crash, then it's safe to disable ZIL (set sync=disabled).


Client writes block 0, server says OK and writes it to disk.
Client writes block 1, server says OK and crashes before it's on disk.
Client writes block 2.. waaiits.. waiits.. server comes up and, server
says OK and writes it to disk.

Now, from the view of the clients, block 0-2 are all OK'd by the server
and no visible errors.
On the server, block 1 never arrived on disk and you've got silent
corruption.


Silent corruption (of zfs) does not occur, for the simple reason that all of 
the block writes are flushed and acknowledged by the disks before the next 
transaction group is started.  The previous transaction group is not closed 
until the next one has been successfully started by writing the previous TXG 
record to disk.  Given properly working hardware, the worst-case scenario is 
losing the whole transaction group, and no corruption occurs.


Loss of data as seen by the client can definitely occur.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] Couple of questions about ZFS on laptops

2011-11-10 Thread Garrett D'Amore

On Nov 9, 2011, at 6:08 PM, Francois Dion wrote:

 Some laptops have pc card and expresscard slots, and you can get an adapter 
 for sd card, so you could set up your os non mirrored and just set up home on 
 a pair of sd cards. Something like
 http://www.amazon.com/Sandisk-SDAD109A11-Digital-Card-Express/dp/B000W3QLLW
 
 I've done this in the past, variations of this, including using a partition 
 and a usb stick:

SD card is suitable for boot *only* if it is connected via USB.  While the 
drivers I wrote for SDHCI work fine for using media, you generally can't boot 
off it -- usually the laptop BIOS simply lacks the support needed to see 
these. 

It used to be that CompactFlash was a preferred option, but I think CF is 
falling out of favor these days.

- Garrett

 
 http://solarisdesktop.blogspot.com/2007/02/stick-to-zfs-or-laptop-with-mirrored.html
 Wow, where did the time go, that was almost 5 years ago...
 
 Anyway, i pretty much ditched carrying the laptop, the current one i have is 
 too heavy (m4400). But it does run really nicely sol11 and openindiana. The 
 m4400 is set up with 2 drives, not mirrored. I'm tempted to put a sandforce 
 based ssd for faster booting and better zfs perf for demos. Then i have an 
 sdcard and expresscard adapter for sd. This gives me 16gb mirrored for my 
 documents, which is plenty. 
 
 Francois
 Sent from my iPad
 
 On Nov 8, 2011, at 12:05 PM, Jim Klimov jimkli...@cos.ru wrote:
 
 Hello all,
 
 I am thinking about a new laptop. I see that there are
 a number of higher-performance models (incidentally, they
 are also marketed as gamer ones) which offer two SATA
 2.5" bays and an SD flash card slot. Vendors usually
 position the two-HDD bay part as either get lots of
 capacity with RAID0 over two HDDs, or get some capacity
 and some performance by mixing one HDD with one SSD.
 Some vendors go as far as suggesting a highest performance
 with RAID0 over two SSDs.
 
 Now, if I were to use this for work with ZFS on an
 OpenSolaris-descendant OS, and I like my data enough
 to want it mirrored, but still I want an SSD performance
 boost (i.e. to run VMs in real-time), I seem to have
 a number of options:
 
 1) Use a ZFS mirror of two SSDs
  - seems too pricey
 2) Use a HDD with redundant data (copies=2 or mirroring
  over two partitions), and an SSD for L2ARC (+maybe ZIL)
  - possible unreliability if the only HDD breaks
 3) Use a ZFS mirror of two HDDs
  - lowest performance
 4) Use a ZFS mirror of two HDDs and an SD card for L2ARC.
  Perhaps add another built-in flash card with PCMCIA
  adapters for CF, etc.
 
 Now, there are a couple of question points for me here.
 
 One was raised in my recent questions about CF ports in a
 Thumper. The general reply was that even high-performance
 CF cards are aimed for linear RW patterns and may be
 slower than HDDs for random access needed as L2ARCs, so
 flash cards may actually lower the system performance.
 I wonder if the same is the case with SD cards, and/or
 if anyone encountered (and can advise) some CF/SD cards
 with good random access performance (better than HDD
 random IOPS). Perhaps an extra IO path can be beneficial
 even if random performances are on the same scale - HDDs
 would have less work anyway and can perform better with
 their other tasks?
 
 On another hand, how would current ZFS behave if someone
 ejects an L2ARC device (flash card) and replaces it with
 another unsuspecting card, i.e. one from a photo camera?
 Would ZFS automatically replace the L2ARC device and
 kill the photos, or would the cache be disabled with
 no fatal implication for the pools nor for the other
 card? Ultimately, when the ex-L2ARC card gets plugged
 back in, would ZFS automagically attach it as the cache
 device, or does this have to be done manually?
 
 
 Second question regards single-HDD reliability: I can
 do ZFS mirroring over two partitions/slices, or I can
 configure copies=2 for the datasets. Either way I
 think I can get protection from bad blocks of whatever
 nature, as long as the spindle spins. Can these two
 methods be considered equivalent, or is one preferred
 (and for what reason)?
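
 For concreteness, the two variants I have in mind are roughly (slice
 names are only an example):

   zpool create data mirror c1t0d0s3 c1t0d0s4

 versus

   zpool create data c1t0d0s3
   zfs set copies=2 data

 i.e. the same single spindle in both cases.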
 
 
 Also, how do other list readers place and solve their
 preferences with their OpenSolaris-based laptops? ;)
 
 Thanks,
 //Jim Klimov
 


Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-11-10 Thread John D Groenveld
In message 4e9db04b.80...@oracle.com, Cindy Swearingen writes:
This is CR 7102272.

What is the title of this BugId?
I'm trying to attach my Oracle CSI to it but Chuck Rozwat
and company's support engineer can't seem to find it.

Once I get upgraded from S11x SRU12 to S11, I'll reproduce
on a more recent kernel build.

Thanks,
John
groenv...@acm.org



Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Jeff Savit

On 11/10/2011 06:38 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jeff Savit

Also, not a good idea for
performance to partition the disks as you suggest.

Not totally true.  By default, if you partition the disks, then the disk write 
cache gets disabled.  But it's trivial to simply force enable it thus solving 
the problem.

Granted - I just didn't want to get into a long story. With a 
self-described 'newbie' building a storage server, I felt the best advice 
was to keep things as simple as possible without adding steps (and without 
adding exposition about cache on partitioned disks - but now that you've 
brought it up, yes, he can certainly do that).


Besides, there's always a way to fill up the 1TB disks :-) In addition to 
the OS image, they could also store gold images for the guest virtual 
machines, maintained separately from the operational images.


regards, Jeff



--


*Jeff Savit* | Principal Sales Consultant
Phone: 602.824.6275 | Email: jeff.sa...@oracle.com | Blog: 
http://blogs.oracle.com/jsavit

Oracle North America Commercial Hardware
Operating Environments & Infrastructure S/W Pillar
2355 E Camelback Rd | Phoenix, AZ 85016





Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-11-10 Thread Cindy Swearingen

Hi John,

CR 7102272:

 ZFS storage pool created on a 3 TB USB 3.0 device has device label 
problems


Let us know if this is still a problem in the OS11 FCS release.

Thanks,

Cindy


On 11/10/11 08:55, John D Groenveld wrote:

In message 4e9db04b.80...@oracle.com, Cindy Swearingen writes:

This is CR 7102272.


What is the title of this BugId?
I'm trying to attach my Oracle CSI to it but Chuck Rozwat
and company's support engineer can't seem to find it.

Once I get upgraded from S11x SRU12 to S11, I'll reproduce
on a more recent kernel build.

Thanks,
John
groenv...@acm.org



Re: [zfs-discuss] zfs sync=disabled property

2011-11-10 Thread Tomas Forsman
On 10 November, 2011 - Bob Friesenhahn sent me these 1,6K bytes:

 On Wed, 9 Nov 2011, Tomas Forsman wrote:

 At all times, if there's a server crash, ZFS will come back along at next
 boot or mount, and the filesystem will be in a consistent state, that was
 indeed a valid state which the filesystem actually passed through at some
 moment in time.  So as long as all the applications you're running can
 accept the possibility of going back in time as much as 30 sec, following
 an ungraceful ZFS crash, then it's safe to disable ZIL (set sync=disabled).

 Client writes block 0, server says OK and writes it to disk.
 Client writes block 1, server says OK and crashes before it's on disk.
 Client writes block 2.. waaiits.. waiits.. server comes up and, server
 says OK and writes it to disk.

 Now, from the view of the clients, block 0-2 are all OK'd by the server
 and no visible errors.
 On the server, block 1 never arrived on disk and you've got silent
 corruption.

 The silent corruption (of zfs) does not occur due to simple reason that 
 flushing all of the block writes are acknowledged by the disks and then a 
 new transaction occurs to start the next transaction group. The previous 
 transaction is not closed until the next transaction has been 
 successfully started by writing the previous TXG group record to disk.  
 Given properly working hardware, the worst case scenario is losing the 
 whole transaction group and no corruption occurs.

 Loss of data as seen by the client can definitely occur.

When a client writes something and something else ends up on disk - I
call that corruption.  It doesn't matter whose fault it is or what the
technical details are; the wrong data was stored despite the client being
careful when writing.

/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] zfs sync=disabled property

2011-11-10 Thread Will Murnane
On Thu, Nov 10, 2011 at 14:12, Tomas Forsman st...@acc.umu.se wrote:
 On 10 November, 2011 - Bob Friesenhahn sent me these 1,6K bytes:
 On Wed, 9 Nov 2011, Tomas Forsman wrote:

 At all times, if there's a server crash, ZFS will come back along at next
 boot or mount, and the filesystem will be in a consistent state, that was
 indeed a valid state which the filesystem actually passed through at some
 moment in time.  So as long as all the applications you're running can
 accept the possibility of going back in time as much as 30 sec, following
 an ungraceful ZFS crash, then it's safe to disable ZIL (set sync=disabled).

 Client writes block 0, server says OK and writes it to disk.
 Client writes block 1, server says OK and crashes before it's on disk.
 Client writes block 2.. waaiits.. waiits.. server comes up and, server
 says OK and writes it to disk.
 When a client writes something, and something else ends up on disk - I
 call that corruption. Doesn't matter whose fault it is and technical
 details, the wrong data was stored despite the client being careful when
 writing.
If the hardware is behaving itself (actually doing a cache flush when
ZFS asks it to, for example) the server won't say OK for block 1 until
it's actually on disk.  This behavior is what makes NFS over ZFS slow
without a slog: NFS does everything O_SYNC by default, so ZFS runs
around syncing all the disks all the time.  Therefore, you won't lose
data in this circumstance.
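
(For completeness: a dedicated log device is added with something along the
lines of

  zpool add tank log c4t0d0

-- pool and device names made up -- which keeps the sync semantics but moves
the ZIL writes onto fast, stable storage.)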

Will


Re: [zfs-discuss] zfs sync=disabled property

2011-11-10 Thread Tomas Forsman
On 10 November, 2011 - Will Murnane sent me these 1,5K bytes:

 On Thu, Nov 10, 2011 at 14:12, Tomas Forsman st...@acc.umu.se wrote:
  On 10 November, 2011 - Bob Friesenhahn sent me these 1,6K bytes:
  On Wed, 9 Nov 2011, Tomas Forsman wrote:
 
  At all times, if there's a server crash, ZFS will come back along at next
  boot or mount, and the filesystem will be in a consistent state, that was
  indeed a valid state which the filesystem actually passed through at some
  moment in time.  So as long as all the applications you're running can
  accept the possibility of going back in time as much as 30 sec, 
  following
  an ungraceful ZFS crash, then it's safe to disable ZIL (set 
  sync=disabled).
 
  Client writes block 0, server says OK and writes it to disk.
  Client writes block 1, server says OK and crashes before it's on disk.
  Client writes block 2.. waaiits.. waiits.. server comes up and, server
  says OK and writes it to disk.
  When a client writes something, and something else ends up on disk - I
  call that corruption. Doesn't matter whose fault it is and technical
  details, the wrong data was stored despite the client being careful when
  writing.
 If the hardware is behaving itself (actually doing a cache flush when
 ZFS asks it to, for example) the server won't say OK for block 1 until
 it's actually on disk.  This behavior is what makes NFS over ZFS slow
 without a slog: NFS does everything O_SYNC by default, so ZFS runs
 around syncing all the disks all the time.  Therefore, you won't lose
 data in this circumstance.

Which is exactly what this thread is about: the consequences of
-disabling- sync.

/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread Ian Collins

On 11/11/11 02:42 AM, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of darkblue

1 * XEON 5606
1 * supermicro X8DT3-LN4F
6 * 4G RECC RAM
22 * WD RE3 1T harddisk
4 * intel 320 (160G) SSD
1 * supermicro 846E1-900B chassis

I just want to say, this isn't supported hardware, and although many people 
will say they do this without problem, I've heard just as many people 
(including myself) saying it's unstable that way.


I've never had issues with Supermicro boards.  I'm using a similar model 
and everything on the board is supported.

I recommend buying either the oracle hardware or the nexenta on whatever they 
recommend for hardware.

Definitely DO NOT run the free version of solaris without updates and expect it 
to be reliable.


That's a bit strong.  Yes, I do regularly update my supported (Oracle) 
systems, but I've never had problems with the Solaris Express systems I 
built myself.


I waste far more time on (now luckily legacy) fully supported Solaris 10 
boxes!


--
Ian.



Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-11-10 Thread Daniel Carosone
On Tue, Oct 11, 2011 at 08:17:55PM -0400, John D Groenveld wrote:
 Under both Solaris 10 and Solaris 11x, I receive the evil message:
 | I/O request is not aligned with 4096 disk sector size.
 | It is handled through Read Modify Write but the performance is very low.

I got something similar with 4k-sector 'disks' (as a COMSTAR target with
blk=4096) when trying to use them to force a pool to ashift=12. The
labels are found at the wrong offset when the block numbers change,
and maybe the GPT label has issues too. 
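
For reference, a 4k LU of that sort is created with something like (volume
path made up):

  stmfadm create-lu -p blk=4096 /dev/zvol/rdsk/pool/testvol

and the resulting ashift can then be checked with something along the lines
of zdb <pool> | grep ashift.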

--
Dan.




Re: [zfs-discuss] zfs sync=disabled property

2011-11-10 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Bob Friesenhahn
 
 The silent corruption (of zfs) does not occur due to simple reason
 that flushing all of the block writes are acknowledged by the disks
 and then a new transaction occurs to start the next transaction group.
 The previous transaction is not closed until the next transaction has
 been successfully started by writing the previous TXG group record to
 disk.  Given properly working hardware, the worst case scenario is
 losing the whole transaction group and no corruption occurs.
 
 Loss of data as seen by the client can definitely occur.

Tomas is right on this point - If you have a ZFS NFS server running with
sync disabled, and the ZFS server reboots ungracefully and starts serving
NFS again without the NFS clients dismounting/remounting, then ZFS hasn't
been corrupted but NFS has.  Exactly the way Tomas said.  The server has
lost its mind and gone back into the past, but the clients remember their
state (which is/was in the future) and after the server comes up again in
the past, the clients will simply assume the server hasn't lost its mind and
continue as if nothing went wrong, which is precisely the wrong thing to do.

This is why, somewhere higher up in this thread, I said that if you have an NFS
server running with sync disabled, you need to ensure NFS services don't
automatically start at boot time.  If your server crashes ungracefully, you
need to crash your clients too (NFS dismount/remount).

Personally, this is how I operate the systems I support.  Because running
with sync disabled is so DARN fast, and a server crash is so DARN rare, I
feel the extra productivity for 500 days in a row outweighs the productivity
loss that occurs on that one fateful day, when I have to reboot or
dismount/remount all kinds of crap around the office.
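
For anyone wondering, the knobs involved are nothing exotic -- roughly (the
dataset name is made up, adjust to taste):

  zfs set sync=disabled tank/export
  svcadm disable svc:/network/nfs/server:default

and then after each boot, once the clients have been checked, bring NFS back
temporarily with

  svcadm enable -t svc:/network/nfs/server:default

so it won't come back by itself after the next ungraceful reboot.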



Re: [zfs-discuss] weird bug with Seagate 3TB USB3 drive

2011-11-10 Thread David Magda
On Nov 10, 2011, at 18:41, Daniel Carosone wrote:

 On Tue, Oct 11, 2011 at 08:17:55PM -0400, John D Groenveld wrote:
 Under both Solaris 10 and Solaris 11x, I receive the evil message:
 | I/O request is not aligned with 4096 disk sector size.
 | It is handled through Read Modify Write but the performance is very low.
 
 I got similar with 4k sector 'disks' (as a comstar target with
 blk=4096) when trying to use them to force a pool to ashift=12. The
 labels are found at the wrong offset when the block numbers change,
 and maybe the GPT label has issues too. 

Anyone know if Solaris 11 has better support for detecting the native block 
size of the underlying storage?

PSARC 2008/769 (Multiple disk sector size support) was committed in 
OpenSolaris in commit revision  9889:68d0fe4c716e. It appears ZFS makes use of 
the check when opening a vdev:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_disk.c#287

Has anyone had a chance to play with S11 to confirm?

We're only going to get more and more Advanced Format drives, never mind all 
the SAN storage units out there as well (and VMFS often on top of that too).



Re: [zfs-discuss] how to set up solaris os and cache within one SSD

2011-11-10 Thread darkblue
2011/11/11 Ian Collins i...@ianshome.com

 On 11/11/11 02:42 AM, Edward Ned Harvey wrote:

 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of darkblue

 1 * XEON 5606
 1 * supermicro X8DT3-LN4F
 6 * 4G RECC RAM
 22 * WD RE3 1T harddisk
 4 * intel 320 (160G) SSD
 1 * supermicro 846E1-900B chassis

 I just want to say, this isn't supported hardware, and although many
 people will say they do this without problem, I've heard just as many
 people (including myself) saying it's unstable that way.


 I've never had issues with Supermicro boards.  I'm using a similar model
 and everything on the board is supported.

  I recommend buying either the oracle hardware or the nexenta on whatever
 they recommend for hardware.

 Definitely DO NOT run the free version of solaris without updates and
 expect it to be reliable.


 That's a bit strong.  Yes I do regularly update my supported (Oracle)
 systems, but I've never had problems with my own build Solaris Express
 systems.

 I waste far more time on (now luckily legacy) fully supported Solaris 10
 boxes!


What does that mean?
I am going to install Solaris 10 u10 on this server.  Are there any
compatibility problems with this hardware?
And which version of Solaris, or which Solaris derivative, would you suggest
for building storage on the above hardware?

 --
 Ian.

