[zfs-discuss] zfs kernel compilation issue

2009-08-29 Thread P. Anil Kumar
I'm trying to compile the zfs kernel on the following machine:
bash-3.2# uname -a
SunOS solaris-b119-44 5.11 snv_119 i86pc i386 i86pc

I set the environment properly using bldenv -d ./opensolaris.sh.

bash-3.2# pwd
/export/testws/usr/src/uts

bash-3.2# dmake
dmake: defaulting to parallel mode.
See the man page dmake(1) for more information on setting up the .dmakerc file.
/export/testws/usr/src/uts/common/sys
/export/testws/usr/src/uts/common/rpc
/export/testws/usr/src/uts/common/rpcsvc
/export/testws/usr/src/uts/common/gssapi
/export/testws/usr/src/uts/common/idmap
/export/testws/usr/src/uts/intel
/export/testws/usr/src/uts/intel/genassym
/export/testws/usr/src/tools/proto/opt/onbld/bin/genoffsets -s 
/export/testws/usr/src/tools/proto/opt/onbld/bin/i386/ctfstabs -r 
/export/testws/usr/src/tools/proto/opt/onbld/bin/i386/ctfconvert  
/opt/onbld/bin/i386/cw -_cc -_noecho  -W0,-xdbggen=no%usedonly  
-_gcc=-fno-dwarf2-indirect-strings -m64 -Ui386 -U__i386 -xO3 
../../intel/amd64/ml/amd64.il -D_ASM_INLINES -Xa -xspace  -xmodel=kernel 
-Wu,-save_args -v -xildoff  -g -xc99=%all -W0,-noglobal 
-_gcc=-fno-dwarf2-indirect-strings -xdebugformat=stabs -errtags=yes 
-errwarn=%all -W0,-xglobalstatic  -xstrconst -D_KERNEL -D_SYSCALL32 
-D_SYSCALL32_IMPL -D_ELF64  -D_DDI_STRICT -Dsun -D__sun -D__SVR4 
-I../../intel -I../../common/brand/lx -Y I,../../common  
../../intel/genassym/offsets.in ../../intel/genassym/obj64/genassym.h
cc: Warning: illegal option -m64
cc: -xmodel should be used with -xarch={amd64|generic64}
genoffsets: /opt/onbld/bin/i386/cw failed with status 1
*** Error code 1
dmake: Fatal error: Command failed for target 
`../../intel/genassym/obj64/genassym.h'
Current working directory /export/testws/usr/src/uts/intel/genassym
*** Error code 1
The following command caused the error:
BUILD_TYPE=OBJ64 VERSION='testws' dmake  def.targ
dmake: Fatal error: Command failed for target `def.obj64'
Current working directory /export/testws/usr/src/uts/intel/genassym
*** Error code 1
The following command caused the error:
cd genassym; pwd; dmake  def
dmake: Fatal error: Command failed for target `genassym'
Current working directory /export/testws/usr/src/uts/intel
*** Error code 1
The following command caused the error:
cd intel; pwd; dmake  def.prereq
dmake: Fatal error: Command failed for target `intel.prereq'
Current working directory /export/testws/usr/src/uts

I would like to know why it's picking up amd64 config params from the Makefile,
while uname -a clearly shows that it's i386?

Thanks,
pak


Re: [zfs-discuss] How to prevent /usr/bin/chmod from following symbolic links?

2009-08-29 Thread Kris Larsen
Remove the -R after chmod when running this command! Adding -R here is of
course just as dangerous as running it without find. Lesson learned:
cut'n'paste is dangerous...


Re: [zfs-discuss] shrink the rpool zpool or increase rpool zpool via add disk.

2009-08-29 Thread Robert Milkowski

casper@sun.com wrote:

Randall Badilla wrote:


Hi all:
First: is it possible to modify the boot zpool rpool after OS
installation? I installed the OS on the whole 72GB hard disk. It is
mirrored, so if I want to decrease the rpool, for example resize it to a
36GB slice, can that be done?
As far as I remember, on UFS/SVM I was able to resize the boot OS disk by
detaching a mirror (transforming it to a one-way mirror), adjusting the
partitions, then attaching the mirror. After the sync, boot from the
resized mirror, re-do the resize on the remaining mirror, attach the
mirror, and reboot.

Downtime reduced to a reboot.

Yes, you can follow the same procedure with zfs (the details will differ, of course).
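
A sketch of how that might look with ZFS (device names are hypothetical, and
note that zpool attach refuses a device smaller than the pool, so shrinking
means creating a new, smaller pool and copying into it):

# zpool detach rpool c1t1d0s0       (drop to a one-way mirror)
  (repartition c1t1d0 so that s0 is 36GB, e.g. with format)
# zpool create rpool2 c1t1d0s0
# zfs snapshot -r rpool@migrate
# zfs send -R rpool@migrate | zfs receive -Fd rpool2
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

You would still need to set the bootfs property on rpool2 and update the
boot device before rebooting, then repeat on the other half of the mirror.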



You can actually change the partitions while you're using the slice,
but after changing the size of both slices you may need to reboot.

I've also used this when going from UFS to ZFS for boot.


But the OP wants to decrease a slice's size, which, if it worked at all,
could lead to loss of data.



--
Robert Milkowski
http://milek.blogspot.com




Re: [zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS

2009-08-29 Thread Robin Bowes
On 03/08/09 17:35, Neal Pollack wrote:
 On 07/31/09 06:12 PM, Jorgen Lundman wrote:

 Finding a SATA card that would work with Solaris, be hot-swap capable, and
 have more than 4 ports sure took a while. Oh, and be reasonably priced ;)
 
 Let's take this first point: a card that works with Solaris.
 
 I might try to find some engineers to write device drivers to
 improve this situation.
 Would this alias be interested in teaching me which 3 or 4 cards they would
 put at the top of the wish list for Solaris support?
 I assume the current feature gap is defined as needing driver support
 for PCI-Express add-in cards that have 4 to 8 ports, are inexpensive
 JBOD (not expensive HW RAID), and can handle hot-swap while the OS is running.
 Would this be correct?

That would be correct, except that I don't know of any cheap 4- to 8-port
PCIe SATA cards.

I'm still finding that the Supermicro PCI-X 8-port cards are the cheapest
option, but they require a PCI-X slot for optimal performance, which
generally means a pricey mobo.

R.



[zfs-discuss] change raidz1 to raidz2 with BP rewrite?

2009-08-29 Thread Orvar Korvar
Will BP rewrite allow adding a drive to raidz1 to get raidz2? And what is the
status of BP rewrite? Far away? Not started yet? Planning?


Re: [zfs-discuss] zfs kernel compilation issue

2009-08-29 Thread Bill Sommerfeld

On Fri, 2009-08-28 at 23:12 -0700, P. Anil Kumar wrote:
 I would like to know why its picking up amd64 config params from the 
 Makefile, while uname -a clearly shows that its i386 ?

It's behaving as designed.

On Solaris, uname -a always shows i386 regardless of whether the system
is in 32-bit or 64-bit mode.  You can use the isainfo command to tell if
amd64 is available.
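
For example (output from a 64-bit-capable x86 box; yours may differ):

$ isainfo
amd64 i386
$ isainfo -kv
64-bit amd64 kernel modules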

On i386, we always build both 32-bit and 64-bit kernel modules; the
bootloader will figure out which kernel to load.

- Bill



Re: [zfs-discuss] change raidz1 to raidz2 with BP rewrite?

2009-08-29 Thread Adam Leventhal
Will BP rewrite allow adding a drive to raidz1 to get raidz2? And  
how is status on BP rewrite? Far away? Not started yet? Planning?



BP rewrite is an important component technology, but there's a bunch beyond
that. It's not a high priority right now for us at Sun.

Adam

--
Adam Leventhal, Fishworks              http://blogs.sun.com/ahl



Re: [zfs-discuss] Change the volblocksize of a ZFS volume

2009-08-29 Thread stuart anderson
  Question:

  Is there a way to change the volume blocksize, say via
  'zfs snapshot send/receive'?

  As I see things, this isn't possible as the target volume (including
  property values) gets overwritten by 'zfs receive'.

 By default, properties are not received.  To pass properties, you need
 to use the -R flag.

I have tried that, and while it works for properties like compression, I have
not found a way to preserve a non-default volblocksize across zfs send | zfs
receive. The zvol created on the receive side always defaults to 8k. Is
there a way to do this?
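
For reference, the test boils down to this (pool and dataset names are made
up). Since volblocksize can only be set at creation time, the question is
presumably whether 'zfs receive' honours it when it creates the new zvol:

# zfs create -V 10G -o volblocksize=64k tank/vol
# zfs snapshot tank/vol@s1
# zfs send -R tank/vol@s1 | zfs receive -d backup
# zfs get volblocksize backup/vol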

Thanks.


Re: [zfs-discuss] change raidz1 to raidz2 with BP rewrite?

2009-08-29 Thread David Magda

On Aug 29, 2009, at 12:48, Adam Leventhal wrote:

Will BP rewrite allow adding a drive to raidz1 to get raidz2? And  
how is status on BP rewrite? Far away? Not started yet? Planning?


BP rewrite is an important component technology, but there's a bunch  
beyond that. It's not a high priority right now for us at Sun.


What's the bug / RFE number for it? (So those of us with contracts can  
add a request for it.)




Re: [zfs-discuss] Pulsing write performance

2009-08-29 Thread David Bond
Hi,

This happens on OpenSolaris builds 101b and 111b.
The ARC cache max is set to 6GB, the box is joined to a Windows 2003 R2 AD
domain, and the pool is 4 15Krpm drives in a 2-way mirror.
The bnx driver has been changed to have offloading enabled.

Not much else has been changed.

OK, so when the cache fills and needs to be flushed, when the flush occurs it
locks access, so no reads or writes can occur from the cache, and as
everything goes through the ARC, nothing can happen until the ARC has
finished its flush.

And to compensate for this, I would have to reduce the cache size to one
small enough that the disk array can write it out at such a speed that the
pauses are reduced to ones that are not really noticeable.

Wouldn't that then impact the overall burst write performance also? Why
doesn't the ARC allow writes while flushing? Or just have 2 caches, so that
one can keep taking writes while the other flushes. If it allowed writes to
the buffer while it was flushing, it would just reduce the write speed down
to what the disks can handle, wouldn't it?

Anyway, thanks for the info. I will give that parameter a go and see how it works.
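
The parameter itself isn't quoted in this message; purely for illustration,
the write-throttle knob commonly tuned at the time was
zfs_write_limit_override, set in /etc/system (the value below is made up,
and a reboot is needed for /etc/system changes to take effect):

* /etc/system: cap the amount of dirty data accepted per txg, in bytes
set zfs:zfs_write_limit_override = 1073741824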

Thanks


Re: [zfs-discuss] Pulsing write performance

2009-08-29 Thread David Bond
OK,

so by limiting the write cache to that of the controller, you were able to
remove the pauses?

How did that affect your overall write performance, if at all?

Thanks, I will give that a go.

David


Re: [zfs-discuss] Pulsing write performance

2009-08-29 Thread David Bond
I don't have any Windows machine connected to it over iSCSI (yet).

My reference to the Windows servers was that the same hardware running
Windows does not have these read/write problems, so it isn't the hardware
causing it.

But when I do eventually get iSCSI going, I will send a message if I have
the same problems.

Also, with your replication, what's the performance like? Does having it
enabled impact the overall write performance of your server? Is the
replication continuous?

David


Re: [zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-29 Thread Simon Breden
Yes, setting the Boot Environment repository URL to 
http://pkg.opensolaris.org/dev/ worked.

My pool had been upgraded to ZFS version 16 previously using the dev repo.
'zpool get all tank' shows the ZFS version, but you can't use this command
unless the pool is imported, so when you encounter problems like I did, you
can't see which version the pool is using.
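
For example (pool name and values hypothetical); note that 'zpool upgrade -v'
lists the versions the running software supports and works without any pool
imported, which helps narrow things down:

# zpool get version tank
NAME  PROPERTY  VALUE   SOURCE
tank  version   16      default
# zpool upgrade -v
This system is currently running ZFS pool version 14.
...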


Re: [zfs-discuss] Pulsing write performance

2009-08-29 Thread Bob Friesenhahn

On Sat, 29 Aug 2009, David Bond wrote:


OK, so when the cache fills and needs to be flushed, when the flush
occurs it locks access, so no reads or writes can occur from the
cache, and as everything goes through the ARC, nothing can happen
until the ARC has finished its flush.


It has not been proven that reads from the ARC stop.  It is clear that 
reads from physical disk temporarily stop.  It is not clear (to me) if 
reads from physical disk stop because of the huge number of TXG sync 
write operations (up to 5 seconds worth) which are queued prior to the 
read request, or if reads are intentionally blocked due to some sort 
of coherency management.


And to compensate for this, I would have to reduce the cache size to
one small enough that the disk array can write it out at such a speed
that the pauses are reduced to ones that are not really noticeable.


That would work.  There is likely to be more total physical I/O, though,
since delaying the writes tends to eliminate many redundant writes.
For example, an application which re-writes the same file over and
over again would send more of that data to physical disk.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] cannot import 'tank': pool is formatted using a newer ZFS version

2009-08-29 Thread Simon Breden
BTW, if you're interested in seeing my attempts to migrate from a 160 GB IDE 
drive-based root boot pool to a pair of mirrored 30 GB SSDs, then take a look 
here:

http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/