Re: [OpenIndiana-discuss] Migrating root zpool from old disks to new one. only one system available

2012-06-12 Thread Jim Klimov

2012-06-12 15:40, J. V. wrote:

On 12/06/12 09:16 AM, Rich wrote:

Won't zpool replace fail b/c the new disks require ashift=12 and his
existing pool devices have ashift=9?


This should work fine:

HP Microserver, upgrading a mirrored pool from 2TB HDs to 3TB HDs. At one
point, the pool had one 2TB HD with 512 sectors and the other with 4k
sectors (both in a mirror). Seems there was a bit of a performance hit, but
everything worked fine.

When I upgraded from 2TB to 3TB, the 3TB HDs were both 4k sectors:


Did the 3TB disk also report to the OS that it uses 4k sectors?
What ashift value is used by the pool, ultimately? Example:
# zdb -l /dev/rdsk/c2t0d0s0 | grep ashift
ashift: 9
ashift: 9
ashift: 9
ashift: 9

If your 4KB disks use ashift=9, it is possible to get problems
worse than decreased performance ;)

Note that the ashift is set per top-level vdev (the second-tier
component of a pool) and the value is saved into, and read from,
its leaf vdevs (edge-tier components); a pool can mix top-level
vdevs with different ashift values, e.g. in a raid10 type of setup.
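
For example, on a hypothetical pool named tank with one 512B-sector
mirror and one 4KB-sector mirror, something like this would show one
ashift per top-level vdev (output is illustrative):

# zdb -C tank | grep ashift
ashift: 9
ashift: 12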

PS: Do you use an HP N40L? Did you test whether more than 8GB RAM
fits into it (using DDR3 ECC sticks larger than 4GB), or does the
server's BIOS forbid that?

HTH,
//Jim Klimov



Re: [OpenIndiana-discuss] Migrating root zpool from old disks to new one. only one system available

2012-06-12 Thread J. V.
I did check the ashift after the upgrade and both 3TB HDs report ashift=12.

I am actually using an HP N35L (1.3 GHz vs. 1.5 GHz in the N40L) and I
have 8 GB of non-ECC DDR3, so I cannot answer the memory question.

These little machines are really great (if they came with SATA 3, they
would be the ultimate small server). I do have an N40L with ESXi 5
running my firewall/router (Untangle 9.2) and Windows Home Server 2011
(for the Media Server, Backup and WSUS services).

The OpenIndiana server has 6 HDs (I loaded a modified BIOS so the extra
internal and external SATA ports run at full speed):
4 x 3TB Seagate ST3000DM001 (the newer Seagates: fastest 7200 rpm HDs
right now, ~140 MB/s on SATA 2, ~160 MB/s on SATA 3).
2 x OCZ 60 GB SSDs (Agility 3), mounted where the CD-ROM would go using
a Vantec 5.25-in. adapter for two 2.5-in. drives (used an eSATA-to-SATA
cable for one of the SSDs).

Config:
Partitioned each SSD into 3 slices:
rpool (16GB), zil1 (4GB), arc1 (~40GB)
rpool (16GB), zil2 (4GB), arc2 (~40GB)

rpool = 2 x 16GB mirrored
Storage pool 1 = 2 x 3TB mirrored + zil1 + arc1
Storage pool 2 = 2 x 3TB mirrored + zil2 + arc2
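
For anyone recreating a similar layout, a minimal sketch of the pool
creation (device names and slice numbers below are made-up examples,
not the actual ones):

# zpool create pool1 mirror c2t0d0 c2t1d0 log c4t0d0s1 cache c4t0d0s3
# zpool create pool2 mirror c2t2d0 c2t3d0 log c4t1d0s1 cache c4t1d0s3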

The mirrored rpool saved my butt last month when I was playing around
and overwrote the boot rpool (it gets confusing between slices and
partitions with my SSD config). It took me a whole 20 minutes to fix:
exchange the SSDs' places, boot from the good rpool, wait 5 minutes for
the resilver, re-install grub on the damaged rpool, power off, and
exchange the SSDs again.
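
For reference, the grub re-install step is a one-liner on OpenIndiana
(the device name here is just an example):

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t0d0s0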

No other dual-core machine that I have uses less power:
The whole server uses only 23-24 watts when idle (2 SSDs active, 4 HDs
powered down),
35 W with 1 pool active (2 SSDs, 2 HDs active), and
~50 W with all HDs active and getting hammered.

Once I added the zil + arc devices to the pools, the little thing runs
at wire speed. It could not get any better!

It took me months to gather all the hardware and fully learn ZFS, but
I am really glad to be back running a Solaris derivative: I started in
college administering Ultrix, SunOS, AIX, NeXTSTEP and others, then
went to the dark side to learn all about MS, and finally feel back at
home with OpenIndiana!

Thanks to the illumos people and contributors here and everywhere else!

(Sorry for the long post. Hope my full config can help/inspire others).

Jose V.

On Tue, Jun 12, 2012 at 8:00 AM, Jim Klimov jimkli...@cos.ru wrote:

 [Jim Klimov's message, quoted in full, trimmed; see above]



Re: [OpenIndiana-discuss] Migrating root zpool from old disks to new one. only one system available

2012-06-12 Thread George Wilson
Yes, it will. The only way to do this is to create a secondary pool and
send/receive your root pool to the new pool.

- George

On Jun 11, 2012, at 7:16 PM, Rich wrote:

 Won't zpool replace fail b/c the new disks require ashift=12 and his
 existing pool devices have ashift=9?
 
 - Rich
 
 On Mon, Jun 11, 2012 at 7:12 PM, James C. McPherson
 james.c.mcpher...@gmail.com wrote:
 On 12/06/12 08:39 AM, Hans J. Albertsson wrote:
 
 Suppose:
 
 I have a system with but two disks. They're fairly small: 300GB, and use
 512B sectors.
 
 These two disks are a mirror zpool, created from the entire disks.
 
 There are about 20 or so filesystems in there.
 
 The system has room for only two disks.
 
 I'd like to replace these two small disks with two 2TB disks using 4kB
 blocks.
 
 So:
 
 Is there a writeup on how to: connect one of these new disks to the
 existing machine using an external eSATA cabinet; set up a new zpool
 on this new disk and transfer all the root pool data to the new
 single-disk zpool; then set up the new zpool to be bootable?
 And last: take the old disks out of the machine, place the new single
 disk and another, empty, similar 2TB disk in the machine, and boot from
 the new single one as the new root zpool; then add the second new disk
 as a mirror, effectively running the old system exactly as it was w/o
 reinstalling anything significant, but with much roomier disks.
 
 Note: in this case there is no way to get another system to do it on,
 and a third disk can only be connected using an external cabinet and
 eSATA or USB.
 
 
 Hi Hans,
 I suggest this:
 
 #1 set autoexpand=on for rpool
 #2 connect your esata enclosure
 #3 once you've got them sliced up as desired (I suggest slice 0 should
    cover all except cylinders 0 and 1), run installgrub on both new
    disks, to the MBR
 #4 zpool replace one of your rpool disks
 #5 zpool replace the other rpool disk
 #6 poweroff
 #7 do the physical replacement
 #8 poweron
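
 In command form, a minimal sketch of steps #1 and #3-#5 (device names
 are examples only; check yours with format or zpool status):

 # zpool set autoexpand=on rpool
 # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t0d0s0
 # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0
 # zpool replace rpool c1t0d0s0 c3t0d0s0
 # zpool status rpool        (wait for the resilver to complete)
 # zpool replace rpool c1t1d0s0 c3t1d0s0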
 
 
 
 James C. McPherson
 --
 Solaris kernel software engineer, system admin and troubleshooter
  http://www.jmcp.homeunix.com/blog
 Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
 
 
 




Re: [OpenIndiana-discuss] HDD Upgrade problem

2012-06-12 Thread George Wilson

On Jun 12, 2012, at 12:58 AM, Rich wrote:

 On Tue, Jun 12, 2012 at 12:50 AM, Richard Elling
 richard.ell...@richardelling.com wrote:
 On Jun 11, 2012, at 6:08 PM, Bob Friesenhahn wrote:
 
 On Mon, 11 Jun 2012, Jim Klimov wrote:
 ashift=12 (2^12 = 4096). For disks which do not lie, it
 works properly out of the box. The patched zpool binary
 forced ashift=12 at the user's discretion.
 
 It seems like new pools should provide the option to be created with 
 ashift=12 even if none of the original disks need it.  The reason is to 
 allow adding 4K sector disks later.
 
 I don't disagree, in principle. However, history has shown that people
 don't plan very well (e.g., reason #1 dedup exists). Is there some
 other way to be more clever?
  -- richard
 
 I believe that the consensus the last time this came up was agreement
 on that point, but contention over how the semantics of specifying it
 should work?
 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

Illumos has a way to override the physical block size of a given disk by using 
the sd.conf file. Here's an example:

sd-config-list =
"DGC     RAID", "physical-block-size:4096",
"NETAPP  LUN", "physical-block-size:4096";


By adding the VID/PID values to the sd-config-list along with the
'physical-block-size' parameter, you can override the value that the
disk will use for its block size. ZFS will pick this up and use the
correct ashift value.
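
Note that the VID field in each entry is space-padded to 8 characters
before the PID. As a hypothetical example, for a SATA drive that
reports vendor "ATA" and product "ST3000DM001", the entry could look
like this, followed by 'update_drv -f sd' (or a reboot) to make the sd
driver re-read the file:

sd-config-list =
"ATA     ST3000DM001", "physical-block-size:4096";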

- George


Re: [OpenIndiana-discuss] HDD Upgrade problem

2012-06-12 Thread Bob Friesenhahn

On Tue, 12 Jun 2012, George Wilson wrote:


[George's sd.conf example, quoted from the previous message, trimmed]


This can obviously work once the OS is installed and it is possible to 
edit system files.  Is there a way to apply it to the root pool when 
the system is installed?


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/



Re: [OpenIndiana-discuss] HDD Upgrade problem

2012-06-12 Thread George Wilson

On Jun 12, 2012, at 11:00 AM, Bob Friesenhahn wrote:

 [nested quote of George's sd.conf example trimmed]
 
 This can obviously work once the OS is installed and it is possible to edit 
 system files.  Is there a way to apply it to the root pool when the system is 
 installed?
 
 Bob

I have not tried this, but you might be able to change sd.conf and then
run 'update_drv -f sd' prior to starting the installation.
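
A hedged sketch of that idea, run from the live/installer environment
(the sd.conf entry itself is illustrative):

# vi /kernel/drv/sd.conf     (add your VID/PID pair to sd-config-list)
# update_drv -vf sd

then start the installer as usual; the pool it creates should pick up
ashift=12 from the overridden physical block size.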

- George




Re: [OpenIndiana-discuss] Migrating root zpool from old disks to new one. only one system available

2012-06-12 Thread Hans J. Albertsson

Actually I found the following blog quite illuminating.

http://www.big-bubbles.fluff.org/blogs/bubbles/blog/2012/02/04/placeholder-migrating-a-zfs-root-pool/


It would have to be worked through for a number of specific cases, but
it is basically sound.


A bird's-eye view might be:
format the new disk to have a partition s0 starting from cylinder 1 and
a boot partition 8 on cylinder 0;
set up a zpool on s0 (using the hacked zpool binary with --blocksize if
you're not certain the disk reports a truthful 4KB block size);
create a new BE on that zpool as a clone of your current BE, and add
swap and dump volumes in the new zpool too;
zfs send/recv (using -R) anything that wasn't brought over by the beadm
cloning;
run installgrub on the new device, then edit vfstab and run dumpadm so
that swap and dump, as seen from the new environment, point to the new
volumes.


The pool name will not be rpool, and you can do another round of the above if you care.

Read that blog while doing it: this bird's-eye view isn't anywhere near complete.
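
A compressed sketch of those steps, with illustrative names only (new
disk c3t0d0, new pool rpool2, new BE oi-new; your names will differ):

# zpool create rpool2 c3t0d0s0
# beadm create -p rpool2 oi-new
# zfs create -V 4G rpool2/swap
# zfs create -V 2G rpool2/dump
# zfs snapshot -r rpool/export@migrate
# zfs send -R rpool/export@migrate | zfs recv -d rpool2
# zpool set bootfs=rpool2/ROOT/oi-new rpool2
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t0d0s0

then fix vfstab and dumpadm inside the new BE as described in the blog.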




On 2012-06-12 09:00, openindiana-discuss-requ...@openindiana.org wrote:

[digest headers and a verbatim re-quote of the original question
trimmed; see the start of this thread]




Re: [OpenIndiana-discuss] HDD Upgrade problem

2012-06-12 Thread Jim Klimov

2012-06-12 19:00, Bob Friesenhahn wrote:

On Tue, 12 Jun 2012, George Wilson wrote:
 [sd.conf example trimmed; see George's message above]


This can obviously work once the OS is installed and it is possible to
edit system files.  Is there a way to apply it to the root pool when the
system is (being) installed?


First of all, I believe this snippet belongs on a Wiki page,
and I'll try to make one to sum up all knowledge and FUD we
have about The AShift Problem ;) At least, it would be easier
to point people to this page as a common answer ;)

Second, I believe the sd.conf fix won't apply to drives used
in IDE mode via BIOS, and users might have that (although it
is an item best fixed before installation).

Third, perhaps the sd.conf settings can be re-read by reloading
the module and/or using add_drv or something like that? From
a different thread (on disabling power-saving on Dell HBAs):
"...enter into sd.conf and reload the driver via update_drv -vf sd".

Also regarding the install-time setting: if the installation
is done using the wizard, then the zpool binary at the moment
of installation should use the ashift=12 you need, without
extra command-line parameters (so as not to go hacking inside
the wizard). Earlier I did this by using a modified binary
from that blog entry instead of the system-provided one;
now I guess this can be done better with sd.conf :)

HTH,
//Jim Klimov



Re: [OpenIndiana-discuss] HDD Upgrade problem

2012-06-12 Thread Jim Klimov

2012-06-12 19:22, Jim Klimov wrote:

First of all, I believe this snippet belongs on a Wiki page,
and I'll try to make one to sum up all knowledge and FUD we
have about The AShift Problem ;) At least, it would be easier
to point people to this page as a common answer ;)


FWIW, here is the first draft:

http://wiki.illumos.org/display/illumos/The+AShift+Value+and+Advanced+Format+disks+and+ZFS

Comments/additions welcome,
//Jim



Re: [OpenIndiana-discuss] HDD Upgrade problem

2012-06-12 Thread michelle

That is absolutely stunning.

Many thanks for that. I guess I'm going to have to bite the bullet,
wait until I've got another two 3TB drives, and create a new pool.


Thanks to all for the feedback. Your patience with me is much appreciated.

On 12/06/12 17:44, Jim Klimov wrote:

 [Jim's wiki announcement, quoted in full, trimmed; see above]



Re: [OpenIndiana-discuss] Access to ZFS via CIFS from Windows regularly hangs.

2012-06-12 Thread Bob Friesenhahn

On Tue, 12 Jun 2012, John McEntee wrote:


I am having problems with an OpenIndiana storage server I have built
and I am trying to track down the cause to fix it. The current symptoms
are seen from all Windows clients (both 7 and XP), which report an
error stating:

Path File is not accessible. The specified network name is no longer
available.


The first thing to verify is your network and network interface.  Run
continuous traffic and see if there are any hiccups.  You can use
/usr/sbin/ping for testing with larger packets.


Also check the log files under /var/adm and /var/log, and the output
of 'fmdump -ev' and 'fmadm faulty'.
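
For example (Solaris ping syntax; the host name, packet size and count
are placeholders):

# /usr/sbin/ping -s fileserver 1400 1000   (1000 packets, 1400 data bytes)
# fmdump -ev
# fmadm faulty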


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/



Re: [OpenIndiana-discuss] Access to ZFS via CIFS from Windows regularly hangs.

2012-06-12 Thread Robbie Crash
I had similar issues before I enabled TLER and disabled head parking
on my WD Green drives. A quick Google search shows some evidence of
similar features on the 3TB Hitachis.

On Tue, Jun 12, 2012 at 9:55 PM, Bob Friesenhahn 
bfrie...@simple.dallas.tx.us wrote:

 [Bob's reply, quoted in full above, trimmed]




--
Seconds to the drop, but it seems like hours.

http://www.eff.org/
http://creativecommons.org/


Re: [OpenIndiana-discuss] Access to ZFS via CIFS from Windows regularly hangs.

2012-06-12 Thread Mike La Spina

Does the suspend event only occur on SMB clients or does it impact the
other storage clients when triggered by the Windows clients?
Any domain controller event errors?

dmesg output?
fmdump -eV output?
uname -a output?

Have you attempted a packet capture of the event?
snoop -o smb-client.cap <clientip>
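
For example, to capture only SMB traffic for one client (the interface
and address below are placeholders), which can then be opened in
Wireshark:

snoop -d e1000g0 -o smb-client.cap host 192.168.10.42 and port 445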

-Original Message-
From: John McEntee [mailto:jmcen...@stirling-dynamics.com] 
Sent: Tuesday, June 12, 2012 8:52 AM
To: Discussion list for OpenIndiana
Subject: [OpenIndiana-discuss] Access to ZFS via CIFS from Windows
regularly hangs.

I am having problems with an OpenIndiana storage server I have built
and I am trying to track down the cause so I can fix it. The current
symptoms are seen from all Windows clients (both 7 and XP), which
report an error stating:

Path File is not accessible. The specified network name is no longer
available.

Another symptom is that Windows Explorer hangs and the user has to
wait for it to come back.

Just waiting a while (a few minutes) and the box comes back.


I currently think the root cause is in OpenIndiana somewhere, but am
at a bit of a loss. I have tried many things and have still not fixed
it. I think the box is lightly loaded for the hardware spec, but kernel
load increases to 40% when a zfssnap is taking place.

 

Hardware spec:

2 x Xeon E6520 CPUs
48 GB RAM
Intel HC5520 motherboard
3 x LSI SAS 9211-8i cards

Currently on OpenIndiana 148.

The box is joined to a Windows 2003 domain.

 

Zpool tank is seven 3-way mirrors of 3TB Hitachi disks (21 disks in
total, zpool size of 19 TB), with 2 x SSDs providing an 8GB ZIL and a
140GB L2ARC each; default checksum, no dedup and no compression.
 

The server acts as a Windows home-directory server for 58 users (some
are laptop users, so it is just a backup location for them) and as the
main shared drive for the company of 120 users.

It is also an NFS server to a VMware vSphere 4 server hosting 10
virtual machines.

 

There are only 8 active production file systems, and 12 backup file
systems from other hosts (done out of hours).

 

Zpool iostat peaks at about 35 MB/s for the pool, and is mostly around
the 0 to 7 MB/s level.

 

Turning off time-sliderd does not stop the problem (backups run out of
hours).

 

A dtrace -n 'sched:::off-cpu { @[execname]=count()}'
used to give a sched count in the 6-to-7-figure range over 3 seconds,
but turning ACPI off with

#eeprom acpi-user-options=0x8

reduced this to 5 figures.

 

What can I do to identify the problem to be able to fix it?

 

Thanks

 

John

 

Other information:

 

dtrace -n 'sched:::off-cpu { @[execname]=count()}'
dtrace: description 'sched:::off-cpu ' matched 3 probes
^C

  gconfd-2              2
  idmapd                2
  inetd                 2
  nscd                  2
  sendmail              2
  svc.startd            2
  gnome-power-mana      3
  fmd                   4
  sshd                  4
  devfsadm              6
  fsflush               7
  nfsmapid              7
  ntpd                  7
  dtrace               13
  Xorg                 17
  gdm-simple-greet     17
  svc.configd          71
  smbd                113
  time-sliderd        138
  zpool-rpool         597
  nfsd                918
  zpool-tank         1968
  sched             80542

 

# echo hz/D | sudo mdb -k
hz:
hz:             100

# echo apic_timer::print apic_timer_t | sudo mdb -k
{
    mode = 0
    apic_timer_enable_ops = oneshot_timer_enable
    apic_timer_disable_ops = oneshot_timer_disable
    apic_timer_reprogram_ops = oneshot_timer_reprogram
}


