Re: [OpenIndiana-discuss] branding for illumos/openindiana

2011-06-25 Thread Gabriel de la Cruz
Indigo is one of the seven colors of the rainbow, and quite a nice blue. I
don't mean actually using that name (SGI did, once upon a time); I'm just
pointing out another "indi" thing that came to mind.


On Sat, Jun 25, 2011 at 9:18 AM, Benediktus Anindito  wrote:

> On Sat, Jun 25, 2011 at 6:45 AM, Christopher Chan
>  wrote:
> > On Saturday, June 25, 2011 01:21 AM, Mark Humphreys wrote:
> >>
> >> On Fri, Jun 24, 2011 at 9:44 AM, Kent Watsen  wrote:
> >>
> >>>
> >>> "Open Indiana" is a goofy name, even considering its history, but the
> >>> acronym is OK.
> >>>
> >>>
> >> How about just shortening it to "OpenIndy"?  :)
> >>
> >
> > +1
> >
> > :-D
> >
> OpenIndy looks fancy :))
>
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Brainstorming for OpenIndiana server

2011-06-25 Thread Gabriel de la Cruz
Nice tutorial, thanks



On Sat, Jun 25, 2011 at 4:34 AM, Chris Mosetick  wrote:

> I have OpenIndiana running on several machines. On one of them, I'm using
> OpenIndiana b148 as my main operating system, storage server and virtual
> machine host, all in one physical machine. It has dedicated ZIL and L2ARC
> devices to increase performance. I'm using VirtualBox 4.0.x as my
> hypervisor, and I'm *not* using VirtualBox's default .vdi files for the
> virtual machine disks; instead I'm using ZFS zvols in conjunction with
> VirtualBox's built-in "raw disk access" (which in this case is not
> technically raw). It only takes a couple of command lines to set up, and
> the performance has been great for me so far. I have Linux and Windows
> guests running this way; even my firewall, running m0n0wall, runs in there.
>
> I highly recommend using OpenIndiana as your VM host in conjunction with
> raw disk access, and running your Linux machines as VMs. You will need a
> lot more RAM regardless of what you want to do.
>
> Here is an example:
>
> # VirtualBox 4.0.x has already been installed.
> # "lift" is the name of a dedicated storage zpool.
> # "zimbra" is the name of the virtual machine in VirtualBox, and is also
> # used as the name of its zvol.
>
> zfs create -s -V 100G -o volblocksize=128K lift/vboxhosts/zimbra
> zfs set compression=gzip-6 lift/vboxhosts/zimbra
> <<create the "zimbra" VM out of the VirtualBox GUI>>
> chown admin:sysadmin /dev/zvol/rdsk/lift/vboxhosts/zimbra
> VBoxManage internalcommands createrawvmdk -filename
> /lift/vboxhosts/zimbra.vmdk -rawdisk /dev/zvol/rdsk/lift/vboxhosts/zimbra
> VBoxManage storageattach zimbra --storagectl "SAS Controller" --port 0
> --device 0 --type hdd --medium /lift/vboxhosts/zimbra.vmdk
> <<start the VM with vboxheadless>>
>
>
>
> On Wed, Jun 22, 2011 at 1:50 PM, Alex Smith (K4RNT)
> wrote:
>
> > I'm thinking about setting up a RAID-Z training server, with Linux as
> > a base. Yes, I know it's weird, but hear me out.
> >
> > I have an AMD Opteron server that I'm repurposing from my old
> > workstation. I may be putting a hot-swap cage in here, and using the
> > onboard nforce (ahci) controller with a Linux host, running
> > OpenIndiana in a VirtualBox instance. Could it be possible to pass the
> > raw devices to the VM via a SATA controller, and then run the drives
> > in a RAID-Z configuration? Unless the root pool can be set up as RAID-Z
> > at install time, I can use a virtual disk for it.
> >
> > Host will have an AMD Opteron 1352 with 4GB DDR2 RAM.
> >
> > --
> > " ' With the first link, the chain is forged. The first speech
> > censured, the first thought forbidden, the first freedom denied,
> > chains us all irrevocably.' Those words were uttered by Judge Aaron
> > Satie as wisdom and warning... The first time any man's freedom is
> > trodden on we’re all damaged." - Jean-Luc Picard, quoting Judge Aaron
> > Satie, Star Trek: TNG episode "The Drumhead"
> > - Alex Smith (K4RNT)
> > - Sterling, Virginia USA
> >
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] realpath(3C)

2011-06-25 Thread Frank Lahm
Hi,

On the latest OpenSolaris build (snv_134b), realpath() doesn't accept NULL
as its second argument. On Solaris 11 Express it does, giving the semantics
described in `man realpath`:

...
DESCRIPTION
 The realpath() function derives, from the  pathname  pointed
 to  by  file_name, an absolute pathname that resolves to the
 same directory entry, whose resolution does not involve ".",
 "..",  or  symbolic links. If resolved_name is not null, the
 generated pathname is stored as a null-terminated string, up
 to  a  maximum  of  {PATH_MAX}  (defined in limits.h(3HEAD))
 bytes  in  the  buffer  pointed  to  by  resolved_name.   If
 resolved_name is null, the generated pathname is stored as a
 null-terminated string in a buffer that is allocated  as  if
 malloc(3C) were called.
...
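
For a quick way to check which behaviour a given libc provides, here is a
minimal C sketch (the path is only an example) exercising both call styles;
on a libc without the NULL extension, the second call simply returns NULL:

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
            char buf[PATH_MAX];

            /* Classic form: caller supplies the buffer. */
            if (realpath("/etc/../etc/passwd", buf) != NULL)
                    printf("buffer form: %s\n", buf);

            /* Extended form: NULL second argument, result is malloc()ed. */
            char *p = realpath("/etc/../etc/passwd", NULL);
            if (p != NULL) {
                    printf("malloc form: %s\n", p);
                    free(p);
            } else {
                    perror("realpath(..., NULL)");
            }
            return (0);
    }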

What's the case on OpenIndiana?

Thanks!

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Question about drive LEDs

2011-06-25 Thread Fred Liu
> In brief, I cloned a bootable SATA disk from the VMware (now
> discontinued) Sun Unified Storage simulator, and then created the
> graphics and definitions to match a Supermicro 4U chassis and system
> board.
> Everything worked exactly as a real one would, including disk locator
> LEDs, disk present/absent graphics, etc., and it could be updated with
> new firmware.
> 
> Since it was originally designed as a generic appliance kit and there
> was mention of it being made available as software, this was relatively
> easy to do. It is definitely Solaris based, but has some quite different
> drivers etc. than OpenSolaris.
> 
> IPMI is required to make it work.
> 
> There are quite a few xml files to configure, along with graphics.
> It is also very easy to crash the management svc with invalid
> definitions.
> 
> 
> sample definition:
> 
> <<the sample XML topology definition was not preserved here; only
> fragments such as name-stability='Private' data-stability='Private'
> remain>>
> 
> Mark.
> 

Great! Many thanks.

What is the hardware spec of the Supermicro appliance?


Thanks.

Fred

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] realpath(3C)

2011-06-25 Thread Alan Coopersmith
On 06/25/11 04:33 AM, Frank Lahm wrote:
> Hi,
> 
> on latest Opensolaris snv134b realpath doesn't take NULL as second
> arg. On Solaris 11 Express it does, giving the semantics described in
> `man realpath`:
> 
> ...
> DESCRIPTION
>  The realpath() function derives, from the  pathname  pointed
>  to  by  file_name, an absolute pathname that resolves to the
>  same directory entry, whose resolution does not involve ".",
>  "..",  or  symbolic links. If resolved_name is not null, the
>  generated pathname is stored as a null-terminated string, up
>  to  a  maximum  of  {PATH_MAX}  (defined in limits.h(3HEAD))
>  bytes  in  the  buffer  pointed  to  by  resolved_name.   If
>  resolved_name is null, the generated pathname is stored as a
>  null-terminated string in a buffer that is allocated  as  if
>  malloc(3C) were called.
> ...
> 
> What's the case on Openindiana ?

illumos seems to have inherited the fix from the OpenSolaris sources
that S11X is using:

http://src.illumos.org/source/diff/illumos-gate/usr/src/lib/libc/port/gen/realpath.c?r2=%2Fillumos-gate%2Fusr%2Fsrc%2Flib%2Flibc%2Fport%2Fgen%2Frealpath.c%4013105%3A48f2dbca79a2&r1=%2Fillumos-gate%2Fusr%2Fsrc%2Flib%2Flibc%2Fport%2Fgen%2Frealpath.c%406812%3Afebeba71273d

-- 
-Alan Coopersmith  alan.coopersm...@oracle.com
 Oracle Solaris Platform Engineering: X Window System


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] oracle removes 32bit x86 cpu support for solaris 11 will OI do same?

2011-06-25 Thread Andrew Gabriel

Michael Stapleton wrote:
While we are talking about 32- vs. 64-bit processes:
Which one is better?

Faster?
More efficient?
Initially, assuming a 32 versus 64 bit build doesn't change any
algorithms...


On x86, a 64 bit build of the same program will typically run ~50%
faster if it's CPU-bound, because more registers are available for the
compiler/optimizer to use. There's a wide variance depending on what the
program does (I have an example which gets much better than a 50% gain).
If it's not CPU-bound (and most things aren't), it makes no difference.
However, if the larger pointers and data items push the 64 bit program's
working set size beyond what fits in the CPU cache, whereas the 32 bit
version does fit in the cache, then you can in theory see the 32 bit
version winning.


On SPARC, a 64 bit build of the same program does not benefit from any
extra registers as it does on x86, but it does pay the price of a larger
working set size, and I typically see a 10-14% performance reduction for
a CPU-bound program that has simply been rebuilt 64 bit.


However, if you can use the 64 bit address space to change the
algorithms used by your app, such as mmap()ing files rather than doing
loads of lseek/read/write ops, then you may see additional gains, and on
SPARC these will often more than cancel out the reduction in CPU
performance.
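
As a rough illustration of that last point, a 64 bit process can simply map
a whole file and index into it where a 32 bit one might have to juggle
lseek()/read() windows. A minimal sketch (error and bounds checking mostly
omitted):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Copy len bytes starting at offset from path into dst via mmap(). */
    int
    read_record(const char *path, off_t offset, void *dst, size_t len)
    {
            int fd = open(path, O_RDONLY);
            if (fd == -1)
                    return (-1);

            struct stat st;
            if (fstat(fd, &st) == -1) {
                    (void) close(fd);
                    return (-1);
            }

            /* With a 64 bit address space, mapping the whole file is cheap. */
            void *base = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
            (void) close(fd);   /* the mapping stays valid after the close */
            if (base == MAP_FAILED)
                    return (-1);

            (void) memcpy(dst, (char *)base + offset, len);
            (void) munmap(base, st.st_size);
            return (0);
    }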


I wouldn't personally bother changing anything much which is shipped 
with the OS (very rarely is the performance of things in /usr/bin an 
issue). However, I would suggest taking these factors into account when 
building the key applications your system is going to run, if you are 
CPU-bound.
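
With GCC, for example, switching between the two models is usually just a
matter of the -m32/-m64 flag (file names here are only illustrative; recent
Sun Studio compilers accept the same flags):

    gcc -m64 -O2 -o myapp myapp.c     # 64 bit build
    gcc -m32 -O2 -o myapp32 myapp.c   # 32 bit build, for comparison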


--
Andrew

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] Intel Driver causes a lot of syscalls

2011-06-25 Thread raichoo

 Hi everyone,

I already pasted this in the Illumos Forums but figured that this might 
be a better place.

(I'm new here so you might also direct me to a better ML than this one)

I recently installed OpenIndiana on my ThinkPad X201. Everything works
fine, but Xorg produces a lot of syscalls (approx. 9000/sec). I dug a
little deeper and found out that it's actually the intel driver calling
ioctl to communicate with the drm driver for some compositing work.

I tested this under 148 and 151 and the issue is present in both.
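
For anyone who wants to reproduce this, a generic DTrace one-liner (run as
root) to see which process is generating the syscalls is something like:

dtrace -n 'syscall:::entry { @[execname] = count(); } tick-1s { printa(@); trunc(@); }'

The ustack() aggregation further down narrows it to the intel driver.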

Here is some information I have gathered:

root@ayanami:/home/raichoo# mpstat 1
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0  848   0   86   619  200  4737   28   170  30934   3   0  93
  1  616   0   83   382  106  5046   27   170  30124   2   0  94
  2  359   0   72   384  135  4326   24   150  21683   1   0  96
  3  164   0   64   320   69  4115   20   160  20362   1   0  96
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0   19   0   42   904  416  5810   25   300   6590   4   0  96
  10   0   12   322   93  4910   29   310   1262   0   0  98
  20   0   72   520  167  7783   31   340 105783   3   0  94
  30   0   18   394  127  5414   27   570  11152   1   0  97
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  00   0   18   934  421  6291   1100   6401   4   0  95
  10   03   478  205  4460   1520   2900   1   0  99
  20   00   368  112  4490   1410  87804   3   0  93
  30   09   432   72  7020   10   160  12104   1   0  95
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  00   0   30   787  350  6930   1200   8540   4   0  96
  10   0   34   420  106  5183   1300  65783   2   0  95
  20   0   33   318   94  2732830  23171   1   0  98
  30   0   24   556  177  7980   1450  12942   2   0  96
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  00   0   45   840  365  6752   1160  38682   5   0  93
  10   0   39   551  146  7024   2270  62304   2   0  94
  20   0   30   472  173  4430   2110   4561   1   0  98
  30   06   182   39  1501   1020   1520   0   0 100


root@ayanami:/home/raichoo# dtrace -n 'syscall::ioctl:entry/execname == 
"Xorg"/{@[ustack()] = count();} tick-1s{printa(@); trunc(@);}'

[...]
  libc.so.1`ioctl+0xa
  intel_drv.so`i965_composite+0x4b2
  libexa.so`exaTryDriverCompositeRects+0x605
  libexa.so`exaCompositeRects+0x235
  libexa.so`exaGlyphsToMask+0x2a
  libexa.so`exaGlyphs+0x7d1
  Xorg`damageGlyphs+0x256
  Xorg`ProcRenderCompositeGlyphs+0x529
  Xorg`Dispatch+0x3b4
  Xorg`main+0x673
  Xorg`0x46e2dc
 7229



pci bus 0x cardnum 0x00 function 0x00: vendor 0x8086 device 0x0044
 Intel Corporation Core Processor DRAM Controller

pci bus 0x cardnum 0x02 function 0x00: vendor 0x8086 device 0x0046
 Intel Corporation Core Processor Integrated Graphics Controller

pci bus 0x cardnum 0x16 function 0x00: vendor 0x8086 device 0x3b64
 Intel Corporation 5 Series/3400 Series Chipset HECI Controller

pci bus 0x cardnum 0x16 function 0x03: vendor 0x8086 device 0x3b67
 Intel Corporation 5 Series/3400 Series Chipset KT Controller

pci bus 0x cardnum 0x19 function 0x00: vendor 0x8086 device 0x10ea
 Intel Corporation 82577LM Gigabit Network Connection

pci bus 0x cardnum 0x1a function 0x00: vendor 0x8086 device 0x3b3c
 Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host 
Controller


pci bus 0x cardnum 0x1b function 0x00: vendor 0x8086 device 0x3b56
 Intel Corporation 5 Series/3400 Series Chipset High Definition Audio

pci bus 0x cardnum 0x1c function 0x00: vendor 0x8086 device 0x3b42
 Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1

pci bus 0x cardnum 0x1c function 0x03: vendor 0x8086 device 0x3b48
 Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 4

pci bus 0x cardnum 0x1c function 0x04: vendor 0x8086 device 0x3b4a
 Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 5

pci bus 0x cardnum 0x1d function 0x00: vendor 0x8086 device 0x3b34
 Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host 
Controller


pci bus 0x cardnum 0x1e function 0x00: vendor 0x8086 device 0x2448
 Intel Corporation 82801 Mobile PCI Bridge

pci bus 0x cardnum 0x1f function 0x00: vendor 0x8086 device 0x3b07
 Intel Corporation Mobile 5 Series Chipset LPC Interface Controller

pci bus 0x cardnum 0x1f function 0x02: vendor 0

Re: [OpenIndiana-discuss] write speeds faster with no ZIL and L2ARC

2011-06-25 Thread Lucas Van Tol

You might want to look at the asvc_t and %w / %b columns in iostat -xn 1.
The asvc_t should be very low on the Intel SSDs, preferably less than one.
You might also want to test with only one of the SSDs in use at a time,
either L2ARC or ZIL.

Offhand, this sounds a bit odd, especially since a single set of 12 disks in
raidz2 isn't particularly fast.
If your F40 is actually honoring cache flushes as expected, it would not
have very fast random IO, being an MLC drive; the X25-E may still be faster.
Perhaps putting one or two of the F40s as cache, and one or two of the
X25-Es as logs, might work better?
If you have the internal slots and hardware to spare, you can add multiple
log/cache devices to a single pool.
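
For example (pool name taken from the original post; device names are only
placeholders):

zpool add tank log mirror c7t0d0 c7t1d0    # mirrored slog / ZIL
zpool add tank cache c7t2d0 c7t3d0         # two L2ARC cache devices

Then watch asvc_t and %b for those devices under load with iostat -xn 1.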

-Lucas Van Tol

> Date: Fri, 24 Jun 2011 18:19:23 -0700
> From: cmoset...@gmail.com
> To: openindiana-discuss@openindiana.org
> Subject: [OpenIndiana-discuss] write speeds faster with no ZIL and L2ARC
> 
> Hi,
> 
> *Problem:*
> write speeds are faster when no L2ARC or ZIL is configured.
> 
> *Our current setup:*
> We are currently running OpenIndiana b148 (upgraded from b134.) Supermicro
> X8DTH-i/6/iF/6F, single Xeon E5504, 24GB ram. A single, main storage pool is
> running pool version 28, populated with 12 WD RE 7200rpm SATA disks in a
> RAIDZ2. This pool has two 32GB Intel X25-E SSD's for ZIL and L2ARC connected
> directly to SATA ports on the motherboard. The entire system has been in
> operation for about one year with minimal issues. About a week ago we
> started seeing slow write performance so troubleshooting began.
> 
> *What we have done so far / what we know:*
> We removed the ZIL and L2ARC SSDs from the server (zpool remove tank c6t1d0
> c6t0d0), connected them to a Windows machine, and ran the Intel SSD Toolbox
> on them (a Windows-only application).
> 
> Using Intel SSD Toolbox 2.0.2.000, we see the following values:
> 
> 09 Power-On Hours Count:
>   ZIL: Raw: 6783
>   L2ARC:  Raw: 8562
> 
> E9 Media Wearout Indicator:
>   ZIL: Raw: 0  Normalized: 99  Threshold: 0
>   L2ARC:  Raw: 0  Normalized: 99  Threshold: 0
> 
> E1 Host Writes
>   ZIL: Raw: 47 TB  Normalized: 200  Threshold: 0
>   L2ARC:  Raw: 67 TB  Normalized: 199  Threshold: 0
> 
> Looking at this, we can only conclude that either:
>  1) Intel X25-E drives have no "wear" even after ~50-60 TB of writes
>  2) The wearout indicator is broken and unreliable.
> 
> Here are some write tests we have performed using rsync to transfer a 3.5GB
> ISO file from my workstation over Gigabit Ethernet to a file system on this
> server. All tests go to the same file system unless otherwise noted, and
> after each test the already-transferred bits were removed from the server.
> 
> TRANSFER AMOUNT   TIME TAKEN   NOTES/CONDITIONS
> 1GB               6:00min      tank/shares/sw (compression=gzip-6), X25-E ZIL and L2ARC present
> 1GB               50sec        rpool/home/chris (two SATA disks in mirror, no compression)
> 1GB               3:30         tank pool without L2ARC
> 1GB               1:30         tank pool, no L2ARC and no ZIL
> 500MB             4:30         tank pool, brand new ZIL, Corsair F40GB2 (40GB)
> 500MB             3:50         tank pool, new ZIL and new L2ARC, both Corsair F40GB2
> 800MB             6:00         tank pool, new L2ARC and new ZIL after ~3 hours "warm up" (l2arcstat went from 47MB to 22GB)
> 
> So our problem is that even with brand new SSDs, which have MUCH higher
> maximum write speeds than our "old" SSDs, transfers to the storage pool
> are quicker *without* configured log and cache devices than when they are
> being used. FWIW, looking at iostat -exn while running one of the rsync
> tests above, the time things are taking seems to match up with the kw/s
> column.
> 
> Can anyone provide insight into this slow write situation when the ZIL and
> L2ARC are present?
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss