Re: [OpenIndiana-discuss] Zpool replacing forever

2019-01-21 Thread Andrew Gabriel
On 21/01/2019 02:37, Gary Mills wrote:
> On Sun, Jan 20, 2019 at 08:16:44PM +0000, Andrew Gabriel wrote:
>> zfs is keeping the old disk around as a ghost, in case you can put it
>> back in and zfs can find good copies of the corrupt data on it during
>> the resilver. It will stay in the zpool until there's a clean scrub,
>> after which zfs will collapse out the replacing-0 vdev. (In your case,
>> you know there is no copy of the bad data so this won't help, but in
>> general it can.)
> I see.  That's a good explanation, one that I didn't see anywhere
> else.  I suppose that the man page for `zpool replace' should advise
> you to correct all errors before using the command.  That way, the
> confused status I saw would not arise.


Actually, you want to get that disk replacing ASAP to reduce risk of 
catastrophic data loss, so I would not suggest holding off the replace 
until you sorted out the errors. You could start on sorting out the 
errors if you are waiting for a replacement disk to arrive on site, but 
getting it replacing ASAP is the top priority.

I've had a few cases of a second drive failing during a resilver. If we 
hadn't been some way into the first resilver, we would have lost about 4 
billion files, but in the worst case of the double disk fails on RAIDZ1 
I saw, only 11,000 files out of over a billion were lost IIRC (and in 
most of the RAIDZ1 double disk fails, just a couple of hundred files were 
lost). 11,000 files is about half an hour to restore, versus 2 months to 
restore the whole pool.


-- 

Andrew Gabriel

___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Zpool replacing forever

2019-01-20 Thread Andrew Gabriel
On 20/01/2019 15:11, Gary Mills wrote:
> The status of rpool is like this:
>
>  NAME              STATE     READ WRITE CKSUM
>  rpool             DEGRADED    23     0    15
>    mirror-0        DEGRADED    46     0    30
>      replacing-0   DEGRADED     0     0    20
>        c5t0d0s0/old  FAULTED    0     0     0  corrupted data
>        c5t0d0s0    ONLINE       0     0    20
>    c5t1d0s0        ONLINE       0     0    76
>  
>  errors: 5 data errors, use '-v' for a list
>
> To get here, I initially had a single SSD as rpool.  It had started
> developing errors.  The number of errors was increasing.  That was
> c5t0d0s0 .  I installed a second SSD, c5t1d0s0, and added it to rpool,
> making it a mirror.  The resilver copied all of the data, including
> the errors.
>
> Then, I removed the bad SSD and replaced it with another new one, at
> the same device name.  I started the replacement with the command:
>
>  # zpool replace -f rpool c5t0d0s0
>
> That started another resilver, as I expected.  However, it didn't
> remove the old SSD from the mirror.  As far as I can tell, it did
> copy all of the data.  What do I do now?
>
> All five of the data errors are either in old snapshots or old BEs.
> I can destroy them, if that will help.

You will need to delete the corrupt files (and snapshots if the corrupt 
files are in any) and then run a scrub so zfs knows there is no more 
corrupt data in the zpool.

zfs is keeping the old disk around as a ghost, in case you can put it 
back in and zfs can find good copies of the corrupt data on it during 
the resilver. It will stay in the zpool until there's a clean scrub, 
after which zfs will collapse out the replacing-0 vdev. (In your case, 
you know there is no copy of the bad data so this won't help, but in 
general it can.)

So to fix, delete the corrupt files, and any snapshots they're in.
Then run a scrub.
When the scrub finishes, ZFS will collapse out the replacing-0 vdev.
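
For completeness, the whole recovery sequence might look like this (a sketch only: the snapshot and file names below are made-up placeholders, so substitute whatever 'zpool status -v rpool' actually lists):

```
# zpool status -v rpool                   # list the corrupt files/snapshots
# zfs destroy rpool/ROOT/oldbe@some-snap  # placeholder: destroy a snapshot holding corrupt data
# rm /path/to/corrupt-file                # placeholder: remove a corrupt file in a live dataset
# zpool scrub rpool
# zpool status rpool                      # after a clean scrub, replacing-0 is collapsed out
```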

-- 
Andrew Gabriel




Re: [OpenIndiana-discuss] failsafe boot howto on older OpenIndiana system

2018-01-24 Thread Andrew Gabriel

On 24/01/2018 22:55, Jerry Kemp wrote:
Had a bad power outage (another story unto itself) at home, 
apparently took multiple drives out on an older OpenIndiana 
(pre-Hipster) system.  I believe the OS (SSD) drive is OK, and I 
believe that I probably forgot to set the zfs fail=continue switch on 
the other pools.


My yahoo-fu must be off, I've been looking at wiki.openindiana.org and 
docs.openindiana.org .


I'm sure I'm just missing the obvious, but can someone share the 
details of a failsafe boot for an older, pre-Hipster OpenIndiana 
install please?


At the grub menu, move to the BE you want to boot and type 'e' (for edit).
Use cursor keys and add "-m milestone=none" to the boot command line 
options.
Come out of edit mode (can't remember how off-hand, but it probably 
tells you on bottom of screen, maybe Esc or Return).

Type 'b' to boot.
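
For example, on a typical pre-Hipster install the edited grub kernel line might end up looking like this (a sketch: the kernel path shown is the usual x86 one, but check your own menu.lst; the only addition is the trailing -m option):

```
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS -m milestone=none
```

Once you have finished repairs, 'svcadm milestone all' brings the rest of the services up without another reboot.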

--
Andrew



Re: [OpenIndiana-discuss] zpool on the second partition of an external disk

2017-09-08 Thread Andrew Gabriel

On 08/09/2017 14:44, James Carlson via openindiana-discuss wrote:

On 09/07/17 16:50, Apostolos Syropoulos via openindiana-discuss wrote:

Ok, but what is the problem?
What is the output and stderr of your zpool create cXtYdZp2 command?
Does it give any error?

After more searching I concluded that the command should be
# zpool create -f utank c13t0d0s2

The logical node was /dev/rdsk/c13t0d0p0 and format --> partition --> 
print showed that slice 2 is the one where I can store data.
I have also used fdisk to delete all partitions and then parted to 
create the NTFS partition. In order to create the file system I have used
# zfs create utank/External
# chown -R user:group /utank/External

Now I can use the partition!

p2 is, by convention, "whole disk" when using old-style partitioning.
If you're using that and you've partitioned the disk, I think you've
trashed your NTFS partition or (worse) you have an overlap.

Are you sure?  What exactly does "format" say about the partition map?



Um...

On x86, p0 is the whole disk, and p1-4 are the 4 primary FDISK partitions.

s0-15 are slices in the Solaris FDISK partition, with s2 by convention 
being the whole Solaris FDISK partition, overlapping all the other 
slices in the Solaris FDISK partition.



On SPARC, there is no FDISK partitioning. s0-7 are slices on the disk, 
and s2 is the whole disk by convention, overlapping all other slices on 
the disk.
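
Putting that together for the disk mentioned earlier in the thread (illustrative only; which of these nodes actually exist depends on how the disk is partitioned):

```
/dev/dsk/c13t0d0p0   whole disk (x86)
/dev/dsk/c13t0d0p1   first primary FDISK partition
/dev/dsk/c13t0d0s2   whole Solaris FDISK partition (x86), or whole disk (SPARC), by convention
/dev/dsk/c13t0d0s0   slice 0 within the Solaris FDISK partition
```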



--

Andrew




Re: [OpenIndiana-discuss] ZFS and encryption

2017-07-24 Thread Andrew Gabriel

For OpenZFS, there's a pull request open to add ZFS encryption.
https://github.com/openzfs/openzfs/pull/124
It's had an enormous amount of review over the last year, including 
expert security review. Still going through testing.


So hopefully, not far off.

On 24/07/2017 18:48, Kai Windle wrote:

Hi all,

I'm just making a quick inquiry as to whether ZFS has encryption built into
OI?
I've tried googling around but nothing appears to be giving me a definitive
yes or no answer.

If encryption is not supported via ZFS how would I go about encrypting my
entire hard drive?

Sorry I'm still new to Openindiana

Many thanks


Kai.




Re: [OpenIndiana-discuss] dell r730xd + SATA

2017-04-28 Thread Andrew Gabriel

On 28/04/2017 22:57, Nikola M wrote:

On 04/28/17 09:09 PM, jason matthews wrote:


Is anyone using the R730XD with its 3x port expanders successfully
with SATA drives? Yes, I am aware of the conventional wisdom.

I'll pass some general knowledge, that may not be suitable for your
exact hardware, but could be used as the pointers when you compare your
specs.

Using SAS port expanders is inadvisable because they contain chips (with 
their own small firmware) that can go crazy, and with expanders the disk 
controller is not in direct control of the drives, which can be really 
bad for ZFS.
That goes especially when using SAS expanders with SATA drives, a 
nightmarish worst case that is actually not supported under illumos 
(and especially when mixing SAS and SATA drives on the same SAS expander).
Even using SATA drives on a SAS controller's direct ports, without 
expanders, is discouraged and not a good idea.
SAS controller -- SAS drives.  SATA controller -- SATA drives.

The controller is best to be working in JBOD mode. Don't use hardware 
RAID levels; use controllers that can do JBOD. (Take care, since JBOD 
usually gets intentionally disabled on hardware RAID controllers, so you 
have a vendor lock-in for their hardware.)
Software RAID is what ZFS is for, to free you from hardware 
constraints. You can just pop disks out of one machine and pop them 
into another JBOD machine and it just works without any configuration 
(even between x86 and SPARC; ZFS is endian-agnostic). That is not 
possible with hardware RAID controlling drives.


I have a number of R730s working with the 2x expanders and Intel DC
S37{0,1}0 SSDs. It is time to order again and the 24 bay 730XD is a
seductress in terms of storage options. I just dont know if it will
work well mixing expanders and SATA.

No it won't. It might look OK, or you could seem happy, but at the first 
sign of trouble or when reporting a bug, you could learn the hard way 
that using SATA drives with SAS expanders is not a supported 
configuration. Avoid configurations with expanders at all; connect 
drives directly to the controller ports.

By the way, the internet is full of expander warnings. Hope this helps.


It might help to explain when and why it doesn't work.

SATA drives think nothing of dropping the phy (physical interface) 
whenever they fancy, e.g. drive controller resetting itself, or doing 
some error recovery. Generally, their firmware is pretty grotty in this 
area, and this is expected and ignored. It's not a problem when you have 
a one-to-one link to the host controller.


The problem arises when you go through a SAS expander.

SAS drives don't drop their phy if they can possibly avoid it, as they 
know it's expensive for the SAS fabric. When a SATA drive does this, the 
expander ends up internally re-enumerating all the drives to work out 
who went and who is still there, which [multi-]paths still work, etc. 
This will normally cause all their phys to be dropped.


What this means in practice is that once you get a SATA drive which 
starts going bad, its phy will start going up and down. The expander 
will take all the other SATA drives' phys up and down too as it keeps 
re-enumerating to work out which drives are still there. The host will 
now start to see lots of transport errors and timeouts across all the 
SATA drives. The bottom line is, when one SATA drive starts going bad, 
you will usually find you get errors reported against lots of them, and 
there's no way to find the culprit without taking the storage array down 
and testing access to each disk one at a time. That's a non-starter in 
any Enterprise environment.


There are some limited cases where SATA drives can work, and that's 
where there's only one of them on the SAS expander. Sometimes this was 
used to put a SATA SSD into an otherwise all-SAS array.


If you want to build a cheap array that actually works, you should use 
nearline SAS drives rather than SATA drives.


--
Andrew Gabriel



Re: [OpenIndiana-discuss] ZFS on Openindiana vs ZFS on oracle linux

2017-04-21 Thread Andrew Gabriel
That would be any Illumos-based distro, FreeBSD (and FreeNAS), Ubuntu, 
and maybe Gentoo.


I looked on the Oracle Linux web pages and blog, and there's no mention 
of ZFS anywhere.
The Oracle Linux folks were very anti ZFS because they developed BTRFS 
and saw ZFS as a competitor, although I think the BTRFS development team 
is now mainly at Facebook.


On 21/04/2017 19:12, David Johnson wrote:
I might be using the wrong terminology, but by "native" I meant 
without doing any extra work. Some of the operating systems in your 
link are rumored to not work out of the box. An Oracle tech support 
rep. is the one who told me about the native support for ZFS in 
Oracle Linux.

On 04/21/17 11:01, Andrew Gabriel wrote:

On 21/04/2017 18:24, David Johnson wrote:
Does anyone have any editorial comments on how well ZFS is handled 
on Openindiana
vs. Oracle Linux? It looks like these may be my only two choices if 
I want to
have an OS that has native ZFS support, and allow me to get updated 
security patches. 


I'm not aware that Oracle Linux has native ZFS support at all.
However, there are more than you mentioned - see 
http://open-zfs.org/wiki/Distributions











Re: [OpenIndiana-discuss] ZFS on Openindiana vs ZFS on oracle linux

2017-04-21 Thread Andrew Gabriel

On 21/04/2017 18:24, David Johnson wrote:
Does anyone have any editorial comments on how well ZFS is handled on 
Openindiana
vs. Oracle Linux? It looks like these may be my only two choices if I 
want to
have an OS that has native ZFS support, and allow me to get updated 
security patches. 


I'm not aware that Oracle Linux has native ZFS support at all.
However, there are more than you mentioned - see 
http://open-zfs.org/wiki/Distributions


--
Andrew Gabriel



Re: [OpenIndiana-discuss] ISA card support in OI

2017-04-10 Thread Andrew Gabriel

On 11/04/2017 05:41, STEVENS Nigel wrote:

[@@ THALES GROUP INTERNAL @@]
Hi,

Does OI support using ISA cards?
I have a project porting a legacy system from Solaris 2.6 onto a new h/w 
platform and OI_151a7 (but willing to use any version) and cannot get a 
critical ISA card working. The card is bespoke to the project and cannot be 
changed. The new hardware is a PCI SBC hosted on a mixed PCI/ISA backplane.
I've seen online blogs saying that ISA support was removed from Solaris 8 
onwards so worries me that OI doesn't  support it.


x86 PCs still have a number of built-in peripherals which are on the 
ISA bus (serial ports, printer port, and sometimes PS/2 keyboard/mice 
ports) which are still supported, so generic ISA bus support is still 
there. (Internally, it's implemented as the LPC - Low Pin Count - bus 
in motherboards, but that's transparent to software.)

Solaris removed support for ISA bus cards by removing the drivers for 
the individual cards, but I believe the generic ISA bus support is all 
still there, and some people just copied the removed drivers back from 
the previous release and they worked just fine. Sun still had customers 
running industrial process control from their own ISA cards after this, 
and that still had to work.

One other thing that went was support for ISA plug'n'play (auto 
configuration). That was only in the DCA (Device Configuration 
Assistant), which was replaced in Solaris 10 U1 if I recall correctly 
with newboot, and we didn't implement any in-kernel replacement for the 
DCA's ISA plug'n'play support. This probably means you would have to 
explicitly configure any ISA bus card in the driver's .conf file, as it 
won't be automatically discovered from the system's ACPI tables unless 
it's an integrated peripheral on the motherboard.
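
As an illustration, a driver .conf entry for an ISA card might look something like this, modelled on the old asy.conf serial-port entries (the driver name, I/O address, size, and IRQ here are invented and would have to match the actual card):

```
# /kernel/drv/mycard.conf - hypothetical ISA card configuration
# reg=<bustype>,<base address>,<size>; bustype 1 means I/O space
name="mycard" parent="isa" reg=1,0x300,8 interrupts=5;
```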

Having said that, I don't have any system with ISA bus which has anywhere
near enough memory to boot any current Solaris/Illumos software, so I can't
test it.

--
Andrew Gabriel



Re: [OpenIndiana-discuss] The end is near

2017-01-20 Thread Andrew Gabriel

On 20/01/2017 17:29, John D Groenveld wrote:

In message , Fred Liu writes:

Nexenta has to co-work with hardware OEMs since NexentaStor is just
software. As far as I know, they rolled out their so-called
software-defined storage appliances with Dell, PogoLinux, Supermicro,
etc. I even installed NexentaStor on generic x86 hardware (I forgot the
detailed configurations).

With a bit of work, I believe Gea's napp-it will run on top
of OI:
https://www.napp-it.org/downloads/openindiana_en.html

Not sure if these are good-enough solutions compared to
low-volume, high-margin Oracle ZFS Storage appliances
and if not, what features are missing from OI/Nexenta/illumos.


This all depends what experience you have (Solaris, ZFS), and what level 
of support you need, High Availability or not, etc. Some of the 
solutions will hide it all behind a GUI, and at the other extreme, you 
can roll your own using the standard Solaris/Illumos commands. Some 
allow you to add specific software you might need (e.g. your own 
monitoring), some will turn a blind eye although not really allowing it, 
and others will not allow it at all. Again - depends what you need. If 
you need a storage server as opposed to a storage appliance, then you 
will probably need to roll your own.


The Oracle ZFS Storage did get nearer to realistic pricing a few years 
ago (and might still be - I'm out of touch with it now), but when I was 
working for one of the other ZFS openstorage companies, I never saw 
Oracle try to sell their ZFS Storage Appliance into any of the large 
opportunities which came up, even though it would have been ideal in 
many cases. I think the Storage folks there probably never understood 
it, and back when I was still in Solaris Sales, it was mainly us who 
sold it even though it was supposed to be the responsibility of Storage. 
There's been no Solaris Sales team for years, so probably no one has 
been much pushing it for a long time.


--

Andrew




Re: [OpenIndiana-discuss] VirtualBox v5.1.0 on hipster lastest

2016-07-13 Thread Andrew Gabriel

On 13/07/2016 22:24, Volker A. Brandt wrote:

Adam Števko writes:

anyone interested in packaging virtualbox? That would help everybody.

I have not followed any of the upstream virtualbox mailing lists.
Does anyone know why they publish only a SVR4 package for Solaris?
That package is IPS-aware and checks several IPS dependencies under
Solaris 11.

As far as I can tell, the SVR4 package would have to be converted
to an IPS package, and the various scripts run during pkgadd would
have to be merged in an SMF assembly service.  Some amount of work,
but certainly doable.


That's partly why it wasn't done - the SVR4 package is a mixture of 
installation and configuration, but for IPS these two have to be 
separated out. Also, they would then have needed two installers, one for 
Solaris 10 and one for Solaris 11. Sticking with SVR4 meant they didn't 
need to resolve either of these issues.


--
Andrew



Re: [OpenIndiana-discuss] SSD as a dedicated swap device

2015-12-11 Thread Andrew Gabriel

On 11/12/2015 20:45, Ian Collins wrote:

Reginald Beardsley via openindiana-discuss wrote:

> I have only occasional need to run problems larger than main memory

> (16 GB at present), so I can't justify replacing all the DRAM for an
> infrequent need. The drop in SSD prices has me contemplating adding
> a 128 GB SSD as a swap device. The SSD latency and IOPS specs look
> as if they might be a useful compromise.
>
> Does anyone have any experience with this? The sort of jobs I'm
> interested in are batch processes that take several hours, not
> interactive tasks.

Even an SSD will be way slower than RAM.


Yep


> At present rpool is a ZFS 3-way mirror. It's become a bit unclear to

> me if it is still possible to control the swap device independently
> of other parts of the system contained in rpool. Back in SunOS 4.x
> days I ran with a pair of swap partitions spread across two disks
> which got me twice the performance on large array operations.

I don't think you can add the drive as a drive, but you could create a 
pool with a single volume on it and add that volume with "swap -a 
/dev/zvol/dsk//".




Adding a swap partition or slice from a drive worked just fine last time 
I did it. You don't have to swap on ZFS (and there have been some good 
reasons not to in the past).
I would expect using a whole drive (the p0 device) would also work, 
although the danger with doing that is that many tools which don't find 
FDISK or GPT partitioning on the disk will assume the disk is unused.
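
For reference, adding and removing a raw slice as swap is just (a sketch; the device name below is a made-up example):

```
# swap -a /dev/dsk/c2t1d0s1    # add the slice as an extra swap device
# swap -l                      # list active swap devices
# swap -d /dev/dsk/c2t1d0s1    # remove it again
```

To make it permanent across reboots, the slice would also get a line in /etc/vfstab of the form '/dev/dsk/c2t1d0s1 - - swap - no -'.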


--
Andrew



Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi

2015-09-16 Thread Andrew Gabriel

On 16/09/2015 19:24, Nikola M wrote:

On 09/11/15 08:57 PM, Watson, Dan wrote:
I'm using mpt_sas with SATA drives, and I _DO_ have error counters 
climbing for some of those drives. Is it probably that?

Any other ideas?


It is generally strongly advised to use SATA disks on SATA controllers 
and SAS disks on SAS controllers. And to use controller that can do JBOD.


Also, using SAS-to-SATA multipliers, or using port multipliers at all, 
is strongly disadvised too, because they usually contain cheap logic 
that can go crazy, and the disk is not under direct control of the 
controller.


A disk interface specialist was telling me earlier today what goes wrong 
here. The problem is that many SATA drives drop the phy interface when 
they have some internal problem, even just retrying transfers. Normally 
that doesn't matter a scrap when they are connected 1-to-1 to a SATA 
controller. However, if they are connected to SAS fabric, it will cause 
the SAS fabric to re-enumerate all the drives at least at that port 
multiplier level, likely losing outstanding IOs on other drives, most 
particularly other SATA drives as implementations of STP (SATA Tunneling 
Protocol) in SAS HBAs/expanders just aren't very good. This often causes 
OS drivers to report errors against the wrong drive - i.e. not 
necessarily the one which is the root cause but others where IOs are 
lost, and you can't necessarily tell which was to blame (and probably 
don't even realise you might be being misled). It happens again if/when 
the SATA drive recovers and brings its phy back up. This could cause FMA 
to fault out the wrong drives in situations where you genuinely have a 
misbehaving drive, leaving the bad drive online when there's no pool 
redundancy left to fault out any more drives.


Why is this not a problem with SAS drives? Well apparently they don't 
drop their phy interfaces anywhere near as easily when such things 
happen, because they are designed for use with SAS fabric where doing so 
is known to be a problem. Even if they do drop their phy, it doesn't 
result in confusing error reports from other drives on the SAS fabric. 
Some SAS drives can actually reset and reboot their firmware if it 
crashes without the phy interface being dropped.


Also, what OI/illumos version is that? Because I read long ago that 
some bugs were solved in illumos for mpt_sas.


Somewhere around 18 months ago IIRC, Nexenta pushed a load of fixes for 
this into their git repo. I don't think I've seen these picked up yet by 
Illumos, although maybe I missed it? The fixes were in mpt_sas and FMA, 
to more accurately determine when disks are going bad by pushing the 
timing of the SCSI commands right down to the bottom of the stack (so 
delays in the software stack are not mistaken for bad drives), and to 
have FMA better analyse and handle errors when they do happen.


--
Andrew Gabriel



Re: [OpenIndiana-discuss] [discuss] Any interest in Meetups in London?

2015-08-10 Thread Andrew Gabriel

On 10/08/2015 19:38, Peter Tribble wrote:
How much interest is there in a London-based illumos meetup? Enough to 
revive

the now-defunct Solarians meetup group?

http://www.meetup.com/solarians/ 


I would be interested (although I don't remember my Meetup passwd and 
I'm not where it's written down).



(Or in another form? There's also the UKOUG Solaris SIG, which is free
and open to all.)


They've been quite happy for you and me to talk about Illumos and OpenZFS.

--
Andrew


Re: [OpenIndiana-discuss] SMR disks

2015-05-01 Thread Andrew Gabriel

On 01/05/2015 07:13, Nick Tan wrote:

Hi all,

Has anyone tried using SMR disks with ZFS?  I bought a Seagate 8TB SMR disk
and put it in a esata enclosure for my backups.  I found that zfs send
would cause the disk to go offline.  My guess is that zfs send is too fast
and fills the drive write cache.

I tried again with just rsync and this worked fine.


How did you setup the drive?
What filesystem, or just writing to it serially like a tape drive?

SMR disks have some interesting issues with recording, particularly when 
writing non-serially as most filesystems normally do. Since there's no 
SMR support in Illumos, I presume you ran the drive in Drive Managed 
mode - this makes it look like a standard random access drive. However, 
like a flash drive, it will actually be laying the data on the drive out 
very differently from what the host OS/filesystem imagines. Also like a 
flash drive, it will have to do some housekeeping and move blocks of 
data around on the disk and/or re-record large ranges of data previously 
written, so performance from the host system may appear very mixed, 
including some i/o requests which take long enough that with a standard 
magnetic drive you would assume the drive is dying (probably why you saw 
the drive reported as going offline). This may be fine for 
archival/backup data (providing the host system knows to allow a long 
time for i/o), but is less likely to be good for normal filesystem use 
by applications.


There are better ways of driving SMR drives, but they require support in 
the operating system and/or application. One of the uses they are more 
suitable for is a key/value object store, because drives can implement 
the object store layer entirely in the drive firmware hiding the real 
layout from the system, and data is accessed entirely by using keys.


--
Andrew



Re: [OpenIndiana-discuss] Martin Bochnig

2015-04-15 Thread Andrew Gabriel

On 15/04/2015 22:23, Ivar Janmaat wrote:

Hello,

In my humble opinion there are several questions which Illumos might 
want to answer because they are relevant to others as well.


I supply some answers, but I suspect you are thinking Firefox is part of 
illumos, but it is not, so these questions/answers are not relevant to 
Firefox.


1. Which support levels and services can a distro, distro developer, 
partner or sponsor expect from Illumos?


*Expect*? - none direct from Illumos.
Support is contributed to Illumos by (for the most part) the consumers 
of illumos, i.e. the distros, and other interested individuals.


People looking for supported solutions need to go to one of the distros 
which provides support, such as OmniTI, Nexenta, Delphix, Joyent, etc. 
and check out what's required to get support coverage.


I believe answering this question in a transparent manner on the 
website would be beneficial for all distro providers. It would be the 
basis of trust and a equal playing field. A distro developer might be 
a single person who can add code to Illumos while working for Illumos 
distro partner so these roles might be different.
2. Is distributing binaries of open-source code, with distro-specific 
patches or configurations in the source code, while not providing this 
source code, a violation of the Illumos licensing model?


Illumos is only source code. Binaries are created by the individual distros.
(There were some exceptions for the closed-source binaries supplied by 
Sun, but these are mostly now replaced with open source. The aim is to 
have no binaries in Illumos.)


You need to check the license which applies to each of the source files 
you use. Some will allow you to modify+distribute and keep the changes 
private (such as the Mozilla public license, I think), others will not 
(such as CDDL, I think).


3. Will Illumos put limitations on a distros business model when using 
Illumos sourcecode?


Yes, as covered by the various licenses in each part of the source code 
(mostly but not exclusively CDDL).



I try to structure this discussion so we can move forward.


As Firefox is not part of Illumos, I don't think these answers will help 
at all for that purpose.


It is up to each distro if they ship Firefox at all (not all do), and if 
so, which Firefox they ship. I can't imagine any distro shipping a 
closed source Firefox binary. They all build all the items they ship 
from source, to ensure they have a completely compatible set of 
parts/versions, and to ensure that other people can take over if one 
person doesn't continue building an item for any reason.


What Martin could do if he wants to ship his binary for OpenIndiana is 
to setup an IPS repo with it in, which people can pull in to their own 
installations if they want to, and/or a simple tar-file package which 
could be used on other distros. That leaves the choice to the end user, 
which is where it has to be for a binary built and maintained by just 
one person with no sources or build instructions available publicly.
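
A minimal sketch of that, using the standard pkg(5) tooling (the repository path, publisher name, manifest file, and URL below are invented placeholders):

```
# pkgrepo create /export/repo                                # create an empty IPS repository
# pkgsend publish -s /export/repo -d /tmp/proto firefox.p5m  # publish from a proto area + manifest
```

and on the end-user side:

```
$ pfexec pkg set-publisher -g http://repo.example.org/ martinb
$ pfexec pkg install firefox
```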


He hints that he wants to make money from his firefox builds, but I 
doubt that's viable. The market for the product is too small and in the 
desktop space comprises mostly people who want to run free opensource 
software, which doesn't match what Martin is offering. It would only be 
viable if there was a company which wanted to launch a product based on 
an open Solaris desktop that needed Firefox, and they were willing to 
pay for support, but no company would ever do that based on a 1-man 
support organisation - support needs to be provided by enough people 
that loss of some of them doesn't risk the business venture.


--
Andrew



Re: [OpenIndiana-discuss] HP Proliant Microserver N54L Upgrade

2015-02-16 Thread Andrew Gabriel
Your drive is claiming to be 4kB/sector, which would increase the max 
drive size up to 16TB, if the 12 byte SCSI command limitation applies 
(which is just my speculation).


So this would indicate a workaround for the 2TB limit is to use a 
drive which reports a real 4k sector size.



Reginald Beardsley via openindiana-discuss wrote:

I got a 3 TB Toshiba Canvio USB drive to format using 151a7 on an N40L.

>From my notes:

"The key to formatting the 3 TB drive was to use "format -e", supply LBA pseudo 
geometry and create an EFI label.  NB must run fdisk from within format"

However, it's not clear this was the USB drive rather than a bare 3 TB SATA 
drive.  I was battling several things and a few years later it's not as clear 
as it seemed when I wrote it.  I vividly recall I had a lot of trouble getting 
it to work properly, and considered just giving the drive to a friend to use on 
his Mac.

But yes, it can be made to work.  I use it to save incrementals from  my backup 
server.

The following  is w/ the drive imported to my 151a8 internet access host.

 # zpool get all tosh_pool
NAME   PROPERTY   VALUE  SOURCE
tosh_pool  size   2.72T  -
tosh_pool  capacity   49%-
tosh_pool  altroot-  default
tosh_pool  health ONLINE -
tosh_pool  guid   556785976751203449 default
tosh_pool  version-  default
tosh_pool  bootfs -  default
tosh_pool  delegation on default
tosh_pool  autoreplaceoffdefault
tosh_pool  cachefile  -  default
tosh_pool  failmode   wait   default
tosh_pool  listsnapshots  offdefault
tosh_pool  autoexpand offdefault
tosh_pool  dedupditto 0  default
tosh_pool  dedupratio 1.00x  -
tosh_pool  free   1.36T  -
tosh_pool  allocated  1.36T  -
tosh_pool  readonly   off-
tosh_pool  comment-  default
tosh_pool  expandsize 0  -
tosh_pool  freeing0  default
tosh_pool  feature@async_destroy  enabledlocal
tosh_pool  feature@empty_bpobjenabledlocal
tosh_pool  feature@lz4_compress   disabled   local


and:

format> verify

Volume name = <>
ascii name  = 
bytes/sector=  4096
sectors = 732566645
accessible sectors = 732566640
Part  TagFlag First Sector Size Last Sector
  0 unassignedwm 0   0   0
  1 unassignedwm 0   0   0
  2 unassignedwm 0   0   0
  3 unassignedwm 0   0   0
  4 unassignedwm 0   0   0
  5 unassignedwm 0   0   0
  6 unassignedwm 0   0   0
  7 unassignedwm 0   0   0
  8 unassignedwm 0   0   0



Which looks wrong, so I'm not sure what's going on.  I had loads of fun w/ the 
sector alignment.  You should look at the ZFS pages.  Klimov and I tried to 
document some of this stuff.  George Wilson's writeup proved quite important, 
so make sure you read that also.

http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks

I did this on the N40L which is powered down at the moment, but there is 
probably more info on that.  The notes quoted above are from my log book.

Reg

--------
On Mon, 2/16/15, Andrew Gabriel  wrote:

 Subject: Re: [OpenIndiana-discuss] HP Proliant Microserver N54L Upgrade
 To: "Discussion list for OpenIndiana" 
 Date: Monday, February 16, 2015, 5:20 AM
 
 Marion Hakanson wrote:

 > Has anyone out there actually got a drive > 2TB
 working via USB on
 > an illumos distribution?  My attempts so far have
 failed (oi151a7, oi151a9,
 > and XStreamOS), with my 3TB drive in a USB-SATA
 enclosure appearing as
 > a 2TB drive.  The same drive/enclosure works
 perfectly when attached
 > to MacOS-X via USB, and the same drive/enclosure works
 fine when attached
 > by its eSATA connection to my illumos-based systems.
 >
  > https://www.mail-archive.com/discuss@lists.illumos.org.email.enqueue.archive.listbox.com/msg00499.html
 [2nd attempt - first vanished into a black hole]
 

Re: [OpenIndiana-discuss] HP Proliant Microserver N54L Upgrade

2015-02-16 Thread Andrew Gabriel

Marion Hakanson wrote:

Has anyone out there actually got a drive > 2TB working via USB on
an illumos distribution?  My attempts so far have failed (oi151a7, oi151a9,
and XStreamOS), with my 3TB drive in a USB-SATA enclosure appearing as
a 2TB drive.  The same drive/enclosure works perfectly when attached
to MacOS-X via USB, and the same drive/enclosure works fine when attached
by its eSATA connection to my illumos-based systems.

https://www.mail-archive.com/discuss@lists.illumos.org.email.enqueue.archive.listbox.com/msg00499.html

[2nd attempt - first vanished into a black hole]

I had a quick look through scsa2usb.c (not that it's an area I know), 
and I can only see it using Group 5 (12 byte) SCSI commands. This will 
limit the addressing to 2^32 blocks, which is 2TB for a 512byte/sector disk.


So if I'm right, this is a limitation of USB-connected drives on Illumos.
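A quick check of the arithmetic behind this (a sketch; max_capacity_bytes is just an illustrative helper, Ruby used purely for illustration):

```ruby
# Capacity ceiling for a 32-bit LBA, as used by Group 5 (12-byte) SCSI
# commands: 2^32 addressable blocks times the bytes per sector.
def max_capacity_bytes(bytes_per_sector)
  (2**32) * bytes_per_sector
end

TIB = 2.0**40
puts max_capacity_bytes(512)  / TIB   # 2.0  TiB with 512-byte sectors
puts max_capacity_bytes(4096) / TIB   # 16.0 TiB with 4 KiB sectors
```

This matches the speculation earlier in the thread that a drive reporting real 4k sectors could push the same 32-bit addressing limit from 2TB up to 16TB.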

--
Andrew



Re: [OpenIndiana-discuss] 32-bit support in OpenIndiana Hipster

2015-02-16 Thread Andrew Gabriel

Alexander Pyhalov wrote:

Hello.

We currently support (in some way) 32-bit systems. We avoid shipping 
64-bit binaries in the default path, or use isaexec for such things.
But do we really need it? I haven't seen a PC (never mind a 
server) without a 64-bit CPU for at least 8 years.


Dropping support for 32-bit systems will allow us to port Oracle 
sources more easily. Potentially, it solves the time_t overflow. We 
could also worry less about largefile support.


What are the cons of keeping support for 32-bit systems? I don't see 
many. If you see any, please speak now.


Well, I use OI on small low power 32 bit systems.

--
Andrew



Re: [OpenIndiana-discuss] 64 bit Firefox

2015-02-07 Thread Andrew Gabriel

russell wrote:

Hi,

Looking below I found the official 64 bit Firefox versions for 
Windows, Linux and Mac versions here


http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/latest-trunk/

Has anyone tried to build a 64 bit version on IllumOS/OpenIndiana yet?


Why would you want it?
I find the ability to leak nearly 4Gbytes is quite enough and nicely 
self-limiting - I don't have enough memory for it to leak a whole 64 bit 
address space ;-)


Don't let me put you off, but I'm just wondering what problem it solves?

--
Andrew Gabriel



Re: [OpenIndiana-discuss] hadoop+jcuda

2015-02-03 Thread Andrew Gabriel

jason matthews wrote:


Hello world,

has anyone tried running hadoop+jcuda for gpu acceleration on their 
hadoop cluster on any illumos derived OS? I am probably one of like 
eight people running such a cluster at scale, but it never hurts to ask.


I'm not aware that there's any gpu driver available giving you cuda 
access on Illumos (or Solaris).


--
Andrew



Re: [OpenIndiana-discuss] oi or hipster for ultra5?

2015-02-01 Thread Andrew Gabriel

Jacob Ritorto wrote:

On Sun, Feb 1, 2015 at 2:19 PM, Andrew Gabriel   

wrote:



  

Do you have to stick with SPARC? Your Ultra 5 is going to be way slower
than any current (and many old) x86 systems, which are supported by all the
Illumos distributions.




I don't have to; it's just that I have a number of these good old machines
around and they're quite adequate for what I'm working on.  OpenBSD seems
to still support them, so maybe I'll give that a go.


Sure, but before spending much time on them, do bear in mind they offer 
about the same performance as a 15-20 year old Pentium II system.


You could probably move all their workloads to a single current x86 
system, and have loads of CPU capacity left over, and consume less power 
than a single Ultra 5.




Re: [OpenIndiana-discuss] oi or hipster for ultra5?

2015-02-01 Thread Andrew Gabriel

Jerry Kemp wrote:

You have taken it a lot further than I would have.

For an Ultra 5 or 10, I probably would not have gone past Solaris 10, 
and due to your ram being well under 4 Gb, I would stay on UFS vs ZFS.


When you say "Solaris 11 install" in reference to this box, do you 
mean an install of Sun OpenSolaris?


snv_65 is an early development build of Solaris 11 - actually more like 
Solaris 10 than even the earliest Solaris 11 releases.


Or Oracle Solaris 11 Express?   By default, and beginning with Solaris 
11 proper, Oracle Solaris 11 will not install on a Sparc system unless 
it is a T series or M series at the low end.


I understand you stating it is a good box; I have an Ultra 10 myself 
that is still chugging along. Either way, I would take this time to 
max out the RAM on your system.  I believe that the Ultra 5/10 system 
board will hold 1 GB of RAM.  It seems that you have quite a few more 
years planned for your Ultra 5, and RAM is available new for 
reasonable prices, or there are a number of old-hardware support lists 
where I suspect you could acquire more RAM for the cost of shipping.


The maximum for the Ultra 5 was actually 512MB. The motherboard will take 1GB and 
people have done it, but in theory it exceeds the maximum power draw on one of 
the rails, and some DIMMs are too tall without taking something out of 
the case (the floppy disk drive, and/or the never-used smart card reader 
housing, IIRC).


Also note that the boot code in the Ultra 5/10 is exceedingly slow 
reading in the boot archive - it wasn't originally designed for reading 
in files of anything like that size, and is very non-optimal when doing 
so (takes many minutes).


--
Andrew Gabriel



Re: [OpenIndiana-discuss] SATA Expansion cards

2015-02-01 Thread Andrew Gabriel

cjt wrote:

On 01/31/2015 06:45 PM, Jerry Kemp wrote:

I thought that SATA expansion cards were always bad news when used with
Solaris, and Solaris-based distros.

There are tons of horror stories out there, primarily thru the ZFS
mailing list.

Has this changed?

Are there end users out there using SATA expansion cards with
OpenIndiana, Solaris, etc. with positive and reliable results?

Jerry 


FWIW, I have a server doing video streaming that has a multitude of 
Intel RES2SV240 expanders hanging off of LSI controllers, and have not 
experienced problems except when I first configured it and tried to 
have multiple layers of expander.  I don't recall what exactly the 
problem was (it was quite a while ago), but I no longer try to hang 
expanders off of other expanders.


SAS or SATA drives?
The problems are with the poor implementations of SATA Tunneling 
Protocol in the SAS expanders, so it only impacts SATA drives, not SAS 
drives.


Having said that, I haven't heard of anyone using RES2SV240 before - do 
you know if it uses standard LSI SAS expander chips, or something else?


--
Andrew Gabriel



Re: [OpenIndiana-discuss] oi or hipster for ultra5?

2015-02-01 Thread Andrew Gabriel

Jacob Ritorto wrote:

Hi,
  My Solaris 11 install is getting a little long in the tooth and I still
use this poor old machine kind of a lot for small development, pdp11
emulation and its real serial ports, etc.  I would like to keep it because
it's pretty low power, reliable as dirt, and still supports the very
comfortable Sun type 4 unix keyboard, which I still feel a little paralyzed
trying to do without.  I'm running into problems with new software (CSW, in
particular) wanting more recent libs than the OS has.  So I guess (*sigh*)
it's time to update the OS bits.

  Is it feasible to install Hipster or OI on such a meagerly appointed
machine?  I don't even have a dvd player; just cd.

SunOS beep 5.11 snv_65 sun4u sparc SUNW,Ultra-5_10
Memory size: 256 Megabytes


An Ultra 5 is a SPARC system.
No one builds OpenIndiana for SPARC.

The two Illumos distributions for SPARC that I know of are OpenSXCE and 
Tribblix.


Another option would be to use Solaris 11 Express if it still exists 
anywhere - it's old, but not as old as snv_65.  I think it still had 
sun4u support, but I could be mistaken. Solaris 11 itself no longer 
supports sun4u systems except the Sun/Fujitsu M-series 
(M3000/4000/5000/8000/9000).


Do you have to stick with SPARC? Your Ultra 5 is going to be way slower 
than any current (and many old) x86 systems, which are supported by all 
the Illumos distributions.


--
Andrew



Re: [OpenIndiana-discuss] SATA Expansion cards

2015-02-01 Thread Andrew Gabriel

Jerry Kemp wrote:
I thought that SATA expansion cards were always bad news when used 
with Solaris, and Solaris-based distros.


He was really asking for SATA host bus adapters.

There are tons of horror stories out there, primarily thru the ZFS 
mailing list.


The bad news comes with using SATA drives via SAS expanders, i.e. it's 
not a good idea to use SATA drives in SAS JBODs. (A SATA drive directly 
connected to a host bus SAS port is usually fine however.)


--
Andrew






Re: [OpenIndiana-discuss] SATA Expansion cards

2015-01-31 Thread Andrew Gabriel

Rainer Heilke wrote:

Greetings;

I need to build a new home server, but I need to get all 8 drives 
internal. Will Illumos/OpenIndiana support the Vantec UGT-ST310R SATA 
card?


This is based on Sil3114 SATA controller chip.
It might work if you reflash with the non-RAID firmware (which has to be 
done on Windows) since that chip makes the drives look like ATA drives 
to the OS, and the 3112 (2-drive version) was supported in Solaris in 
early days of SATA. Since this controller often corrupted data, you 
should never use it for anything other than a redundant ZFS 
configuration, on which you often run scrubs. ;-)


Seriously, keep well away from it. It was for operating systems which 
didn't know what SATA drives were, hence presenting them as ATA drives.


If not, will it support the Asus PIKE Technology 1064E card? I am 
trying to find something that can just present the extra drives as 
SATA drives to use ZFS.


I don't know that one, but it's a hardware RAID card - you want a JBOD 
controller, not a RAID controller.


--
Andrew



Re: [OpenIndiana-discuss] forum creation

2015-01-28 Thread Andrew Gabriel

Dmitry Kozhinov wrote:
Privacy is the keyword here. I consider the constant delivery of discussion to 
my email an intrusion.


That's what auto-foldering is for - have your mail client automatically
move it to an openindiana folder so you don't see it until you want to.

--
Andrew



Re: [OpenIndiana-discuss] Hipster and puppet

2015-01-26 Thread Andrew Gabriel

Grüninger wrote:

Last Friday I made an installation of hipster in our corporate network.
We use puppet for configuration management.
Puppet sets facts to classify the system, and one of them is "operatingsystem".
The value of operatingsystem is determined by evaluating 'uname -v'.

# Hipster: uname -v
illumos-f8554bb

# OpenIndiana oi151a9 : uname -v
oi_151a9

and the ruby code of puppet

  def get_operatingsystem
output = Facter::Core::Execution.exec('uname -v')
if output =~ /^joyent_/
  "SmartOS"
elsif output =~ /^oi_/
  "OpenIndiana"
elsif output =~ /^omnios-/
  "OmniOS"
elsif FileTest.exists?("/etc/debian_version")
  "Nexenta"
else
  "Solaris"
end
  end

IMHO this is a wrong classification as hipster will be handled like Oracle 
Solaris.
And we manage OpenIndiana oi151a9 and Oracle Solaris systems with puppet.
And the test system was handled as an Oracle Solaris system.

I will make a pull request to change the value but I am unsure which value to 
choose.
What do you prefer?
"OpenIndiana" as before and as recognition of a special distribution.
"IllumOS" as a more general specification.


I would say it should be "OpenIndiana".
"Illumos" would cover several different distros with potentially 
different sysadmin interfaces, which is not helpful.


FYI, here's a similar selection from Chef, to narrow down the type of 
Solaris platform, which is all driven from contents of /etc/release. I 
added some of this for local use, but I didn't test all the existing 
options.


File.open("/etc/release") do |file|
  while line = file.gets
    case line
    when /^.*(SmartOS).*$/
      platform "smartos"
    when /^\s*(OmniOS).*r(\d+).*$/
      platform "omnios"
      platform_version $2
    when /^\s*(OpenIndiana).*oi_(\d+).*$/
      platform "openindiana"
      platform_version $2
    when /^\s*Open Storage Appliance\s+(.*)$/
      platform "nexentastor"
      platform_version $1
    when /^\s*(OpenSolaris).*snv_(\d+).*$/
      platform "opensolaris"
      platform_version $2
    when /^\s*(Oracle Solaris) (\d+)\s.*$/
      platform "solaris2"
    when /^\s*(Solaris)\s.*$/
      platform "solaris2"
    when /^\s*(NexentaCore)\s.*$/
      platform "nexentacore"
    end
  end
end
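Tying the two approaches together, a hedged sketch of how the Facter check could recognise Hipster by falling back to /etc/release, as Chef does (classify is a hypothetical helper for illustration, not Facter's actual code):

```ruby
# Hypothetical fallback: when `uname -v` reports a bare illumos build id
# (as Hipster's "illumos-f8554bb" does), consult /etc/release instead of
# defaulting to "Solaris".
def classify(uname_v, etc_release = "")
  case uname_v
  when /^joyent_/  then "SmartOS"
  when /^oi_/      then "OpenIndiana"
  when /^omnios-/  then "OmniOS"
  when /^illumos-/ then etc_release =~ /OpenIndiana/ ? "OpenIndiana" : "Solaris"
  else "Solaris"
  end
end

puts classify("oi_151a9")                                    # OpenIndiana
puts classify("illumos-f8554bb", "OpenIndiana Development")  # OpenIndiana
```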



Re: [OpenIndiana-discuss] forum creation

2015-01-25 Thread Andrew Gabriel

Reginald Beardsley via openindiana-discuss wrote:

I asked what was broken, so the text below misquotes me entirely.  I do not 
think anything needs to be changed.  I think the OI/Illumos lists work pretty 
well.  We seem to have weathered the DMARC issue w/o too much trouble.

The archive provides a good interface for older discussions.  I'm also of the 
opinion that part of the price of getting help from others is providing it when 
the opportunity presents itself.

Gratuitous graphics, how many posts you've made and how long you've been a 
member are not things I care about.  If your mailbox gets too full, get an 
email address just for your mailing list subscriptions.
Or set up to auto-folder or auto-delete the emails during periods you 
aren't following the list.


The archives are not a good interface for older discussions - I use my 
email client on a dedicated folder to search the archives.


Although I personally hate forums, I am also aware that use of email by 
those currently of university age has plummeted, with many not using it 
at all. If someone with an interest in forums came forward with a 
proposal that would bi-directionally mirror the mailing list discussion 
in a forum, that would open the discussions up to some younger people 
who won't follow a project via a mailing list.


There is an IRC channel, #openindiana on chat.freenode.net

--
Andrew Gabriel



Re: [OpenIndiana-discuss] rpool defragmentation

2015-01-20 Thread Andrew Gabriel
Your question about fragmentation brings up the same question as before 
- what are you trying to defragment?
The FRAG column in zpool status relates to fragmentation of free space. 
If you take a disk which is 80% full and replace it with a disk which is 
twice the size, the pool's freespace will become 6 times bigger with 
newly allocated unfragmented free space, so the FRAG figure will drop to 
1/6th of whatever it was before, without you needing to do anything else.
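The free-space arithmetic above, spelled out (Ruby used purely for illustration):

```ruby
# An 80%-full disk replaced by one twice the size: free space goes from
# 0.2x the old size to (2.0 - 0.8)x = 1.2x, i.e. six times as much.
old_size = 1.0
used     = 0.8
old_free = old_size - used          # 0.2
new_free = 2 * old_size - used      # 1.2
puts (new_free / old_free).round(2) # 6.0
```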


You would only copy the filesystem to defragment the layout of files (or 
more strictly blocks), and I suspect that will only have become an issue 
if you have been writing to the filesystem for some time with a highly 
fragmented spacemap. In many cases, the majority of the files in rpool 
would have been written during installation when the spacemap would not 
have been fragmented, and those files have not been modified so will not 
themselves be fragmented. Any files which have become fragmented are 
those you write to, and in many cases these will defragment when you 
next write them with the spacemap defragmented. The only case where this 
won't happen and might matter would be for files written when the 
spacemap was badly fragmented which are not modified again, but are often 
read (but not often enough to stay in the ARC). In most rpool cases, it 
doesn't sound to me like this case is likely to be worth worrying about.


I have copied a boot environment for another reason (to force copies=2 
on a boot environment of a single-disk system).


BEs are normally clones of course, so only changed blocks are newly laid 
down. In my case, I want them all laid down again with the copies=2 rule 
in place. I did this by first creating a new BE with beadm create. Then I 
used zfs destroy to blow away the new clone, and used zfs send/recv to 
create a new filesystem of the same name that the clone had been. 
Slightly to my surprise, beadm was perfectly happy with the result, and 
could activate and boot from the new (non-cloned) filesystem just fine.
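The procedure described above, sketched as commands (BE and snapshot names are illustrative; copies=2 is set on the parent beforehand so the received filesystem inherits it):

```
zfs set copies=2 rpool/ROOT                 # new writes keep two copies
beadm create be2                            # new BE (a clone)
zfs destroy -r rpool/ROOT/be2               # blow away the clone
zfs snapshot rpool/ROOT/openindiana@resend
zfs send rpool/ROOT/openindiana@resend | \
    zfs recv rpool/ROOT/be2                 # full, non-cloned copy
beadm activate be2
```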


Nikolam wrote:

As I understand, one can first migrate to bigger drives for rpool
(being tight on rpool is not very healthy, anyway)
and then do zfs send of BE on disk themselves, or copying to new BE.

Where I am not sure if zfs send also does defragmentation (I suppose
it does since it see file system layout) and where copying files
surely would do defragmentation.

It is also a question does really all data on rpool is system-related
and could it be migrated elsewhere or there are many snapshots that
use space.






Re: [OpenIndiana-discuss] rpool defragmentation

2015-01-18 Thread Andrew Gabriel

Floris van Essen ..:: House of Ancients Amstafs ::.. wrote:

Hi All,

If I read this correctly: if one had a heavily fragmented zpool (doesn't 
matter if it's the rpool or the data pool) and wanted to defrag, then, provided 
you have mirrors: remove one of the mirrored disks, remove the partition on it, 
and re-add it...
After that is done resilvering, do the same for the other disks
  


No, I suspect I was wrong. I have seen a performance gain after 
resilvering, but I don't currently understand where it comes from. I'll 
need to do some investigating, and see if I can reproduce it with 
spacemap histograms.


--
Andrew



Re: [OpenIndiana-discuss] rpool defragmentation

2015-01-16 Thread Andrew Gabriel

Timothy Coalson wrote:

On Fri, Jan 16, 2015 at 11:47 AM, Andrew Gabriel <
illu...@cucumber.demon.co.uk> wrote:

  

On 01/16/15 03:47 PM, Gary Gendel wrote:



On 01/16/2015 10:22 AM, Andrew Gabriel wrote:

  

On 01/16/15 02:37 PM, Gary Gendel wrote:



 I thought about creating a new BE and then sending the current BE to
it, but there doesn't seem to be enough room.

  

Since rpool can only be either a single disk or a mirror, the easiest
way to defrag it is to attach another mirror side and let it resilver. The
new mirror side will be defragged. Make sure the new disk is bootable (has
grub etc on it), and then zpool split off the old disk. It would be a good
opportunity to move to a bigger rpool disk too.

 Yes, this is the rpool and is mirrored.  Would resilvering really


defragment it?

  

Yes, space is allocated afresh when resilvering (which is a significant
difference from traditional RAID).




I find that surprising.  Does the metadata on the older drive(s) manage to
refer to the new, independent location of the blocks on the new mirror, and
if so, how?


Good question - I'm going to have to think about that.
I have done the mirror side attach and old side detach a few times and 
got significantly better performing pool in the case of an old pool, but 
maybe my rationale for the reason is not correct. I haven't done it 
since the spacemap histograms were introduced, so I haven't seen if the 
spacemap is significantly less fragmented as a result. Something to try out.


--
Andrew



Re: [OpenIndiana-discuss] rpool defragmentation

2015-01-16 Thread Andrew Gabriel

On 01/16/15 03:47 PM, Gary Gendel wrote:

On 01/16/2015 10:22 AM, Andrew Gabriel wrote:

On 01/16/15 02:37 PM, Gary Gendel wrote:

# zpool list
NAME  SIZE  ALLOC   FREE  EXPANDSZ   FRAGCAP  DEDUP HEALTH 
ALTROOT

rpool  68G  49.5G  18.5G -50%72%  1.00x ONLINE -
users 928G  72.4G   856G - 1% 7%  1.00x ONLINE -

# zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  48.9G  16.9G82K  /rpool
rpool/ROOT 44.9G  16.9G22K  legacy
rpool/ROOT/hipster-17  44.9G  16.9G  34.9G  /
rpool/dump 1.97G  16.9G  1.97G  -
rpool/export 32K  16.9G32K  /export
rpool/swap 2.01G  16.9G  2.01G  -
users  72.4G   827G  72.3G  /export/home

How does one defragment this?


I presume you are referring to rpool, and not users?

What makes you think you need to?
What do you use the root filesystem for?

I thought about creating a new BE and then sending the current BE to 
it, but there doesn't seem to be enough room.


Since rpool can only be either a single disk or a mirror, the easiest 
way to defrag it is to attach another mirror side and let it 
resilver. The new mirror side will be defragged. Make sure the new 
disk is bootable (has grub etc on it), and then zpool split off the 
old disk. It would be a good opportunity to move to a bigger rpool 
disk too.


Yes, this is the rpool and is mirrored.  Would resilvering really 
defragment it?


Yes, space is allocated afresh when resilvering (which is a significant 
difference from traditional RAID).


I figured that with this high a fragmentation there would be some 
penalty in memory consumption and possible disk access.


The FRAG figure there is a measure of free space fragmentation in the 
form of how many metaslabs have no blocks of space bigger than 8Mbyte 
free, and also takes into account how small their biggest free block is.


50% FRAG can mean something between the following two extremes:
 1.   half the metaslabs being completely fragmented (no blocks bigger 
than 1kbyte available) and half the metaslabs having 16Mbyte blocks, or

 2.   all the metaslabs having 128Kbyte free blocks available.

I wouldn't worry about 50% on an rpool, although I haven't done any 
detailed performance metrics since the metaslab histograms appeared.


Specifically, FRAG does not tell you anything about how fragmented the 
layout of your existing data is on the disk, only how easily zfs can 
find free space for you to write new data into.


The rpool has gotten worse slowly.  This started somewhere around 
OpenSolaris snv_124, when the disk requirements were much smaller, and 
it has gone through hundreds of updates since then.


I'm hanging on to this SunFire v20z until I can figure out a cheap, 
less power hungry replacement.  It replaced my Sparc SunFire 150 that 
started running OpenSolaris around SNV_62!  In my SOHO, the V20z runs 
the network:


* Firewall and router WAN to LAN.
*** It would be nice to get a DHCPv6 PD client working so I don't have 
to have a 4-6 tunnel.

* Web server (Web pages, Wiki, Owncloud, etc.)
* File server (user pool and archive pool).
* Archive server.
* Mail server
* DCHP 4 and 6 server.
* Software testing platform.

Most of the time this machine is lightly loaded.  Only when software 
testing goes on does it ever sweat.  It's kind of a kludge setup as I 
have the disks off of sata cables directly from the controller to the 
disks in an external cabinet because of the lack of sata multiplexing.


That's much better than trying to use SATA drives via port expanders!

--
Andrew



Re: [OpenIndiana-discuss] rpool defragmentation

2015-01-16 Thread Andrew Gabriel

On 01/16/15 02:37 PM, Gary Gendel wrote:

# zpool list
NAME  SIZE  ALLOC   FREE  EXPANDSZ   FRAGCAP  DEDUP HEALTH 
ALTROOT

rpool  68G  49.5G  18.5G -50%72%  1.00x ONLINE -
users 928G  72.4G   856G - 1% 7%  1.00x ONLINE -

# zfs list
NAMEUSED  AVAIL  REFER  MOUNTPOINT
rpool  48.9G  16.9G82K  /rpool
rpool/ROOT 44.9G  16.9G22K  legacy
rpool/ROOT/hipster-17  44.9G  16.9G  34.9G  /
rpool/dump 1.97G  16.9G  1.97G  -
rpool/export 32K  16.9G32K  /export
rpool/swap 2.01G  16.9G  2.01G  -
users  72.4G   827G  72.3G  /export/home

How does one defragment this?


I presume you are referring to rpool, and not users?

What makes you think you need to?
What do you use the root filesystem for?

I thought about creating a new BE and then sending the current BE to 
it, but there doesn't seem to be enough room.


Since rpool can only be either a single disk or a mirror, the easiest 
way to defrag it is to attach another mirror side and let it resilver. 
The new mirror side will be defragged. Make sure the new disk is 
bootable (has grub etc on it), and then zpool split off the old disk. It 
would be a good opportunity to move to a bigger rpool disk too.


--
Andrew Gabriel



Re: [OpenIndiana-discuss] A ZFS related question: How successful is ZFS, really???

2015-01-12 Thread Andrew Gabriel

Schweiss, Chip wrote:

On Mon, Jan 12, 2015 at 8:17 AM, Andrew Gabriel <
illu...@cucumber.demon.co.uk> wrote:

  

Since you mention Sun/Oracle, I don't see them pushing ZFS very much
anymore, although I am aware their engineers still work on it.



Oracle pushes ZFS hard and aggressively.   I dare you to fill out their
contact form or download their virtual appliance demo.  Their sales people
will be calling within the hour.
  


OK, that's good to hear. During a year of working with very many 
customers in the UK tendering for storage (including some former Sun 
openstorage customers), I only saw Oracle bidding their ZFS storage 
appliance in one case.


I did come across several more cases of customers building their own 
systems using Oracle Solaris 11 on third party hardware, but that's not 
something Oracle pushes.




Re: [OpenIndiana-discuss] A ZFS related question: How successful is ZFS, really???

2015-01-12 Thread Andrew Gabriel
Nexenta alone is probably around an Exabyte of licensed installations, 
and that's a mix of displaced traditional storage vendors, and new 
growth in old and new companies. There are many ZFS-based storage 
vendors in addition to Nexenta. The traditional 'big 8' storage vendors 
charged $9B for 9EB storage in 2012, which averages $1000/TB, and they 
have really struggled to reduce costs - instead they've lost market 
share to many of the new storage providers who produce products costing 
only a small fraction of that.


Can you think of any other filesystem which is being adopted by OS and 
appliance distributions at anything like the rate of ZFS?


Since you mention Sun/Oracle, I don't see them pushing ZFS very much 
anymore, although I am aware their engineers still work on it.



Hans J Albertsson wrote:

Thanks for your views, the serial storage (tape mostly?) problem is news to
me but otherwise I concur.

I was mostly asking about success and market presence, i e is ZFS being
widely used in any non-Sun/Oracle part of the workplace?

Hans J. Albertsson
  




Re: [OpenIndiana-discuss] 32-bit and/or 64-bit programs (Was: Bash bug issue)

2014-11-17 Thread Andrew Gabriel

Bruce Lilly wrote:

On Tue, Nov 11, 2014 at 7:21 AM, Andrew Gabriel <
illu...@cucumber.demon.co.uk> wrote
  

We currently support 32 bit and 64 bit kernel. A bug fix needs to work on
both 32 and 64 bit versions, so that would not be acceptable as a bug fix
for this issue (if it was something you were trying to fix in the distro).




There isn't anything specific in the program in question that would
definitively qualify as a "bug".
The program operates correctly e.g. even when compiled as a 32-bit
application on NetBSD:

# file grap
grap: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically
linked (uses shared libs), for NetBSD 6.1.5, not stripped
# ./grap -d grap.defines test.grap | grep aligned
line invis "2038-01-20T23:59:59" aligned from Frame.Bottom.end + (0, -0.4)
to Frame.Bottom.start + (0, -0.4)

There are two parts of the problem, neither of which is specific to
application code:
1. 32-bit time_t provided by Solaris-derived kernel, run-time, and build
system is inadequate (well-known, long-standing issue)
2. default build on Solaris-derived 64-bit systems is 32-bits.  Even though
the 64-bit version works as expected on OI, it isn't built as 64-bit
without special effort; i.e. by default anything using time_t is broken,
even on 64-bit OI.

Other OSes (e.g. NetBSD as shown above) have solved issue #1.
If issue #1 is solved on illumos, issue #2 becomes strictly a performance
tweak.
If there's a "bug" involved here, it is issue #1.

Issue #2 means that without special Solaris-specific effort, applications
using time_t or any library functions that directly or indirectly use
time_t may exhibit anomalous behavior when built for Solaris or illumos,
even on 64-bit systems.

Applications which need to handle dates outside of the operating system
  

time (such as dates of births, deaths, marriages, retirements, etc)
shouldn't be using time_t -- that's very well established.




You have an alternative?
Note that every standard time-based function eventually involves time_t.
That includes strftime() and strptime() as in the example, time(),
clock_gettime(), all of the ctime functions (localtime(), gmtime(), etc.) ,
mktime(), difftime(), and so on.
  


These are all for handling times related to the operating system - 
time_t is not for handling arbitrary dates/times.
No banking/finance, spreadsheet, statistical or other application 
handling general dates goes anywhere near these functions. It's a bug in 
grap if it does. I just checked the AT&T grap, and it doesn't (although 
I'm not sure if it has any support for dates).



Generally, 32 bit apps which are simply rebuilt as 64 bit (without being
modified to explicitly make use of larger address space) run faster on x86
because of the extra registers available for compiler optimisation, and run
slower on sparc because of the larger working set size.




That sounds like a good reason (in addition to the time_t correctness
issue) for making 64-bit builds the default, at least on 64-bit x86.
  


The Solaris philosophy is that the default build is to build something 
that works on all supported x86 platforms.
If you want to build for specific architectures, instruction set 
features, or even for the very specific features of the processor you are 
currently running on, those options exist, but selecting them 
unwittingly gets people into trouble when they build a binary and then 
find it won't work on some other system. If you then start building 
multiple binaries, the testing complexity multiplies considerably. The 
performance difference is almost never worth the effort. There 
are some other reasons which make the effort necessary, such as a 
debugger needing to be 64 bit to debug a 64 bit process, or a large 
application which can make good use of more than 4GB address space.


If OpenIndiana drops support for the 32-bit kernel at some point, then it 
might make sense.


--
Andrew



Re: [OpenIndiana-discuss] 32-bit and/or 64-bit programs (Was: Bash bug issue)

2014-11-11 Thread Andrew Gabriel

Bruce Lilly wrote:

I ran a test by building both 32-bit and 64-bit versions of Ted Faber's

[...]

Simple test input to demonstrate 32-bit time_t issue:

# cat test.grap
.G1
frame invis
label bot strftime("%Y-%m-%dT%H:%M:%S", strptime("%Y-%m-%dT%H:%M:%S",
"2038-01-20T23:59:59"))
.G2

64-bit results unremarkable:
# grap test.grap | grep aligned
line invis "2038-01-20T23:59:59" aligned from Frame.Bottom.end + (0, -0.4)
to Frame.Bottom.start + (0, -0.4)

32-bit results show the critical time_t overflow issue:
# grap test.grap | grep aligned
line invis "1969-12-31T18:59:59" aligned from Frame.Bottom.end + (0, -0.4)
to Frame.Bottom.start + (0, -0.4)

That's broken, and is sufficient reason to build as 64-bits on 64-bit
hardware.



We currently support 32 bit and 64 bit kernel. A bug fix needs to work 
on both 32 and 64 bit versions, so that would not be acceptable as a bug 
fix for this issue (if it was something you were trying to fix in the 
distro).


Applications which need to handle dates outside of the operating system 
time (such as dates of births, deaths, marriages, retirements, etc) 
shouldn't be using time_t -- that's very well established.



I ran 3 passes with 32- and 64-bit versions using the distributed example /
regression input ( examples/example.ms in the source distribution) and
timed the runs; there was a fairly consistent speed benefit (real and user)
to the 64-bit build.
Some details of the version of grap built, the build platform, test
command, and results follow.



Generally, 32 bit apps which are simply rebuilt as 64 bit (without being 
modified to explicitly make use of larger address space) run faster on 
x86 because of the extra registers available for compiler optimisation, 
and run slower on sparc because of the larger working set size.


However, there is very little in /usr/bin which is performance critical 
on any system, because these are not normally a significant part of any 
application, so optimizing for the performance of /usr/bin/* gets you 
almost no gains, and a lot of pain. The general philosophy has been to 
provide separate 64 bit versions only when essential for operation on 64 
bit kernel or there's a performance or data capacity gain which is 
important for some specific application. In all other cases, 32 bit 
versions are used on both 32 bit and 64 bit kernels so we don't need to 
do two builds, two lots of testing, install two binaries, etc.


--
Andrew Gabriel



Re: [OpenIndiana-discuss] Where Do I Get Source To Pic? ; Mona Eyes

2014-11-09 Thread Andrew Gabriel

j...@m5.chicago.il.us wrote:

Centuries ago, Nostradamus predicted that Andrew Gabriel would write on Sat Nov 
 8 12:30:47 2014:

  
I looked around a bit further, and found that AT&T have opensourced 
their last unix system V version:


http://www2.research.att.com/~astopen/cgi-bin/download.cgi?action=intro 
and select dwb





Thank you for this link.  I obtained pic from it, and it is now
working to my satisfaction.


I built it all too.
It only needed a few changes IIRC... removing the prototypes for the malloc 
library functions, removing UNANSI, and renaming a function called 
inline(). (I may have forgotten one or two more.)


--
Andrew



Re: [OpenIndiana-discuss] Where Do I Get Source To Pic?

2014-11-08 Thread Andrew Gabriel

Andrew Gabriel wrote:

Alan Coopersmith wrote:

On 11/ 7/14 08:58 AM, j...@m5.chicago.il.us wrote:


I need to obtain source to pic, because my Schillix system does not
have it.  http://src.illumos.org/source/xref/illumos-gate/usr/src/cmd/
does not have a pic subdirectory.  It has eqn and tbl, and of course
troff, but no pic.  Where do I get source to pic?  Thank you in
advance for any and all replies.


https://gnu.org/software/groff/

Solaris never included pic in its nroff packages.


The Solaris nroff/troff come from AT&T Documenter's Workbench version 2 
(which also included pic and grap).
The groff equivalents (at least for nroff and troff) have never been 
completely compatible.


If you want the AT&T Documenter's Workbench source, the last version 
was opensourced as part of Plan 9.
Some of the utilities (I forget which) had been converted to Plan 9's 
stdio replacement, but converting back to use Unix stdio is not 
difficult.


BTW, the omission of pic from Solaris was a long standing accidental 
oversight - there was a Sun bugid to fix it, but it never got fixed. 
The Sun documentation team used pic internally in the early days.


I looked around a bit further, and found that AT&T have opensourced 
their last unix system V version:


http://www2.research.att.com/~astopen/cgi-bin/download.cgi?action=intro 
and select dwb


This is version 3.3, which is significantly newer than the version 
Solaris has.
In addition to pic, I note it also includes batch mode picasso, which is 
an equivalent for pic, but directly produces postscript output.
(There was also a GUI mode picasso, but that's not included - I suspect 
it might have only worked with Openlook.)


--
Andrew Gabriel



Re: [OpenIndiana-discuss] Where Do I Get Source To Pic?

2014-11-08 Thread Andrew Gabriel

Alan Coopersmith wrote:

On 11/ 7/14 08:58 AM, j...@m5.chicago.il.us wrote:


I need to obtain source to pic, because my Schillix system does not
have it.  http://src.illumos.org/source/xref/illumos-gate/usr/src/cmd/
does not have a pic subdirectory.  It has eqn and tbl, and of course
troff, but no pic.  Where do I get source to pic?  Thank you in
advance for any and all replies.


https://gnu.org/software/groff/

Solaris never included pic in its nroff packages.


The Solaris nroff/troff come from AT&T Documenter's Workbench version 2 
(which also included pic and grap).
The groff equivalents (at least for nroff and troff) have never been 
completely compatible.


If you want the AT&T Documenter's Workbench source, the last version was 
opensourced as part of Plan 9.
Some of the utilities (I forget which) had been converted to Plan 9's 
stdio replacement, but converting back to use Unix stdio is not difficult.


BTW, the omission of pic from Solaris was a long standing accidental 
oversight - there was a Sun bugid to fix it, but it never got fixed. The 
Sun documentation team used pic internally in the early days.




Re: [OpenIndiana-discuss] 2x Xeon XS8600

2014-08-10 Thread Andrew Gabriel

Harry Putnam wrote:

Murman, can you coach me a bit about how the BIOS settings regarding
SATA ports, and the LSI utility, affect the actual ports themselves?

There are three options at the BIOS main menu under `Storage', then
`Options', then the SATA Emulation [...] heading item:
  
  SATA Emulation—Sets the SATA emulation mode with the following options:


  RAID + AHCI–both the RAID and AHCI OPROMs execute. This
  emulation mode is the default and offers the best performance
  and most functionality.

  Separate IDE Controller–offers standard SATA supports (four
  ports only).

  Combined IDE Controller–makes the SATA controller look like an
  IDE controller and offers best IDE compatibility (two ports
  only).
---   ---   ---=---   ---   --- 


Here, I am running a version (OpenIndiana) of Solaris 11.

As far as I can see, my setup will only boot with options 2 or 3.  I've
been using 2 (Separate IDE), which is recommended in some of the Oracle
documentation.


With that setting, the two RED ports on the upper (main) row of SATA
ports are unusable for SATA disks.

Further, attempting to plug a pair of WD Black 1TB drives (new) into
the two rightmost SAS/SATA ports on the bottom row fails.  The BIOS
fails to recognize them.

So far I've only been able to use the first 4 SATA ports on the top row,
beginning with the leftmost port.

I really need to be able to use all the available SAS/SATA ports for
SATA discs.


What you really want is AHCI and no RAID.
Running it as an emulated IDE is rather sub-optimal, but maybe your only 
choice unless you go and buy a separate disk controller.


--
Andrew Gabriel



Re: [OpenIndiana-discuss] Issue adding a zone to an older OI system

2014-07-29 Thread Andrew Gabriel

On 07/29/14 10:57 PM, Jason J. W. Williams wrote:

Hello,

We have an older system running oi_151a with several zones no problem
(machine went up around November 2011).

Installed a new zone with no issue, but when I boot it for the first
time it never brings up the configuration wizard… it never gets past
this:

  https://gist.github.com/anonymous/578560879bd586917860

Have waited upwards of 30 minutes for "zlogin -C" to show the system
setup wizard, but no dice. Tried halting/uninstalling/reinstalling
the system several times.

Any ideas are greatly appreciated.


The nice thing about it being a zone is that you can look into it from the 
global zone and see what it's doing.

I would start by getting a list of the processes:
ps -fz newzone

Also, you might be able to zlogin (without -C) and do svcs -a to see 
where startup has got to. The zlogin might not work if it hasn't got far 
enough.


--
Andrew Gabriel



Re: [OpenIndiana-discuss] Exclusive IP zones under VMware

2014-04-21 Thread Andrew Gabriel

Sounds like similar problem as under Virtualbox.
The emulated e1000g claims to support 15 mac addresses, but actually 
only the first mac address works, so frames addressed to the other mac 
addresses are not passed up.
If the emulated e1000g correctly claimed to support only 1 mac address, 
then the driver would put it into promiscuous mode when it needs more 
than one mac address.
I know yours looks like it is in promiscuous mode, but try seeing if it 
works whilst you snoop the interface (in promiscuous mode), and it would 
be interesting to know whether snoop can see the frames sent to the second 
mac address.


The other possibility is that the driver has done the right thing 
putting the emulated e1000g into promiscuous mode, but some VMware 
configuration may not be allowing it to really work this way.



On 21/04/2014 16:20, Christopher X. Candreva wrote:


Now that I've copied the zone into the new master zone running under VMware,
I've run into a problem with IP networking.  I've set up the vnic, assigned
the IP, and can ping/access the IP from the global zone. However I can't
access that IP from the LAN. Oddly, the arp table in the external machine
I'm pinging from DOES have an entry with the correct MAC address.

The only solution I found in my searches was people who needed to put the
interface on the global zone into promiscuous mode; however, mine is already in
PROMISC mode:


Is there anything different that needs to be done with exclusive IP
networking in a VMware guest ?



chris@Zeb:~$ dladm show-link
LINKCLASS MTUSTATEBRIDGE OVER
e1000g0 phys  1500   up   -- --
vnic2   vnic  1500   up   -- e1000g0
vnic1   vnic  1500   up   -- e1000g0

chris@Zeb:~$ /sbin/ifconfig
lo0: flags=2001000849 mtu 8232 
index 1
 inet 127.0.0.1 netmask ff00
e1000g0: flags=1000943 mtu 1500 
index 2
 inet 216.187.52.36 netmask ff00 broadcast 216.187.52.255
lo0: flags=2002000849 mtu 8252 
index 1
 inet6 ::1/128





Re: [OpenIndiana-discuss] zpool errors a bit confusing

2014-04-06 Thread Andrew Gabriel

On 06/04/2014 15:50, Harry Putnam wrote:

One unexpected result is that vbox seems to create the same-named
discs under the normal directory for your VM discs, but they never
grow beyond 4MB, while the same-named discs out on the external
drive may grow to whatever size was set when created.
So, the end result is: you have disc-10 under your VM disc directory and
disc-10 out on the external drive, and the only one growing is on the
external.  This all seems to work seamlessly with the OI OS.

Must be something like a place marker that vbox finds necessary to
create.


If they are VMDK files, these can contain a redirection to one or more 
VMDK extent files. The first 4kB (less the magic header) contains the 
VMDK descriptor file in plain text (NUL padded).


dd if=xxx.vmdk count=7 skip=1

and look under the Extent description section, which will be pointing to 
the real file(s). (In the simple case of a single VMDK file, it points 
to itself.)


--
Andrew




Re: [OpenIndiana-discuss] sd-config-list= vid/pid string syntax in sd.conf

2014-03-30 Thread Andrew Gabriel

On 29/03/2014 21:15, Reginald Beardsley wrote:

The string matching logic in sd_sdconf_id_match() in sd.c is odd and seems to 
cause a good bit of confusion.  In particular it does not conform to shell 
wildcard rules which I think is what most people would expect.

Can anyone point to why the current syntax and semantics were chosen?

Does anyone know a reason why it should not conform to shell wildcard rules? 
The alternative is full regular expressions, but that seems a bit overkill and 
possibly confusing as well.


This is usually done to limit the potential stack usage in the kernel, 
which is a much more limited resource than in user space, where RE and 
shell matching are found.


--
Andrew




Re: [OpenIndiana-discuss] Recovering from power loss on USB ZFS pool?

2014-03-28 Thread Andrew Gabriel

In my experience, one USB stick usually works OK.
With multiple USB sticks all accessed together, you usually find the 
transport to all of them goes up and down like a yoyo, as though 
something was getting the threading of the connections across USB 
screwed up. (This is a shame, because I'd love to use it to mock up a 
storage array for ZFS demos.)


IIRC, this issue started with a new USB framework early in Solaris 10, 
and is still like that in current Solaris 11.


USB disks can work better than USB sticks, but I haven't ever tried a 
direct substitution in the same environment where USB sticks fail, so I 
don't know if that's universally true.


Some USB sticks have only limited command support. If you get hangs when 
trying to use only one USB stick at a time, add the following to 
/kernel/drv/scsa2usb.conf:


attribute-override-list="vid=* reduced-cmd-support=true";

and then "update_drv -f scsa2usb" (or reboot). However, this does 
nothing to fix the multiple USB sticks issue above. (You can make this 
option selective for specific USB stick models only - see the comments 
in the file.)



On 28/03/2014 15:04, Michael Stapleton wrote:

I'm not sure when things changed, but way back in the OpenSolaris days,
I had the root drive in my laptop mirrored to an external USB drive.
I never had problems back then. I would do a demonstration where I would
remove the USB drive while the laptop was up and running, and then plug
the USB submirror into another laptop and boot from it.
Never had a problem. I could even reattach the USB drive to my laptop
and it would resilver automatically.

Is the problem ZFS or USB or FMA? No idea. But there was a regression of
sorts.
I don't think Solaris 11 suffers from this.


Mike


--
Andrew Gabriel




Re: [OpenIndiana-discuss] Using large (3-4 TB) USB disks for backups

2014-03-26 Thread Andrew Gabriel

Have you got any other disk which can format as ashift=12?
(You could even use an iSCSI LUN from another system with blocksize set 
to 4k.)


If so, start by creating a zpool on that. Then attach your USB drive as 
a mirror, and it will have ashift=12. Then detach the original disk from 
the mirror, and expand the zpool to fill the whole drive.


Yes, we should have a command line option to set the ashift given how 
much trouble the failure to autodetect is causing in many cases (and you 
may want to override even a correct autodetect).


ashift is really a per-top level vdev property rather than a pool 
property, and we don't currently have any top level vdev properties 
handled by the zpool command.


However, for your usage case of just storing large send streams, I 
wouldn't go to any extra bother just to create ashift=12. There are 
other situations where it might actually make a noticeable difference, 
but I would be surprised if you see any in this case. You also reduce 
the scope for rescue if the pool gets damaged, as ashift=12 has only 32 
previous uberblocks available to work back through (versus 128 for 
ashift=9).


--
Andrew



Re: [OpenIndiana-discuss] Using large (3-4 TB) USB disks for backups

2014-03-25 Thread Andrew Gabriel

On 26/03/2014 03:07, Reginald Beardsley wrote:

Has anyone been able to  ZFS format a 3-4 TB  USB drive w/ OI so that it worked 
properly?  I was defeated by a 3 TB Toshiba drive, but am hoping some other 
make might work  w/o generating misaligned messages.


It helps if you paste in the terminal window showing the exact commands 
you issued and their responses.


--
Andrew




Re: [OpenIndiana-discuss] Recommendations for fast storage

2013-04-16 Thread Andrew Gabriel

Mehmet Erol Sanliturk wrote:

I am not an expert on this subject, but based on my reading of e-mails 
on various mailing lists and of some relevant Wikipedia pages about SSD 
drives, the following points are mentioned as SSD disadvantages (even 
for "Enterprise" labeled drives):


SSD units are very vulnerable to power cuts during operation, ranging 
from complete failure (so they can no longer be used) to complete loss 
of data.


That's why some of them include their own momentary power store, or in
some systems, the system has a momentary power store to keep them powered
for a period after the last write operation.


MLC (Multi-Level Cell) SSD units have a short lifetime if they are 
continuously written (they are more suited to write-once, in a 
limited-number-of-writes sense, read-many use).

SLC (Single-Level Cell) SSD units have a much longer life span, but 
they are expensive compared to MLC SSD units.

SSD units may fail due to write wearing at an unexpected time, making 
them very unreliable for mission-critical work.


All the Enterprise grade SSDs I've used can tell you how far through their
life they are (in terms of write wearing). Some of the monitoring tools pick
this up and warn you when you're down to some threshold, such as 20% left.

Secondly, when they wear out, they fail to write (effectively become write
protected). So you find out before they confirm committing your data, and
you can still read all the data back.

This is generally the complete opposite of the failure modes of hard drives,
although like any device, the SSD might fail for other reasons.

I have not played with consumer grade drives.


Due to the above points (which may perhaps be wrong), personally I would 
select spinning-platter SAS disks, and up to now I have not bought any 
SSD for these reasons.

The above points are a possible set of disadvantages for consideration.


The extra cost of using loads of short stroked 15k drives to get anywhere
near SSD performance is generally prohibitive.



Re: [OpenIndiana-discuss] Write protected USB stick

2013-04-13 Thread Andrew Gabriel

Brogya'nyi Jo'zsef wrote:

Hi

I use a USB stick on my OpenIndiana system and there was a power failure. After 
this my USB stick does not work properly.
When I tried to format it I received an error message. Both fdisk and 
rmformat tell me the stick is write protected.

How do I change this status on my stick?
I can't believe it - it hasn't got any switch on it. So do you know any 
useful trick? I tried on Win7, Linux, and OI.

The "dd" command does not work on Linux either.
The stick is a Kingston DT111. Now I'm waiting for the support answer, but 
it takes a long time. Thanks in advance.


You didn't give the results of reading and writing on other OS's, but
it sounds like it might be a protective measure by the USB stick in the
event of unrepairable damage, giving only the possibility to get the
data off it. Kingston's FAQ hints at this behavior.
You would probably do better asking this on a hardware forum - it's
unlikely to be anything related to OpenIndiana.



Re: [OpenIndiana-discuss] 3737 days of uptime

2013-04-07 Thread Andrew Gabriel

Edward Ned Harvey (openindiana) wrote:

From: Ben Taylor [mailto:bentaylor.sol...@gmail.com]

Patching is a bit of arcane art.  Some environments don't have
test/acceptance/pre-prod with similar hardware and configurations, so
minimizing impact is understandable, which means patching only what is
necessary.


This thread has long since become pointless and fizzled, but just for the fun 
of it:

I recently started a new job, where updates had not been applied to any of the 
production servers in several years.  (By decree of former CIO).  We recently 
ran into an obstacle where some huge critical deliverable was not possible 
without applying the updates.  So we were forced, the whole IT team working 
overnight on the weekend, to apply several years' backlog of patches to all the 
critical servers worldwide.  Guess how many patch-related issues were 
discovered.  (Hint:  none.)

Patching is extremely safe.  But let's look at the flip side.  Suppose you encounter the rare situation where patching *does* cause a problem.  It's been known to happen; heck, it's been known to happen *by* *me*.  You have to ask yourself, which is the larger risk?  Applying the patches, or not applying the patches?  


First thing to point out:  Suppose you patch something and it goes wrong ...  
Generally speaking you can back out of the patch.  Suppose you don't apply the 
patch, and you get a virus or hacked, or some data corruption.  Generally 
speaking, that is not reversible.

For the approx twice in my life that I've seen OS patches cause problems, and then had to reverse out the patches...  I've seen dozens of times that somebody inadvertently sets a virus loose on the internal network, or a server's memory or storage became corrupted due to misbehaving processes or subsystem, or some server has some kind of instability and needs periodic rebooting, or becomes incompatible with the current release of some critical software or hardware, until you apply the patches.  


Patches are "bug fixes" and "security fixes" for known flaws in the software.  You can't say 
"if it ain't broke, don't fix it."  It is broke, that's why they gave you the fix for it.  At best, you can 
say, "I've been ignoring it, and we haven't noticed any problems yet."


10 years ago, it was the case that something like half the support calls would 
have never arisen if the system was patched up to date. (I don't know the 
current figure for this.)

OTOH, I have worked in environments where everything is going to be locked down 
for 6-10 years. You get as current and stable as you can for the final testing, 
and then that's it - absolutely nothing is allowed to change. As someone else 
already hinted earlier in the thread, the security design of such 
infrastructure assumes from the outset that the systems are riddled with 
security holes, and they need to be made secure in some other (external) way.

--
Andrew



Re: [OpenIndiana-discuss] stty source code

2012-09-10 Thread Andrew Gabriel

James Carlson wrote:

Reginald Beardsley wrote:
  

I don't want any inbound connections.  But the documentation I read suggested that one 
had to setup ttymon on the port.  Possibly for no reason other than, "This is what I 
did when it finally worked."



Where'd you read that?  Even when it's active, ttymon just camps out on
the /dev/term/ (dial-in only) nodes, meaning that it'll stay asleep
while you do your work on /dev/cua/.

And it's not active on any of the normal serial ports by default and I
believe that no reasonable person should make it active.

The main blocking item in removing it is making sure that the system
console service still works right after removal.


The console uses ttymon -g anyway (i.e. ttymon pretending to be getty, 
and ignoring all the SAC/SAF stuff).

On rare occasions when I set up another login port, I use ttymon -g on that too.

--
Andrew



Re: [OpenIndiana-discuss] stty source code

2012-09-06 Thread Andrew Gabriel
The stty changes will be lost when the last stream closes and the port 
settings are reset, which is probably the very instant that the stty 
command which makes the changes exits in your example. So this will only 
work if something else is holding the port open. That's why I said try 
it when tip is running.


Alternatively, just to check you can change the parameter, use some 
other command in another window to keep it open, such as

sleep 1 < /dev/cua/0

Reginald Beardsley wrote:

Andrew,

There are no serial ports and hence no /dev/term/b on this system.  In fact if the 
USB<->RS-232 adapter is not plugged in, there is no /dev/term or /dev/cua 
either. Which may explain some of the weirdness when I was setting up the port as I 
don't think I had plugged in the Keyspan adapter when I started configuring it. 
/dev/{term,cua}/0 don't get created until you plug in the adapter and disappear when 
you unplug it.

ttymon holds /dev/term/0, so even root or uucp cannot open /dev/term/0.  stty 
just hangs until interrupted.  Everything I've read suggests that it is not 
possible to have a port be outbound only and that is must be bidirectional.  
However, I've not attempted to test that. I've got enough annoyance as it is.  
However, I  can see lots of opportunity for trouble w/ ttymon running on a port 
that goes away when the USB-serial adapter is unplugged.

I can open /dev/cua/0 w/ stty, but do not seem to be able to make any changes.
  


--
Andrew




Re: [OpenIndiana-discuss] stty source code

2012-09-05 Thread Andrew Gabriel

Reginald Beardsley wrote:

Having established that the stty behavior is a red herring produced by 
/bin/tcsh, here is a précis of the situation (pwd is /etc):

oi%rhb {180} /app/bin/rcsdiff remote ttydefs
===
RCS file: RCS/remote,v
retrieving revision 1.1
diff -r1.1 remote
5a6
  

u0::dv=/dev/cua/0:br#9600:el=^C^S^Q^U^D:ie=%$:oe=^D:


===
RCS file: RCS/ttydefs,v
retrieving revision 1.1
diff -r1.1 ttydefs
63a64
  

msp430:9600 -parenb cs8 -cstopb ixon opost olcuc onlcr:9600 hupcl sane::msp430


oi%rhb {181} pmadm -l
PMTAG  PMTYPE SVCTAG FLGS ID   
zsmon  ttymon tty0   uroot /dev/term/0 b - 
/usr/bin/login - msp430 ldterm,ttcompat login:  - - -  -S n #MSP430
oi%rhb {182} /bin/sh
rhb@openindiana:/etc$ tip u0

connected
  ok.
   ok.
ok.
   ~
[EOT]
rhb@openindiana:/etc$ stty -a 


You're running stty on your terminal, not on /dev/term/b.
Try running "stty -a < /dev/term/b" whilst you have the tip running on it.
(or I'm misunderstanding what you're trying to do.)

--
Andrew



Re: [OpenIndiana-discuss] BTRFS for OI, anyone?

2012-05-09 Thread Andrew Gabriel

James Carlson wrote:

Each currently has advantages over the other in different departments.
On balance, from what I've seen of BTRFS in Fedora, ZFS has much more
over BTRFS than the reverse.  But that's to be expected; ZFS is much
more mature.

Frankly, I don't expect that to be a useful comparison unless someone is
planning to build a distribution where the default file system is
changed from ZFS to BTRFS.  If someone is going to do that, then there
are a lot of other things that have to change -- the packaging system
and boot sequence all currently depend heavily on ZFS in OpenIndiana.
It'd be a lot of work to change all of that.
  


I went to a btrfs presentation a few weeks ago (in Holland).
There are already package-install changes in some Linux distros to support
btrfs snapshots, and btrfs boot environments are under development, so
changes in either direction may quickly become less than you might imagine.

What did surprise me a bit was, in an audience of what I expect were
mainly Linux users, almost no one had used btrfs (and no one at all had
it on a Production system which was less of a surprise), but when asked,
probably 2/3rds of them had used ZFS.

--
Andrew



Re: [OpenIndiana-discuss] BTRFS for OI, anyone?

2012-05-09 Thread Andrew Gabriel

Hans J. Albertsson wrote:

Would BTRFS be a viable FS for Openindiana?


I would be interested to know what feature(s) of it you want that you 
think are missing from OI?


--
Andrew



Re: [OpenIndiana-discuss] Question about halting a zone

2012-05-08 Thread Andrew Gabriel

Jeppe Toustrup wrote:

On Mon, May 7, 2012 at 3:57 PM, Mark Creamer  wrote:
  

I was reading about how to update non-global zones, and found a
Solaris document which says the following:

1. Update the Global Zone
2. Reboot
3. Halt the non-global zone   (zoneadm -z myzone halt)
4. Detach the zone  (zoneadm -z myzone detach)
5. Re-attach the zone with -u  (zoneadm -z myzone attach -u)

In my testing, this seems to go fine. My question is, what happens
when you halt a zone - for example, if MySQL is running on that
non-global zone, should you stop it first before halting the zone to
avoid the risk of corrupting data? Or is a halt safe without stopping
any running services first?



I normally do:
zlogin myzone init 5
  


Solaris 11 has "zone shutdown" to do this.
I don't recall if that was added before or after the fork.

--
Andrew



Re: [OpenIndiana-discuss] Install / Partition Help

2012-03-02 Thread Andrew Gabriel

Wallbank, Mark wrote:

Hi
I am trying to install openindiana (x86) and would like to customise the 
partitions/slices. I would like to have a solaris2 fdisk for the whole of the 
disk but when it comes to the format partitions I would like to have about 10 
to 30 Gig at the start of the disk for the os; then setup another slice for the 
data to be shared, however the installer only lets me set one value. I have 
tried setting up the partitions first by hand but there doesn't appear to be an 
option to leave the fs intact and point it at a slice (format partition). Hope 
this makes sense. Any ideas..?
  


I think the installer will use all of the Solaris2 primary fdisk 
partition for the rpool, without giving you any ability to configure that.


What you can do (at least in the Oracle installer, which I suspect will 
be the same in the OI installer) is to create additional primary fdisk 
partitions, and these can be used directly for additional zpools, or 
swap devices, or dump devices, etc. (which you'll have to create after 
installation). You should not have more than one primary fdisk partition 
of the same type on a disk, so choose any partition type which Solaris 
and all other software on the system isn't going to treat specially 
("other" is OK, but if you want more than one extra, you may have to 
pick another one too). When specifying the device nodes, the four 
primary fdisk partition device names are *p[1-4] (with *p0 being the 
whole disk irrespective of any partitioning - don't use that by mistake, 
and don't use the Solaris2 one which has the VTOC slices in it (normally 
*p1)).


There's a separate discussion to be had about how sensible it is (or 
isn't) to have more than one zpool on a disk, and it certainly defeats 
some of the aims of ZFS.


--
Andrew



Re: [OpenIndiana-discuss] access to partition table inside zvol

2011-12-29 Thread Andrew Gabriel

Evgenii Lepikhin wrote:

2011/12/23 Evgenii Lepikhin :

  

Looping back through iSCSI is the way.
  

But it's a bit complicated. No other way?



I have patched the lofi subsystem to support offsets. If anybody is
interested, the patch is attached.
  


There's been a long standing RFE outstanding for lofi to become aware of 
disk labeling, so it can give device access to individual FDISK 
partitions and slices (SunOS or GPT/EFI). Now that the driver disk 
labeling code has been commoned up 
(http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/cmlb.c), 
this should be significantly easier to do than it was originally when 
this code was sprinkled all over the various target drivers. I talked 
about this at a LOSUG meeting probably a couple of years ago when 
someone asked a question about doing something similar to what you want 
to do. Proper label support would enable you do to access your EFI/GPT 
slices using device nodes possibly something like:

/dev/lofi/1.d/s0, /dev/lofi/1.d/s1, etc

IIRC, the suggestion which was floated at the LOSUG discussion for a 
labeled loopback block device was to have something like:

/dev/lofi/1 -> 1.d/p0
/dev/lofi/1.d/p0
/dev/lofi/1.d/p1
/dev/lofi/1.d/p2
/dev/lofi/1.d/p3
/dev/lofi/1.d/p4
/dev/lofi/1.d/s0
/dev/lofi/1.d/s1
/dev/lofi/1.d/s2
/dev/lofi/1.d/s3
...etc
and likewise for /dev/rlofi/...

Offset support in lofi might be a good idea too for other reasons, but 
in my view, something more automatic like the above should be used to 
handle the more common cases of partitions/slices.


--
Andrew Gabriel



Re: [OpenIndiana-discuss] access to partition table inside zvol

2011-12-22 Thread Andrew Gabriel

Evgenii Lepikhin wrote:

Hello,
I'm new to OpenIndiana/Solaris and I have the question without answer:
I exported ZFS volume with iSCSI and installed Windows on the exported
volume. Now I have zvol with partition table inside (EFI/GPT) having
two partitions created. How can I mount one of that partitions
locally? mount knows nothing about "-o offset" (Linux style), ntfs-3g
doesn't know this also.
  


Looping back through iSCSI is the way.


By the way, I imported this volume back to OpenIndiana machine just
for testing. format/fdisk utility sees this disk, but shows strange
information about partitions:
             Total disk size is 65270 cylinders
             Cylinder size is 224910 (512 byte) blocks

                                                 Cylinders
      Partition   Status    Type          Start   End     Length    %
      =========   ======    ============  =====   =====   ======   ===
          1                 EFI               0   65269    65270   100
  


That looks correct. An EFI/GPT partitioned disk is defined to have just 
one single FDISK partition of type EFI which must encompass the whole 
disk. This was done specifically so that an EFI/GPT partitioned disk is 
visible to users of FDISK, so they won't accidentally blow it away by 
thinking the disk isn't formatted.


If you use prtconf(1M), I think you'll see the GPT partitioning inside 
the EFI FDISK partition (which is where your two GPT partitions are). It 
works a bit like DOS logical disks inside an Extended DOS FDISK 
partition (except that in the case of an EFI/GPT partitioned disk, no 
other FDISK partition is permitted on the disk).
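
The single type-EFI FDISK entry can be sketched in a few lines (a hypothetical illustration, not illumos code; the constants are the standard MBR layout, with 0xEE as the GPT protective type):

```python
import struct

SECTOR = 512
EFI_PROTECTIVE = 0xEE  # the FDISK type that fdisk reports as "EFI"

def protective_mbr(total_lbas):
    """Build the protective MBR carried by a GPT-labeled disk: exactly one
    primary FDISK entry, type 0xEE, spanning the whole disk (capped at the
    32-bit LBA limit) so legacy tools see the disk as fully allocated."""
    mbr = bytearray(SECTOR)
    count = min(total_lbas - 1, 0xFFFFFFFF)  # entry starts at LBA 1
    entry = struct.pack("<B3sB3sII",
                        0x00,             # status
                        b"\x00\x02\x00",  # CHS start (conventional)
                        EFI_PROTECTIVE,   # partition type
                        b"\xff\xff\xff",  # CHS end (saturated)
                        1, count)         # start LBA, LBA count
    mbr[446:462] = entry                  # first of the four primary slots
    mbr[510:512] = b"\x55\xaa"            # boot signature
    return bytes(mbr)

def primaries(mbr):
    """Yield (type, start_lba, n_lbas) for each non-empty primary entry."""
    for i in range(4):
        e = mbr[446 + 16 * i:462 + 16 * i]
        if e[4]:
            start, count = struct.unpack_from("<II", e, 8)
            yield e[4], start, count

# Geometry from the fdisk output quoted above: 65270 cylinders of 224910 blocks.
mbr = protective_mbr(65270 * 224910)
print(list(primaries(mbr)))  # one entry, type 0xEE, covering the disk
```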


--
Andrew



Re: [OpenIndiana-discuss] Zone Privileges for a Normal User

2011-11-07 Thread Andrew Gabriel
I think "manage" is for starting, stopping, etc (zoneadm) the zone, not 
for configuring it (zonecfg).
If "manage" allowed the user to configure the zone, they could also 
change who could login and manage the zone, remove IP address 
restrictions, etc, which is not desirable.



Deniz Rende wrote:

Hello,

The link provided below is a very good source

http://trochejen.blogspot.com/2010/06/zones-delegated-administration.html


 but it still does not answer my question: why, even though I specifically
set the user to manage in the relevant file:

solaris.admin.wusb.read,solaris.device.cdrw,solaris.device.mount.removable,solaris.mail.mailq,solaris.profmgr.read,solaris.zone.login/zdev2,solaris.zone.manage/zdev2

the user is unable to zonecfg zdev2.


So I am wondering if this entry:

solaris.zone.manage/zdev2

has some problems in openindiana or does this only apply to Solaris 11?


On Fri, Nov 4, 2011 at 6:21 PM, Deniz Rende  wrote:

  

Hello,

I am using openindiana 151a server edition in VirtualBox.

root@oi151a:~# uname -a
SunOS oi151a 5.11 oi_151a i86pc i386 i86pc Solaris

I have the following zones in the system:

root@oi151a:~# zoneadm list -civ
  ID NAME     STATUS    PATH           BRAND   IP
   0 global   running   /              ipkg    shared
   1 zdev     running   /zones/zdev    ipkg    shared
   2 zdev2    running   /zones/zdev2   ipkg    shared

I have a user called macuser1 with the following auths and profiles:

macuser1@oi151a:~$ auths

solaris.admin.wusb.read,solaris.device.cdrw,solaris.device.mount.removable,solaris.mail.mailq,solaris.profmgr.read,solaris.zone.login/zdev2,solaris.zone.manage/zdev2


macuser1@oi151a:~$ profiles
Zone Management
ZFS File System Management
Basic Solaris User
All

What I am trying to do is to dedicate the zdev2 zone to macuser1 but
also let this user manage it.

I got the first part successfully:

macuser1@oi151a:~$ pfexec zlogin zdev2
[Connected to zone 'zdev2' pts/3]
Last login: Fri Nov  4 17:22:49 on pts/3
OpenIndiana (powered by illumos)    SunOS 5.11    oi_151a    September 2011
root@zdev2:~#

and as intended the user is not able to login to zdev zone:

macuser1@oi151a:~$ pfexec zlogin zdev
zlogin: macuser1 is not authorized  to login to zdev zone.

which is good, but I can't get the user to configure it's own zone, ie:

macuser1@oi151a:~$ pfexec zonecfg -z zdev2
WARNING: you do not have write access to this zone's configuration file;
going into read-only mode.
zonecfg:zdev2>exit

which is giving me read-only mode.

How do I let this user manage (i.e. use zonecfg on) the zdev2 zone? I
appreciate the feedback.

Regards,

Deniz Rende


--
Deniz Rende



--
Andrew Gabriel



Re: [OpenIndiana-discuss] Openindiana in VMWare and usb devices

2011-06-26 Thread Andrew Gabriel

Gabriele Bulfon wrote:

No way. USB CDC modem/fax under VMware ESXi 4.1 does not work.
Everything looks fine under the OS (devices, /dev/cua, messages from the USB 
management), but then tip does not attach, as if the dev file is missing or 
just not responding.
Does anyone have the ability to check why this is?
  


Try running this dtrace script while doing the failing tip, to see
where in the kernel the ENXIO is coming from. Drop it into a
file with the #! as the first line and with +x access, and just run
the file.

#!/usr/sbin/dtrace -Fs
#pragma D option bufsize=1m
#pragma D option specsize=1m

syscall::open:entry
/execname == "tip"/
{
        /*
         * The call to speculation() creates a new speculation.  If this fails,
         * dtrace(1M) will generate an error message indicating the reason for
         * the failed speculation(), but subsequent speculative tracing will be
         * silently discarded.
         */
        self->spec = speculation();
        speculate(self->spec);

        /*
         * Because this printf() follows the speculate(), it is being
         * speculatively traced; it will only appear in the data buffer if the
         * speculation is subsequently committed.
         */
        printf("%s", stringof(copyinstr(arg0)));
}

fbt:::entry
/self->spec/
{
        /*
         * A speculate() with no other actions speculates the default action:
         * tracing the EPID.
         */
        speculate(self->spec);
        printf("%x %x %x %x %x", arg0, arg1, arg2, arg3, arg4);
}

fbt:::return
/self->spec/
{
        /*
         * A speculate() with no other actions speculates the default action:
         * tracing the EPID.
         */
        speculate(self->spec);
        printf("%x errno=%d", arg1, errno);
}

syscall::open:return
/self->spec/
{
        /*
         * To balance the output with the -F option, we want to be sure that
         * every entry has a matching return.  Because we speculated the
         * open entry above, we want to also speculate the open return.
         * This is also a convenient time to trace the errno value.
         */
        speculate(self->spec);
        trace(errno);
}

syscall::open:return
/self->spec && errno == ENXIO/
{
        /*
         * If errno is ENXIO, we want to commit the speculation.
         */
        commit(self->spec);
        self->spec = 0;
}

syscall::open:return
/self->spec && errno != ENXIO/
{
        /*
         * If errno is not ENXIO, we discard the speculation.
         */
        discard(self->spec);
        self->spec = 0;
}

--
Andrew



Re: [OpenIndiana-discuss] oracle removes 32bit x86 cpu support for solaris 11 will OI do same?

2011-06-25 Thread Andrew Gabriel

Michael Stapleton wrote:
While we are talking about 32 | 64 bit processes:
Which one is better?

Faster?
More efficient?
  
Initially, assuming a 32 versus 64 bit build doesn't change any 
algorithms...


On x86, a 64 bit build of the same program will typically run ~50% 
faster if it's CPU-bound, because more registers are available for the 
compiler/optimizer to use. There's a wide variance depending what the 
program does (I have an example which gets much better than 50% gain). 
If it's not CPU-bound (and most things aren't), it makes no difference. 
However, if the larger pointers and data items push the 64 bit program's 
working set size over what fits in the CPU cache whereas the 32 bit 
version does fit in the cache, then you can in theory see the 32 bit 
version winning.
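
The working-set arithmetic behind that last point is easy to check. Taking a hypothetical list node of two pointers and an int (a made-up structure for illustration; `ctypes` just reports this interpreter's type sizes):

```python
import ctypes

# Basic C type sizes on this (typically 64-bit LP64) build:
print("void*:", ctypes.sizeof(ctypes.c_void_p), "bytes")
print("long: ", ctypes.sizeof(ctypes.c_long), "bytes")

# A hypothetical node {node *next; node *prev; int key;}:
ilp32 = 4 + 4 + 4   # two 32-bit pointers + int = 12 bytes
lp64 = 8 + 8 + 4    # two 64-bit pointers + int = 20 bytes (before padding)
growth = 100.0 * (lp64 - ilp32) / ilp32
print("node payload: %d -> %d bytes (+%.0f%%)" % (ilp32, lp64, growth))
```

So a pointer-heavy structure needs roughly two-thirds more cache after a straight 64-bit rebuild, which is where the "working set no longer fits" effect comes from.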


On sparc, a 64 bit build of the same program does not benefit from any 
more registers like on x86, but it does pay the price for a larger 
working set size, and I typically see a 10-14% performance reduction for 
a CPU-bound program which has been just rebuilt 64bit.


However, if you can use the 64 bit address space to change the 
algorithms used by your app, such as mmaping files rather than doing 
loads of lseek/read/write ops, then you may see additional gains from 
this, and on sparc that will often more than cancel out the reduction in 
CPU performance by some way.
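
The algorithmic change described above can be sketched like this (assumptions: a POSIX-ish platform; the scratch file and offsets are invented for the demo):

```python
import mmap
import os
import tempfile

# A scratch file standing in for the application's data file.
fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(1 << 20))
os.close(fd)

offsets = [i * 4096 for i in range(256)]

# lseek/read style: one syscall pair per access.
via_read = []
with open(path, "rb") as f:
    for off in offsets:
        f.seek(off)
        via_read.append(f.read(16))

# mmap style: map once, then access is ordinary indexing and the
# kernel pages data in on demand.
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
        via_mmap = [m[off:off + 16] for off in offsets]

assert via_read == via_mmap  # same bytes, very different access pattern
os.remove(path)
```

With a 64-bit address space the whole file (however large) can be mapped at once, which is the gain Andrew describes.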


I wouldn't personally bother changing anything much which is shipped 
with the OS (very rarely is the performance of things in /usr/bin an 
issue). However, I would suggest taking these factors into account when 
building the key applications your system is going to run, if you are 
CPU-bound.


--
Andrew



Re: [OpenIndiana-discuss] Advice for building SMF service for non-priviledged processes

2011-06-18 Thread Andrew Gabriel

Andrew Gabriel wrote:

Blake wrote:

I am working on an SMF script to allow a non-root user to manage the
Unicorn Ruby/Rails application server via SMF. But I'm having problems.

We are also using RVM to manage rubies, so I need a way for the method
script to simulate an interactive login so that RVM works properly.

Any ideas/suggestions much appreciated.


I run mpd (music player daemon) as my userid via SMF.

I use the following start method to emulate enough of my login for it
to work:


Sorry, Thunderbird lost the formatting. Also, I thought afterwards that
the dependencies might be useful for you too, so I'll try again and include
the whole manifest... (I don't guarantee it's a perfect example)





  
  
  

  
  

  
  

  
  

  
  
  
  
  


  
  
  

  
  
  

  [SMF manifest XML stripped by the list archive; only the service
  description, "Music Player Daemon (mpd)", survives.]

  




You may need to add more envvars for your app, or start it via a shell
script which does that for you.


--
Andrew






Re: [OpenIndiana-discuss] Advice for building SMF service for non-priviledged processes

2011-06-18 Thread Andrew Gabriel

Blake wrote:

I am working on an SMF script to allow a non-root user to manage the
Unicorn Ruby/Rails application server via SMF.  But I'm having problems.

We are also using RVM to manage rubies, so I need a way for the method
script to simulate an interactive login so that RVM works properly.

Any ideas/suggestions much appreciated.
  


I run mpd (music player daemon) as my userid via SMF.

I use the following start method to emulate enough of my login for it
to work:

[exec_method XML stripped by the list archive; the surviving fragment
shows it used timeout_seconds='60'.]









You may need to add more envvars for your app, or start it via a shell
script which does that for you.

--
Andrew



Re: [OpenIndiana-discuss] Nokia N8-00

2011-06-04 Thread Andrew Gabriel

Olaf Bohlen wrote:

Maxim  writes:

Hi Maxim,

  

I've tried to set up mobile internet using Nokia N8-00 under OI b148,
but looks like system doesn't see my device...
I've expected that new device will be in /dev/N8 or smth like this as it
described in
http://blogs.oracle.com/jameslegg/entry/mobile_internet_under_opensolaris.
But looks like something wrong..

prtconf -v shows only Nokia device as storage, but probably I missed
something.



Have you selected "PC Suite Connection" on your N8 when attaching
the USB cable? If you selected "Storage" you will never see a
serial device. (This may be configured as your default USB
connection see in the settings of your Nokia)

If it is there, you should see a /dev/cua/0 (or 1, 2,...) which
you can use for dialing.


In my experience...

Some E61i work, others don't.
E71 works.
E5 doesn't work.

The ones which don't work still result in a serial port being created 
under /dev/cua/... but they behave like someone forgot to plug any modem 
into them.


--
Andrew



Re: [OpenIndiana-discuss] NTPD and PPM issues?

2011-05-27 Thread Andrew Gabriel

Dan Swartzendruber wrote:

Andrew Gabriel wrote:

Dan Swartzendruber wrote:


I have an openindiana virtual machine running under ESXi. The vmware 
tools are installed and timesync is disabled. I have ntpd configured 
and working with several servers in the us.pool.ntp.org subdomain. 
It all seems to work - comparing the time with 2 other virtual 
machines on the same hypervisor indicates a match to the second. 
Yet, on OI, I see these messages every few minutes:


May 27 14:17:25 nas ntpd[282]: [ID 702911 daemon.notice] frequency 
error -512 PPM exceeds tolerance 500 PPM


Neither of the other VMs running ntpd complain this way. I'm not 
even sure this is VM related or maybe some Opensolaris-related 
thing, so forgive the possible waste of bandwidth (heck, I'm not 
even sure this is really an issue, but I tend to be concerned when I 
see a lot of messages like this...)


Any thoughts on this? Thx!


Don't expect realtime or low latency software to run correctly in a 
hypervisor-virtualized OS.


Andrew, did you read where I mention that OI is the only VM that has 
this issue?


It may be the only one which _notices_ it's having this issue.
Like it may be the only one which notices if a disk returns a corrupted 
block.

You didn't say what the other OS's are.
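
For scale, the figures in the quoted message translate directly into seconds of drift per day (simple PPM arithmetic, not ntpd internals):

```python
SECONDS_PER_DAY = 86400

def drift_per_day(ppm):
    """Seconds gained or lost per day at a given frequency error in
    parts per million."""
    return ppm * 1e-6 * SECONDS_PER_DAY

# ntpd refuses to discipline a clock that is off by more than 500 PPM;
# the VM's reported -512 PPM is just past that line.
print("ntpd tolerance: %.1f s/day" % drift_per_day(500))  # 43.2
print("reported error: %.1f s/day" % drift_per_day(512))  # 44.2
```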

--
Andrew



Re: [OpenIndiana-discuss] NTPD and PPM issues?

2011-05-27 Thread Andrew Gabriel

Dan Swartzendruber wrote:


I have an openindiana virtual machine running under ESXi. The vmware 
tools are installed and timesync is disabled. I have ntpd configured 
and working with several servers in the us.pool.ntp.org subdomain. It 
all seems to work - comparing the time with 2 other virtual machines 
on the same hypervisor indicates a match to the second. Yet, on OI, I 
see these messages every few minutes:


May 27 14:17:25 nas ntpd[282]: [ID 702911 daemon.notice] frequency 
error -512 PPM exceeds tolerance 500 PPM


Neither of the other VMs running ntpd complain this way. I'm not even 
sure this is VM related or maybe some Opensolaris-related thing, so 
forgive the possible waste of bandwidth (heck, I'm not even sure this 
is really an issue, but I tend to be concerned when I see a lot of 
messages like this...)


Any thoughts on this? Thx!


Don't expect realtime or low latency software to run correctly in a 
hypervisor-virtualized OS.


--
Andrew



Re: [OpenIndiana-discuss] Setup serial console on ttyc (the third serial port)

2011-03-03 Thread Andrew Gabriel

wessels wrote:


Can you confirm a few thing for me?
-consplat.c is the only file which needs patching?
-kernel/misc/consconfig and kernel/misc/amd64/consconfig are the only
two binaries which need to be updated


Looks more like it should be
/platform/i86pc/kernel/dacf/consconfig_dacf
/platform/i86pc/kernel/dacf/amd64/consconfig_dacf

--
Andrew



Re: [OpenIndiana-discuss] Setup serial console on ttyc (the third serial port)

2011-03-03 Thread Andrew Gabriel

wessels wrote:

while waiting for the build to complete I took some time to really
look at the code, as I should have done in the first place. I think
quite a bit more patching needs to be done for example boot_console
also needs to be updated. Any more hints would be more than welcome.



<http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/i86pc/io/consplat.c#107>.


Just noticed the url I pasted has a line number. I didn't mean that was 
the only area in the file which needs changing - you will need to look 
through the whole file for relevant code.


Again, I suggest you put up a webrev for comments.

--
Andrew



Re: [OpenIndiana-discuss] Setup serial console on ttyc (the third serial port)

2011-03-03 Thread Andrew Gabriel

wessels wrote:

Goodmorning,

The output you requested is below.
As stated before grub works fine on the third port with these lines:
serial --unit=2 --speed=9600 --word=8 --parity=no --stop=1
terminal --timeout=30 serial


There are 3 ways (that I can think of) for an OS to identify if a system 
has com1-4, and grub probably doesn't do it exactly the same as Solaris, 
so grub seeing it is no guarantee Solaris will.


However, your ls and prtconf -v shows that Solaris has found it in the 
ACPI tables, so that part is OK.




Can you confirm a few thing for me?


I can't because the work was never done before.
They were what I was aware needed doing.


-consplat.c is the only file which needs patching?


It would be useful to see the webrev, to see if something doesn't look 
right.



-kernel/misc/consconfig and kernel/misc/amd64/consconfig are the only
two binaries which need to be updated
-for testing only the two consconfig files can be replaced by copying
them from the proto area. No additional steps need to be done, like
creating a new BE. Nor do any other files need to be updated.


You will probably need to rebuild the boot archive.


-the correct kernel line in menu.lst should look like this:
kernel$ /platform/i86pc/kernel/$ISADIR/unix -k -B $ZFS-BOOTFS,console=ttyc

Lastly can you help me build consconfig without running nightly.sh?
That will save quite some time.


I don't understand what you did if you haven't built it already. Did you 
patch the binary? If so, in what way?


I'm out of date on the current build process. Hopefully someone else can.

--
Andrew



Re: [OpenIndiana-discuss] Setup serial console on ttyc (the third serial port)

2011-03-03 Thread Andrew Gabriel

I'm wondering if your system has actually identified com3 at all.
Is it configured in the BIOS on i/o address 3E8 (as com3 should be)?
When you have the system booted, is there actually a /dev/ttyc
and a /dev/term/c, and if so, do the links point to
/devices/pci@0,0/isa@1/asy@1,3e8:c ?

If it's not there, then Solaris has not found com3 in the ACPI
tables, and won't know the system has a com3 port.

wessels wrote:

hi,

I set up the build environment (what a pita), patched the file, and did a
build. I only replaced both consconfig files, but got no console. Are there
other files as well which need patching to make this work?

tia

On Wed, Mar 2, 2011 at 9:50 PM, Andrew Gabriel
 wrote:

I suspect it would be very easy to fix this in the kernel.
I fixed the missing bits in the asy(7D) driver 7 or 8 years ago when I was
adding 16650/16750 support. I think all that's left to fix is adding ttyc
and ttyd support to usr/src/uts/i86pc/io/consplat.c
<http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/i86pc/io/consplat.c#107>.

The other bit that was outstanding at that time was the Device Configuration
Assistant (DCA), which was a complete swine to build. However, that vanished
in Solaris 10 Update 1, being replaced by grub and some device enumeration
support in the kernel, and I would guess grub probably does support com3 and
com4.


--
Andrew



Re: [OpenIndiana-discuss] Setup serial console on ttyc (the third serial port)

2011-03-02 Thread Andrew Gabriel

I suspect it would be very easy to fix this in the kernel.
I fixed the missing bits in the asy(7D) driver 7 or 8 years ago when I 
was adding 16650/16750 support. I think all that's left to fix is adding 
ttyc and ttyd support to usr/src/uts/i86pc/io/consplat.c 
<http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/i86pc/io/consplat.c#107>.


The other bit that was outstanding at that time was the Device 
Configuration Assistant (DCA), which was a complete swine to build. 
However, that vanished in Solaris 10 Update 1, being replaced by grub 
and some device enumeration support in the kernel, and I would guess 
grub probably does support com3 and com4.


wessels wrote:

ough, that's a real bummer and waste of time. This restriction is not
very well documented. Everybody talked about ttya and ttyb but no ever
mentioned that other ports are not supported apart from usb-serial.

Now I've to figure out how the ports will get reshuffled if I change
the SOL port from com3 to com2. Both com1 and com2 are in use. I
wonder what's going to happen to them... If I get this working, that's
all three ports, I'll put up a page on the wiki.
Thanks for the notice so far. Anything else I should worry about?

On Wed, Mar 2, 2011 at 7:44 PM, Andrew Gabriel
 wrote:

wessels wrote:

Perhaps all described procedures work on ttya but I can't get a console on
ttyc.

ttyc (the third serial port) is a SerialOverLan port

Sorry, but the kernel only recognizes ttya and ttyb (and usb-serial) as
console= values, not ttyc.

It's been a very long missing feature on Solaris x86, but became less
important as the number of motherboard serial ports has reduced over the
years, and ISA-bus serial cards became unusable (no ISA-bus slots anymore).

Can you disable ttya (com1) in the BIOS, and then reconfigure ttyc (com3) to
have the i/o address and irq which were associated with com1, so it looks
like com1 (ttya) to the OS? (IIRC, 0x3f8 and IRQ 4)

--
Andrew


--
Andrew



Re: [OpenIndiana-discuss] Setup serial console on ttyc (the third serial port)

2011-03-02 Thread Andrew Gabriel

wessels wrote:

Perhaps all described procedures work on ttya but I can't get a console on ttyc.

ttyc (the third serial port) is a SerialOverLan port


Sorry, but the kernel only recognizes ttya and ttyb (and usb-serial) as 
console= values, not ttyc.


It's been a very long missing feature on Solaris x86, but became less 
important as the number of motherboard serial ports has reduced over the 
years, and ISA-bus serial cards became unusable (no ISA-bus slots anymore).


Can you disable ttya (com1) in the BIOS, and then reconfigure ttyc 
(com3) to have the i/o address and irq which were associated with com1, 
so it looks like com1 (ttya) to the OS? (IIRC, 0x3f8 and IRQ 4)
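
The legacy resource assignments being juggled here are fixed PC conventions, matching the 0x3f8/IRQ 4 and 0x3e8 values mentioned in the thread. As a quick reference (a plain lookup table, not driver code):

```python
# Legacy PC serial ports and the Solaris tty names the asy driver uses.
COM_PORTS = {
    "com1": {"tty": "ttya", "io": 0x3F8, "irq": 4},
    "com2": {"tty": "ttyb", "io": 0x2F8, "irq": 3},
    "com3": {"tty": "ttyc", "io": 0x3E8, "irq": 4},
    "com4": {"tty": "ttyd", "io": 0x2E8, "irq": 3},
}

for name, r in COM_PORTS.items():
    print("%s (%s): io=0x%03X irq=%d" % (name, r["tty"], r["io"], r["irq"]))
```

So making com3 "look like" com1 to the OS means giving it 0x3F8/IRQ 4 in the BIOS, exactly as suggested above.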


--
Andrew



Re: [OpenIndiana-discuss] zpool command: some questions.

2010-11-21 Thread Andrew Gabriel

Anthony Renaud wrote:

I am not sure of the consequences of the following zpool command:

zpool create mypool /dev/dsk/c5t0d0p2


It turns the second primary FDISK partition on that disk into a zpool.


(c5t0d0p2 is where I have the unbootable OpenSolaris partition from where I 
want to recover some files)


It's not clear from what you've said whether that was a root zpool or some 
other zpool. zfs root pools have to be on a slice which is part of an SMI 
labeled partition (on x86). The device name for that will end in s0, not p2. 
If it wasn't a root pool, then it depends how you created it.



Does it prepare the partition for the mount commands? Or does it erase and 
format the partition?


Effectively, erase and format.


Or is it better to use: zpool import ?


Yes.


and after:

zfs set mountpoint=legacy mypool
mount -F zfs mypool /mnt


Can't think why you'd want to do that.

--
Andrew



Re: [OpenIndiana-discuss] ZFS pool slow as molasses

2010-10-05 Thread Andrew Gabriel

Julian Wiesener wrote:

On 10/ 5/10 10:13 PM, Paul Johnston wrote:

Er what sort of name would you expect to see for a sata device?


SATA devices normally have a target, so it's c4t0d0 instead of c4d0. 
However, these are just names; if you want to know what interface type is 
used, you should look at the device path:


This is what an SATA device looks like with Native AHCI disabled:

$ ls -la /dev/dsk/c7d0s0
lrwxrwxrwx   1 root root  51 May 23  2009 /dev/dsk/c7d0s0 -> 
../../devices/p...@0,0/pci-...@1f,2/i...@0/c...@0,0:a


This is what an SATA device looks like with Native AHCI enabled:

$ ls -la /dev/dsk/c3t0d0s2
lrwxrwxrwx   1 root root  49 Jan  6  2009 /dev/dsk/c3t0d0s2 
-> ../../devices/p...@0,0/pci1028,2...@1f,2/d...@0,0:c


Many, especially older, systems have it disabled by default because 
Windows was not able to boot from SATA devices in the past. Also, some 
vendors disabled the Native AHCI switch in their BIOS. If you're lucky 
a BIOS update will make it available; if not, you're out of luck (or go 
to the fancy BIOS hackers crowd and possibly trash your BIOS).


If you upgraded from a build which didn't have a native sata driver for 
your chipset (and hence drove it as ATA), to a later build with a native 
sata driver, the old device name without a 't' is preserved but now points 
to the new sata driver, so any /dev/dsk/... entries in vfstab and the 
like didn't get screwed up. If you then add another sata disk to the 
system, that gets a new device name with a 't'.


Confused when I first saw this, but it makes sense.

--
Andrew
