Re: [OpenIndiana-discuss] OT: T3-1 ILOM complaining about unrecognized chassis

2013-11-07 Thread Bernd Helber

Hi,

First, open a case with Oracle.

Second, check the Oracle documentation pages for hardware issues
regarding the T4-1.

If possible, log in as eis-installer or root.

Reset the SP. If that is not successful, run the SP in service mode;
Oracle/Sun will provide a password to get access to service mode.
Clean up the warnings and reset the SP again. If that does not solve
it, check for an update for the LOM, flash the SP, and clean up again.
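
For orientation, an SP reset from the ILOM command line looks roughly
like this (a sketch from memory; exact targets and properties vary
between ILOM firmware versions, so treat the names as assumptions):

```
-> show /SP/faultmgmt        (list faults known to the SP)
-> reset /SP                 (reboot the service processor only;
                              the running host OS is not affected)
```

Resetting only the SP is safe for the host, which is why it is the
first thing to try before flashing firmware.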



Kind regards.



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] OT: T3-1 ILOM complaining about unrecognized chassis

2013-11-07 Thread Bernd Helber

Do you have access to MOS (My Oracle Support)?

If not, call the Oracle Support Hotline on +49 0180.2000.170 and
provide the serial number; if you know your Oracle support contract
number, provide that too.


Cheers and Good luck.

Bernd Helber




___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Oracle Solaris on x86 going to be killed?

2012-01-06 Thread Bernd Helber
Sorry mate,

but I think this is pure nonsense and FUD.
Who told you this?

It would be quite interesting to know the source.
Why should Oracle do this?
To sell more SPARC systems?
Nobody in the customer base would buy more SPARC systems if x86
Solaris were EOL'd.


Cheers

Bernd


On Fri, 6 Jan 2012 13:22:54 +0100, Gabriele Bulfon gbul...@sonicle.com
wrote:
 I heard rumors of Oracle going to dismiss the intel version of
Solaris...
 Is this true?!
 
 Inviato da iPad
 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] nagios/nrpe?

2011-10-14 Thread Bernd Helber

Hi Dan,

I built Nagios and NRPE a few months ago for OpenSolaris;
maybe it is what you need:

http://unixhaus.de/wp-content/Solaris/x86/nagios/

All packages were built with Sun Studio against Webstack.
If you would prefer to build the packages yourself, I could send you
my documentation; unfortunately it is currently written in German, but
I would be able to translate it into English. That will take a little
time. ;)

These are the packages:

nagios-base-x86-3.2.3.pkg      --- minimal build
nagios-config-x86-3.2.3.pkg    --- config files
nagios-html-x86-3.2.3.pkg      --- HTML and other files
nagios-x86-3.2.3.pkg
nagios-cgi-x86-3.2.3.pkg       --- CGI components
nagios-full-x86-3.2.3.pkg      --- everything
nagios-plugins-x86-1.4.15.pkg  --- plugins
nrpe-x86-2.12.pkg



Am 14.10.11 17:27, schrieb Dan Swartzendruber:
 I monitor a bunch of servers and such on my home system using nagios
 (icinga) on a ubuntu VM.  Each target has nrpe to handle the network
 requests and run the nagios plugins.  Has anyone gotten the nrpe/nagios
 stuff working on OI?  Google hasn't turned up anything useful, I'm
 afraid...
 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


Cheers. :)

-- 
with kind regards

 Bernd Helber


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Making an iSCSI target redundant

2011-10-10 Thread Bernd Helber
Hi

For high availability you have to stick to Open HA Cluster and
OpenSolaris 2009.06.

Open HA Cluster only builds on OpenSolaris from build 111b to build
117.

There is no other cluster solution out there.


kind regards

Am 10.10.11 14:47, schrieb Jeppe Toustrup:
 Hi
 
 I am trying to set up a storage system which is going to expose it's
 storage through iSCSI. The storage system is consisting of two storage
 nodes connected to SAS switches which then are connected to SAS JBOD
 arrays containing the disks which will hold the data.
 
 Everything is up and running in my setup now, except redundancy on the
 storage node part. I don't know how to get around to set up redundancy
 of the iSCSI targets in case one of the storage nodes goes down, or is
 taken out for maintenance. Can anybody help me out here? Are there any
 best practices or software which can handle this?
 
 I have had a look at Nexenta, but it seems expensive considering I
 only am missing the redundancy of the storage nodes.
 
 --
 Venlig hilsen / Kind regards
 Jeppe Toustrup (aka. Tenzer)
 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS problem hangs machine

2011-10-08 Thread Bernd Helber
Dear Michelle,

I assume you have physical access to the box.

Are you able to boot the box?

Are you able to check whether any core dumps are left on the box?

Just to clarify: we're not talking about complete zpool corruption?
Did ZFS hang the system completely after the zpool import on reboot?

From the message it looks to me as if the disk is affected. Could you
give us a little more information about the issue?

With kind regards


Am 08.10.11 08:25, schrieb Michelle Knight:
 Hi Folks,
 
 I'm on 151a x86
 
 Gigabyte mother board with on board eSATA 3 which works.
 An eSATA Rocket Raid card which has no drivers.
 
 An external twin hard drive toaster
 
 In order to get around the ROcket Raid card, I connect to one drive using the 
 on board eSATA 3 and to the other using USB.
 
 The two 2Tb drives are connected in a mirror configuration.
 
 I started a large copy job to them at about 7pm last night. It got to about 
 1am and gave this in the messages...
 
 Oct  8 01:05:54 jaguar scsi: [ID 107833 kern.warning] WARNING: 
 /pci@0,0/pci1458,5006@1a,7/hub@1/storage@2/disk@0,0 (sd12):
 Oct  8 01:05:54 jaguar  drive offline
 
 ...that is the last thing in the messages file before the reboot.
 
 The terminal on the server had frozen. There was still one terminal session 
 open from another PC, but on attempting a ZFS STATUS, the process hung and 
 wouldn't return. 
 
 I couldn't get another terminal connection.
 
 On the face of it, this ZFS failure critically crippled the machine.
 
 I'm not sure whether I had problems doing a ZFS copy to drives in this manner 
 before, one on USB and one on eSATA3 ... but if I did have problems I must 
 have got arund it because the last backup resulted in a set of disks that I 
 took off site.
 
 Anyone have any ideas please?
 
 Many thanks,
 
 Michelle Knight
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


-- 
with kind regards

 Bernd Helber


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS problem hangs machine

2011-10-08 Thread Bernd Helber


Hi Michelle,

please also check the fault manager for possible fmdump messages.

For example:

fmdump -v

Mar 10 2011 08:29:15 5bf06367-2d12-cff3-a0a3-97ffc2026177 ZFS-8000-FD
  100%  fault.fs.zfs.vdev.io

Problem in: zfs://pool=datenhalde/vdev=26a958afba327639
   Affects: zfs://pool=datenhalde/vdev=26a958afba327639
       FRU: -
  Location: -

If you receive an ID from fmdump, take that ID and check it in verbose
mode:


 bernd@kobold:/home/bernd$ fmdump -vV -e -u
5bf06367-2d12-cff3-a0a3-97ffc2026177
TIME   CLASS
Mar 10 2011 08:28:59.942090976 ereport.fs.zfs.probe_failure
nvlist version: 0
class = ereport.fs.zfs.probe_failure
ena = 0x7984174a631
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x4ccfc15ab4863bfb
vdev = 0x26a958afba327639
(end detector)

pool = datenhalde
pool_guid = 0x4ccfc15ab4863bfb
pool_context = 0
pool_failmode = wait
vdev_guid = 0x26a958afba327639
vdev_type = disk
vdev_path = /dev/dsk/c4t3d0s4
vdev_devid = id1,sd@SATA_WDC_WD20EARS-00M_WD-WMAZA0389285/e
parent_guid = 0x9dc7f893d61f9303
parent_type = mirror
prev_state = 0x0
__ttl = 0x1
__tod = 0x4d787dbb 0x38272ae0

bernd@kobold:/home/bernd$
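
The two steps above (spot the event in `fmdump -v`, then query that
UUID with `fmdump -vV -e -u`) can be glued together with a little awk.
This is only a sketch: the UUID being the 5th field is an assumption
based on the sample output above, and the script runs against a pasted
sample line rather than live fmdump output.

```shell
#!/bin/sh
# Extract the event UUID (the 5th whitespace-separated field) from a
# `fmdump -v` summary line, ready to feed to `fmdump -vV -e -u`.
# Run here against the sample line from above, not live fmdump output.
sample='Mar 10 2011 08:29:15 5bf06367-2d12-cff3-a0a3-97ffc2026177 ZFS-8000-FD'
uuid=$(printf '%s\n' "$sample" | awk '{ print $5 }')
echo "$uuid"
# On a live system, something like:
#   uuid=$(fmdump -v | awk '/ZFS-8000/ { print $5; exit }')
#   fmdump -vV -e -u "$uuid"
```

This saves retyping a long UUID by hand.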


Thanks in Advance.



Am 08.10.11 09:54, schrieb Bernd Helber:
 Hi Michelle,
 
 
 1. Please have a look at /var/crash and /var/cores; in general, look
 for files like core.*, vmcore.* and unix.*.
 
 2. Collect all that data and put it on a USB stick for later
 forensics, if possible.
 
 Personally, I assume it would be best to collect all the data we can
 get, everything from /var/adm/messages and /var/log too.
 
 Do you have SUNWscat installed on any of your Sun boxes?
 
 
 If possible, and if the box won't break, please try the following
 command:
 
  zdb -eubbcsL mypool > /tmp/poolissue.txt
 
 (zdb is the ZFS debugger.)
 
 If possible, collect that output too; if zdb hangs the box, we should
 dig deeper.
 
 Thanks in advance
 
 
 
 
 Additionally, it would make sense to file this as a bug report with
 the illumos guys.
 
 A question to all: who is responsible for ZFS or kernel bugs within
 the community?
 
 
 
 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


-- 
with kind regards

 Bernd Helber



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS problem hangs machine

2011-10-08 Thread Bernd Helber
Check the fault manager:

# fmadm faulty -a

Did you get any output back from the fmdump commands I sent? Please
have a look at the examples I posted before. Without output, we're not
able to dig into the issue.

How to report problems:

https://www.illumos.org/projects/illumos-gate/wiki/How_To_Report_Problems





Cheers


 Hi Bernd,
 
 /var/crash didn't exist.
 /var/cores was empty
 
 I have a copy of the message and log files. The messages only contains that 
 one notice of the drive not responding, and that was it. The messages after 
 that were when I had to restart the system, so there was nothing useful in 
 there.
 
 I've got to work out how to get the GUID of the zpool backup, and then I'll 
 try those commands.
 
 I don't know how to check fault management.
 
 The good news is that the restart of the copy is continuing. It is currently 
 around the 241G mark transferred. If it makes it to 300G, then it will have 
 passed the point where it died the first time.
 
 Michelle.
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


-- 
with kind regards



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS problem hangs machine

2011-10-08 Thread Bernd Helber
It might be a good idea to enable coreadm for future events and report
this to illumos-gate or another bug tracker.

Please follow this procedure:

https://www.illumos.org/projects/illumos-gate/wiki/How_To_Report_Problems

Another option could be a migration to Samba, if those problems occur
again.

Sorry for the late feedback; I was busy.

Cheers


Am 08.10.11 17:10, schrieb Michelle Knight:
 It looks like there was some process or change with the smb service that 
 wasn't happy until it had received another reboot.
 
 It crosses my mind that if smb wasn't happy ... might this have caused an 
 upset with zfs because one of the pools is shared via smb?
 
 Incidentally, it looks like backup has transferred 624G and is still running 
 so it is well beyond the 200-ish G when it froze.
 
 I really hope that the R620 is supported at some point. It would halve this 
 backup transfer time.
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


-- 
with kind regards

 Bernd Helber



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Help with website

2011-10-08 Thread Bernd Helber
Hi list,

Firstly, people tend to socialise; that's one reason why people like
blogs, forums, you name it.

Wikis in general are great for documentation, but they don't have the
same appeal to end users.

Have a look at the Ubuntu forums and you'll see why. People like to
have a conversation and share their experience. The forums were one of
the important reasons why Linux distributions like Ubuntu and openSUSE
appealed to end users over the past few years.

Mailing lists are great for system engineers, system administrators
and devs, but mailing lists aren't very sexy. :)

Still, I would say that building a forum or a social network for
Solaris/OpenIndiana/Nexenta/Delphix users should be done in a proper
manner.

Every popular Linux distribution, and the FreeBSD community too, runs
forums or web boards in addition to the mailing lists.

Maybe it would make sense to talk to the people in charge of public
relations at Nexenta, Joyent, Belenix and OpenIndiana. A web forum for
users of Solaris distributions could make sense, but as we all know,
it takes time to build a community.

Have a nice weekend, guys



Am 08.10.11 20:34, schrieb Josef 'Jeff' Sipek:
 I forgot to add... I generally search the web for opensolaris or solaris
 11 along with what I want to do.  Sometimes, pre-osol ways of doing things
 still apply.
 
 Jeff.
 
 On Sat, Oct 08, 2011 at 02:32:59PM -0400, Josef 'Jeff' Sipek wrote:
 On Sat, Oct 08, 2011 at 11:56:42AM -0600, LinuxBSDos.com wrote:
 Hi,

 There seems to be plenty of resources for devs, but very little for
 end-users, especially those new to the Solaris way of doing stuff.

 There is a general discussion list, but I think it would be better if all
 that discussion takes place in a forum-like setting, instead of via a
 mailing list.

 I can volunteer to help set up and maintain one, and take care of other
 basic web-related chores.

 First things first, why do you thing a web-forum is better than a mailing
 list?

 Regardless, I agree that there is a bit of a lack of documentation.  Do you
 think a forum/mailing list would work better than a wiki?

 Either way, we could use all the help we can find.

 Jeff.

 -- 
 I'm somewhere between geek and normal.
  - Linus Torvalds

 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss
 


-- 
with kind regards

 Bernd Helber


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] smb keeps failing?

2011-09-24 Thread Bernd Helber
Do you have any Coredumps left?
Possibly in /var/crash or /var/core ?

If not you should invest five minutes of your spare time to get it up an
running.


First take a look about the current status of coreadm  and dumpadm

# coreadm

you will recive an output like this.


 global core file pattern:
 global core file content: default
   init core file pattern: /var/core/core.%n.%f.%u.%p
   init core file content: default
global core dumps: disabled
   per-process core dumps: enabled
  global setid core dumps: disabled
 per-process setid core dumps: disabled
 global core dump logging: disabled

Now enable logging, for coreadm
coreadm -e log
and global core Dumps
 coreadm -e global

Now take a look at coreadm again


Second

Also think about a dedicated Dump Device for Crashdumps.

dumpadm -s /var/crash/whateverdeviceyouhaveleft ;)
The Crashdump Device should match your RAM
If you use 8GB RAM, the Dump Device should be equal.


If the issue with the SMB won't go away and you receive
Dumpfiles,consider to forward the Dumps to the Illumos Guys, or the
maintainers. if possible.
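
A quick way to check whether the setting took is to parse the coreadm
status line. A minimal sketch, assuming the "global core dumps" label
from the sample output above; on a system without coreadm (anything
non-Solaris) it simply says so instead:

```shell
#!/bin/sh
# Report whether global core dumps are enabled, parsing the coreadm
# status output shown above. Falls back gracefully where coreadm does
# not exist (i.e. on non-Solaris systems).
if command -v coreadm >/dev/null 2>&1; then
  status=$(coreadm | awk -F: '/global core dumps/ { sub(/^[ \t]+/, "", $2); print $2; exit }')
else
  status="coreadm not available"
fi
echo "global core dumps: $status"
```

Handy for dropping into a host-audit script across several boxes.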


Am 23.09.11 14:21, schrieb Dan Swartzendruber:
 Sorry for the vague subject line.  I notice that every few days, my windows
 7 pro box can't connect to an SMB share.  After trying everything, I finally
 restarted the smb server service and it was suddenly there again.  This has
 happened several times in the past few weeks.  Any hints as to where to
 look?  Thanks...
 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


-- 
with kind regards

 Bernd Helber


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS gui

2011-09-23 Thread Bernd Helber
Hi,

have a look at this. ;)
http://www.napp-it.de/index_en.html



On Fri, 23 Sep 2011 00:48:13 -0600, LinuxBSDos.com
fi...@linuxbsdos.com wrote:
 Folks,
 
 Other than the Time Slider gui, is there any other graphical interface
for
 managing ZFS available?
 
 TIA,
 
 
 --
 Fini D.
 LinuxBSDos.com
 
 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] oracle gives openoffuce to apache

2011-06-02 Thread Bernd Helber
Am 02.06.11 13:08, schrieb Gabriel de la Cruz:
 -The plan was so simple; we purchase sun, kick out the hippies, and make
 real business.
 -well...
 -Does this mean that we cannot sell corporate licenses of openoffice
 anymore?
 -well..
 -Did anyone wrote a plan B? anyone?...
 -...
 -What were those indians called?
 -Sir, do you mean Apaches?
 -Whatever, give it back to them...
 -...
 
 :P

Thanks Mate!

You saved my day. :-)

Cheers

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Linux vs. Solaris (Oh please...)

2011-03-30 Thread Bernd Helber
Do we really need this kind of discussion?

To be honest, it sounds like kindergarten.

With kind regards


On Wed, 30 Mar 2011 13:19:52 +0800, Christopher Chan
christopher.c...@bradbury.edu.hk wrote:
 On Wednesday, March 30, 2011 11:42 AM, Gordon Ross wrote:
 Oh please,

 Have some sense, and don't bother with this debate on this list.

 [  Solaris vs. Linux blah blah blah, gnome vs kde, blah blah blah! ]

 
 I'd just like to see a real operating system become more mainstream. I 
 wish I had the skills to contribute...
 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] 2 Node ZFS ISCSI failover Cluster with one SAS storage attached

2011-02-20 Thread Bernd Helber
Dear Denny,

For the moment, the only solution for this is to install OpenSolaris
2009.06 and build Open HA Cluster from source. It's comparable to Sun
Cluster 3.2.

For the future, you may have a look at
http://www.illumos.org/projects/ihac

The other alternative could be the FreeBSD HAST project, as you
mentioned.



Am 20.02.11 17:22, schrieb Denny Schierz:
 hi,
 
 we want to build a two node failover cluster for exporting ZFS Volumes via 
 ISCSI. The nodes are connected both to one 90TB SAS storage. We had success 
 with Solaris 10 and the HA Storage plus Cluster package (3.x), but after 
 Oracle has changed the license with Sol10/U9, we can't use it anymore for 
 free :-(
 So, does OpenIndiana has everything, what we had under Solaris? Otherwise, 
 can we use other tools, without heavy scripting?
 I looked also for FreeBSD (x.14) (carp + devd), but the ZFS version is a bit 
 old and there is no ISCSI-Target on board.
 
 any suggestions?
 
 cu denny
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


-- 
with kind regards

 Bernd Helber



 _.-|-/\-._
  \-'  '-.
 //\/\\/
   \/  ../.  \/
   _   //___\ |.
 . \ /   /\ ( #) |#)
   | |   /\   -.   __\
\V   )./_._(\
   .)/\   .- /  \_'_) )-..
   \ ./  /  /._./
   /\ '-' /
 '-._  v   _.-'
   / '-.__.·' \
 \/

 ***


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] 2 Node ZFS ISCSI failover Cluster with one SAS storage attached

2011-02-20 Thread Bernd Helber
Am 20.02.11 18:31, schrieb Denny Schierz:
 hi Bernd,

 
 yeah, but the problem is, OpenSolaris is death and we don't get any
 (security?) updates more.

I'm fully aware of this issue. In fact, it's dead.

But is it necessary for you to get every update for a failover
solution in your scenario?

I assume that if those boxes are locked up in a data center without
access to the Internet, you could live with that kind of solution for
a very long time.

Or buy Solaris 10 licences and Sun Cluster 3.3.

If it's necessary for your business, you should think about it.

 For the future You may have a look onto
 http://www.illumos.org/projects/ihac
 
 hmm, no release date ..
 
 The other Alternative could be the FreeBSD HAST Project as you mentioned.
 
 HAST is only for create a mirror, what we don't need, 'cause the nodes
 have access to all disks at the same time, through the SAS HBA.
 

 Also the ZFS Version is bit old (14). It's all bad :-/

There's nothing wrong with an old ZFS version, and FreeBSD is rock
stable. They tend to be conservative, so you don't get every neat
feature that could possibly be handy, but the implemented releases
tend to work very well, and I think that's the most important point.


 But, we need only a few components of a cluster, like carp (in BSD
 words) for a global IP and detecting if the master node isn't there
 anymore and take over the IP + importing the pool. Maybe there is a
 chance under OI.
 

Not yet; maybe check whether Veritas Cluster will run under OI, but it
will cost a few bucks.

Personally, if I had to run a business, I would stick to the original.
Also consider: you get what you pay for.

 cu denny
 

Cu and take care :-)
 
 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


-- 
with kind regards

 Bernd Helber


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Status about Open HA Cluster (OHAC)

2011-02-04 Thread Bernd Helber
Hi Mathias.
 In Case its only a solution for failover ZFS Pools.
 Not for applications, unfortunately.
 
 So, the services NFS, iSCSI and CIFS based on ZFS pools can be clustered
 currently, maybe even combined? That's all we need, no applications.
 
Maybe you would like to check out these sources; the FreeBSD
developers use the CARP and GEOM frameworks for HAST:

http://wiki.freebsd.org/HAST
http://svn.freebsd.org/viewvc/base/head/share/examples/hast/?pathrev=204076
http://freebsdfoundation.blogspot.com/2010/02/hast-project-is-complete.html


 Maybe someone is successfully running OI_b148 with IHAC or OHAC (or at
 least tried) and can report here ;)

I assume it's not possible at the moment, because OHAC/IHAC/Sun
Cluster is a kernel-based cluster: if the kernel changes and the
cluster source does not, massive kernel changes will break the
cluster.

OHAC was not maintained and development stopped; I assume the IHAC
guys have a lot of bug fixing to do before it really gets usable.

If you really need the cluster, stick to OpenSolaris 2009.06 build
111b.


 Cheers
 Mathias
 
Cheers

Bernd

 
 
 
 Am 03.02.2011 16:23, schrieb Bernd Helber:
 Am 03.02.11 16:09, schrieb Mathias Tauber:
 Hm, that sounds like it is far away from stable :)

 Exactly, but as you know hope dies at last. :D
 We'll wait for some progress...

 What other choice do we have?
 There's no comparable solution.

 The FreeBSD Guys worked on a Solution comparable to HAStorage Plus, but
 its far away from Sun Cluster based technologies.

 In Case its only a solution for failover ZFS Pools.
 Not for applications, unfortunately.


 Cheers
 Mathias


 Cheers Bernd  ;)




 Am 03.02.2011 11:42, schrieb MATTHEW WILBY:
 Have a look here -

 http://www.illumos.org/projects/ihac

 Cheers,

 Matt





 
 From: Mathias Taubertaube...@hdpnet.de
 To: Discussion list for
 OpenIndianaopenindiana-discuss@openindiana.org
 Sent: Thursday, 3 February, 2011 10:37:15
 Subject: [OpenIndiana-discuss] Status about Open HA Cluster (OHAC)

 Hello all,

 I was searching a lot but haven't found much information about
 clustering
 OpenIndiana. Most threads aren't related to current releases or
 finished like
 this one:


 http://www.mail-archive.com/openindiana-discuss@openindiana.org/msg00315.html



 Are there some hidden information? ;) Or is OHAC from here:

 http://dlc.sun.com/osol/ohac/downloads/current/

 still the way to choose? How are the plans for the near future?

 Cheers
 Mathias


 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


-- 
with kind regards

 Bernd Helber



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Status about Open HA Cluster (OHAC)

2011-02-03 Thread Bernd Helber
Am 03.02.11 16:09, schrieb Mathias Tauber:
 Hm, that sounds like it is far away from stable :)
 
Exactly, but as you know, hope dies last. :D
 We'll wait for some progress...

What other choice do we have?
There's no comparable solution.

The FreeBSD guys worked on a solution comparable to HAStoragePlus, but
it's far from Sun Cluster based technology.

In any case, it's only a solution for failing over ZFS pools, not for
applications, unfortunately.

 
 Cheers
 Mathias
 

Cheers Bernd  ;)

 
 
 
 Am 03.02.2011 11:42, schrieb MATTHEW WILBY:
 Have a look here -

 http://www.illumos.org/projects/ihac

 Cheers,

 Matt





 
 From: Mathias Taubertaube...@hdpnet.de
 To: Discussion list for OpenIndianaopenindiana-discuss@openindiana.org
 Sent: Thursday, 3 February, 2011 10:37:15
 Subject: [OpenIndiana-discuss] Status about Open HA Cluster (OHAC)

 Hello all,

 I was searching a lot but haven't found much information about clustering
 OpenIndiana. Most threads aren't related to current releases or
 finished like
 this one:


 http://www.mail-archive.com/openindiana-discuss@openindiana.org/msg00315.html


 Are there some hidden information? ;) Or is OHAC from here:

http://dlc.sun.com/osol/ohac/downloads/current/

 still the way to choose? How are the plans for the near future?

 Cheers
 Mathias


 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss

 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


-- 
with kind regards

 Bernd Helber


___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Proposal: OpenIndiana Stable Branch

2011-01-25 Thread Bernd Helber
Am 25.01.11 19:58, schrieb Alan Coopersmith:
 On 01/25/11 10:50 AM, Ken Gunderson wrote:
 As for the MTA discussion, Postfix is pretty much a drop in replacement
 for Sendmail, and my vote would be to replace Sendmail entirely. 
 
 I still don't understand this subthread - if someone wants to start working
 on postfix as a development project for a future release, that makes sense,
 but doing it as a bug fix in a stable branch that's just supposed to be
 providing fixes for the b148 already shipped?   That just seems to violate
 the definition of a stable branch.   At the very least it should go into
 the development branch first to get some testing before you even consider
 backporting it to stable.
 
 (Not that I get a vote - that's up to the developers who actually do the
  work, not those of us just here to provide color commentary.)

I'm not a developer nor a committer. ;)

But fully agreed; I also don't understand this discussion. Sendmail
works, it does a proper job, and it is integrated into the base
system.

It will take time and manpower to replace Sendmail. From my
perspective it's a useless additional construction site.

Just to mention: the FreeBSD and OpenBSD guys also rely on Sendmail as
the MTA in their base systems.

If somebody is in desperate need of Postfix, have a look at Ihsan
Dogan's third-party packages: http://ihsan.dogan.ch/postfix/

just my 2 cents


-- 
with kind regards

 Bernd Helber



___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] help: ssh won't start this morning

2011-01-24 Thread Bernd Helber
Hi,

What was your last change before you rebooted the box?

As a temporary workaround, start sshd manually:

/usr/lib/ssh/sshd

That should do the trick.
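
To find out why SMF keeps the service offline in the first place,
`svcs -x` explains the reason and points at the service log. A sketch
of such a session (output abridged and from memory, so the exact
wording will differ on your build):

```
# svcs -x ssh
svc:/network/ssh:default (SSH server)
 State: offline since ...
Reason: ...
   See: /var/svc/log/network-ssh:default.log

# tail /var/svc/log/network-ssh:default.log
```

An offline state usually means an unsatisfied dependency or a failed
start method; the log file named in the last line is the place to
look.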

Am 24.01.11 15:53, schrieb ann kok:
 Hi
 
 My ssh service can't start this morning when I reboot the box
 
 I try to enable it in the console but it won't work
 
 svcs -a |grep ssh
 offline time   svc:/network/ssh:default
 
 svcadm enable ssh
 
 svcs -a |grep ssh
 offline  time   svc:/network/ssh:default
 
 dmesg is showing nothing.
 
 How can I fix it? Any trouleshooting command
 
 I also have a question:
 
 Are Solaris services dependent on each other? I mean, if S65 can't
 start, will S66 and all services after S65 also fail to start?
 
 Thank you 
 
 
 
 
 ___
 OpenIndiana-discuss mailing list
 OpenIndiana-discuss@openindiana.org
 http://openindiana.org/mailman/listinfo/openindiana-discuss


-- 
with kind regards

 Bernd Helber



Re: [OpenIndiana-discuss] ZFS single drive CKSUM errors

2011-01-22 Thread Bernd Helber
Hi Michelle,

I assume you ran into zpool corruption; we experienced those kinds of
issues when we migrated raw devices from UFS volumes to ZFS volumes.

Another possibility is that the data was corrupted before you
copied it into your zpool.

Do yourself a favor: try to reproduce it, then have a look in
bugs.solaris.com or defect.solaris and search for zpool corruption.

If possible, and you have the time,

have a read about zdb, the ZFS debugger, and search for the ZFS uberblock

or a defective vdev.

Maybe it's a controller issue?

Hard to tell.
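As a hedged sketch of where zdb helps with this kind of investigation; the device path and pool name below are examples, not taken from Michelle's box:

```shell
# Print the four ZFS labels on a device, including the uberblock
# arrays (device path is an example -- substitute your own disk).
zdb -l /dev/rdsk/c2t2d0s0

# Traverse the pool read-only, counting blocks and verifying
# checksums along the way (pool name is an example).
zdb -bc data
```

If the label output disagrees between the four copies, or the traversal reports leaked or unreachable blocks, that points at the on-disk state rather than the controller.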



Am 22.01.11 23:08, schrieb Michelle Knight:
 Hi Folks,
 
 Something I don't understand.
 
 A single drive with a ZFS partition on it. No mirror, no raid, no nothing.
 
 I copied a load of files to it and did a scrub.
 
 It encountered six checksum errors and was able to recover from them ... 
 without having any mirror or other redundant reference ... it didn't lose a 
 single file.
 
 Now ... am I mad, or does this mean I've got corruption somewhere, or how did 
 ZFS manage to recover from cksum errors that it detected on a single drive?
 
 I haven't managed to read anything yet which goes to the depth of explaining 
 this.
 
 Can someone help me on this please?
 
 Many thanks,
 
 Michelle.


-- 
with kind regards

 Bernd Helber




Re: [OpenIndiana-discuss] svc and services

2011-01-21 Thread Bernd Helber
Hi Ann ;)



The Service Management Facility works a little differently from the
legacy init system.

A service FMRI is made up of several parts:

svc:                service type (scheme)
/system/system-log  service name
:default            instance

You will find your repository at:

bernd@kobold:/home/bernd$ ls /etc/svc
repository.db

It's an SQLite-based database.




The repository is the source for all known services on the system;
SMF imports each service manifest into the database and then never
references the manifest again.

The SMF manifests are XML files with the following content:

Name of service
Number of instances
Start, stop and refresh methods
Property groups
Service model
Fault handling
Documentation template
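To see that manifest content for a concrete service, you can export it from the repository as XML; the service name below is just an example:

```shell
# Dump the stored XML manifest for a service back out of the
# SMF repository to stdout.
svccfg export system/system-log
```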

Milestones are predefined sets of capabilities for a set of
services, comparable to legacy run levels.


If you want to know more about the services, you can ask the Service
Management Facility for a detailed description.

For example:


svcs -H -o FMRI,STATE,NSTATE,STIME,DESC -SSTATE svc:/*
svc:/application/management/common-agent-container-1:default
uninitialized  - 10:37:29 -
svc:/system/cluster/cznetd:default maintenance-
10:38:40 Sun Cluster virtual cluster network daemon
svc:/system/cluster/cl-svc-cluster-milestone:default offline-
  10:38:40 Synchronizing the cluster userland services
svc:/system/boot-archive:default   online -
10:37:28 check boot archive content
svc:/system/device/local:default   online -
10:37:41 Standard Solaris device configuration.
svc:/milestone/devices:default online -
10:37:46 device configuration milestone
svc:/system/identity:domainonline -
10:37:42 system identity (domainname)
svc:/system/identity:node  online -
10:37:21 system identity (nodename)
svc:/system/filesystem/local:default   online -
10:38:00 local file system mounts
svc:/system/manifest-import:defaultonline -
10:37:45 service manifest import
svc:/system/filesystem/minimal:defa
--


If you would like to look at your legacy services:



lrc:/etc/rc2_d/S20sysetup  legacy_run -
10:38:25 -
lrc:/etc/rc2_d/S47pppd legacy_run -
10:38:25 -
lrc:/etc/rc2_d/S72autoinstall  legacy_run -
10:38:25 -
lrc:/etc/rc2_d/S72sc_update_hosts  legacy_run -
10:38:25 -
lrc:/etc/rc2_d/S72sc_update_ntplegacy_run -
10:38:25 -
lrc:/etc/rc2_d/S73cachefs_daemon   legacy_run -
10:38:25 -
lrc:/etc/rc2_d/S74xntpd_clusterlegacy_run -
10:38:25 -
lrc:/etc/rc2_d/S77scpostconfig legacy_run -
10:38:26 -
lrc:/etc/rc2_d/S81dodatadm_udaplt  legacy_run -
10:38:26 -
lrc:/etc/rc2_d/S89PRESERVE legacy_run -
10:38:26 -
lrc:/etc/rc2_d/S95SUNWmd_binddevs  legacy_run -
10:38:27 -
lrc:/etc/rc2_d/S98deallocate   legacy_run -
10:38:27 -
lrc:/etc/rc


If you would like to check your milestones: ;)



svcs -H -o FMRI,STATE,NSTATE,STIME,DESC -SSTATE svc:/milestone/*

 -
svc:/milestone/devices:default online -
10:37:46 device configuration milestone
svc:/milestone/multi-user:default  online -
10:38:27 multi-user milestone
svc:/milestone/name-services:default   online -
10:37:59 name services milestone
svc:/milestone/single-user:default online -
10:37:58 single-user milestone
svc:/milestone/multi-user-server:default   online -
10:38:34 multi-user plus exports milestone
svc:/milestone/network:default online -
10:37:45 Network milestone
svc:/milestone/sysconfig:default   online -
10:38:05 Basic system configuration milestone
---



If you want to check on a single service/daemon, in this example the
IPsec IKE daemon:

svcs -H -oDESC svc:/network/ipsec/ike:default ; svcs -H -oSTATE
svc:/network/ipsec/ike:default ; svcs -H -oFMRI,STATE -D
svc:/network/ipsec/ike:default ; svcs -l svc:/milestone/network:default ;

IKE daemon
disabled
svc:/milestone/network:default online
fmri svc:/milestone/network:default
name Network milestone
enabled  true
stateonline
next_state   none
state_time   21 January 2011 10:37:45 CET
logfile  

Re: [OpenIndiana-discuss] Zpool upgrade didn't seem to upgrade

2011-01-20 Thread Bernd Helber
Good Morning Michelle.

Am 20.01.11 08:05, schrieb Michelle Knight:
 hi Folks, 
 
 OI 148.
 
 Three 1.5tb drives were replaced with three 2tb drives. They are here on 
 internal SATA channels c2t2d0, c2t3d0 and c2t4d0. One is a Seagate Barracuda 
 and the other two are Western Digital Greens.
 
 mich@jaguar:~# cfgadm -lv
 Ap_Id  Receptacle   Occupant Condition  
 Information
 When Type Busy Phys_Id
 Slot8  connectedconfigured   ok Location: 
 Slot8
 Jan  1  1970 unknown  n/devices/pci@0,0/pci8086,3b4a@1c,4:Slot8
 sata0/0::dsk/c2t0d0connectedconfigured   ok Mod: 
 INTEL 
 SSDSA2M040G2GC FRev: 2CV102HB SN: CVGB949301PH040GGN
 unavailable  disk n/devices/pci@0,0/pci1458,b005@1f,2:0
 sata0/1::dsk/c2t1d0connectedconfigured   ok Mod: 
 INTEL 
 SSDSA2M040G2GC FRev: 2CV102HB SN: CVGB949301PC040GGN
 unavailable  disk n/devices/pci@0,0/pci1458,b005@1f,2:1
 sata0/2::dsk/c2t2d0connectedconfigured   ok Mod: 
 ST32000542AS FRev: CC34 SN: 5XW17ARW
 unavailable  disk n/devices/pci@0,0/pci1458,b005@1f,2:2
 sata0/3::dsk/c2t3d0connectedconfigured   ok Mod: WDC 
 WD20EARS-00MVWB0 FRev: 51.0AB51 SN: WD-WMAZA075
 unavailable  disk n/devices/pci@0,0/pci1458,b005@1f,2:3
 sata0/4::dsk/c2t4d0connectedconfigured   ok Mod: WDC 
 WD20EARS-00MVWB0 FRev: 51.0AB51 SN: WD-WMAZA0484508
 
 A zpool export and subsequent import, which should have taken the set to 4tb 
 overall storage in the raidz, appears to have not worked despite the import 
 taking what must have been about ten to fifteen minutes to do the import. 
 (during which time the drives were silent and the zpool process was mostly 0% 
 very occasionally peaking to 25%, and the system being very slow to respond 
 during that period)

Personally I assume the peaks were triggered by resilvering the pool.
It's not uncommon to see high load while a pool is resilvering.


Best practice in this case would have been to create a new zpool (e.g.
a raidz) and replicate the data:

zfs send from $oldpool into zfs receive on $newpool... :-(
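A minimal sketch of that replication, assuming the new pool already exists; the pool and snapshot names are examples:

```shell
# Take a recursive snapshot of the old pool, then send the whole
# dataset tree as one replication stream into the new pool.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -Fd newpool
```

`-R` preserves child datasets, snapshots and properties; `-d` keeps the sent dataset names under the new pool, and `-F` lets the receive side roll back if needed.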



I have three questions for you.

First: is this a production box?

Second: could you provide the output of zpool history?

Third, no offence, but do you have proper literature for ZFS?
If not, please have a look at Solaris Internals:

http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Or have a look at the OpenSolaris Bible.

Fourth: what would you like to achieve with OI?

Sorry, now I made four questions out of it. ;)
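One more thing worth checking, as a hedged aside: a pool only grows into larger replacement disks once expansion is actually triggered. On builds that have the autoexpand pool property, that looks roughly like this (pool and device names taken from your output):

```shell
# Check whether automatic expansion onto larger disks is enabled.
zpool get autoexpand data

# Enable it, then explicitly expand each replaced disk.
zpool set autoexpand=on data
zpool online -e data c2t2d0
zpool online -e data c2t3d0
zpool online -e data c2t4d0
```

If the extra space still doesn't show up in zpool list afterwards, the export/import route or the send/receive migration above remains the fallback.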




Cheers  :-)



 Any ideas please? Or is there still some process running in the background 
 that I can't see?
 
 mich@jaguar:~# zfs list
 NAME USED  AVAIL  REFER  MOUNTPOINT
 data2.27T   401G  2.27T  /mirror
 rpool   7.69G  28.7G45K  /rpool
 rpool/ROOT  3.70G  28.7G31K  legacy
 rpool/ROOT/openindiana  3.70G  28.7G  3.59G  /
 rpool/dump  1.93G  28.7G  1.93G  -
 rpool/export5.22M  28.7G32K  /export
 rpool/export/home   5.19M  28.7G32K  /export/home
 rpool/export/home/mich  5.16M  28.7G  5.16M  /export/home/mich
 rpool/swap  2.05G  30.7G   126M  -
 
 
 mich@jaguar:~# zpool status
   pool: data
  state: ONLINE
  scan: resilvered 1.13T in 12h26m with 0 errors on Wed Jan 19 23:42:23 2011
 config:
 
 NAMESTATE READ WRITE CKSUM
 dataONLINE   0 0 0
   raidz1-0  ONLINE   0 0 0
 c2t2d0  ONLINE   0 0 0
 c2t3d0  ONLINE   0 0 0
 c2t4d0  ONLINE   0 0 0
 
 errors: No known data errors
 
That took a very long time for resilvering only about 1 TB.



 
 last pid:  1802;  load avg:  0.61,  0.56,  0.61;  up 0+19:55:20
 07:06:01
 74 processes: 73 sleeping, 1 on cpu
 CPU states: 99.8% idle,  0.0% user,  0.3% kernel,  0.0% iowait,  0.0% swap
 Kernel: 375 ctxsw, 653 intr, 120 syscall
 Memory: 3959M phys mem, 401M free mem, 1979M total swap, 1979M free swap
 
PID USERNAME NLWP PRI NICE  SIZE   RES STATETIMECPU COMMAND
   1197 gdm 1  590   95M   28M sleep1:23  0.04% gdm-simple-gree
922 root3  590  102M   51M sleep0:37  0.02% Xorg
   1801 root1  590 4036K 2460K cpu/30:00  0.01% top
   1196 gdm 1  590   80M   13M sleep0:00  0.00% metacity
   1190 gdm 1  590 7892K 6028K sleep0:00  0.00% at-spi-registry
640 root   16  590   14M 9072K sleep0:09  0.00% smbd
   1737 mich1  590   13M 5392K sleep0:00  0.00% sshd
148 root1  590 8312K 1608K sleep0:00  0.00% dhcpagent
672 root   26  590   27M   15M 

Re: [OpenIndiana-discuss] zpool and nfs

2011-01-20 Thread Bernd Helber
Try to fix the label again.

Then try a dry run:

zpool add -n

The box should tell you what it would do without actually doing it.

Also have a look at the man page:

man zpool

You could also try to force it with:

zpool add -f

But please keep in mind that not every device can be forced. ;)

Please also have a look at blogs.sun.com, it's a very useful resource,
as is the Solaris Internals wiki.
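A hedged aside on the EFI error quoted below: when you hand zpool a whole disk it writes an EFI label, which root pools reject, and a root pool cannot take extra top-level vdevs at all; attaching the SMI-labeled slice as a mirror is the supported path. A sketch using the device names from your output:

```shell
# Attach the new disk's s0 slice as a mirror of the existing root
# disk; referencing the slice keeps the SMI label the root pool needs.
zpool attach rpool c8t0d0s0 c8t2d0s0
```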


Cheers


Am 20.01.11 16:34, schrieb ann kok:
 Hi Bernd
 
 SMI is selected now. But zpool add is still in problem. The 
 /var/adm/message is showing nothing!
 
 Any idea? Thank you
 
 root@opensolaris:~# format -e
 Searching for disks...done
 
 
 AVAILABLE DISK SELECTIONS:
0. c8t0d0 DEFAULT cyl 2607 alt 2 hd 255 sec 63
   /pci@0,0/pci15ad,1976@10/sd@0,0
1. c8t1d0 DEFAULT cyl 1303 alt 2 hd 255 sec 63
   /pci@0,0/pci15ad,1976@10/sd@1,0
2. c8t2d0 DEFAULT cyl 1534 alt 2 hd 128 sec 32
   /pci@0,0/pci15ad,1976@10/sd@2,0
 Specify disk (enter its number): 2
 selecting c8t2d0
 [disk formatted]
 No Solaris fdisk partition found.
 
 
 FORMAT MENU:
 disk   - select a disk
 type   - select (define) a disk type
 partition  - select (define) a partition table
 current- describe the current disk
 format - format and analyze the disk
 fdisk  - run the fdisk program
 repair - repair a defective sector
 label  - write label to the disk
 analyze- surface analysis
 defect - defect list management
 backup - search for backup labels
 verify - read and display labels
 save   - save new disk/partition definitions
 inquiry- show vendor, product and revision
 scsi   - independent SCSI mode selects
 cache  - enable, disable or query SCSI disk cache
 volname- set 8-character volume name
 !cmd - execute cmd, then return
 quit
 format fdisk
 No fdisk table exists. The default partition for the disk is:
 
   a 100% SOLARIS System partition
 
 Type y to accept the default partition,  otherwise type n to edit the
  partition table.
 y
 format label
 [0] SMI Label
 [1] EFI Label
 Specify Label type[0]: 0
 Ready to label disk, continue? 
 Ready to label disk, continue? y
 
 format quit
 root@opensolaris:~# zpool add rpool c8t2d0
 cannot label 'c8t2d0': EFI labeled devices are not supported on root pools.
 
 root@opensolaris:~# tail /var/adm/messages 
 Jan 20 09:07:58 opensolaris pseudo: [ID 129642 kern.info] pseudo-device: 
 winlock0
 Jan 20 09:07:58 opensolaris genunix: [ID 936769 kern.info] winlock0 is 
 /pseudo/winlock@0
 Jan 20 09:07:58 opensolaris pseudo: [ID 129642 kern.info] pseudo-device: pm0
 Jan 20 09:07:58 opensolaris genunix: [ID 936769 kern.info] pm0 is /pseudo/pm@0
 Jan 20 09:07:59 opensolaris pseudo: [ID 129642 kern.info] pseudo-device: nsmb0
 Jan 20 09:07:59 opensolaris genunix: [ID 936769 kern.info] nsmb0 is 
 /pseudo/nsmb@0
 Jan 20 09:07:59 opensolaris pseudo: [ID 129642 kern.info] pseudo-device: 
 lx_systrace0
 Jan 20 09:07:59 opensolaris genunix: [ID 936769 kern.info] lx_systrace0 is 
 /pseudo/lx_systrace@0
 Jan 20 09:17:48 opensolaris mDNSResponder: [ID 702911 daemon.error] ERROR: 
 getOptRdata - unknown opt 4
 Jan 20 09:18:16 opensolaris last message repeated 5 times
 
 Thank you
 
 --- On Thu, 1/20/11, Bernd Helber be...@helber-it-services.com wrote:
 
 From: Bernd Helber be...@helber-it-services.com
 Subject: Re: [OpenIndiana-discuss] zpool and nfs
 To: openindiana-discuss@openindiana.org
 Received: Thursday, January 20, 2011, 10:13 AM
 Sorry i forgot,

 you're on x86

 formatfdisk

 y

 label
 select SMI Label

 thanks in advance.



 Am 20.01.11 15:22, schrieb ann kok:
 Hi Bernd

 The label won't work and I also provide
 /var/adm/message

 Thank you

 root@opensolaris:~# format -e
 Searching for disks...done


 AVAILABLE DISK SELECTIONS:
 0. c8t0d0 DEFAULT cyl 2607 alt 2 hd 255 sec 63
/pci@0,0/pci15ad,1976@10/sd@0,0
 1. c8t1d0 DEFAULT cyl 1303 alt 2 hd 255 sec 63
/pci@0,0/pci15ad,1976@10/sd@1,0
 2. c8t2d0 DEFAULT cyl 1534 alt 2 hd 128 sec 32
/pci@0,0/pci15ad,1976@10/sd@2,0
 Specify disk (enter its number): 2
 selecting c8t2d0
 [disk formatted]
 No Solaris fdisk partition found.


 FORMAT MENU:
 disk   - select a disk
 type   - select (define) a disk type
 partition  - select (define) a partition table
 current- describe the current disk
 format - format and analyze the disk
 fdisk  - run the fdisk program
 repair - repair a defective sector
 label  - write label to the disk
 analyze- surface analysis
 defect - defect list management
 backup - search for backup labels
 verify - read and display labels
 save   - save new disk/partition definitions
 inquiry

Re: [OpenIndiana-discuss] PCIe card problem

2011-01-20 Thread Bernd Helber
Hi Michelle,

please have a look at the bug ID and the thread below.
It seems not to be supported.

On the other hand, it may be possible that there is a proprietary driver
for Solaris 10 x86 out there.

Additionally, go into your BIOS options and set the SATA controller
mode to AHCI; hopefully the device will then be detected properly.

But no clue whether that works, given the driver issue. :-(


devfsadm -Cv

or a reconfigure reboot:

reboot -- -r

-


cfgadm -al

Enable the ports; your targets and device names will differ:


  cfgadm -x sata_port_activate sata1/2


Activate the port: /devices/pci@0,0/pci1025,183@1f,2:2
This operation will enable activity on the SATA port
Continue (yes/no)? y


cfgadm -c configure  sata1/2

   cfgadm -al

sata1/2::dsk/c9t2d0disk connectedconfigured   ok


Am 20.01.11 17:07, schrieb Michelle Knight:
 Hi Folks,
 
 In my effort to save money, I bought a PCIe card that gave me two reasonably 
 fast e-sata ports.
 
 I've got a feeling that OI can't see the card and therefore won't use it. Am 
 I 
 right? Is there anything I can do about this?
 
 The chipset is Marvell 9128

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6967157
http://opensolaris.org/jive/thread.jspa?messageID=491082


Good luck and fingers crossed.






Re: [OpenIndiana-discuss] sun people that has left oracle after acquisition

2011-01-19 Thread Bernd Helber
Am 19.01.11 13:13, schrieb Jeppe Toustrup:
 2011/1/19 Edward Martinez mindbende...@live.com:
 On 01/19/11 03:46, Michelle Knight wrote:

 This long standing Sun customer will likely be evaluating RHEL on HP with
 virtualisation instead of Zones for the next server choice.

   why not OpenIndiana? :'(

 
 Because there are no commercial support or education options.
 
 --
 Venlig hilsen / Kind regards
 Jeppe Toustrup (aka. Tenzer)
 
And also no Cluster Solution...
The Open HA Cluster aka Colorado is dead.

Unfortunately. :-(



-- 
with kind regards





Re: [OpenIndiana-discuss] sun people that has left oracle after acquisition

2011-01-19 Thread Bernd Helber
Am 19.01.11 13:22, schrieb Piotr Jasiukajtis:
 On Jan 19, 2011, at 1:20 PM, Bernd Helber wrote:
 
 Am 19.01.11 13:13, schrieb Jeppe Toustrup:
 2011/1/19 Edward Martinez mindbende...@live.com:
 On 01/19/11 03:46, Michelle Knight wrote:

 This long standing Sun customer will likely be evaluating RHEL on HP with
 virtualisation instead of Zones for the next server choice.

  why not OpenIndiana? :'(


 Because there are no commercial support or education options.

 --
 Venlig hilsen / Kind regards
 Jeppe Toustrup (aka. Tenzer)

 And also no Cluster Solution...
 The Open HA Cluster aka Colorado is dead.

 Unfortunately. :-(
 
 Fortunately there is an open source fork of OHAC called IHAC:
 https://www.illumos.org/projects/ihac
 

Hi Piotr.

Sounds good. :)

But I can't see the sources; am I blind?
Do you know more about it?

Cheers :D

 
 --
 Piotr Jasiukajtis | estibi | SCA OS0072
 http://estseg.blogspot.com
 
 


-- 
with kind regards

 Bernd Helber





Re: [OpenIndiana-discuss] sun people that has left oracle after acquisition

2011-01-19 Thread Bernd Helber
Am 19.01.11 13:56, schrieb Apostolos Syropoulos:
   why not OpenIndiana? :'(


 Because there are no commercial support or education options.

  
 What is an education option? You mean you can't use the software 
 in a classroom or what?
 
 A.S.
 
 --
 Apostolos Syropoulos
 Xanthi, Greece
 


There is no commercial support or training offering like the Sun
Microsystems days anymore.

That's the issue we spoke about.

Or that's what I understood. ;)


 
   
 


-- 
with kind regards

 Bernd Helber





Re: [OpenIndiana-discuss] sun people that has left oracle after acquisition

2011-01-19 Thread Bernd Helber
Am 19.01.11 14:13, schrieb Apostolos Syropoulos:
 There is no commercial support for Training like the Sun Microsystems Days.

 Thats the issue, we spoke about.

 Or that's what i understood. ;)
  
 In different words, people want to pay to get trained? And what about
 all these online resources, the thousands of  printed pages or books,
 papers, articles, etc.?
 
 A.S.

Yes, people want to get trained by certified trainers; it's necessary,
especially if you have to run a production environment. In most cases
those trainers were very experienced in the different products.

From my experience you can learn faster if you get educated by trainers.
The training Sun provided to customers was a good investment: they had
labs with dedicated machines you could play with, and they also provided
documentation for the courses.


Online resources like docs.sun.com, blogs.sun.com or third-party blogs
and pages are great and absolutely useful, but in a classroom you can
also share experience with the other attendees.

Especially training in stuff like Sun StorageTek Backup (Legato
Networker) or Sun Cluster was very helpful.

It definitely makes sense.

just my 2 cents

Cheers. ;)


@All: sorry for the off-topic

 
 
 --
 Apostolos Syropoulos
 Xanthi, Greece
 
 
 
   
 


-- 
with kind regards

 Bernd Helber





Re: [OpenIndiana-discuss] sun people that has left oracle after acquisition

2011-01-19 Thread Bernd Helber
Am 19.01.11 14:39, schrieb Michelle Knight:
 why not OpenIndiana? :'(
 
 Official support. I mean, you've seen some of the questions I've been asking 
 this last week!!!

Do you got your zpool issue resolved?




-- 
with kind regards





Re: [OpenIndiana-discuss] Sun Exlorer Replacement

2011-01-19 Thread Bernd Helber
Hi Michael,

yes, there were several such tools, mostly available on my beloved EIS CD. ;)

 actually, there were (are?) several such tools written by various
 engineers in various places - in one case, some of my co-workers in my
 team while I was at Sun - trying to automate what you describe, and
 lots more above that, from the ever-growing amount of information the
 explorer provided.
 
 I'm not aware of any of those tools being available outside of Oracle,
 let alone freely distributable, but would welcome being proven wrong
 here :-)
 
Unfortunately not; I miss those tools, especially the *miner. :-(

 HTH
 Michael

Cheers.  ;)

PS: Miss my badge


-- 
with kind regards




Re: [OpenIndiana-discuss] ZFS Pool configuration hanging around

2011-01-17 Thread Bernd Helber
Am 17.01.11 13:01, schrieb Michelle Knight:
 Good suggestions, but now I've got both backup pools in trouble :-)
 
 I've also got another issue. Because they're degraded, I can't import them, 
 which means I can't destroy them, either.
 
 I can import by ID number, but I can't destroy by ID number.
 
 A reboot didn't help, either.
Dear Michelle...

please be so kind and try

zpool import

then please

zpool list
zpool status -v


please check also

 ls -alt /dev/zvol/rdsk/

and provide us with output. ;)


 ls -alt /dev/zvol/rdsk/*/* | awk '{print $11}' | sort | uniq -c | grep -v ^1

You should get a numeric count of the current zvols.

Please also check the ZFS cache file:


strings /etc/zfs/*.cache | egrep 'Mypoolname|Mypoolname2'

Thanks in Advance. :-)


If you would like to destroy your pools, write a new label onto the
disks; that should help.
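On the destroy-by-ID problem from above: zpool destroy only accepts a name, but you can assign a name at import time and then destroy by that name. A sketch; the numeric ID is a placeholder for the one shown in your own zpool import listing:

```shell
# Import the degraded pool by its numeric ID under a temporary name,
# then destroy it by that name (ID below is a placeholder).
zpool import -f 1234567890123 tmppool
zpool destroy tmppool
```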


Cheers

-- 
with kind regards

 Bernd Helber





Re: [OpenIndiana-discuss] ZFS Pool configuration hanging around

2011-01-17 Thread Bernd Helber
Sorry i forgot

zdb -eubbcsL mypool > /root/zfs_corruption.txt

less /root/zfs_corruption.txt

and provide the output.

Thanks in Advance. :-)


Am 17.01.11 13:01, schrieb Michelle Knight:
 Good suggestions, but now I've got both backup pools in trouble :-)
 
 I've also got another issue. Because they're degraded, I can't import them, 
 which means I can't destroy them, either.
 
 I can import by ID number, but I can't destroy by ID number.
 
 A reboot didn't help, either.


-- 
with kind regards

 Bernd Helber

