With a split PCI configuration you could assign a PCI bus to a domain,
configure an HBA on that bus, and possibly SAN-boot that domain. In that
case the domain would be called an I/O domain rather than a guest, but
it would be created the same way a guest is.
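
As a rough illustration only (the bus name pci@780 and the domain name
iodom below are just placeholders -- check the bus names on your platform
with 'ldm list-devices'), the split looks something like:

ldm remove-io pci@780 primary
ldm add-io pci@780 iodom

The remove-io on the control domain takes effect after a reboot of the
primary, and the domain you add the bus to then owns that bus and any
HBA plugged into it.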

Additionally, with LDoms a guest is supposed to be able to survive quick
reboots of the control domain. I/O in the guest should block until the
virtual I/O device comes back online, or until a timeout occurs, if you
configured one when you exported the vdisk to the guest:

ldm add-vdisk [timeout=<seconds>] <disk_name> <volume_name>@<service_name> <ldom>
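
For instance (the names here are made up), something like:

ldm add-vdisk timeout=30 vdisk0 vol0@primary-vds0 myguest

would make I/O on vdisk0 fail after roughly 30 seconds if the service is
still down, instead of blocking indefinitely, which is the behavior when
no timeout is given.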

Regards,
Vic


On Fri, Apr 4, 2008 at 10:58 AM, Steffen Weiberle
<Steffen.Weiberle at sun.com> wrote:
> Maciej Browarski wrote:
>  > Steffen Weiberle wrote:
>  >> Maciej Browarski wrote:
>  >>
>  >>> Hello,
>  >>> Is it possible for the guest OS to run independently of the host OS?
>  >>> I ask because when my host OS crashed and rebooted, I saw that the
>  >>> guest OS still showed its normal uptime.
>  >>> What conditions must be met for the guest OS to be independent of the
>  >>> host OS?
>  >>>
>  >>> Regards
>  >>>
>  >>>
>  >> I'm not sure I understand the question, but I'll try to answer what I
>  >> think it is.
>  >>
>  >> With LDoms releases later than 1.0.0, if the service domain goes down,
>  >> the guest LDoms stay running but can't use the downed service domain's
>  >> I/O. If there are two service domains configured, the second is still
>  >> running, and the guest is (or guests are) configured to fail over I/O,
>  >> they will not only continue to run but also continue to have access to
>  >> the outside world.
>  >>
>  >> I don't know how long a guest can stay running with its boot device
>  >> inaccessible. I can't think of a way to test this with only one service
>  >> domain, since you will lose access to the guest's console as well.
>  >>
>  >> I ran a quick test using 1.0.2 (which is not available yet) and Solaris
>  >> 10 Update 5 build 10 in both the service and guest LDoms. A ping to the
>  >> guest failed for about a minute and started up again early during the
>  >> reboot of the service domain. An ssh session stayed up.
>  >>
>  >> mtllab150 is the guest LDom. I did an 'init 6' in the service LDom.
>  >>
>  >> 11 bytes from mtllab150 (10.1.14.150): icmp_seq=7.
>  >> 11 bytes from mtllab150 (10.1.14.150): icmp_seq=8.
>  >> 11 bytes from mtllab150 (10.1.14.150): icmp_seq=9.
>  >> 11 bytes from mtllab150 (10.1.14.150): icmp_seq=65.
>  >> 11 bytes from mtllab150 (10.1.14.150): icmp_seq=66.
>  >> 11 bytes from mtllab150 (10.1.14.150): icmp_seq=67.
>  >>
>  >> Steffen
>  >>
>  > Thanks for the reply.
>  > So if I configure the guest LDoms to use a direct disk and network,
>  > there is a chance that these LDoms will be independent of the service
>  > domain. But the question is:
>  > Can I use direct access to the hardware without a service domain, like
>  > in Dynamic Domains?
>
>  No. I/O must go through a service domain.
>
>  See page 17 of the Beginners Guide at
>  http://www.sun.com/blueprints/0207/820-0832.html
>
>  > I am currently using nv_85 with LDoms 1.0.2.
>  >
>  > Regards
>  > Maciej
>  >
>
