Thanks a lot for the quick replies. According to Update manager, my system is
up to date,
so it seems the latest fixes you mentioned did not make it (yet) into
OpenIndiana.
Is there a chance this will happen in the near future?
Best regards,
Oliver
Hi,
could you start a system terminal and send the output of:
uname -a
and
pkg publisher
to this list?
Best regards,
Milan
On 25.06.2012 09:17, Weiergräber, Oliver H. wrote:
> Thanks a lot for the quick replies. According to Update manager, my
> system is up to date,
> so it seems the latest fixes you mentioned did not make it (yet) into
> OpenIndiana.
Hi folks,
I have a basic newbie question: can somebody help me to understand how
exactly the boot environments created by 'pkg image-update' work?
Let's say I start with the BE 'mysystem'. My initial expectation -
obviously incorrect - was that performing the update would take a
snapshot (call it
On 2012-06-25 18:18, Aneurin Price wrote:
> Hi folks,
>
> I have a basic newbie question: can somebody help me to understand how
> exactly the boot environments created by 'pkg image-update' work?
Hello, I might make a few mistakes (and corrections are welcome),
but here's the way I see it (and in s
Hi Aneurin,
I'd expect one of the design goals of the whole image-update process was to
work with as little interruption as possible (we had this in live upgrade
as well, so the historical precedent is fairly clear, IMO anyway): you
could run your update, watch it finish, analyse logs etc., all wh
On 25 June 2012 15:44, Michael Schuster wrote:
> Hi Aneurin,
>
> I'd expect one of the design goals of the whole image-update process was to
> work with as little interruption as possible (we had this in live upgrade
> as well, so the historical precedent is fairly clear, IMO anyway): you
> could
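The boot-environment flow being discussed above can be sketched as a typical update session. This is an illustrative transcript, not output from any machine in this thread; the BE names are made up, and the exact names `pkg` picks will differ:

```shell
# Sketch of the usual BE lifecycle on OpenIndiana (names are invented).
$ beadm list                 # show existing boot environments; "N" marks
                             # the one active now, "R" the one on reboot
$ pfexec pkg image-update    # clones the active BE, applies updates to
                             # the clone, leaves the running BE untouched
$ beadm list                 # a new BE now exists and is flagged "R"
$ pfexec init 6              # reboot into the updated clone
```

Because the update lands in a clone, the running system stays usable the whole time, and if the new BE misbehaves you can pick the old one from the boot menu or `beadm activate` it again.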
OK - my back is up against the wall.
mich@jaguar:~# zpool status
  pool: data
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted
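`zpool status` marks the failing vdev as FAULTED in its device table. A quick, hypothetical way to pull the affected device names out of a saved capture; the capture below is an invented sample, not the pool from this thread:

```shell
# Hypothetical sketch: extract FAULTED vdev names from a saved
# 'zpool status' capture (second column is the device STATE).
capture='        NAME        STATE     READ WRITE CKSUM
        data        DEGRADED      0     0     0
          mirror-0  DEGRADED      0     0     0
            c3t1d0  ONLINE        0     0     0
            c3t2d0  FAULTED       0    12     0  too many errors'
printf '%s\n' "$capture" | awk '$2 == "FAULTED" {print $1}'
```

On the sample above this prints the one faulted device name, which is the argument you would hand to `zpool replace`.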
On 06/25/12 08:08 PM, michelle wrote:
> OK - my back is up against the wall.
>
> mich@jaguar:~# zpool status
>   pool: data
>  state: DEGRADED
> status: One or more devices are faulted in response to persistent errors.
>         Sufficient replicas exist for the pool to continue functioning in a
>         degraded state.
On 06/25/2012 11:08 AM, michelle wrote:
> The cables appear fine, so I'm dealing with either a controller issue
> or a hard drive issue. I don't know how to interpret those earlier
> "messages" messages.
Appears it is the HD. Have you looked at this doc titled:
Too many I/O errors on ZFS de
On 06/25/2012 08:08 PM, michelle wrote:
> OK - my back is up against the wall.
>
> mich@jaguar:~# zpool status
> pool: data
> state: DEGRADED
> status: One or more devices are faulted in response to persistent errors.
> Sufficient replicas exist for the pool to continue functioning in a
Well, right now I can't do anything.
I asked it to unmount the data set and it said it was busy.
I checked all my client links and everything was closed, so I asked it
to export again. It still said it was busy.
I decided to use -f and then it froze.
I logged in to another terminal session a
I did a hard reset and moved the drive to another channel.
The fault followed the drive so I'm certain it is the drive, as people
have said.
The thing that bugs me is that this ZFS fault locked up the OS - and
that's a real concern.
I think I'm going to need to have a hard think about my op
On 6/25/2012 3:31 PM, michelle wrote:
> I did a hard reset and moved the drive to another channel.
>
> The fault followed the drive so I'm certain it is the drive, as people
> have said.
>
> The thing that bugs me is that this ZFS fault locked up the OS - and
> that's a real concern.
>
> I think I'm going to need to have a hard think about my op
On 06/25/2012 09:31 PM, michelle wrote:
> I did a hard reset and moved the drive to another channel.
>
> The fault followed the drive so I'm certain it is the drive, as people
> have said.
Then return it for warranty repairs or get a new one. SMART data should
help you get a clearer picture of ex
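For reading that SMART data, smartmontools works on OpenIndiana. A hedged sketch; the device path is hypothetical and the attribute names vary by vendor:

```shell
# Hypothetical device path; smartmontools must be installed.
$ pfexec smartctl -a /dev/rdsk/c3t2d0 | egrep 'Reallocated|Pending|Uncorrect'
# Non-zero Reallocated_Sector_Ct or Current_Pending_Sector counts
# usually confirm a dying drive, and are the kind of evidence an
# RMA/warranty claim looks at.
```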
Sure, here we go:
(~): uname -a
SunOS mymachine 5.11 oi_151a4 i86pc i386 i86pc Solaris
(~): pkg publisher
PUBLISHER                   TYPE     STATUS   URI
openindiana.org             origin   online   http://pkg.openindiana.org/dev/
opensolaris.org (non-sticky, disa
On 06/25/2012 03:31 PM, michelle wrote:
> I did a hard reset and moved the drive to another channel.
>
> The fault followed the drive so I'm certain it is the drive, as people
> have said.
>
> The thing that bugs me is that this ZFS fault locked up the OS - and
> that's a real concern.
>
> I think
> Date: Mon, 25 Jun 2012 17:06:07 -0400
> From: Ray Arachelian
> OpenPGP: id=E556D4A0
>
> On 06/25/2012 03:31 PM, michelle wrote:
> > I did a hard reset and moved the drive to another channel.
> >
> > The fault followed the drive so I'm certain it is the drive, as people
> > have said.
Feel free to test the included 32-bit build of openusb 1.1.6 for oi_151a at:
https://www.illumos.org/issues/2934
An update (or patch) may resolve a few reported bugs in USB I/O transfers.
The original openusb 1.0.1 userland port is maintained upstream.
~ Ken Mays
I have observed the same SATA hard disk error-wait behavior across many
operating systems. It's a SATA hardware issue. I have even observed it
on expensive high-end storage servers (HP, IBM, etc.). The SATA disk or
subsystem keeps trying to correct/recover errors when it should not, and should just
return the fault
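That in-drive retry time can sometimes be bounded. Many (not all) SATA drives support SCT Error Recovery Control, and smartmontools can query or set it; a hedged sketch, with a hypothetical device path, noting that desktop drives often reject the set command and forget the setting on power cycle:

```shell
# Query/set the drive's error-recovery timeout (SCT ERC), if supported.
# Values are tenths of a second: 70 = 7 s, for reads and writes.
$ pfexec smartctl -l scterc /dev/rdsk/c3t2d0          # query current setting
$ pfexec smartctl -l scterc,70,70 /dev/rdsk/c3t2d0    # cap recovery at 7 s
```

With a short cap, the drive returns the error promptly and lets ZFS reconstruct the data from redundancy instead of stalling the whole I/O path.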
On Jun 25, 2012, at 2:06 PM, Ray Arachelian wrote:
> On 06/25/2012 03:31 PM, michelle wrote:
>> I did a hard reset and moved the drive to another channel.
>>
>> The fault followed the drive so I'm certain it is the drive, as people
>> have said.
>>
>> The thing that bugs me is that this ZFS faul
Apologies,
This went to an individual rather than back to the group.
Thanks for the response.
The thing that set of major alarms in my head is the fact that these
errors caused OI to freeze up to the degree where it needed to be
powered off. It would acknowledge the power switch instruc
Hello all,
I'm wondering what options are available for root filesystem in OI? By
default, install uses ZFS and creates a rpool. But If I'm a ZFS hacker
and made some changes to some core structures, how does one go about
debugging that? Is dropping to kmdb and debugging the only available
(
On Mon, Jun 25, 2012 at 6:55 PM, Vishwas Durai wrote:
> I'm wondering what options are available for root filesystem in OI? By
> default, install uses ZFS and creates a rpool.
I was about to say, "use UFS", but I found this:
http://openindiana.org/pipermail/openindiana-discuss/2011-June/004488.
UFS root should still work, also NFS root (convenient for ZFS debug work:)
On Mon, Jun 25, 2012 at 9:00 PM, Jan Owoc wrote:
> On Mon, Jun 25, 2012 at 6:55 PM, Vishwas Durai wrote:
>> I'm wondering what options are available for root filesystem in OI? By
>> default, install uses ZFS and creates
UFS root certainly works, but not sure if the OI installer makes it easy?
-- richard
On Jun 25, 2012, at 7:37 PM, Gordon Ross wrote:
> UFS root should still work, also NFS root (convenient for ZFS debug work:)
>
> On Mon, Jun 25, 2012 at 9:00 PM, Jan Owoc wrote:
>> On Mon, Jun 25, 2012 at 6:55
On Mon, Jun 25, 2012 at 10:37 PM, Gordon Ross wrote:
> UFS root should still work, also NFS root (convenient for ZFS debug work:)
>
As concepts within illumos, yes.
With the full OI distribution, doubtful.
-- Rich
OpenIndiana-discuss mailing list
On Mon, Jun 25, 2012 at 11:12 PM, Richard Lowe wrote:
> On Mon, Jun 25, 2012 at 10:37 PM, Gordon Ross wrote:
>> UFS root should still work, also NFS root (convenient for ZFS debug work:)
>>
>
> As concepts within illumos, yes.
>
> With the full OI distribution, doubtful.
>
> -- Rich
Last I looke
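For the original question of poking at live ZFS internals, the modular debugger works regardless of what the root filesystem is. A hedged sketch of the usual entry points on illumos of that era (GRUB legacy); `::spa` is a standard mdb dcmd:

```shell
# Inspect live ZFS kernel state with mdb (needs root privileges).
$ pfexec mdb -k
> ::spa                  # list imported pools and their spa_t addresses
> ::quit
# For boot-time debugging, add -k (load kmdb) or -kd (load kmdb and
# drop into it before startup) to the kernel line in GRUB's menu.lst.
```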
The error reported task_file_status = 0x4041 on port 3 is a result of an
ATA response where the PxIS.TFES bit was set. The ahci driver must do a
port reset at that point. The SATA disk failed to perform an operation
and reported it. I suspect this is not a data error and may be more
along the lines of
Many thanks to all.
I now have a better understanding of what is happening, why OI locked up
in the way it did, and that with the exception of FreeNAS (ZFS v15) and
NAS4Free (claiming ZFS v28), it will be the same thing with Schillix and
NexentaStor.
I've sold some of my possessions in order