Thanks Mike. On our Prod boxes we use labels, so I will implement the
same on the RHEL 5.3 host.
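For anyone following along, a minimal sketch of mounting by label (the
device, label name, and mount point below are made up for illustration):

e2label /dev/sdc1 oradata
# /etc/fstab entry; _netdev delays the mount until the network (and
# hence the iSCSI session) is up
LABEL=oradata  /u01/oradata  ext3  _netdev,defaults  0 0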
Incidentally, we have been looking at the SM2 server logs and we think
we have identified a driver/NIC issue that could well be impacting the
RHEL 5.3 server and the RHEL 5.2 (prod box). We have
bigcatxjs wrote:
>>> Mar 17 18:27:59 MYHOST53 kernel: scsi 2:0:0:0: rejecting I/O to dead
>>> device
>> It looks like one of the following is happening:
>>
>> 1. we're using RHEL 5.2 and the target logged us out or dropped the
>> session, and when we tried to log in we got what we thought was a fatal
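In case it is useful, a manual logout/login cycle is one way to kick a
stuck session (the target and portal here are the ones quoted later in
the thread; adjust to your setup):

iscsiadm -m node -T iqn.2000-08.com.datacore:sm2-3 -p 172.16.200.9 --logout
iscsiadm -m node -T iqn.2000-08.com.datacore:sm2-3 -p 172.16.200.9 --login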
Thanks Mike...
On Mar 18, 5:45 pm, Mike Christie wrote:
> bigcatxjs wrote:
> > Hi,
> > We have encountered this error below. This is the first time I have
> > seen it:
>
> This is with the noop settings set to 0 right? Was this the RHEL 5.3 or
> 5.2 setup?
It is our RHEL 5.3 host.
>
bigcatxjs wrote:
> Hi,
> We have encountered this error below. This is the first time I have
> seen it:
This is with the noop settings set to 0 right? Was this the RHEL 5.3 or
5.2 setup?
Could you do
rpm -q iscsi-initiator-utils
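For reference, the noop (ping) settings live in /etc/iscsi/iscsid.conf
on RHEL 5; a quick way to confirm them (the commented lines show what
you would expect with pings disabled):

grep noop_out /etc/iscsi/iscsid.conf
# node.conn[0].timeo.noop_out_interval = 0
# node.conn[0].timeo.noop_out_timeout = 0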
>
>
> Mar 17 12:40:47 MYHOST53 kernel: Vendor: D
On Mar 17, 5:06 pm, Mike Christie wrote:
> bigcatxjs wrote:
> > Thanks Mike...
>
> > On Mar 13, 8:45 pm, Mike Christie wrote:
> >> bigcatxjs wrote:
> At these times is there lots of disk IO? Is there anything in the target
> logs?
> >>> It is fair to say that all these volumes take a heavy hit, in terms
> >>> of I/O.
bigcatxjs wrote:
> Thanks Mike...
>
> On Mar 13, 8:45 pm, Mike Christie wrote:
>> bigcatxjs wrote:
At these times is there lots of disk IO? Is there anything in the target
logs?
>>> It is fair to say that all these volumes take a heavy hit, in terms of
>>> I/O. Each host (excluding the RHEL 5.3 test host) runs two Oracle
>>> databases, of which some have intra-database replica
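A quick way to watch for heavy disk I/O around the times the errors
appear (iostat is in the sysstat package; the 5-second interval is just
an example):

iostat -xk 5
# watch the %util and await columns on the iSCSI-backed devices
# while the errors are being logged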
Thanks Mike...
On Mar 13, 8:45 pm, Mike Christie wrote:
> bigcatxjs wrote:
> >> At these times is there lots of disk IO? Is there anything in the target
> >> logs?
> > It is fair to say that all these volumes take a heavy hit, in terms of
> > I/O. Each host (excluding the RHEL 5.3 test host) runs two Oracle
> > databases, of which some have intra-database replica
bigcatxjs wrote:
>> At these times is there lots of disk IO? Is there anything in the target
>> logs?
> It is fair to say that all these volumes take a heavy hit, in terms of
> I/O. Each host (excluding the RHEL 5.3 test host) runs two Oracle
> databases, of which some have intra-database replica
bigcatxjs wrote:
> UPDATE: RHEL 5.3 Host is showing errors. No Disk I/O to SAN volume
> (last I/O Thursday 12th March);
>
Is there anything in the log before this? Something about a ping or nop
timing out?
> Mar 13 10:38:49 MYHOST53 kernel: connection1:0: iscsi: detected conn
> error (1011)
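One quick check, assuming the default syslog location on RHEL 5 (the
grep pattern is only a starting point):

grep -iE 'nop|ping timeout|conn error' /var/log/messages
# look for nop/ping timeout entries logged just before the
# conn error (1011) lines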
UPDATE: RHEL 5.3 Host is showing errors. No Disk I/O to SAN volume
(last I/O Thursday 12th March);
Mar 13 10:38:49 MYHOST53 kernel: connection1:0: iscsi: detected conn
error (1011)
Mar 13 10:38:49 MYHOST53 iscsid: Kernel reported iSCSI connection 1:0
error (1011) state (3)
Mar 13 10:38:52 MYHOST53 iscsid
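If it helps, the state of the session and its connection after an error
like this can be dumped with plain iscsiadm (print level 1 is enough to
show the state fields):

iscsiadm -m session -P 1
# "iSCSI Connection State" / "Internal iscsid Session State" show
# whether the session recovered after the conn error or is still
# in recovery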
Thanks Mike,
> For this RHEL 5.2 setup, does it make a difference if you do not use
> ifaces and set up the box like in 5.3 below?
I have used bonded ifaces so that the I/O requests can be split across
multiple NICs (both server-side and on the DataCore SANmelody SM node
NICs). This split is ach
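For comparison, a minimal sketch of the iface-based alternative, where
each session is bound to one NIC and dm-multipath (rather than bonding)
spreads the I/O; the eth0/eth1 and iface names are made up for
illustration:

iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface1 --op=new
iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1
# discovery records one node per iface, giving one session per NIC
# after login
iscsiadm -m discovery -t st -p 172.16.200.9 -I iface0 -I iface1
iscsiadm -m node --loginall=all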
bigcatxjs wrote:
For this RHEL 5.2 setup, does it make a difference if you do not use
ifaces and set up the box like in 5.3 below?
> iscsiadm:
> iSCSI Transport Class version 2.0-724
> iscsiadm version 2.0-868
> Target: iqn.2000-08.com.datacore:sm2-3
> Current Portal: 172.16.200.9:326
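For anyone wanting to reproduce a listing like the one above, the
Target / Current Portal details come from session mode; print level 3
is assumed here:

iscsiadm -m session -P 3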
Thanks Ulrich,
Unfortunately, budgetary restrictions prevent us from moving to Fibre
Channel :>(
Rich.
On Mar 12, 2:56 pm, "Ulrich Windl" wrote:
> Hi,
>
> I haven't investigated, but I see similar short "offline periods" for iSCSI
> here.
> For your situation I'd recommend moving to Fibre Channel technology
> for Oracle databases.
Hi,
I haven't investigated, but I see similar short "offline periods" for iSCSI
here.
For your situation I'd recommend moving to Fibre Channel technology for
Oracle databases. Just MHO...
Regards,
Ulrich
On 12 Mar 2009 at 4:42, bigcatxjs wrote:
>
> Hi,
> This is my first post on this Forum, so apologies in advance if I
> have missed something or not found an existing post that covers this
> topic.
Hi,
This is my first post on this Forum, so apologies in advance if I have
missed something or not found an existing post that covers this topic.
Situation:
We have a number of hosts running RHEL 5.2 (x86_64) for our Oracle
database estate. A typical deployment could comprise a DELL 1955
Blade w