[Ocfs2-users] heartbeat and slot issues.

2010-11-24 Thread brad hancock
I set up a host with an OCFS2 partition on a SAN, then cloned that host to
another machine and renamed it. Both machines mount their OCFS2 partitions but
log the following errors.


Host that was cloned:
(1888,0):o2hb_do_disk_heartbeat:762 ERROR: Device "sdb1": another node is
heartbeating in our slot!
[345413.242260] sd 1:0:0:0: reservation conflict
[345413.242270] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK
driverbyte=DRIVER_OK,SUGGEST_OK
[345413.242274] end_request: I/O error, dev sdb, sector 1735
[345413.242536] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[345413.242788] (1888,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5
[345413.243159] sd 1:0:0:0: reservation conflict
[345413.243163] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK
driverbyte=DRIVER_OK,SUGGEST_OK
[345413.243166] end_request: I/O error, dev sdb, sector 1735
[345413.243401] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[345413.243639] (1888,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5
[448460.370132] sd 1:0:0:0: reservation conflict
[448460.370145] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK
driverbyte=DRIVER_OK,SUGGEST_OK
[448460.370149] end_request: I/O error, dev sdb, sector 1735
[448460.370395] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[448460.370638] (1888,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5


Clone:

 sd 1:0:0:0: reservation conflict
[17643.588011] sd 1:0:0:0: [sdb] Result: hostbyte=DID_OK
driverbyte=DRIVER_OK,SUGGEST_OK
[17643.588011] end_request: I/O error, dev sdb, sector 1735
[17643.588011] (0,0):o2hb_bio_end_io:225 ERROR: IO Error -5
[17643.588011] (1859,0):o2hb_do_disk_heartbeat:753 ERROR: status = -5
[17643.588011] sd 1:0:0:0: reservation conflict

This didn't seem to be a problem at first, but I'm noticing the hosts are no
longer seeing the same data. I unmounted the drives and remounted them, and
the data matched again.
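
The "reservation conflict" lines mean the storage is rejecting this host's I/O
because another initiator holds a SCSI persistent reservation on the LUN,
which is why the heartbeat writes fail with -EIO (IO Error -5). A hedged way
to inspect that, assuming the sg3_utils package is available:

# List the persistent-reservation keys registered on the LUN.
sg_persist -k /dev/sdb
# Show the current reservation holder, if any.
sg_persist -r /dev/sdb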


Thanks for any guidance,


cat /etc/ocfs2/cluster.conf
node:
    ip_port = 
    ip_address = 10.x.x.248
    number = 0
    name = smes01
    cluster = ocfs2

node:
    ip_port = 
    ip_address = 10.x.x.249
    number = 1
    name = smes02
    cluster = ocfs2

cluster:
    node_count = 2
    name = ocfs2

cluster.conf is the same on both hosts.
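
As a sanity check: with the o2cb stack, each host identifies its own entry by
matching its hostname against the name= lines above, so a freshly cloned
machine that still carries the original hostname will claim the original's
heartbeat slot. A minimal check on each host:

# The hostname must match exactly one name= entry (smes01 or smes02).
uname -n
# Confirm the cluster stack is online and joined to the expected cluster.
/etc/init.d/o2cb status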

Re: [Ocfs2-users] heartbeat and slot issues.

2010-11-24 Thread Ulf Zimmermann
After the clone, you probably want to run tunefs.ocfs2 -U to reset the UUID.
This is one of the steps we perform when cloning volumes for database
refreshes.
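
A minimal sketch of that step, assuming /dev/sdb1 is a genuinely independent
clone (the mount point is hypothetical, and the filesystem must be unmounted
on every node first):

# Run once, from a single node, against the cloned volume.
umount /mnt/ocfs2
tunefs.ocfs2 -U /dev/sdb1   # generates and writes a fresh UUID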



Re: [Ocfs2-users] heartbeat and slot issues.

2010-11-24 Thread brad hancock
Thanks for the response.
Is it normal that when I change it on one node, the other node reflects the
same UUID?


node1:
tunefs.ocfs2 -q -Q "BS=%5B\nUUID=%U\n" /dev/sdb1
BS= 4096
UUID=ea0778bd-bdaa-44af-8fbf-cb4a5d85e79f


node2:
tunefs.ocfs2 -q -Q "BS=%5B\nUUID=%U\n" /dev/sdb1
BS= 4096
UUID=ea0778bd-bdaa-44af-8fbf-cb4a5d85e79f
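
For context: the UUID lives in the on-disk superblock, so every node that
reads the same LUN will report the same value. One hedged way to tell whether
/dev/sdb1 is a single shared device rather than two independent clones is to
checksum its first blocks on each node and compare:

# Identical output on both nodes strongly suggests one shared LUN.
dd if=/dev/sdb1 bs=4096 count=8 2>/dev/null | md5sum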




On Wed, Nov 24, 2010 at 3:00 PM, Ulf Zimmermann wrote:

> After the clone, you probably want to run tunefs.ocfs2 -U to reset the
> UUID. This is one of the steps we perform when cloning volumes for
> database refreshes.

Re: [Ocfs2-users] heartbeat and slot issues.

2010-11-24 Thread Ulf Zimmermann
Then my guess would be that you haven't actually cloned the volume; both
hosts are looking at the same one.
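
One hedged way to confirm that guess, using the detect mode of the stock
ocfs2-tools:

# Run on each host and compare: one shared LUN will keep showing the
# identical UUID no matter which side resets it with tunefs.ocfs2 -U.
mounted.ocfs2 -d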



___
Ocfs2-users mailing list
Ocfs2-users@oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users