[SLUG] UUID problem in Kubuntu Hardy 8.04 beta after updates.

2008-04-06 Thread bill
Have been running the Kubuntu Hardy 8.04 beta for a while without any
problems. It even set up my widescreen monitor properly, without needing
to install the non-free NVidia driver.


Last week (Wednesday, I think) there was a large update (about 115
packages), after which my system was screwed up as far as recognising and
mounting drives/partitions goes. Both HDs are PATA drives.


Firstly, IDE partitions were renamed from /dev/hdXN to /dev/sdXN (e.g.
/dev/hda1 became /dev/sda1).

Secondly, my system no longer sees my 2nd hard drive (3 ext3 partitions
plus 1 swap). It doesn't show up anywhere.


Watching the boot process I see 3 lines that state:-

Unable to resolve UUID

/var/log/fsck/checkfs shows nothing relating to this drive.

I assume that these messages refer to the 3 ext3 partitions on the 2nd 
hard drive.


This drive mounts fine if I boot the Mepis 7.0 LiveCD.

I've Googled and searched the various forums without finding a solution.
It seems to be a common problem, along with Hardy duplicating mount points
on each reboot (not in my case).


It also appears that many users believe this is a bug, in that the
developers did not allow for multi-boot systems, or for removing drives,
repartitioning, or resizing partitions.


There seems to be no solution at this time - at least none that I can find.

Previous to this update I had entries in /media for all of my partitions,
but now it seems that this directory is no longer used, and the entries
are incorrect anyway, since the system now uses sdXN instead of hdXN.


/etc/fstab now only shows partitions for the 1st HD.

In /dev/disk/ there are by-id, by-path and by-uuid subdirectories.

Both HDs show in /dev/disk/by-id.

/dev/disk/by-path only has entries for the 1st HD.

/dev/disk/by-uuid only has entries for the 1st HD.

As the 2nd HD is apparently not found during the boot process, I assume
that is why it is not in /dev/disk/by-path or /dev/disk/by-uuid, and that
the entries in /dev/disk/by-id remain from before the update.
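
For reference, here is roughly how I have been checking what the kernel
and udev see (assuming the 2nd drive should now appear as /dev/sdb under
the new naming):

   cat /proc/partitions        # partitions the kernel detected at boot
   ls -l /dev/disk/by-uuid/    # udev's UUID symlinks to device nodes
   dmesg | grep -i sdb         # kernel messages about the 2nd drive

On my system only the 1st HD shows up in these.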


As the 2nd HD is not accessible from the system I can't even check its
UUIDs, nor am I able to access it from the Kubuntu Hardy LiveCD or from
a 2nd install on the 1st HD.


Questions:-

1) If I remove the 2nd HD (/dev/hdb, or /dev/sdb as it now will be) and
attach it as /dev/sdc, will it be recognised and given new UUIDs?


2) If I boot from a LiveCD and check the HDs' UUIDs (using, say, blkid),
can I then use these UUIDs to fix my system somehow (see the sketch after
these questions), or will the UUIDs change with the use of the LiveCD?


3) Do the UUIDs change when the HD is accessed from a different
distro/installation (i.e. a multi-boot system with a shared HD for data)?


4) Are the UUIDs stored anywhere other than in /dev/disk/by-uuid?
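
For question 2, the rough plan I have in mind (the UUID and mount point
below are made up, just to show the idea) is to read the UUIDs from a
LiveCD and then put matching lines in /etc/fstab on the installed system:

   sudo blkid /dev/sdb1
   # e.g. output: /dev/sdb1: UUID="0a1b2c3d-made-up-uuid" TYPE="ext3"

   # corresponding /etc/fstab line on the installed system:
   # UUID=0a1b2c3d-made-up-uuid  /media/disk2part1  ext3  defaults  0  2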

I had trouble with UUIDs once before when I resized/repartitioned a drive,
but I fixed that OK. This problem is not related to anything being changed
in the PC other than updating the packages.


Any help/links appreciated.

Bill






Re: [SLUG] Network Real-Time Hot Filesystem Replication?

2008-04-06 Thread Amos Shapira
On Sun, Apr 6, 2008 at 2:47 PM, Jamie Wilkinson [EMAIL PROTECTED] wrote:

> This one time, at band camp, Crossfire wrote:
>> Dave Kempe wrote:
>>> Crossfire wrote:
>>>> I want to be able to set it up so /home (and maybe other filesystems)
>>>> are replicated from one to the other, in both directions, in real
>>>> time so they can run in an all-hot redundant cluster.
>>>>
>>>> The environment should be mostly read-oriented, so I can live with
>>>> write-latent solutions as long as they handle the race/collision
>>>> gracefully (preferably by actually detecting and reporting it if they
>>>> can't avoid it).
>>>
>>> isn't this just a description of a network filesystem... say NFS?
>>
>> No.  Network Filesystems still have a distinct single storage location.
>> If that storage is taken offline, clients can only error or hang.
>>
>> With a hot real-time replicated filesystem, all involved nodes would
>> have a full local copy at all times and would be able to continue
>> operation.

> I agreed with your earlier decision about not using drbd because you
> wouldn't be able to write from multiple nodes to the filesystem; all the
> slaves would have to be mounted read-only.  However if you wanted to get


Can you provide links which support this?

I've been using DRBD for a few months now (just in stand-by mode, but I've
been following the forums and docs during that time) and all indications
are that:

1. You CAN'T mount a non-cluster-aware file system even read-only on the
secondary node since the primary will change FS-structs under the feet of
the read-only node and cause it to crash (because non-cluster-aware
filesystems assume that they are the only ones who touch that partition).
2. You CAN mount read-write on multiple nodes if you use one of the
cluster-aware filesystems (GFS and OCFS are regularly mentioned, but if you
find any other cluster-aware file system then it sounds like it will work
too).

Ref:
http://www.linux-ha.org/DRBD/FAQ#head-2cad8caa095cfb6e2935261cb595390c742ebd86
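
For what it's worth, dual-primary mode in 0.8 is just a couple of lines in
the resource config. A minimal sketch (hostnames, disks and addresses are
made up, and I haven't run dual-primary myself):

   resource r0 {
     protocol C;               # synchronous: a write completes only after
                               # both nodes have it on disk
     net {
       allow-two-primaries;    # permit both nodes to be primary at once
     }
     on alpha {
       device    /dev/drbd0;
       disk      /dev/sdb1;
       address   192.168.0.1:7788;
       meta-disk internal;
     }
     on bravo {
       device    /dev/drbd0;
       disk      /dev/sdb1;
       address   192.168.0.2:7788;
       meta-disk internal;
     }
   }

Without a cluster-aware filesystem on top (point 2 above), running two
primaries like this would corrupt the data.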


> fancy you could still use drbd (which is a great fit for all your other
> requirements) on a multi-node fileserver, and do some nifty failover
> using IP takeover.
>
> Or if you're trying to share the local disk of a lot of nodes, then what
> if you used DRBD on them all to replicate the block device, and run a NFS
> server on the nodes themselves?  Yes you'd get a lot of network traffic
> between them, but it'd work, no? :)


Have you tried this suggestion? From everything I've read about DRBD, this
will cause all secondary nodes to crash.

Cheers,

--Amos


Re: [SLUG] Network Real-Time Hot Filesystem Replication?

2008-04-06 Thread Adrian Chadd
On Sun, Apr 06, 2008, Amos Shapira wrote:

> I've been using DRBD for a few months now (just in stand-by mode, but
> I've been following the forums and docs during that time) and all
> indications are that:
>
> 1. You CAN'T mount a non-cluster-aware file system even read-only on the
> secondary node since the primary will change FS-structs under the feet of
> the read-only node and cause it to crash (because non-cluster-aware
> filesystems assume that they are the only ones who touch that partition).
>
> 2. You CAN mount read-write on multiple nodes if you use one of the
> cluster-aware filesystems (GFS and OCFS are regularly mentioned, but if
> you find any other cluster-aware file system then it sounds like it will
> work too).

IIRC they assume a single back-end device. Does DRBD give you a journaling
block device which will stall updates until they've been pushed? How will
the FSes tolerate the device IO being possibly milliseconds later than the
master?

> Have you tried this suggestion? From everything I've read about DRBD,
> this will cause all secondary nodes to crash.

I looked into it about a year ago and I couldn't find any simple way of
doing this using free software. There are CODA and AFS as possible
solutions, but they still push the notion of master/slave rather than
equal peers, which Chris mentions he needs (i.e. constant synchronisation
between each member rather than periodic pushback).

Chris, try looking at CODA/AFS support?



Adrian



Re: [SLUG] Network Real-Time Hot Filesystem Replication?

2008-04-06 Thread Amos Shapira
On Sun, Apr 6, 2008 at 9:25 PM, Adrian Chadd [EMAIL PROTECTED] wrote:

> On Sun, Apr 06, 2008, Amos Shapira wrote:
>
>> I've been using DRBD for a few months now (just in stand-by mode, but
>> I've been following the forums and docs during that time) and all
>> indications are that:
>>
>> 1. You CAN'T mount a non-cluster-aware file system even read-only on the
>> secondary node since the primary will change FS-structs under the feet
>> of the read-only node and cause it to crash (because non-cluster-aware
>> filesystems assume that they are the only ones who touch that
>> partition).
>>
>> 2. You CAN mount read-write on multiple nodes if you use one of the
>> cluster-aware filesystems (GFS and OCFS are regularly mentioned, but if
>> you find any other cluster-aware file system then it sounds like it will
>> work too).
>
> IIRC they assume a single back-end device. Does DRBD give you a
> journaling block device which will stall updates until they've been
> pushed? How will the FSes tolerate the device IO being possibly
> milliseconds later than the master?


Again - I haven't got around to actually using it (as much as I'd like to
just sit down and try it), but you can see in the link I sent with my
previous reply that they clearly claim it is supported.

>> Have you tried this suggestion? From everything I've read about DRBD,
>> this will cause all secondary nodes to crash.
>
> I looked into it about a year ago and I couldn't find any simple way of

Could it be that you looked at 0.7? I always used 0.8+ and got the
impression that there were major improvements introduced in it over 0.7.

> doing this using free software. There are CODA and AFS as possible
> solutions, but they still push the notion of master/slave rather than
> equal peers, which Chris mentions he needs (i.e. constant synchronisation
> between each member rather than periodic pushback).


That's what DRBD 0.8 + GFS/OCFS is promoted as.
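
i.e., going by the docs (untested by me, and the device and mount point
are made up), once allow-two-primaries is set you promote the resource on
both nodes and mount the cluster filesystem on each:

   drbdadm primary r0                   # run on both nodes
   mount -t ocfs2 /dev/drbd0 /mnt/data  # assumes the OCFS2 cluster stack
                                        # is already configured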

Cheers,

--Amos


[SLUG] SLUG Monthly Meeting, Friday 2 May

2008-04-06 Thread Sridhar Dhanapalan
Our last announcement contained an error in the subject line. Please accept 
our apologies.


== April SLUG Monthly Meeting ==

You can read the full version of this announcement on the Web at 
http://www.slug.org.au/node/97


When:
   18:30 - 20:30, Friday, 2 May 2008

NOTE: Due to a clash with ANZAC Day, the April SLUG meeting has been deferred 
by one week to 2 May. The official May meeting will not be affected.

We start at 18:30 but we ask that people arrive 15 minutes early so we can all 
get into the building and start on time. Please do not arrive before 18:00, 
as it may hinder business activities for our host!

Appropriate signage and directions will be posted on the building.


Where:
   Atlassian[0], 173-185 Sussex Street, Sydney
   (corner of Sussex and Market Street)

Entry is via the rear on Slip Street. There are stairs going down along the 
outside of the building from Sussex St to near the entrance. A map of the 
area and directions can be found here[1].


= Talks =

General Talk: TBA
In-Depth Talk: TBA

We will release another announcement after we confirm our speakers.


= Meeting Schedule =

See here[2] for an explanation of the segments.

* 18:15 : Open Doors
* 18:30 : Announcements, News, Introductions
* 18:45 : General Talk (see above)
* 19:30 : Intermission
* 19:45 : Split into two groups for
* In-depth Talk (see above)
* SLUGlets: Linux QA and other miscellany
* 20:30 : Dinner

Dinner is at Golden Harbour Restaurant, in Chinatown. We will be having the 
$24 Banquet[3], but we will be collecting $25 per head for ease of accounting 
and to cover a tip. We will be taking numbers during the break to confirm the 
reservation size. If you have any particular dietary requirements (e.g. 
vegetarian), or if you would prefer to order separately, let us know 
beforehand. Dinner is a great way to socialise and learn in a relaxed 
atmosphere :)

We hope to see you there!


[0] http://www.atlassian.com
[1] http://tinyurl.com/35fxes
[2] http://www.slug.org.au/meetings/meetingformat
[3] http://www.goldenharbour.com.au/specials.html

-- 
Sridhar Dhanapalan
President
Sydney Linux Users Group
http://www.slug.org.au


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html