Re: Blueprint: Ubuntu Server Guide development (LTS only)

2013-07-16 Thread Peter M. Petrakis



On 07/16/2013 02:36 PM, Peter Matulis wrote:

On 07/16/2013 02:04 PM, Peter Matulis wrote:

However, I'm supposed to be testing an XML client that
might make people's lives easier. Do you feel like looking at it?

http://www.xmlmind.com/xmleditor/


Ha, it's free to use but not open source.  But maybe some other editor.


I've had great success with xmlmind; it's what I used to write
the multipath chapter. I tried numerous clients (can't recall which)
leading up to this choice, and none of them came close.



https://help.ubuntu.com/community/DocBookEditors

peter matulis




--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: Monitoring IBM ServeRAID with Ubuntu Server 12.04

2013-05-28 Thread Peter M. Petrakis



On 05/28/2013 01:42 PM, Jorge Andres Brugger wrote:

Hi.

As the subject says: is it possible to monitor an IBM ServeRAID RAID
array from Ubuntu 12.04? AFAIK, IBM Director is only available in
RPM flavour. Is there any alternative?


alien or rpm2cpio. This is really an OEM issue: they're distributing
the tool, so they are responsible for packaging and support. Usually it's
not too painful to convert it, and the tools themselves are self-contained
for the most part, e.g. no dependencies.
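
Something along these lines usually does it (the RPM file name below is just a
placeholder for whatever IBM ships):

  sudo apt-get install alien
  sudo alien --to-deb --scripts serveraid-mgr.rpm
  # or simply unpack it without converting:
  rpm2cpio serveraid-mgr.rpm | cpio -idmv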



Thanks!



--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: Server increasing load due increasing processes in D state

2013-02-28 Thread Peter M. Petrakis



On 02/28/2013 02:13 AM, Alessandro Tagliapietra wrote:

Thanks for the reply,


1. Reconfigure your VMs to use a single vcpu and cap your available vcpus to 8,
or even fewer, considering that more than half of your applications are IO bound.



Do you think a correct config would be to have fewer than 8 cpus used by the VMs?


It truly depends on your workload; KVM will happily DoS the host
system (HV) if you let it. This isn't a new concept:
over-provisioning guidelines are basically the same for every VM
server out there (Xen, KVM, VMware, etc.), so any documentation on the
subject is interchangeable.
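
As a rough sketch (the domain name here is made up), trimming a guest down to
one vcpu with libvirt looks something like:

  virsh vcpucount guest01
  virsh setvcpus guest01 1 --config   # takes effect on the next guest boot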




2. the software raid can be a choke point for all application progress,
virtual or otherwise; it needs CPU time too.



Yeah, we're going to switch to hardware RAID as soon as we have the money to.

By the way, the other server with the same specs but 17 vcpus on it doesn't have
any issue…


It doesn't have a problem until it does. Get the scheduler into the
right zone with the right resource contention and the VMs may never get
enough useful HV CPU time again; this also makes the reported
VM load useless, as there aren't enough cycles left to update the statistics
to reflect the actual state.


Also, can I still run that dump when I get hanging processes and let you know?
Wouldn't things just go slower under high load, rather than hang with no way to
stop them?




This is not a software problem; it's a deployment and
administration problem. You are ultimately responsible here.

 

Best Regards and thanks again

--

Alessandro Tagliapietra
alexfu.it (http://www.alexfu.it)

Il giorno lunedì 25 febbraio 2013, alle ore 21:38, Peter M. Petrakis ha scritto:


summary: You're massively over-provisioned, +100%

resolution:
1. Reconfigure your VMs to use a single vcpu and cap your available vcpus to 8,
or even fewer, considering that more than half of your applications are IO bound.
2. the software raid can be a choke point for all application progress,
virtual or otherwise; it needs CPU time too.

remarks:
you have a test setup; treat it as such. 19 vcpus is overboard.

Peter

On 02/25/2013 12:17 PM, Alessandro Tagliapietra wrote:

Sorry for disturbing again,

after the restart I've found that I'm unable to ssh into the VMs, since on login
they run byobu, which now hangs (it never did before on the VMs).

I managed to Ctrl-C fast enough to keep byobu from starting, and an strace on it
gave me this:

http://pastebin.com/raw.php?i=KYMbsxKV

Thanks

Best Regards

--

Alessandro Tagliapietra
alexfu.it (http://www.alexfu.it)

Il giorno lunedì 25 febbraio 2013, alle ore 17:51, Alessandro Tagliapietra ha 
scritto:


Hi Eduardo

Thank you for the tips.

I'll wait a few days and let you know when this happens again.

About the load: system CPU wasn't more than 10% used according to top, and
iowait was at 2% most of the time.

We have 4 x 2 (HT) cores on the server and a total of 19 vcpus allocated to the
VMs running on that host.

The VMs run mostly nginx+php-fpm+mysql; one also runs RabbitMQ and a Python
RabbitMQ consumer.

I'll let you know later then.

Thanks again!

Best

--

Alessandro Tagliapietra
alexfu.it (http://www.alexfu.it)

Il giorno lunedì 25 febbraio 2013, alle ore 16:44, Eduardo Damato ha scritto:



Hi Alessandro,

Thanks for the information.

The sysrq-t that I requested is *only* useful during the problem. Please
do that when you encounter the problem again.
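
For reference, a minimal sketch of capturing that dump (this assumes the magic
sysrq key is enabled; the log file name is arbitrary):

  sudo sysctl -w kernel.sysrq=1
  echo t | sudo tee /proc/sysrq-trigger
  dmesg > sysrq-t.log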

It may be that you are overcommitting cpus on your system by having many
virtual machines running on the nova controller node. This is a
completely wild guess, but I would recommend you look at how many
cpus you have, how many virtual machines are running, and whether you have
any processes running real-time or SCHED_FIFO.

Cheers,
Eduardo.










--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com (mailto:ubuntu-server@lists.ubuntu.com)
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam








--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam

Re: Server increasing load due increasing processes in D state

2013-02-25 Thread Peter M. Petrakis

summary: You're massively over-provisioned, +100%

resolution:
1. Reconfigure your VMs to use a single vcpu and cap your available vcpus to 8,
   or even fewer, considering that more than half of your applications are IO bound.
2. the software raid can be a choke point for all application progress,
   virtual or otherwise; it needs CPU time too.

remarks:
you have a test setup; treat it as such. 19 vcpus is overboard.

Peter

On 02/25/2013 12:17 PM, Alessandro Tagliapietra wrote:

Sorry for disturbing again,

after the restart I've found that I'm unable to ssh into the VMs, since on login
they run byobu, which now hangs (it never did before on the VMs).

I managed to Ctrl-C fast enough to keep byobu from starting, and an strace on it
gave me this:

http://pastebin.com/raw.php?i=KYMbsxKV

Thanks

Best Regards

--

Alessandro Tagliapietra
alexfu.it (http://www.alexfu.it)

Il giorno lunedì 25 febbraio 2013, alle ore 17:51, Alessandro Tagliapietra ha 
scritto:


Hi Eduardo

Thank you for the tips.

I'll wait a few days and let you know when this happens again.

About the load: system CPU wasn't more than 10% used according to top, and
iowait was at 2% most of the time.

We have 4 x 2 (HT) cores on the server and a total of 19 vcpus allocated to the
VMs running on that host.

The VMs run mostly nginx+php-fpm+mysql; one also runs RabbitMQ and a Python
RabbitMQ consumer.

I'll let you know later then.

Thanks again!

Best

--

Alessandro Tagliapietra
alexfu.it (http://www.alexfu.it)

Il giorno lunedì 25 febbraio 2013, alle ore 16:44, Eduardo Damato ha scritto:



Hi Alessandro,

Thanks for the information.

The sysrq-t that I requested is *only* useful during the problem. Please
do that when you encounter the problem again.

It may be that you are overcommitting cpus on your system by having many
virtual machines running on the nova controller node. This is a
completely wild guess, but I would recommend you look at how many
cpus you have, how many virtual machines are running, and whether you have
any processes running real-time or SCHED_FIFO.

Cheers,
Eduardo.













--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: A HOWTO about LIO iSCSI target on 12.04

2012-06-11 Thread Peter M. Petrakis

Hello,

On 06/09/2012 10:58 PM, Hirotaka Yamamoto wrote:

Hi,

I have just blogged an article about how to use LIO, the new iSCSI target
in precise.
http://ymmt2005.blogspot.jp/2012/06/evaluating-lio-linux-iscsi-target.html


Nice work!


In addition to the basics, it includes API usage for python-rtslib and a
kernel patch for our specific purpose. Hope this helps.


I updated the bug some, broke your patch out to an attachment, and
verified that the fix exists upstream.

https://bugs.launchpad.net/ubuntu/+source/rtslib/+bug/1009645

Concerning your iSCSI patch, I suggest you send it to linux-scsi
for further review. I suspect they'll ask that the split-brain
avoidance be broken out as a new parameter, configurable
at module load time and at runtime. Running scripts/checkpatch.pl before
sending it would be prudent.
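
Something like the following, before and when you post (the patch file name
here is only a placeholder):

  ./scripts/checkpatch.pl 0001-target-make-split-brain-avoidance-configurable.patch
  git send-email --to=linux-scsi@vger.kernel.org \
      0001-target-make-split-brain-avoidance-configurable.patch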

Peter
 

@ymmt2005






--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: failing to boot pass mdadm monitor

2012-03-13 Thread Peter M. Petrakis



On 03/13/2012 02:47 PM, Asif Iqbal wrote:

On Tue, Mar 13, 2012 at 12:35 AM, Peter M. Petrakis wrote:



On 03/13/2012 12:09 AM, Asif Iqbal wrote:


On Mon, Mar 12, 2012 at 4:13 PM, Asif Iqbal wrote:


I am failing to boot this server x4270 pass the mdadm --monitor.
Installed lucid amd64. This is a first install.

details: http://paste.ubuntu.com/880895/

It boots all the way in recovery mode. what gives?



ok. I reinstalled lucid 64bit. No more GPT error. But it hangs right
after mdadm --monitor



At least that's an improvement.



http://paste.ubuntu.com/881409/



...

mdadm: sending ioctl 1261 to a partition!

(I confused syscall with storage ioctl earlier, whoops :)

Now that we're just down to this problem, a little google...

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=656899:
https://bugzilla.redhat.com/show_bug.cgi?id=783823
https://bugzilla.redhat.com/show_bug.cgi?id=783955

http://www.serverphorums.com/read.php?12,436266

and ... I think there's a fix:
https://lkml.org/lkml/2012/1/24/136

Probably worth filing a bug at this point; also, please try to
reproduce this issue with oneiric and the latest mainline
kernel [1] if possible. Thanks.


Installed oneiric this time. It failed to mount the /var, /home, /opt and
/usr/local LVs. I had to run vgchange -a y to make those LVs available, and
then it booted all the way. details: http://paste.ubuntu.com/882168/

And I chose skip... it does boot all the way, but it's not so useful w/o /var,
as shown below.


If you read the Debian bug report, you'll find that this issue
is the result of a recent security update that tightened ioctl access
to disks (virtual machines were able to write to their backing stores). It
appears to require some tuning, which is the basis for the fix I posted earlier.

At this point you have to follow through with the rest of my original
advice, and also consider going back one or two kernel revs to where
this security fix doesn't exist. That should alleviate the 'ioctl 1261'
issue, and the bug you're supposed to report will get the requisite patch
into the next version of the distro kernel so you can move forward.

So in lucid:
https://launchpad.net/ubuntu/+source/linux/2.6.32-39.86
  * block: fail SCSI passthrough ioctls on partition devices
- LP: #926321

and oneiric:
https://launchpad.net/ubuntu/+source/linux/3.0.0-16.29
  * block: fail SCSI passthrough ioctls on partition devices
- LP: #922799

I think that CVE is the cause of your problems. So rewind kernels
until it's not there and verify that your problem is resolved; I think you
can take the rest from here.
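
A rough sketch of stepping back on lucid (the exact ABI number below is a
guess; pick whichever version predates that update in your archive):

  apt-cache search linux-image-2.6.32
  sudo apt-get install linux-image-2.6.32-38-generic
  # then select that kernel from the GRUB menu on the next boot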

Peter






Peter

1. https://wiki.ubuntu.com/Kernel/MainlineBuilds
   http://kernel.ubuntu.com/~kernel-ppa/mainline/







--
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?







--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam






--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: failing to boot pass mdadm monitor

2012-03-12 Thread Peter M. Petrakis



On 03/13/2012 12:09 AM, Asif Iqbal wrote:

On Mon, Mar 12, 2012 at 4:13 PM, Asif Iqbal  wrote:

I am failing to boot this server x4270 pass the mdadm --monitor.
Installed lucid amd64. This is a first install.

details: http://paste.ubuntu.com/880895/

It boots all the way in recovery mode. what gives?


ok. I reinstalled lucid 64bit. No more GPT error. But it hangs right
after mdadm --monitor


At least that's an improvement.



http://paste.ubuntu.com/881409/


...
mdadm: sending ioctl 1261 to a partition!

(I confused syscall with storage ioctl earlier, whoops :)

Now that we're just down to this problem, a little google...

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=656899:
https://bugzilla.redhat.com/show_bug.cgi?id=783823
https://bugzilla.redhat.com/show_bug.cgi?id=783955

http://www.serverphorums.com/read.php?12,436266

and ... I think there's a fix:
https://lkml.org/lkml/2012/1/24/136

Probably worth filing a bug at this point; also, please try to
reproduce this issue with oneiric and the latest mainline
kernel [1] if possible. Thanks.

Peter

1. https://wiki.ubuntu.com/Kernel/MainlineBuilds
   http://kernel.ubuntu.com/~kernel-ppa/mainline/






--
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?






--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: failing to boot pass mdadm monitor

2012-03-12 Thread Peter M. Petrakis



On 03/12/2012 07:00 PM, Asif Iqbal wrote:

On Mon, Mar 12, 2012 at 5:43 PM, Peter M. Petrakis wrote:



On 03/12/2012 05:22 PM, Asif Iqbal wrote:


On Mon, Mar 12, 2012 at 5:11 PM, Peter M. Petrakis wrote:




On 03/12/2012 04:13 PM, Asif Iqbal wrote:



I am failing to boot this server x4270 pass the mdadm --monitor.
Installed lucid amd64. This is a first install.

details: http://paste.ubuntu.com/880895/

It boots all the way in recovery mode. what gives?



I would say your problems began once your partition detection went
inconsistent.

[   26.603469] sd 6:0:3:0: [sdd] 585937500 512-byte logical blocks: (300
GB/279 GiB)
[   26.603599] GPT:Primary header thinks Alt. header is not at the end of
the disk.
[   26.603600] GPT:585937498 != 585937499
[   26.603602] GPT:Alternate GPT header not at the end of the disk.
[   26.603603] GPT:585937498 != 585937499

which is coming from fs/partitions/efi.c
http://lxr.linux.no/linux+v3.2.9/fs/partitions/efi.c#L487

  494         if (le64_to_cpu(agpt->my_lba) != lastlba) {
  495                 printk(KERN_WARNING
  496                        "GPT:Alternate GPT header not at the end of the disk.\n");
  497                 printk(KERN_WARNING "GPT:%lld != %lld\n",
  498                         (unsigned long long)le64_to_cpu(agpt->my_lba),
  499                         (unsigned long long)lastlba);
  500                 error_found++;
  501         }

The math for lastlba seems correct (585937500 - 1ULL), and a quick google shows
the raw size is consistent with what you have. So the question is how
did 585937498 get computed? Would someone with more experience with
GPT partitions care to comment?

It doesn't look like your system successfully recovered from find_valid_gpt,
or we would have seen "Alternate GPT is invalid, using primary GPT." or
"Primary GPT is invalid, using alternate GPT." in the logs. Those partitions,
or what's left of them, are being presented to mdadm for assembly.

This part is really weird.

[   27.929744] mdadm: sending ioctl 1261 to a partition!
[   27.935401] mdadm: sending ioctl 1261 to a partition!
[   27.941905] mdadm: sending ioctl 1261 to a partition!

Where 1261 is #define __NR_set_mempolicy  1261

I don't think that has any business being sent to a partition. At the moment,
I can't explain how this event and the GPT fault could be related.

[   28.679619] md1: detected capacity change from 0 to 146360172544
[   28.684287]  md1: unknown partition table
[   28.709647] mdadm: sending ioctl 1261 to a partition!
[   28.713255] mdadm: sending ioctl 1261 to a partition!
Begin: Running /scripts/local-premount ...
Done.
[   29.082318] EXT3-fs: INFO: recovery required on readonly filesystem.
[   29.088849] EXT3-fs: write access will be enabled during recovery.
[   29.252295] kjournald starting.  Commit interval 5 seconds
[   29.252387] EXT3-fs: recovery complete.
[   29.265115] EXT3-fs: mounted filesystem with ordered data mode.

I really don't know what shape your backing store is in; noting your
second post, I'm surprised your disks are readable.

So... what changed? Have you upgraded system firmware recently,
kernel upgrades, or anything at all really? I see you have an external
storage enclosure, has that seen any changes either?



This is a new box. This is the first install. There is nothing attached to
it.



Then you either have questionable equipment, a bad install medium, or this
platform isn't fully supported on this release of Ubuntu. I misspoke when I
said "external" storage enclosure; there is a SES enclosure.

[   26.628621] scsi 6:0:10:0: Enclosure LSILOGIC SASX28 A.0
502E PQ: 0 ANSI: 5
[   26.633357] scsi 6:0:10:0: Attached scsi generic sg10 type 13
[   27.380344] sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled,
supports DPO and FUA

Not knowing what the platform was, I assumed it was external.

Looking back on your post, you called this a x4270. Is that a Sun x4270?


yes


Well, I can tell you we have an X4150 and it never fully completed certification
for 10.04 LTS. I assume they're in the same ballpark? The most cost-effective
thing you can do is make sure all your firmware bits are up to date and try
installing with our latest stable release. Should you find that the new kernel
alleviates your issues, you could stick with oneiric, since precise (the next
LTS) is just around the corner. Failing that, there's always commercial support.

Peter





Peter







Peter

--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam







--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam






--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: failing to boot pass mdadm monitor

2012-03-12 Thread Peter M. Petrakis



On 03/12/2012 05:22 PM, Asif Iqbal wrote:

On Mon, Mar 12, 2012 at 5:11 PM, Peter M. Petrakis wrote:



On 03/12/2012 04:13 PM, Asif Iqbal wrote:


I am failing to boot this server x4270 pass the mdadm --monitor.
Installed lucid amd64. This is a first install.

details: http://paste.ubuntu.com/880895/

It boots all the way in recovery mode. what gives?



I would say your problems began once your partition detection went
inconsistent.

[   26.603469] sd 6:0:3:0: [sdd] 585937500 512-byte logical blocks: (300
GB/279 GiB)
[   26.603599] GPT:Primary header thinks Alt. header is not at the end of
the disk.
[   26.603600] GPT:585937498 != 585937499
[   26.603602] GPT:Alternate GPT header not at the end of the disk.
[   26.603603] GPT:585937498 != 585937499

which is coming from fs/partitions/efi.c
http://lxr.linux.no/linux+v3.2.9/fs/partitions/efi.c#L487

  494         if (le64_to_cpu(agpt->my_lba) != lastlba) {
  495                 printk(KERN_WARNING
  496                        "GPT:Alternate GPT header not at the end of the disk.\n");
  497                 printk(KERN_WARNING "GPT:%lld != %lld\n",
  498                         (unsigned long long)le64_to_cpu(agpt->my_lba),
  499                         (unsigned long long)lastlba);
  500                 error_found++;
  501         }

The math for lastlba seems correct (585937500 - 1ULL), and a quick google shows
the raw size is consistent with what you have. So the question is how
did 585937498 get computed? Would someone with more experience with
GPT partitions care to comment?

It doesn't look like your system successfully recovered from find_valid_gpt,
or we would have seen "Alternate GPT is invalid, using primary GPT." or
"Primary GPT is invalid, using alternate GPT." in the logs. Those partitions,
or what's left of them, are being presented to mdadm for assembly.

This part is really weird.

[   27.929744] mdadm: sending ioctl 1261 to a partition!
[   27.935401] mdadm: sending ioctl 1261 to a partition!
[   27.941905] mdadm: sending ioctl 1261 to a partition!

Where 1261 is #define __NR_set_mempolicy  1261

I don't think that has any business being sent to a partition. At the moment,
I can't explain how this event and the GPT fault could be related.

[   28.679619] md1: detected capacity change from 0 to 146360172544
[   28.684287]  md1: unknown partition table
[   28.709647] mdadm: sending ioctl 1261 to a partition!
[   28.713255] mdadm: sending ioctl 1261 to a partition!
Begin: Running /scripts/local-premount ...
Done.
[   29.082318] EXT3-fs: INFO: recovery required on readonly filesystem.
[   29.088849] EXT3-fs: write access will be enabled during recovery.
[   29.252295] kjournald starting.  Commit interval 5 seconds
[   29.252387] EXT3-fs: recovery complete.
[   29.265115] EXT3-fs: mounted filesystem with ordered data mode.

I really don't know what shape your backing store is in; noting your
second post, I'm surprised your disks are readable.

So... what changed? Have you upgraded system firmware recently,
kernel upgrades, or anything at all really? I see you have an external
storage enclosure, has that seen any changes either?


This is a new box. This is the first install. There is nothing attached to it.


Then you either have questionable equipment, a bad install medium, or this
platform isn't fully supported on this release of Ubuntu. I misspoke when I
said "external" storage enclosure; there is a SES enclosure.

[   26.628621] scsi 6:0:10:0: Enclosure LSILOGIC SASX28 A.0   502E 
PQ: 0 ANSI: 5
[   26.633357] scsi 6:0:10:0: Attached scsi generic sg10 type 13
[   27.380344] sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, 
supports DPO and FUA

Not knowing what the platform was, I assumed it was external.

Looking back on your post, you called this a x4270. Is that a Sun x4270?

Peter






Peter

--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam






--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: failing to boot pass mdadm monitor

2012-03-12 Thread Peter M. Petrakis



On 03/12/2012 04:13 PM, Asif Iqbal wrote:

I am failing to boot this server x4270 pass the mdadm --monitor.
Installed lucid amd64. This is a first install.

details: http://paste.ubuntu.com/880895/

It boots all the way in recovery mode. what gives?



I would say your problems began once your partition detection went
inconsistent.

[   26.603469] sd 6:0:3:0: [sdd] 585937500 512-byte logical blocks: (300 GB/279 
GiB)
[   26.603599] GPT:Primary header thinks Alt. header is not at the end of the 
disk.
[   26.603600] GPT:585937498 != 585937499
[   26.603602] GPT:Alternate GPT header not at the end of the disk.
[   26.603603] GPT:585937498 != 585937499

which is coming from fs/partitions/efi.c
http://lxr.linux.no/linux+v3.2.9/fs/partitions/efi.c#L487

 494         if (le64_to_cpu(agpt->my_lba) != lastlba) {
 495                 printk(KERN_WARNING
 496                        "GPT:Alternate GPT header not at the end of the disk.\n");
 497                 printk(KERN_WARNING "GPT:%lld != %lld\n",
 498                         (unsigned long long)le64_to_cpu(agpt->my_lba),
 499                         (unsigned long long)lastlba);
 500                 error_found++;
 501         }

The math for lastlba seems correct (585937500 - 1ULL), and a quick google shows
the raw size is consistent with what you have. So the question is how
did 585937498 get computed? Would someone with more experience with
GPT partitions care to comment?

It doesn't look like your system successfully recovered from find_valid_gpt,
or we would have seen "Alternate GPT is invalid, using primary GPT." or
"Primary GPT is invalid, using alternate GPT." in the logs. Those partitions,
or what's left of them, are being presented to mdadm for assembly.

This part is really weird.

[   27.929744] mdadm: sending ioctl 1261 to a partition!
[   27.935401] mdadm: sending ioctl 1261 to a partition!
[   27.941905] mdadm: sending ioctl 1261 to a partition!

Where 1261 is #define __NR_set_mempolicy  1261

I don't think that has any business being sent to a partition. At the moment,
I can't explain how this event and the GPT fault could be related.

[   28.679619] md1: detected capacity change from 0 to 146360172544
[   28.684287]  md1: unknown partition table
[   28.709647] mdadm: sending ioctl 1261 to a partition!
[   28.713255] mdadm: sending ioctl 1261 to a partition!
Begin: Running /scripts/local-premount ...
Done.
[   29.082318] EXT3-fs: INFO: recovery required on readonly filesystem.
[   29.088849] EXT3-fs: write access will be enabled during recovery.
[   29.252295] kjournald starting.  Commit interval 5 seconds
[   29.252387] EXT3-fs: recovery complete.
[   29.265115] EXT3-fs: mounted filesystem with ordered data mode.

I really don't know what shape your backing store is in; noting your
second post, I'm surprised your disks are readable.

So... what changed? Have you upgraded system firmware recently,
kernel upgrades, or anything at all really? I see you have an external
storage enclosure, has that seen any changes either?

Peter

--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: Using biosdevname by default?

2012-01-31 Thread Peter M. Petrakis



On 01/31/2012 09:29 AM, Colin Watson wrote:

Dell has asked for biosdevname to be enabled by default on new Ubuntu
installations on Dell PowerEdge servers:

   https://bugs.launchpad.net/ubuntu/+source/biosdevname/+bug/891258

I do not feel that this is a decision I can take on my own, and would
like to seek guidance from the good folks on these lists.

For those unfamiliar with biosdevname, it's a udev helper intended to
handle the problem of unstable network device enumeration ordering by
mapping them into different namespaces according to BIOS-provided naming
information.  More information can be found here:

   http://linux.dell.com/biosdevname/
   http://manpages.ubuntu.com/biosdevname

This is supported on an opt-in basis from Ubuntu 11.04 on (modulo a bug
in 11.04 to the effect that it didn't work with a separate /usr, fixed
in 11.10) by passing the "biosdevname=1" kernel parameter when starting
the installer.


There are certainly some advantages to enabling biosdevname by default.
On systems that support it, it makes it somewhat easier to write scripts
that predictably apply to a certain interface without having to mess
around with looking up interfaces by MAC address.

However, I have a few concerns about enabling this by default.  Firstly,
I think it is in general unwise to make this kind of change for a single
class of machine, at least for Ubuntu itself as opposed to
vendor-specific builds.  The effect of doing that is to divide our
testing efforts, so that tests of relevant functionality on one class of
machine can no longer be presumed to be valid for others.  This usually
ends up being to the detriment of everyone: Dell servers would no longer
be able to take advantage of the testing we do on other classes of
system.

Of course, not many other systems support biosdevname anyway; HP
ProLiants are explicitly handled in the biosdevname source, but for many
systems, including at least kvm but many real machines as well,
biosdevname will just leave the kernel-provided interface names in
place.  (Even if biosdevname supported no non-Dell systems right now,
I'd still be of the opinion that we should be as consistent about it as
we can on the basis that other BIOSes might add support at some point in
the next five years.)

Secondly, while as I said above I agree that enabling biosdevname solves
some problems, it seems likely that this change will cause problems of
its own.  For example, any software that needs to know about network
interfaces (let's say it listens on a particular interface) might well
default to eth0; this will break on many wireless-only systems and
require manual configuration, but if it's not the sort of thing that you
use on a laptop, many users might not previously have noticed.  Using
biosdevname by default would extend these problems to many server-class
machines out of the box.  While anything like this is certainly a bug
already, with the new scheme we'd *have* to fix everything like that and
it'd be easy to miss something.  The question of whether you see this as
an opportunity to expose existing bugs or as a risk rather depends on
your point of view. :-)

Still, I'm not typically working in environments where unstable network
device naming causes me any problems, so I tend to see the downsides
rather than the upsides.  I'd like to hear from people who do suffer
from this kind of problem, as well as from the server team who would
presumably be at the sharp end either way.

Thanks,



It concerns me that we would be populating new device names that depend
on specific BIOS features, and then modifying our network stack, in this
example, to take advantage of them. In general, distributions have a spotty
record of encouraging vendors to repair BIOS issues, like S3. Once these
servers are no longer supported, I'm worried that we'll be compelled to
maintain a quirks database for unsupported systems or systems with unresponsive
vendors.

While Dell and HP servers seem to work with biosdevname, how would ARM
servers or appliances go about implementing this feature, or your average
whitebox server? I know it depends on SMBIOS, which ARM firmware really has
no motivation to implement. Are whitebox servers even populating SMBIOS well
enough for biosdevname to function?

Suppose we could enable it by default only for PowerEdge servers. It appears
that the effort to integrate the use of these new names falls to the community;
is that right? As for enabling it by default, a simple packaging change to the
Dell OMSA tools could enable biosdevname.
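
For what it's worth, checking what biosdevname would call a given NIC is a
one-liner (a sketch; it assumes the tool is installed and the interface is
currently eth0):

  sudo apt-get install biosdevname
  sudo biosdevname -i eth0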

Peter

--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: Multipath and iSCSI on 11.10

2012-01-03 Thread Peter M. Petrakis


On 12/23/2011 09:07 PM, Albert Chin wrote:
> On Fri, Dec 23, 2011 at 07:20:01PM -0600, Albert Chin wrote:
>> I have device-mapper-multipath configured with iSCSI. I have two
>> interfaces configured to communicate with the iSCSI server. I discover
>> and connect with the iSCSI targets as follows:
>>   (1) # iscsiadm -m discovery -t st -p 10.191.61.1:3260 -I iface1 -P 1
>>   (2) # iscsiadm -m discovery -t st -p 10.191.62.1:3260 -I iface2 -P 1
>>   (3) # iscsiadm -m node -p 10.191.61.1:3260 -I iface1 -l
>>   (4) # iscsiadm -m node -p 10.191.62.1:3260 -I iface2 -l
>>   # cat /etc/multipath.conf
>>   defaults {
>> selector "round-robin 0"
>> udev_dir /dev
>> user_friendly_names  yes
>>   }
>>   [[ snip snip ]]
>>   # multipath -ll
>>   winry (3600144f057774b004eeab58a0001) dm-24 OI,COMSTAR
>>   size=500G features='0' hwhandler='0' wp=rw
>>   |-+- policy='round-robin 0' prio=1 status=active
>>   | `- 12:0:0:0 sdd  8:48   active ready running
>>   `-+- policy='round-robin 0' prio=1 status=enabled
>> `- 41:0:0:0 sdae 65:224 active ready running
>>   sai (3600144f057774b004ebf539e001d) dm-9 OI,COMSTAR
>>   size=500G features='0' hwhandler='0' wp=rw
>>   |-+- policy='round-robin 0' prio=1 status=active
>>   | `- 17:0:0:0 sdv  65:80  active ready running
>>   `-+- policy='round-robin 0' prio=1 status=enabled
>> `- 54:0:0:0 sdai 66:32  active ready running
>>   ...
>>
>> 1. Based on the above multipath -ll output, it seems that I am not
>>load-balancing across both NICs. Why not? 
> 
> Adding path_grouping_policy multibus fixed this.

Interesting, multibus is supposed to be the default policy...
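
For reference, making that policy explicit in the defaults stanza would look
like this (a sketch based on the config quoted above):

  defaults {
    selector "round-robin 0"
    udev_dir /dev
    user_friendly_names yes
    path_grouping_policy multibus
  }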
 
>> 2. When I boot the system, the initiator does not log in to any
>>target. Therefore, I need to rerun (3) and (4) above. Why?
> 
> Still not sure about this one.

This is configured outside of multipath. You'll need to configure
the iscsid initiator config file to log into these targets at boot.

apt-get install open-iscsi
vi /etc/iscsi/iscsid.conf
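
The relevant knob is node.startup (the target IQN below is a placeholder; the
portal is from your discovery commands above):

  # in /etc/iscsi/iscsid.conf, make newly discovered nodes log in at boot:
  node.startup = automatic

  # or flip an already-discovered node in place:
  iscsiadm -m node -T iqn.2010-01.org.example:target0 -p 10.191.61.1:3260 \
      --op update -n node.startup -v automatic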

Peter


-- 
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: This menu allows you to configure iscsi volumes --> screen while installing Ubuntu Server LTS 10.04

2011-12-14 Thread Peter M. Petrakis


On 12/13/2011 07:46 PM, Kaushal Shriyan wrote:
> 
> 
> On Wed, Dec 14, 2011 at 6:03 AM, Paul Graydon wrote:
> 
> On 12/13/2011 02:27 PM, Kaushal Shriyan wrote:
>> 
>> 
>> On Wed, Dec 14, 2011 at 1:49 AM, Kaushal Shriyan wrote:
>> 
>> Hi,
>> 
>> This menu allows you to configure iscsi volumes --> screen while
>> installing Ubuntu Server LTS 10.04 probably ->
>> http://ubuntuforums.org/showthread.php?t=1679536
>> 
>> Any clue ?
>> 
>> Regards
>> 
>> Kaushal
>> 
>> 
>> Hi
>> 
>> I am hit with this bug
>> https://bugs.launchpad.net/ubuntu/+source/kickseed/+bug/548617
>> Please suggest further
>> 
>> Regards
>> 
>> Kaushal
>> 
> You're going to need to throw the list at least a little bit of a
> bone here...  You've given virtually no useful information at all.
> 
> What hardware do you have? What exactly are you seeing? What have you
> tried so far?
> 
> Paul
> 
> 
> 
> Hi Paul,
> 
> I am using PXE Server to install ubuntu lucid 10.04.3 on IBM System
> x3650 M3 and i see this below message
> 
> During kickstart installation it stops in menu:
> 
> !! Partitioning disks This menu allows you to configure iSCSI
> volumes iSCSI configuration actions Log into iSCSI targets Finish  Back>
> 
> I have checked for any bugs if there are any and i get these bug very
> close to my issue
> https://bugs.launchpad.net/ubuntu/+source/kickseed/+bug/548617 which
> is a duplicate of https://bugs.launchpad.net/null/+bug/546929
> 
> The hard disks attached to the system are 4 x 300 GB SAS drives configured
> with RAID 10. The RAID controller card is: RAID bus controller:
> LSI Logic / Symbios Logic MegaRAID SAS 9240 (rev 03)
> 
> Do let me know if you need any further information.

iSCSI isn't a catch-all for anything that uses the SCSI protocol;
it's for external storage over TCP/IP. Your internal storage RAID
array doesn't qualify. Simply exit out of this menu and continue to
the partitioning phase. If you continue to have problems, say the
volume you provisioned isn't visible, then please let us know.

Peter

> 
> Regards
> 
> Kaushal
> 
> 
> 
> 
> -- ubuntu-server mailing list ubuntu-server@lists.ubuntu.com
>  
> https://lists.ubuntu.com/mailman/listinfo/ubuntu-server More info:
> https://wiki.ubuntu.com/ServerTeam
> 
> 
> 
> 

-- 
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: software raid and multiple cores

2011-11-28 Thread Peter M. Petrakis


On 11/26/2011 04:35 AM, Mark van Harmelen wrote:
> Hi
> 
> Thanks to those who contributed to a discussion on software raid
> recently, it changed my mind about the universal desirability of
> hardware raid cards.
> 
> So now I'm intent on building a sw raid based machine, but mostly am
> interested in performance under disc load.
> 
> Basic question: given a number of spare cores (be they hyperthreaded
> or not), is Ubuntu's software raid clever enough to be able to deal
> with multiple read and write requests simultaneously, one request per
> spare core? Or should I expect to see only one of the spare cores
> utilised?

Request level might be the wrong level to look at; Linux already has
lots of queue-level optimizations. What's really going to matter here is
efficiently handling parity calculations and the effective RAID
real estate (stripes). There does appear to be a multi-core optimization
for MD RAID5/6, and it's disabled by default on Ubuntu kernels.

http://lxr.linux.no/linux+v3.1.3/drivers/md/Kconfig#L157

config MULTICORE_RAID456
bool "RAID-4/RAID-5/RAID-6 Multicore processing (EXPERIMENTAL)"
depends on MD_RAID456
depends on SMP
depends on EXPERIMENTAL
---help---
  Enable the raid456 module to dispatch per-stripe raid operations to a
  thread pool.

  If unsure, say N.

$ grep MULTICORE_RAID456 /boot/config-3.0.0-1*
/boot/config-3.0.0-12-generic:# CONFIG_MULTICORE_RAID456 is not set
/boot/config-3.0.0-13-generic:# CONFIG_MULTICORE_RAID456 is not set

It looks like a work in progress; I hope you have nothing to lose on those disks.
See the linux-raid list for more information: http://marc.info/?l=linux-raid .

Peter

> 
> thanks mark
> 
> 

-- 
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: Ubuntu Server Guide - weak areas and missing pieces

2011-11-18 Thread Peter M. Petrakis


On 11/18/2011 10:19 AM, Peter Matulis wrote:
> Hi everyone.
> 
> For this important LTS cycle I wanted to get some fresh ideas concerning
> the Ubuntu Server Guide [1].  In particular, right now it would be great
> if people could give their thoughts on:
> 
> 1. What parts of the Guide need more attention (weak areas)

> and
> 
> 2. What are some of the missing pieces
>

Storage: I'd like to contribute topics concerning LVM, MD, and Multipath.
Also document deploying "stacked" solutions e.g. LVM backed by MD or MP.
The scope of the contribution will depend on resolving the licensing
issue I just raised here.


> For the latter, I've already identified the glaring omissions of
> Openstack, Juju, and Orchestra.
> 
> Secondly, and I vaguely remember a discussion about this on a mailing
> list, I'm not sure we should be including Eucalyptus docs in the Guide
> anymore as Openstack is quickly becoming the dominant force for the
> cloud on Ubuntu.  People can obviously continue to use the older release
> Server Guides.  Comments?
> 
> I will be sending out a separate post where I will be asking for
> reviewers and contributors for the Guide.  Stay tuned.
> 
> Thanks for listening.
> 
> [1]: https://help.ubuntu.com/11.10/serverguide/C/index.html
> 

Peter

-- 
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Server guide license typo?

2011-11-18 Thread Peter M. Petrakis
Comparing Lucid 10.04:
https://help.ubuntu.com/10.04/serverguide/C/index.html -> 
[credits and license] https://help.ubuntu.com/legal.html

to Oneiric 11.10
https://help.ubuntu.com/11.10/serverguide/C/index.html ->
[credits and license] https://help.ubuntu.com/11.10/serverguide/C/legal.html

We went from CC SA 3.0 to CC SA 2.5; was that really the intention?

The reason I bring this up is that I wish to contribute some storage-related
documentation for the 12.04 server guide and use the RH docs as a reference.
The RH docs are licensed under CC SA 3.0, and if I've interpreted the CC SA
license correctly, I can't go backwards.

Thanks.

Peter

-- 
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: Hardware vs software raid

2011-10-20 Thread Peter M. Petrakis


On 10/19/2011 07:35 PM, Diego Xirinachs wrote:
> Hi list,
> 
> We are about to implement open bravo on our organization and need a new
> server.
> 
> I decided to get a dell r310 but I dont know if I should get the hardware
> raid or just configure software raid on the server.

It really depends on what the load is and how much flexibility you want.
Parity calculations will always be faster on a dedicated controller,
so consider that when you deploy a software RAID 5/6. MD itself is very
reliable; I have no problem trusting it with my data, with the added bonus
that you have the opportunity to fix it yourself. Sometimes vendors can
be slow to address HW RAID firmware bugs. You don't have to worry so
much about the "RAID metadata problem" because you're going with an OEM.

> 
> I have been reading about this but still I am undecided.

If you have time, do a little benchmarking; expect the HW RAID to win out on
write-intensive loads. The tradeoff then becomes one of management and
instrumentation. OpenManage + SNMP traps and you're basically done with
monitoring your RAID. With MD there will be a little more work involved,
perhaps requiring integration with something like Nagios to get the same effect
as what OpenManage offers. Then again, going with an open-source stack offers
all sorts of possibilities.
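
If you do benchmark, a quick and dirty comparison run might look like this
(the device path is a placeholder, and this will destroy whatever is on it):

  sudo apt-get install fio
  sudo fio --name=randwrite --filename=/dev/md0 --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --numjobs=4 --runtime=60 --time_based --group_reporting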

Peter

> What do you think?
> 
> Cheers
> 
> 

-- 
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


[SOLVED] Re: paths scsi names persistence after rescan

2011-06-29 Thread Peter M. Petrakis



On 06/02/2011 10:58 AM, ben...@cauterized.net wrote:

On Wed, 01 Jun 2011 15:37:44 -0400, "Peter M. Petrakis" wrote:


One thing that bugged me about your multipath.conf

blacklist {

 devnode "sda"
 devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
 devnode "^hd[a-z][[0-9]*]"
 devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}


I'd rather see:

devnode "^sda[0-9]*"

beyond that it looks OK.


agreed, fixed.



So to debug this we'll need to see:
- contents of /dev/disk
- udevadm monitor>  udev.log
- stop multipathd and run it in the foreground to capture logging


http://paste.ubuntu.com/616565/


* service multipath-tools stop
* multipathd -v4 -d>  mpd.log


http://paste.ubuntu.com/616561/



You haven't modified or added any udev rules have you?
Also do you have anything like dmraid or LVM running?


I run LVM: http://paste.ubuntu.com/616563/

Thanks for you help!



To follow up:

https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/644489

This problem has been run down to multipath-tools, and an SRU
is pending to address the issue from lucid through natty; oneiric is
unaffected. If you're a multipath user and you're seeing
a swath of unexplained UDEV CHANGE events from your path members,
the next update will solve your problem. Dropping the CHANGE
filter from the associated UDEV rules *is not* the answer.

An updated lucid multipath-tools is available from my ppa.

ppa:peter-petrakis/storage
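
If you want to try it before the SRU lands, something like this should work on
lucid (add-apt-repository lives in python-software-properties there):

  sudo apt-get install python-software-properties
  sudo add-apt-repository ppa:peter-petrakis/storage
  sudo apt-get update && sudo apt-get install multipath-tools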

Peter



--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: paths scsi names persistence after rescan

2011-06-01 Thread Peter M. Petrakis
Hi,

On 06/01/2011 12:42 PM, ben...@cauterized.net wrote:
> On Tue, 31 May 2011 17:36:42 -0400, "Peter M. Petrakis" wrote:
>> On 05/31/2011 05:24 PM, Jorge Salamero Sanz wrote:
>>>
>>> hi all,
>>>
>>> i've configured multipath on a linux box and i'm getting these messages
>>> on
>>> syslog:
>>
>> Let's start with the basics:
>> - which version of Ubuntu are you running?
> 
> ubuntu lucid updated
> 
>> - multipath version
> 
>  multipath-tools0.4.8-14ubuntu4
> 
>> - kernel version
> 
> Linux mail-1 2.6.32-31-generic #61-Ubuntu SMP Fri Apr 8 18:25:51 UTC 2011
> x86_64 GNU/Linux
> 
>>
>>
>>> May 30 10:50:05 mail-1 udevd-work[25148]: rename(/dev/disk/by-
>>> id/wwn-0x6006016052b022002e770b609680e011.udev-tmp, /dev/disk/by-
>>> id/wwn-0x6006016052b022002e770b609680e011) failed: No such file or
>>> directory
>>
>> So without the alias directives the wwn are created and persist just
> fine?
> 
> i get these messages even with no path failure:
> http://paste.ubuntu.com/615969/
> 

Yeah, that doesn't look good.

One thing that bugged me about your multipath.conf

blacklist {
> devnode "sda"
> devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
> devnode "^hd[a-z][[0-9]*]"
> devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
> }

I'd rather see:

devnode "^sda[0-9]*"

beyond that it looks OK.

So to debug this we'll need to see:
- contents of /dev/disk
- udevadm monitor > udev.log
- stop multipathd and run it in the foreground to capture logging
* service multipath-tools stop
* multipathd -v4 -d > mpd.log

You haven't modified or added any udev rules have you?
Also do you have anything like dmraid or LVM running?
Thanks.

Peter



-- 
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam


Re: paths scsi names persistence after rescan

2011-05-31 Thread Peter M. Petrakis


On 05/31/2011 05:24 PM, Jorge Salamero Sanz wrote:
> 
> hi all,
> 
> i've configured multipath on a linux box and i'm getting these messages on 
> syslog:

Let's start with the basics:
- which version of Ubuntu are you running?
- multipath version
- kernel version


> May 30 10:50:05 mail-1 udevd-work[25148]: rename(/dev/disk/by-
> id/wwn-0x6006016052b022002e770b609680e011.udev-tmp, /dev/disk/by-
> id/wwn-0x6006016052b022002e770b609680e011) failed: No such file or directory

So without the alias directives the wwn are created and persist just fine?


> and sometimes after a path failure,  the different paths get different scsi 
> names, /proc/partitions shows in that case:
> 
>8   16  838860800 sdb
>8   32  849582720 sdc
>8   80  838860800 sdf
>8   96  849582720 sdg
>  2523  849582720 dm-3
>  2524  838860800 dm-4
>8  208  838860800 sdn
>8  224  849582720 sdo
>   650  849582720 sdq
>8  240  838860800 sdp
> 
> and then multipath gets crazy:
> 
> May 25 12:26:51 mail-1 multipathd: sdh: emc_clariion_checker: sending query 
> command failed
> ...
> 
> i guess i should tell udev to assign persistent names using uuid? what's the
> right way to address this problem?
> 
> this is my config:
> 
> root@mail-1:~# multipath -ll
> backup (36006016052b022009cd7f2aeaf80e011) dm-3 DGC ,RAID 5
> [size=810G][features=1 queue_if_no_path][hwhandler=1 emc]
> \_ round-robin 0 [prio=2][active]
>  \_ 3:0:1:1 sdg 8:96  [active][ready]
>  \_ 4:0:0:1 sde 8:64  [active][ready]
> \_ round-robin 0 [prio=0][enabled]
>  \_ 3:0:0:1 sdc 8:32  [active][ready]
>  \_ 4:0:1:1 sdi 8:128 [active][ready]
> mail (36006016052b022002e770b609680e011) dm-4 DGC ,RAID 5
> [size=800G][features=1 queue_if_no_path][hwhandler=1 emc]
> \_ round-robin 0 [prio=2][active]
>  \_ 4:0:0:0 sdd 8:48  [active][ready]
>  \_ 3:0:1:0 sdf 8:80  [active][ready]
> \_ round-robin 0 [prio=0][enabled]
>  \_ 3:0:0:0 sdb 8:16  [active][ready]
>  \_ 4:0:1:0 sdh 8:112 [active][ready]
> 
> root@mail-1:~# cat /etc/multipath.conf 
> #
> 
> blacklist {
> devnode "sda"
> devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
> devnode "^hd[a-z][[0-9]*]"
> devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
> }
> 
> defaults {
> user_friendly_names no
> }
> 
> devices {
>   device {
>   vendor "DGC"
>   product "*"
>   path_grouping_policy group_by_prio
>   getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
>   prio_callout "/sbin/mpath_prio_emc /dev/%n"
>   hardware_handler "1 emc"
>   no_path_retry 300
>   path_checker emc_clariion
>   failback immediate   
>   }
> }
> 
> multipaths {
>   multipath {
>   wwid 36006016052b022002e770b609680e011
>   alias mail
>   }
>   multipath {
>   wwid 36006016052b022009cd7f2aeaf80e011
>   alias backup
>   }
> }
> 
> thanks in advance!

Peter

> 
> --
> dm-devel mailing list
> dm-de...@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel
> 

-- 
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam