Re: [Bug 1135453] Re: open-iscsi +mpio with multipathd init script order errors

2013-03-01 Thread Ritesh Raj Sarraf
On Friday 01 March 2013 05:49 PM, Mike Burgener wrote:
 defaults {
  udev_dir                /dev
  polling_interval        30
  selector                "round-robin 0"
  path_grouping_policy    multibus
  getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
  prio_callout            /bin/true
  path_checker            readsector0
  prio                    const
  rr_min_io               100
  rr_weight               uniform
  failback                immediate
  no_path_retry           12
  user_friendly_names     yes
  hardware_handler        "0"
 }

The behavior you reported is pretty interesting. You reported that you
hit the hang because the umount occurred after the multipathd and iscsid
processes had already been killed by the init scripts (which I had also
assumed, given the new init systems, upstart and systemd).

But looking at your config, you should not typically see the hang
scenario. I suspected you might be using the multipath queue_if_no_path
feature, but that is not the case here. The iSCSI replacement timeout is
also at its default of 120 seconds, which means that after 120 seconds of
retries, iSCSI will error out and the errors will propagate to the upper
layers.
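
For reference, that knob lives in /etc/iscsi/iscsid.conf; a minimal
excerpt, showing the stock default:

# /etc/iscsi/iscsid.conf
# Seconds to wait for a failed path to come back before failing I/O
# up to the layers above (e.g. the multipath map):
node.session.timeo.replacement_timeout = 120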

Ideally, with the configuration settings you have, you should see the
errors show up after 120 seconds, and NOT a hang. I have no further ideas
about what might be going wrong in your setup. Perhaps the Ubuntu
multipath maintainer has some insight into this.

-- 
Ritesh Raj Sarraf
RESEARCHUT - http://www.researchut.com
Necessity is the mother of invention.

Re: [Bug 1135453] Re: open-iscsi +mpio with multipathd init script order errors

2013-03-01 Thread Ritesh Raj Sarraf
On Fri, Mar 1, 2013 at 11:08 PM, Mike Burgener
mburge...@tuxinator.org wrote:

 you seem to feel that it is normal that multipathd is stopped, then the
 iscsi device is logged out (so the block device disappears), and then the
 umount happens?


No. The correct steps are to first umount the device, then flush the
multipath map, and then, depending on the transport (iSCSI, FC or FCoE),
act further.
We should never do it any other way; that is just asking for trouble.
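
As a rough sketch of that order (the device, map and target names here
are illustrative, not taken from your setup):

# 1. release the filesystem
umount /mnt/data
# 2. flush the multipath map
multipath -f mpatha
# 3. only then tear down the transport, e.g. for iSCSI:
iscsiadm -m node -T iqn.2013-03.example:target0 -u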

But technically, the iscsid and multipathd daemon processes are not the
core actors here. If you look at the regular SysV init scripts in my
Debian packages, you'll see that killing the daemon and terminating the
iSCSI sessions are two different tasks [1]. That's what I meant in my
previous mails.

[1]
http://anonscm.debian.org/gitweb/?p=pkg-iscsi/open-iscsi.git;a=blob;f=debian/open-iscsi.init;h=221fc9147f684bdb0bbcbda36799d5867bc617f2;hb=HEAD
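
Schematically, the stop path separates the two (a sketch, not the
literal script):

# two distinct tasks in the stop path
sessions_stop() {
    # task 1: cleanly log out all iSCSI sessions
    iscsiadm -m node --logoutall=all
}
daemon_stop() {
    # task 2: kill the iscsid daemon itself
    start-stop-daemon --stop --quiet --exec /sbin/iscsid
}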

this would end in a possible data loss scenario, wouldn't it?


Hard to predict, but yes, there is a rare chance.



 btw, bootup also does not work, so there must be some logical error, I
 think.


Would it be possible for you to test this on a SysV init system? Or maybe
even try it on Debian? (But do note that the Ubuntu packages are not
identical to the Debian ones.)


 of course the 120-second hung timeout message from the kernel arrives,
 but I think it never continues, as it can no longer sync the disks?
 could that make sense?


Yes, those might be the hung_task kernel messages.



 regards

 Mike

-- 
Ritesh Raj Sarraf
RESEARCHUT - http://www.researchut.com
Necessity is the mother of invention.

Re: [Bug 1135453] [NEW] open-iscsi +mpio with multipathd init script order errors

2013-02-28 Thread Ritesh Raj Sarraf
On Thu, Feb 28, 2013 at 6:51 PM, Launchpad Bug Tracker 
1135...@bugs.launchpad.net wrote:

 You have been subscribed to a public bug:

 when using open-iscsi and multipathd for an MPIO setup, there are
 several logical issues in the init script ordering:

 when shutting down, the system first stops multipathd, then tries to
 umount the filesystem, and then stops open-iscsi, so the system hangs
 forever on shutdown.


Killing the daemon shouldn't have a direct impact on the umount of the
map, unless all the paths for that map go offline. But if every operation
runs in parallel (which is my understanding of the new-generation init
systems like upstart and systemd) and the iSCSI sessions get dropped
before the umount, then yes, you will definitely run into the hang.

Again, whether you hang at all depends on how your multipath policy is
configured.
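
For example, an illustrative multipath.conf fragment showing the two
behaviors side by side:

# /etc/multipath.conf -- illustrative
defaults {
    # queue I/O forever while no path is available: umount can hang
    # features "1 queue_if_no_path"

    # bounded retries: fail I/O upward once they are exhausted
    no_path_retry 12
}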



 also, when booting up, it mounts the partition before multipathd and
 open-iscsi are ready, and you get the Ubuntu screen saying the partition
 could not be mounted, asking whether you want to skip.

 after the bootup process, however, you can mount the partition without
 any issue.


It is hard to make out the cause from just the English description.
Perhaps the multipath maintainer here can translate this into *Steps To
Reproduce* and then investigate further.


 ** Affects: multipath-tools (Ubuntu)
  Importance: Undecided
  Status: New


 ** Tags: bot-comment

-- 
Ritesh Raj Sarraf
RESEARCHUT - http://www.researchut.com
Necessity is the mother of invention.

[Bug 936756] [NEW] lxc complains about cgroup not available

2012-02-19 Thread Ritesh Raj Sarraf
Public bug reported:

I have been trying to get lxc to work with my UML (User-Mode Linux) root
filesystem image. My config is as follows:


12:21:05 rrs@champaran:~$ cat /var/lib/lxc/test/config 
lxc.utsname = test
lxc.rootfs = /home/rrs/Debian-Wheezy-AMD64-root_fs
lxc.rootfs.mount = /tmp/foo


lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234
lxc.cgroup.devices.deny = a
lxc.cgroup.devices.allow = c 1:3 rw
lxc.cgroup.devices.allow = b 8:0 rw
#lxc.mount = /etc/fstab.complex
lxc.mount.entry = /lib /root/myrootfs/lib none ro,bind 0 0
lxc.cap.drop = sys_module mknod setuid net_raw
lxc.cap.drop = mac_override


When I try to start the instance, I get the following error:


12:16:04 rrs@champaran:~$ sudo lxc-start -n test
lxc-start: cgroup is not mounted
lxc-start: failed to setup the cgroups for 'test'
lxc-start: failed to setup the container
lxc-start: invalid sequence number 1. expected 2
lxc-start: failed to spawn 'test'

But as you can see below, cgroup is already mounted.


12:16:49 rrs@champaran:~$ mount
/dev/sda6 on / type ext4 (rw,errors=remount-ro,commit=600)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda2 on /boot type ext4 (rw,commit=600)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
cgroups on /sys/fs/cgroup type tmpfs (rw,uid=0,gid=0,mode=0755)
gvfs-fuse-daemon on /home/rrs/.cache/gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=rrs)
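
Note: the cgroups line above is only a tmpfs at /sys/fs/cgroup, with no
controller hierarchy mounted beneath it, which may be what lxc-start is
complaining about. A sketch of mounting one by hand (mount point
illustrative):

sudo mkdir -p /sys/fs/cgroup/lxc
sudo mount -t cgroup -o cpuset,cpu,devices none /sys/fs/cgroup/lxc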

ProblemType: Bug
DistroRelease: Ubuntu 12.04
Package: lxc 0.7.5-3ubuntu28
ProcVersionSignature: Ubuntu 3.2.0-17.26-generic 3.2.6
Uname: Linux 3.2.0-17-generic x86_64
ApportVersion: 1.91-0ubuntu1
Architecture: amd64
Date: Mon Feb 20 12:17:21 2012
SourcePackage: lxc
UpgradeStatus: Upgraded to precise on 2012-01-24 (26 days ago)
mtime.conffile..etc.default.lxc: 2012-01-25T01:58:25.599609

** Affects: lxc (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug precise

[Bug 936756] Re: lxc complains about cgroup not available

2012-02-19 Thread Ritesh Raj Sarraf
Actually, this must be a problem at my end. I just tried lxc-create, and
it was able to create a container. Sorry for the noise.

** Changed in: lxc (Ubuntu)
       Status: New => Invalid

Re: [Bug 925511] Re: lxc init script should fail when it ... failed

2012-02-08 Thread Ritesh Raj Sarraf
On Wednesday 08 February 2012 03:54 AM, Serge Hallyn wrote:
 @Ritesh,

 Unfortunately I don't know that that many people would read the README :)
 It is worth adding though, thanks for the suggestion.

 In addition, I will add an LXC section to the ubuntu server guide soon,
 and this should be mentioned there.

 I'm also marking this (and the equivalent libvirt) bugs as affecting
 dnsmasq.  Perhaps we can do something to its default configuration to be
 less belligerent. Maybe even just an explicit
 '--except-interface=virbr0,lxcbr0', though hard-coding that seems a bit
 ugly.

Serge,

IMO the better option would be to just ship a drop-in conf file in
/etc/dnsmasq.d/.

dnsmasq is a personal DNS caching service. I doubt anyone is using it
as a BIND replacement.

By shipping a dnsmasq sub-conf file (and making it bind to loopback
only), you eliminate the need to track the list of virtual bridges.
Then you also don't need to spawn off your own dnsmasq process from the
lxc init script.
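
Something like this, say (file name illustrative):

# /etc/dnsmasq.d/local-only.conf
# Really bind only the listed interfaces instead of the wildcard
# address, and listen on loopback only:
bind-interfaces
interface=lo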

-- 
Ritesh Raj Sarraf | http://people.debian.org/~rrs
Debian - The Universal Operating System

Re: [Bug 925511] Re: lxc init script should fail when it ... failed

2012-02-07 Thread Ritesh Raj Sarraf
On Tuesday 07 February 2012 09:19 PM, Serge Hallyn wrote:
 @Ritesh,

 the dnsmasq for the lxc bridge explicitly binds only lxcbr0.  So if that
 fails, then your other dnsmasq has already bound all interfaces.

Yes, because I had dnsmasq installed. The dnsmasq.conf file's
documentation says:
# On systems which support it, dnsmasq binds the wildcard address,
# even when it is listening on only some interfaces. It then discards
# requests that it shouldn't reply to. This has the advantage of
# working even when interfaces come and go and change address. If you
# want dnsmasq to really bind only the interfaces it is listening on,
# uncomment this option. About the only time you may need this is when
# running another nameserver on the same machine.
#bind-interfaces


So on any machine where dnsmasq is installed, it will bind to all the
interfaces. This is the _default_ behavior, as installed.

libvirt was doing something similar.
http://anonscm.debian.org/gitweb/?p=pkg-libvirt/libvirt.git;a=blob;f=debian/README.Debian;h=6248662c56111c4ec4a5b2c0887059ddfb5fdda6;hb=HEAD

They bind dnsmasq to the loopback interface. Since LXC's bridge serves a
similar purpose, it could do the same.


 If /etc/init.d/lxc fails to start now, then lxcbr0 never had dhcp
 before.  If you're not using lxcbr0 for your containers, then you can
 simply set USE_LXC_BRIDGE=false in /etc/default/lxc.
I haven't started using it yet, so I'm not sure how it has been behaving
until now.

 If you do want to use lxcbr0, then you should change your other dnsmasq to
 not bind all interfaces.
Yes. But would you want this to be a default?

Or actually, just adding similar documentation to LXC's README.Debian
would also suffice, no?

 A third alternative, I suppose, would be that you want to use lxcbr0 but
 you statically assign addresses to your containers.  We could add a
 USE_LXC_BRIDGE_DNSMASQ variable to /etc/default/lxc to support that use
 case.  If that is what you want, please open a new bug against lxc and
 I'll add it.


-- 
Ritesh Raj Sarraf | http://people.debian.org/~rrs
Debian - The Universal Operating System

[Bug 925511] Re: lxc init script should fail when it ... failed

2012-02-06 Thread Ritesh Raj Sarraf
So how is it supposed to behave now?

I just did the upgrade and:


Setting up lxc (0.7.5-3ubuntu18) ...
Installing new version of config file /etc/init.d/lxc ...
 * Starting Linux Containers
dnsmasq: failed to create listening socket for 172.16.3.1: Address already in use
 * Failed to set up LXC network
invoke-rc.d: initscript lxc, action start failed.
dpkg: error processing lxc (--configure):
 subprocess installed post-installation script returned error exit status 2
Setting up ssh-askpass-gnome (1:5.9p1-2ubuntu2) ...
Setting up libcgroup1 (0.37.1-1ubuntu10) ...
Setting up cgroup-bin (0.37.1-1ubuntu10) ...
cgconfig start/running
Setting up libqtgui4 (4:4.8.0-1ubuntu5) ...


Shutting down dnsmasq allowed the installation to complete. But now I
cannot start my dnsmasq service:


12:16:19 rrs@champaran:~$ sudo /etc/init.d/dnsmasq restart
[sudo] password for rrs: 
 * Restarting DNS forwarder and DHCP server dnsmasq
dnsmasq: failed to create listening socket for port 53: Address already in use
[fail]
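
To see which process is already holding port 53 (presumably the dnsmasq
spawned by the lxc init script), something like:

sudo netstat -lnp | grep ':53 '
sudo fuser -v 53/tcp 53/udp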

Re: [Bug 920956] [NEW] Kpartx interferes with automount behaviour

2012-01-24 Thread Ritesh Raj Sarraf
On Tue, Jan 24, 2012 at 6:12 PM, Cefn launchpad@cefn.com wrote:
 Public bug reported:

 I tried plugging in an external USB drive for the first time after
 installing kpartx and multipath-tools, expecting it to automount through
 Gnome/Nautilus to /media/. Only after removing kpartx did the automount
 behaviour return.

Maybe you are hitting this:
http://researchut.com/site/blog/seagate-freeagent-goflex
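
If so, the usual workaround is to blacklist the device in
/etc/multipath.conf so that multipath and kpartx leave it alone. The
vendor and product strings below are illustrative; check yours with
lsscsi:

# /etc/multipath.conf
blacklist {
    device {
        vendor  "Seagate"
        product "FreeAgent GoFlex"
    }
}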

-- 
Ritesh Raj Sarraf
RESEARCHUT - http://www.researchut.com
Necessity is the mother of invention.
