[Bug 1295371] Re: ifup activates wrong interfaces
Not that it matters, but just to make it clear that it's not related only to 'static' addresses: if the vlan interfaces are set to dhcp, "ifup eth0.101" will cause dhcp on all of them.

Mar 20 21:33:15 dixie dhcpd: DHCPREQUEST for 192.168.203.100 from f0:92:1c:10:0f:f0 (hertz) via eth0.104
Mar 20 21:33:15 dixie dhcpd: DHCPACK on 192.168.203.100 to f0:92:1c:10:0f:f0 (hertz) via eth0.104
Mar 20 21:33:43 dixie dhcpd: DHCPREQUEST for 192.168.200.100 from f0:92:1c:10:0f:f0 (hertz) via eth0.101
Mar 20 21:33:43 dixie dhcpd: DHCPACK on 192.168.200.100 to f0:92:1c:10:0f:f0 (hertz) via eth0.101
Mar 20 21:34:14 dixie dhcpd: DHCPREQUEST for 192.168.201.100 from f0:92:1c:10:0f:f0 (hertz) via eth0.102
Mar 20 21:34:14 dixie dhcpd: DHCPACK on 192.168.201.100 to f0:92:1c:10:0f:f0 (hertz) via eth0.102
Mar 20 21:34:29 dixie dhcpd: DHCPREQUEST for 192.168.202.100 from f0:92:1c:10:0f:f0 (hertz) via eth0.103
Mar 20 21:34:29 dixie dhcpd: DHCPACK on 192.168.202.100 to f0:92:1c:10:0f:f0 (hertz) via eth0.103

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1295371

Title:
  ifup activates wrong interfaces

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/vlan/+bug/1295371/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1295371] [NEW] ifup activates wrong interfaces
Public bug reported:

When using ifup to activate one vlan interface defined in /etc/network/interfaces, ifup brings up all the vlan interfaces even though "ifup -a" was not issued. This is trivial to reproduce. The expectation is that "ifup ethX.Y" will bring up only ethX.Y, not ethX.Z as well.

1- Started with the following in /etc/network/interfaces:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth0.101
iface eth0.101 inet static
    address 192.168.200.2
    netmask 255.255.255.0

2- Added more vlans to the file:

auto eth0.102
iface eth0.102 inet static
    address 192.168.201.2
    netmask 255.255.255.0

auto eth0.103
iface eth0.103 inet static
    address 192.168.202.2
    netmask 255.255.255.0

auto eth0.104
iface eth0.104 inet static
    address 192.168.203.2
    netmask 255.255.255.0

3- Upped only eth0.102:

root@dixie:~# ifup eth0.102
Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
Added VLAN with VID == 102 to IF -:eth0:-
 * Setting up iSCSI targets ...done.
ssh stop/waiting
ssh start/running, process 2224

4- Then, when upping the other interfaces, got errors that they were already up:

root@dixie:~# ifup eth0.103
ifup: interface eth0.103 already configured
root@dixie:~# vi /etc/network/interfaces
root@dixie:~# ifup eth0.103
ifup: interface eth0.103 already configured
root@dixie:~# ifup eth0.104
ifup: interface eth0.104 already configured

5- klog shows they were all added within the same second:

Mar 20 20:45:56 dixie kernel: [12270.182797] eth0.103: no IPv6 routers present
Mar 20 20:45:56 dixie kernel: [12270.742537] eth0.102: no IPv6 routers present
Mar 20 20:45:56 dixie kernel: [12270.798510] eth0.104: no IPv6 routers present

6- Added more interfaces to the interfaces file:

auto eth0.105
iface eth0.105 inet static
    address 192.168.205.2
    netmask 255.255.255.0

auto eth0.106
iface eth0.106 inet static
    address 192.168.206.2
    netmask 255.255.255.0

auto eth0.107
iface eth0.107 inet static
    address 192.168.207.2
    netmask 255.255.255.0

7- Upped only eth0.105, but eth0.106 and eth0.107 also come up:

root@dixie:~# ifup eth0.105
Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
Added VLAN with VID == 105 to IF -:eth0:-
 * Setting up iSCSI targets ...done.
ssh stop/waiting
ssh start/running, process 2619
root@dixie:~#

Mar 20 20:52:14 dixie ntpdate[2574]: step time server 91.189.89.199 offset -0.009063 sec
Mar 20 20:52:16 dixie kernel: [12650.777148] eth0.105: no IPv6 routers present
Mar 20 20:52:16 dixie kernel: [12650.809124] eth0.106: no IPv6 routers present
Mar 20 20:52:17 dixie kernel: [12651.336879] eth0.107: no IPv6 routers present

8- Sys info:

root@dixie:~# apt-cache policy ifupdown
ifupdown:
  Installed: 0.7~beta2ubuntu10
  Candidate: 0.7~beta2ubuntu10
  Version table:
 *** 0.7~beta2ubuntu10 0
        500 http://archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages
        100 /var/lib/dpkg/status
     0.7~beta2ubuntu8 0
        500 http://archive.ubuntu.com/ubuntu/ precise/main amd64 Packages

root@dixie:~# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04.3 LTS"

9- messages:

root@dixie:~# for file in /var/log/upstart/network-interface-eth0.10*; do echo "--- $file"; cat $file; done
--- /var/log/upstart/network-interface-eth0.101.log
ifup: interface eth0.101 already configured
--- /var/log/upstart/network-interface-eth0.102.log
ifup: interface eth0.102 already configured
--- /var/log/upstart/network-interface-eth0.103.log
Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
 * Setting up iSCSI targets ...done.
ssh stop/waiting
ssh start/running, process 2438
--- /var/log/upstart/network-interface-eth0.104.log
Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
 * Setting up iSCSI targets ...done.
--- /var/log/upstart/network-interface-eth0.105.log
ifup: interface eth0.105 already configured
--- /var/log/upstart/network-interface-eth0.106.log
Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
 * Setting up iSCSI targets ...done.
ssh stop/waiting
ssh start/running, process 2932
--- /var/log/upstart/network-interface-eth0.107.log
Set name-type for VLAN subsystem. Should be visible in /proc/net/vlan/config
 * Setting up iSCSI targets ...done.

** Affects: ifupdown (Ubuntu)
     Importance: Undecided
         Status: New
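Since the bug is that a single "ifup eth0.10X" behaves like "ifup -a", it can help to see exactly which interfaces are marked auto, i.e. the set ifup would bring up for "-a". A minimal sketch, parsing a sample interfaces(5) file (the file name "interfaces.sample" is illustrative; on a real system you would point it at /etc/network/interfaces):

```shell
# List the interfaces declared "auto" in an interfaces(5)-style file --
# the set that "ifup -a" (and, per this bug, a single "ifup eth0.10X")
# ends up bringing up. "interfaces.sample" is an illustrative path.
cat > interfaces.sample <<'EOF'
auto lo
iface lo inet loopback

auto eth0.101
iface eth0.101 inet static
    address 192.168.200.2
    netmask 255.255.255.0

auto eth0.102
iface eth0.102 inet static
    address 192.168.201.2
    netmask 255.255.255.0
EOF

# "auto" stanzas may name several interfaces, so print every field.
awk '$1 == "auto" { for (i = 2; i <= NF; i++) print $i }' interfaces.sample
# -> lo, eth0.101, eth0.102 (one per line)
```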
[Bug 1126233] Re: libapache2-mod-rpaf has wrong name in conf file
** Tags removed: verification-needed
** Tags added: verification-done

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1126233

Title:
  libapache2-mod-rpaf has wrong name in conf file

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libapache2-mod-rpaf/+bug/1126233/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1123950] Re: /etc/init.d/stud restart does not start the daemon
Can engineering please evaluate the simple patch proposed here? It adds a "sleep 1" to the restart clause, fixing the problem in stud.

thanks,
Eduardo.

** Patch added: "stud.debdiff"
   https://bugs.launchpad.net/ubuntu/+source/stud/+bug/1123950/+attachment/3690558/+files/stud.debdiff

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1123950

Title:
  /etc/init.d/stud restart does not start the daemon

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/stud/+bug/1123950/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
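The shape of the proposed fix is a single added line between stop and start. A hedged sketch (stop/start here are echo stand-ins, not the real init-script functions from /etc/init.d/stud):

```shell
#!/bin/sh
# Sketch of the proposed one-line fix: pause between stop and start so
# the old daemon can exit and release its listening socket before the
# new one starts. stop/start are stand-ins for the real init functions.
stop()  { echo "Stopping stud"; }
start() { echo "Starting stud"; }

restart() {
    stop
    sleep 1    # the added line: avoid racing the not-yet-exited daemon
    start
}

restart
# prints "Stopping stud" then, one second later, "Starting stud"
```

The design point is simply that without the pause, start can run while the old process still holds the port, so the daemon never comes back up.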
[Bug 1163905] Re: Backport-Request: Failure in Duply's pre-scripts are muted
Verification done. The exit code is correct now.

root@precise:~/.duply/srv# duply srv pre
Start duply v1.5.5.4, time is 2013-05-22 13:10:25.
Using profile '/root/.duply/srv'.
Using installed duplicity version 0.6.18, python 2.7.3, gpg 1.4.11 (Home: ~/.gnupg), awk 'GNU Awk 3.1.8', bash '4.2.24(1)-release (x86_64-pc-linux-gnu)'.
Autoset found secret key of first GPG_KEY entry 'A0C57B6D' for signing.
Test - Encrypt to A0C57B6D & Sign with A0C57B6D (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.24091.1369228225_*'(OK)

--- Start running command PRE at 13:10:25.668 ---
Running '/root/.duply/srv/pre' - FAILED (code 1)
13:10:25.683 Task 'PRE' failed with exit code '1'.
--- Finished state FAILED 'code 1' at 13:10:25.683 - Runtime 00:00:00.014 ---

root@precise:~/.duply/srv# echo $?
1

** Tags removed: verification-needed
** Tags added: verification-done

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1163905

Title:
  Backport-Request: Failure in Duply's pre-scripts are muted

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/duply/+bug/1163905/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1144612] Re: mount.cifs incorrectly update /etc/mtab when -o remount is used
Verified on a precise system. Problem fixed in:

ii  cifs-utils  2:5.1-1ubuntu2

** Tags removed: verification-needed
** Tags added: verification-done

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1144612

Title:
  mount.cifs incorrectly update /etc/mtab when -o remount is used

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cifs-utils/+bug/1144612/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 816153] Re: dante-server using the wrong libc.so
My understanding is that the problem is that a dlopen() is attempted against libc.so, which is a text file (a linker script, not an ELF object):

$ file /usr/lib/x86_64-linux-gnu/libc.so
/usr/lib/x86_64-linux-gnu/libc.so: ASCII English text

Changing it to libc.so.6 did fix the problem:

$ file /lib/x86_64-linux-gnu/libc.so.6
/lib/x86_64-linux-gnu/libc.so.6: symbolic link to `libc-2.15.so'

I therefore tried the following patch, which seems to fix the issue:

--- dante-1.1.19.dfsg.orig/configure
+++ dante-1.1.19.dfsg/configure
@@ -29690,7 +29690,7 @@
     LIBC_NAME=`ls /usr/lib/libc.so* /lib/libc.so* | sed -e 's/.*\///' | sort -nr | head -n 1`
     if test "x${LIBC_NAME}" = x; then
        #nothing found, set libc.so anyway
-       LIBC_NAME="${base_library_path}libc.so"
+       LIBC_NAME="${base_library_path}libc.so.6"
     fi
     ;;

Providing the debdiff for precise.

** Attachment added: "debdiff.danted"
   https://bugs.launchpad.net/ubuntu/+source/dante/+bug/816153/+attachment/3678285/+files/debdiff.danted

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/816153

Title:
  dante-server using the wrong libc.so

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/dante/+bug/816153/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
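The configure check quoted above relies on `sed | sort -nr | head -n 1` preferring the versioned runtime object over the plain linker-script name. That selection can be reproduced in isolation; a sketch over a hard-coded candidate list (on a real glibc system the `ls` glob in configure produces these names):

```shell
# Reproduce the LIBC_NAME selection from dante's configure on a fixed
# candidate list: with GNU sort, the reverse sort ranks "libc.so.6"
# (a real ELF object) ahead of "libc.so" (an ASCII linker script),
# so head -n 1 picks a name dlopen() can actually load.
printf '%s\n' /usr/lib/libc.so /lib/libc.so.6 \
    | sed -e 's/.*\///' \
    | sort -nr \
    | head -n 1
```

The patched fallback does the same thing statically: when the glob finds nothing, default to the loadable "libc.so.6" rather than the linker script.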
[Bug 1130608] Re: [RFE] - Horizon - Make available S3 and openstack_s3 environments for juju
Example of such an environment:

environments:
  openstack:
    type: openstack_s3
    provider: openstack
    auth-mode: userpass
    admin-secret: password
    control-bucket: bucket1
    default-image-id: 29c1c699-3020-4d32-9286-e62c7142a002
    default-instance-type: m1.tiny
    default-series: precise
    auth-url: http://192.168.1.23:5000/v2.0/
    username: admin
    password: openstack
    project-name: admin
    access-key: 36ec7513b62frfrfrfr8de28a882b38548
    secret-key: bdbeb3b029b3420394r5t676t8073d81
    ssl-hostname-verification: false
    s3-uri: http://192.168.1.23:
    combined-key: 36ec7513b6294b8eb8de28a882b38548
    juju-origin: ppa

** Summary changed:

- [RFE] - Horizon - Make available S3 and openstack_s3 environments for juju
+ [RFE] - Horizon - Make available openstack_s3 environments for juju

** Description changed:

  When downloading the juju environment within horizon, the only
  environment defined is the EC2 one. It should be possible to specify
  what juju connector to use and then download the environment with the
  right provider.

  The RFE consists of having the possibility to choose the provider from a
- list within, including at least: EC2, S3 and openstack_s3.
+ list within, including at least: EC2 and openstack_s3.

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1130608

Title:
  [RFE] - Horizon - Make available openstack_s3 environments for juju

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/horizon/+bug/1130608/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1130608] Re: [RFE] - Horizon - Make available S3 and openstack_s3 environments for juju
This is an example of the openstack_s3 environment:

environments:
  openstack:
    type: openstack_s3
    provider: openstack
    auth-mode: userpass
    admin-secret: pikachu12
    control-bucket: abraxas-bucket
    default-image-id: 29c1c699-3020-4d32-9286-e62c7142a002
    default-instance-type: m1.tiny
    default-series: precise
    auth-url: http://192.168.1.23:5000/v2.0/
    username: admin
    password: openstack
    project-name: admin
    access-key: 36ec7513b6294b8eb8de28a882b38548
    secret-key: bdbeb3b029b3420390c8b70f18073d81
    ssl-hostname-verification: false
    s3-uri: http://192.168.1.23:
    combined-key: 36ec7513b6294b8eb8de28a882b38548
    juju-origin: ppa
[Bug 1163905] Re: Backport-Request: Failure in Duply's pre-scripts are muted
Thanks for the feedback; it helps me learn how to do this right :-)

How about this patch?

** Patch added: "new precise patch"
   https://bugs.launchpad.net/ubuntu/+source/duply/+bug/1163905/+attachment/3652347/+files/precise-20140424.patch
[Bug 1163905] Re: Backport-Request: Failure in Duply's pre-scripts are muted
** Description changed:

- - duply does not check the return status of pre and post scripts and therefore returns 0 even if there was a problem in the pre/post scripts which returned an error.
+ duply does not check the return status of pre and post scripts and
+ therefore returns 0 even if there was a problem in the pre/post scripts
+ which returned an error.

  This issue happens every time and is 100% reproducible

  SRU Justification:
  [Impact]

- * When using duply, errors on pre/post scripts are not caught and
+ * When using duply, errors on pre/post scripts are not caught and
  therefore succeed even if there were problems.

  [Test Case]

- * Set up duply
- * Create a pre script containing simply 'exit 1' inside it.
- * Run 'duply srv pre'
- * See duply succeed and return 0
+ * Set up duply
+ * Create a pre script containing simply 'exit 1' inside it.
+ * Run 'duply srv pre'
+ * See duply succeed and return 0

  --- Start running command PRE at 12:58:29.630 ---
  Running '/root/.duply/srv/pre' - FAILED (code 1)
  --- Finished state OK at 12:58:29.638 - Runtime 00:00:00.008 ---
  root@nas:~# echo $?
  0

  with the patch:

  --- Start running command PRE at 13:02:03.643 ---
  Running '/root/.duply/srv/pre' - FAILED (code 1)
  13:02:03.652 Task 'PRE' failed with exit code '1'.
  --- Finished state FAILED 'code 1' at 13:02:03.652 - Runtime 00:00:00.009 ---
  root@nas:~# echo $?
  1

  [Regression Potential]

- * The patch is minimal and has been accepted/committed upstream.
- * This package has been tested on a virtual machine with the test case above and showed the right exit value. Minimal likelihood of regressions.
+ * The patch is minimal and has been accepted/committed upstream.
+ * This package has been tested on a virtual machine with the test case above and showed the right exit value. Minimal likelihood of regressions.

  Status:

- * Patch is the same as:
- * http://sourceforge.net/tracker/index.php?func=detail&aid=3609075&group_id=217745&atid=1041147
- * Packages agains precise: https://launchpad.net/~edamato/+archive/lp1163905
+ * Patch is the same as:
+ * http://sourceforge.net/tracker/index.php?func=detail&aid=3609075&group_id=217745&atid=1041147
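For readers following the test case, a minimal sketch of the behaviour the patch introduces (pure shell, not duply's actual source): run the hook, remember its exit status, report it, and propagate it instead of discarding it. "pre_hook" is a stand-in for ~/.duply/<profile>/pre.

```shell
#!/bin/sh
# Sketch of the fixed behaviour, not duply's actual code: the hook's
# exit status is captured and propagated instead of being ignored.
pre_hook() {        # stand-in for ~/.duply/<profile>/pre, i.e. "exit 1"
    return 1
}

run_pre() {
    pre_hook
    rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "Task 'PRE' failed with exit code '$rc'."
    fi
    return "$rc"    # before the fix this was effectively always 0
}

run_pre
echo "overall exit: $?"
# prints the failure message, then "overall exit: 1"
```

This is exactly what the test case checks: "duply srv pre" with a failing pre script must make "echo $?" print 1, not 0.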
[Bug 1163905] Re: Backport-Request: Failure in Duply's pre-scripts are muted
** Patch added: "precise debdiff"
   https://bugs.launchpad.net/ubuntu/+source/duply/+bug/1163905/+attachment/3650629/+files/duply_1.5.5.4-1.precise.debdiff
[Bug 1163905] Re: Backport-Request: Failure in Duply's pre-scripts are muted
** Patch added: "raring debdiff"
   https://bugs.launchpad.net/ubuntu/+source/duply/+bug/1163905/+attachment/3650623/+files/duply_1.5.5.5-1.raring.debdiff

** Attachment removed: "debdiff patch for precise"
   https://bugs.launchpad.net/ubuntu/+source/duply/+bug/1163905/+attachment/3647637/+files/duply_1.5.5.4-1ubuntu1.precise.debdiff.2
[Bug 1163905] Re: Backport-Request: Failure in Duply's pre-scripts are muted
** Patch removed: "debdiff patch for precise"
   https://bugs.launchpad.net/ubuntu/+source/duply/+bug/1163905/+attachment/3647574/+files/duply_1.5.5.4-1ubuntu1.precise.debdiff

** Attachment added: "debdiff patch for precise"
   https://bugs.launchpad.net/ubuntu/+source/duply/+bug/1163905/+attachment/3647637/+files/duply_1.5.5.4-1ubuntu1.precise.debdiff.2
[Bug 1163905] Re: Backport-Request: Failure in Duply's pre-scripts are muted
** Patch added: "debdiff patch for precise"
   https://bugs.launchpad.net/ubuntu/+source/duply/+bug/1163905/+attachment/3647574/+files/duply_1.5.5.4-1ubuntu1.precise.debdiff
[Bug 1163905] Re: Backport-Request: Failure in Duply's pre-scripts are muted
** Description changed:

- This happens in precise:
- If you create a Duply pre-script which reads
+ duply does not check the return status of pre and post scripts and therefore returns 0 even if there was a problem in the pre/post scripts which returned an error.

- exit 1
+ This issue happens every time and is 100% reproducible

- then a run of "duply srv pre" will give
+ SRU Justification:
+ [Impact]
+
+ * When using duply, errors on pre/post scripts are not caught and
+ therefore succeed even if there were problems.
+
+ [Test Case]
+
+ * Set up duply
+ * Create a pre script containing simply 'exit 1' inside it.
+ * Run 'duply srv pre'
+ * See duply succeed and return 0

  --- Start running command PRE at 12:58:29.630 ---
  Running '/root/.duply/srv/pre' - FAILED (code 1)
  --- Finished state OK at 12:58:29.638 - Runtime 00:00:00.008 ---
  root@nas:~# echo $?
  0

- So there is no easy way to detect a failure of the pre script and the
- same applies to post. This has already been fixed upstream and I'll
- attach a patch that fixes this issue. With the patch applied, the
- outcome is
+ with the patch:

  --- Start running command PRE at 13:02:03.643 ---
  Running '/root/.duply/srv/pre' - FAILED (code 1)
  13:02:03.652 Task 'PRE' failed with exit code '1'.
  --- Finished state FAILED 'code 1' at 13:02:03.652 - Runtime 00:00:00.009 ---
  root@nas:~# echo $?
  1
+
+ [Regression Potential]
+
+ * The patch is minimal and has been accepted/committed upstream.
+ * This package has been tested on a virtual machine with the test case above and showed the right exit value. Minimal likelihood of regressions.
+
+
+ Status:
+
+ * Patch is the same as:
+ * http://sourceforge.net/tracker/index.php?func=detail&aid=3609075&group_id=217745&atid=1041147
+ * Packages agains precise: https://launchpad.net/~edamato/+archive/lp1163905
[Bug 1130608] [NEW] [RFE] - Horizon - Make available S3 and openstack_s3 environments for juju
Public bug reported:

When downloading the juju environment within horizon, the only environment defined is the EC2 one. It should be possible to specify which juju connector to use and then download the environment with the right provider.

The RFE consists of having the possibility to choose the provider from a list, including at least: EC2, S3 and openstack_s3.

** Affects: horizon (Ubuntu)
     Importance: Undecided
         Status: New
[Bug 1091780] Re: nova-network - "iptables-restore v1.4.12: host/network `None' not found
Hi Joseph,

This is not a kernel bug; it happens because nova builds an iptables rule that is broken. The problem happens in network/manager.py:

def _setup_network_on_host(self, context, network):
    ...
    if address == FLAGS.vpn_ip and hasattr(self.driver, "ensure_vpn_forward"):
        LOG.warn(_('call add_vpn %s %s %s\n'), FLAGS.vpn_ip, network['vpn_public_port'], network['vpn_private_address'])
        self.l3driver.add_vpn(FLAGS.vpn_ip, network['vpn_public_port'], network['vpn_private_address'])

We can then see:

2012-12-18 13:23:12 WARNING nova.network.manager [req-f9747e6d-e438-4801-992f-5e11324488aa None None] call add_vpn 192.168.124.150 None None

The network object is obtaining its info via MySQL in this case. The question is whether the format of the variables in the network object should be checked here, or whether this should be done at the DB level.

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1091780

Title:
  nova-network - "iptables-restore v1.4.12: host/network `None' not found

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1091780/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
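Whichever layer the validation ends up in, the broken rule is easy to spot in the generated dump before it reaches iptables-restore: a database NULL leaks out as the literal string "None". A sketch over an inlined two-rule sample (in practice you would scan the full ruleset nova emits, as captured in the debug log):

```shell
# Find rules where a NULL field leaked into the dump as the literal
# string "None" -- exactly the rule iptables-restore chokes on.
# The two-rule sample here is inlined for illustration.
printf '%s\n' \
    '-A nova-network-FORWARD --in-interface br100 -j ACCEPT' \
    '-A nova-network-FORWARD -d None -p udp --dport 1194 -j ACCEPT' \
    | grep -n -e '-d None'
# -> 2:-A nova-network-FORWARD -d None -p udp --dport 1194 -j ACCEPT
```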
[Bug 1091780] [NEW] nova-network - "iptables-restore v1.4.12: host/network `None' not found
Public bug reported:

1- In Precise, nova-network crashes because it cannot apply iptables rules when trying to apply vpn rules. nova-network tries to set VPN iptables rules for openvpn access:

2012-12-17 07:17:24 TRACE nova Stderr: "iptables-restore v1.4.12: host/network `None' not found\nError occurred at line: 23\nTry `iptables-restore -h' or 'iptables-restore --help' for more information.\n"

2- How reproducible? Not clear. The configuration I used with juju seems to create an environment that causes this problem. When this problem is present, the issue reproduces every time.

3- How to reproduce: When the issue is present, just starting up nova-network causes the problem to reproduce. nova-network exits in the end and dies because of the error from iptables-restore.

4- I added debugging in nova.conf with --debug=true and added extra debugging in /usr/lib/python2.7/dist-packages/nova/utils.py, which showed the full set of iptables rules to be restored by iptables-restore:

2012-12-17 07:17:24 DEBUG nova.utils [req-391688fd-3b99-4b1c-8b46-fb4f64e64246 None None] process input:
# Generated by iptables-save v1.4.12 on Mon Dec 17 07:17:21 2012
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:nova-api-FORWARD - [0:0]
:nova-api-INPUT - [0:0]
:nova-api-OUTPUT - [0:0]
:nova-api-local - [0:0]
:nova-network-FORWARD - [0:0]
:nova-network-INPUT - [0:0]
:nova-network-local - [0:0]
:nova-network-OUTPUT - [0:0]
:nova-filter-top - [0:0]
-A FORWARD -j nova-filter-top
-A OUTPUT -j nova-filter-top
-A nova-filter-top -j nova-network-local
-A INPUT -j nova-network-INPUT
-A OUTPUT -j nova-network-OUTPUT
-A FORWARD -j nova-network-FORWARD
-A nova-network-FORWARD --in-interface br100 -j ACCEPT
-A nova-network-FORWARD --out-interface br100 -j ACCEPT
-A nova-network-FORWARD -d None -p udp --dport 1194 -j ACCEPT
-A INPUT -j nova-api-INPUT
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -j nova-api-FORWARD
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -j nova-api-OUTPUT
-A nova-api-INPUT -d 192.168.124.150/32 -p tcp -m tcp --dport 8775 -j ACCEPT
-A nova-filter-top -j nova-api-local
COMMIT

4.1- Among the rules above we have:

-A nova-network-FORWARD -d None -p udp --dport 1194 -j ACCEPT

which is responsible for the fault in iptables-restore.

5- These are the error messages:

2012-12-17 07:17:24 DEBUG nova.utils [req-391688fd-3b99-4b1c-8b46-fb4f64e64246 None None] Result was 2 from (pid=14699) execute /usr/lib/python2.7/dist-packages/nova/utils.py:237
2012-12-17 07:17:24 CRITICAL nova [-] Unexpected error while running command.
Command: sudo nova-rootwrap iptables-restore
Exit code: 2
Stdout: ''
Stderr: "iptables-restore v1.4.12: host/network `None' not found\nError occurred at line: 23\nTry `iptables-restore -h' or 'iptables-restore --help' for more information.\n"
2012-12-17 07:17:24 TRACE nova Traceback (most recent call last):
2012-12-17 07:17:24 TRACE nova   File "/usr/bin/nova-network", line 49, in
2012-12-17 07:17:24 TRACE nova     service.wait()
2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 413, in wait
2012-12-17 07:17:24 TRACE nova     _launcher.wait()
2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 131, in wait
2012-12-17 07:17:24 TRACE nova     service.wait()
2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 166, in wait
2012-12-17 07:17:24 TRACE nova     return self._exit_event.wait()
2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
2012-12-17 07:17:24 TRACE nova     return hubs.get_hub().switch()
2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 177, in switch
2012-12-17 07:17:24 TRACE nova     return self.greenlet.switch()
2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 192, in main
2012-12-17 07:17:24 TRACE nova     result = function(*args, **kwargs)
2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 101, in run_server
2012-12-17 07:17:24 TRACE nova     server.start()
2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 162, in start
2012-12-17 07:17:24 TRACE nova     self.manager.init_host()
2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/manager.py", line 1766, in init_host
2012-12-17 07:
[Bug 1079764] Re: Precise - iPXE does not dhcp on kvm-virtio and shows RXE
** Tags added: kernel-unable-to-test-upstream

--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1079764

Title:
  Precise - iPXE does not dhcp on kvm-virtio and shows RXE

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1079764/+subscriptions
--
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1079764] [NEW] Precise - iPXE does not dhcp on kvm-virtio and shows RXE
Public bug reported: Can't DHCP with iPXE using kvm-virtio. Component: ii kvm-ipxe 1.0.0+git-3.55f6c88-0ubuntu1 PXE ROM's for KVM ii kvm-ipxe 1.0.0+git-3.55f6c88-0ubuntu5 PXE ROM's for KVM --> also tried this from Quantal Steps: 1- MAAS machine has the following dnsmasq.conf: dhcp-range=192.168.124.100,192.168.124.128 dhcp-option=3,192.168.124.1 dhcp-lease-max=1000 dhcp-authoritative dhcp-boot=pxelinux.0 dhcp-boot=net:normalarch,pxelinux.0 dhcp-boot=net:ia64,/var/lib/cobbler/elilo-3.6-ia64.efi 2- Booting a virtual machine on KVM using kvm-virtio, dnsmasq can see the DHCPDISCOVER requests and answers with DHCPOFFER, the iPXE client never sents a DHCPACK Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPDISCOVER(eth0) 00:00:77:77:00:02 Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPOFFER(eth0) 192.168.124.110 00:00:77:77:00:02 Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPDISCOVER(eth0) 00:00:77:77:00:02 Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPOFFER(eth0) 192.168.124.110 00:00:77:77:00:02 Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPDISCOVER(eth0) 00:00:77:77:00:02 Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPOFFER(eth0) 192.168.124.110 00:00:77:77:00:02 Nov 16 12:04:33 maas dnsmasq-dhcp[838]: DHCPDISCOVER(eth0) 00:00:77:77:00:02 Nov 16 12:04:33 maas dnsmasq-dhcp[838]: DHCPOFFER(eth0) 192.168.124.110 00:00:77:77:00:02 3- Tcpdump done on virbr2 which is the bridge on the KVM host running MAAS and the other VM. 
(MAAS is a VM itself.)

3.1- DHCPDISCOVER from iPXE:
12:04:27.036729 00:00:77:77:00:02 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 437: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:00:77:77:00:02, length 395

3.2- dnsmasq tests whether anyone answers ARP for the address it wants to offer:
12:04:27.037179 52:54:00:70:04:57 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 192.168.124.110 tell 192.168.124.2, length 28

3.3- DHCPDISCOVER from iPXE:
12:04:28.013471 00:00:77:77:00:02 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 437: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:00:77:77:00:02, length 395

3.4- dnsmasq tests whether anyone answers ARP for the address it wants to offer:
12:04:28.036663 52:54:00:70:04:57 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 192.168.124.110 tell 192.168.124.2, length 28
12:04:29.036673 52:54:00:70:04:57 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 192.168.124.110 tell 192.168.124.2, length 28

3.5- DHCPDISCOVER from iPXE:
12:04:29.990785 00:00:77:77:00:02 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 437: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:00:77:77:00:02, length 395

3.6- DHCPOFFER from dnsmasq:
12:04:30.036766 52:54:00:70:04:57 > 00:00:77:77:00:02, ethertype IPv4 (0x0800), length 361: 192.168.124.2.67 > 192.168.124.110.68: BOOTP/DHCP, Reply, length 319
12:04:30.036832 52:54:00:70:04:57 > 00:00:77:77:00:02, ethertype IPv4 (0x0800), length 361: 192.168.124.2.67 > 192.168.124.110.68: BOOTP/DHCP, Reply, length 319
12:04:30.036844 52:54:00:70:04:57 > 00:00:77:77:00:02, ethertype IPv4 (0x0800), length 361: 192.168.124.2.67 > 192.168.124.110.68: BOOTP/DHCP, Reply, length 319

3.7- DHCPDISCOVER from iPXE:
12:04:33.945420 00:00:77:77:00:02 > ff:ff:ff:ff:ff:ff, ethertype IPv4 (0x0800), length 437: 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from 00:00:77:77:00:02, length 395

3.8- DHCPOFFER from dnsmasq:
12:04:33.945816 52:54:00:70:04:57 > 00:00:77:77:00:02, ethertype IPv4 (0x0800), length 361: 192.168.124.2.67 > 192.168.124.110.68: BOOTP/DHCP, Reply, length 319

3.9- dnsmasq tests whether anyone answers ARP for the address it offered, directed to the host MAC address:
12:04:38.960433 52:54:00:70:04:57 > 00:00:77:77:00:02, ethertype ARP (0x0806), length 42: Request who-has 192.168.124.110 tell 192.168.124.2, length 28
12:04:39.960414 52:54:00:70:04:57 > 00:00:77:77:00:02, ethertype ARP (0x0806), length 42: Request who-has 192.168.124.110 tell 192.168.124.2, length 28
12:04:40.960417 52:54:00:70:04:57 > 00:00:77:77:00:02, ethertype ARP (0x0806), length 42: Request who-has 192.168.124.110 tell 192.168.124.2, length 28

4- RXE errors increase for net0 on the iPXE command line.

5- If I configure net0 manually with an IP, ping works without problems and RXE doesn't increase.

6- Tried the workaround mentioned in https://bugs.launchpad.net/ubuntu/+source/isc-dhcp/+bug/930962, which doesn't seem applicable here: no checksum errors showed up in tcpdump, and the DHCP server is dnsmasq, not isc-dhcp.

Cheers,
Eduardo.

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New
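As an illustration of what the dnsmasq log in step 2 shows, the following minimal Python sketch (not part of the original report) tallies the DHCP message types in those log lines. A completed handshake would also contain DHCPREQUEST and DHCPACK entries; here only DISCOVER/OFFER pairs appear, confirming the client never progresses past the offer:

```python
import re

# dnsmasq-dhcp log lines quoted in step 2 of the report
LOG = """\
Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPDISCOVER(eth0) 00:00:77:77:00:02
Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPOFFER(eth0) 192.168.124.110 00:00:77:77:00:02
Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPDISCOVER(eth0) 00:00:77:77:00:02
Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPOFFER(eth0) 192.168.124.110 00:00:77:77:00:02
Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPDISCOVER(eth0) 00:00:77:77:00:02
Nov 16 12:04:30 maas dnsmasq-dhcp[838]: DHCPOFFER(eth0) 192.168.124.110 00:00:77:77:00:02
Nov 16 12:04:33 maas dnsmasq-dhcp[838]: DHCPDISCOVER(eth0) 00:00:77:77:00:02
Nov 16 12:04:33 maas dnsmasq-dhcp[838]: DHCPOFFER(eth0) 192.168.124.110 00:00:77:77:00:02
"""

def message_counts(log):
    """Count DHCP message types (DHCPDISCOVER, DHCPOFFER, ...) in a dnsmasq-dhcp log."""
    counts = {}
    for m in re.finditer(r"(DHCP[A-Z]+)\(", log):
        counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    return counts

counts = message_counts(LOG)
print(counts)  # only DISCOVER/OFFER appear; no REQUEST/ACK, so the lease never completes
```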
[Bug 1076342] [NEW] Precise - inclusion of hpwdt kernel module
Public bug reported:

1. Description of the problem:
The hpwdt kernel module, which is necessary to handle NMIs sent via iLO2, is not present.

2. Ubuntu release, software version, release number and architecture of the selected components:
12.04 Precise LTS

3. How reproducible is the problem?
Always; the hpwdt kernel module is not present.

4. Steps to reproduce:
Not applicable; the module is absent.

** Affects: linux (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1076342

Title:
  Precise - inclusion of hpwdt kernel module

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1076342/+subscriptions
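For reference, a quick way to check whether hpwdt ships with the running kernel is to look for it under the module tree. This is a generic sketch (not from the report); the path layout is the standard one used by Ubuntu kernel packages:

```shell
# Search the running kernel's module tree for the hpwdt watchdog module
KVER="$(uname -r)"
if find "/lib/modules/$KVER" -name 'hpwdt.ko*' 2>/dev/null | grep -q .; then
    echo "hpwdt: present"
else
    echo "hpwdt: absent"
fi
```

`modinfo hpwdt` gives the same answer (it errors out when the module is missing).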