[Desktop-packages] [Bug 1679623] [NEW] Zesty : VPN connections appear grayed out - impossible to launch VPN connection from nm-applet

2017-04-04 Thread Louis Bouchard
Public bug reported:

nm-applet displays each configured VPN connection as greyed out, so it is
impossible to launch any of the OpenVPN connections from the network
indicator.

It is still possible to enable the VPN connection using :

 $ nmcli conn up {VPN}
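
For reference, the configured VPN profiles can also be listed and toggled
entirely from the command line; a minimal sketch, where the connection name
"MyVPN" is only a placeholder:

 $ nmcli connection show | grep vpn    # list the configured VPN connections
 $ nmcli connection up id "MyVPN"      # activate one by name
 $ nmcli connection down id "MyVPN"    # and deactivate it again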

ProblemType: Bug
DistroRelease: Ubuntu 17.04
Package: network-manager 1.4.4-1ubuntu3
ProcVersionSignature: Ubuntu 4.10.0-14.16-generic 4.10.3
Uname: Linux 4.10.0-14-generic x86_64
ApportVersion: 2.20.4-0ubuntu2
Architecture: amd64
CurrentDesktop: Unity:Unity7
Date: Tue Apr  4 12:45:27 2017
IfupdownConfig: # interfaces(5) file used by ifup(8) and ifdown(8)
InstallationDate: Installed on 2016-06-27 (281 days ago)
InstallationMedia: Ubuntu 16.04 LTS "Xenial Xerus" - Release amd64 (20160420.1)
NetworkManager.conf:
 [main]
 plugins=ifupdown,keyfile
 
 [ifupdown]
 managed=false
NetworkManager.state:
 [main]
 NetworkingEnabled=true
 WirelessEnabled=true
 WWANEnabled=false
SourcePackage: network-manager
UpgradeStatus: No upgrade log present (probably fresh install)
nmcli-nm:
 RUNNING  VERSION  STATE       STARTUP   CONNECTIVITY  NETWORKING  WIFI-HW  WIFI     WWAN-HW  WWAN
 running  1.4.4    connecting  starting  none          enabled     enabled  enabled  enabled  disabled

** Affects: network-manager (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug zesty

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to network-manager in Ubuntu.
https://bugs.launchpad.net/bugs/1679623

Title:
  Zesty : VPN connections appear grayed out - impossible to launch VPN
  connection from nm-applet

Status in network-manager package in Ubuntu:
  New

Bug description:
  nm-applet displays each configured VPN connection as greyed out, so it
  is impossible to launch any of the OpenVPN connections from the network
  indicator.

  It is still possible to enable the VPN connection using :

   $ nmcli conn up {VPN}

  ProblemType: Bug
  DistroRelease: Ubuntu 17.04
  Package: network-manager 1.4.4-1ubuntu3
  ProcVersionSignature: Ubuntu 4.10.0-14.16-generic 4.10.3
  Uname: Linux 4.10.0-14-generic x86_64
  ApportVersion: 2.20.4-0ubuntu2
  Architecture: amd64
  CurrentDesktop: Unity:Unity7
  Date: Tue Apr  4 12:45:27 2017
  IfupdownConfig: # interfaces(5) file used by ifup(8) and ifdown(8)
  InstallationDate: Installed on 2016-06-27 (281 days ago)
  InstallationMedia: Ubuntu 16.04 LTS "Xenial Xerus" - Release amd64 
(20160420.1)
  NetworkManager.conf:
   [main]
   plugins=ifupdown,keyfile
   
   [ifupdown]
   managed=false
  NetworkManager.state:
   [main]
   NetworkingEnabled=true
   WirelessEnabled=true
   WWANEnabled=false
  SourcePackage: network-manager
  UpgradeStatus: No upgrade log present (probably fresh install)
  nmcli-nm:
   RUNNING  VERSION  STATE       STARTUP   CONNECTIVITY  NETWORKING  WIFI-HW  WIFI     WWAN-HW  WWAN
   running  1.4.4    connecting  starting  none          enabled     enabled  enabled  enabled  disabled

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1679623/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


Re: [Desktop-packages] [Bug 1438510] Re: [REGRESSION] bluetooth headset no longer supports a2dp in 16.04 xenial and 16.10 yakkety

2017-04-03 Thread Louis Bouchard
Hello,

On 03/04/2017 at 12:02, Konrad Zapałowicz wrote:
> Hey, to everyone interested we are currently testing and reviewing an
> improvement to PA that fixes problems with selecting A2DP mode for BT
> headsets.
> 
> Right now it is being tested internally with good results for Sony
> headsets, QC35 from Bose and BT speakers. Once it is in a silo/ppa I
> will share this information for much wider testing. Then it will be
> submitted as a SRU and will land in xenial.
> 

I've been subscribed to this bug for ages & I'm still seeing this issue with my
BT headset, so I'd be more than happy to help you test.

Kind regards,

...Louis

-- 
Louis Bouchard
Software engineer, Cloud & Sustaining eng.
Canonical Ltd
Ubuntu developer   Debian Maintainer
GPG : 429D 7A3B DD05 B6F8 AF63  B9C4 8B3D 867C 823E 7A61

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to pulseaudio in Ubuntu.
https://bugs.launchpad.net/bugs/1438510

Title:
  [REGRESSION] bluetooth headset no longer supports a2dp in 16.04 xenial
  and 16.10 yakkety

Status in PulseAudio:
  Confirmed
Status in bluez package in Ubuntu:
  Confirmed
Status in pulseaudio package in Ubuntu:
  Confirmed
Status in bluez source package in Vivid:
  Won't Fix
Status in pulseaudio source package in Vivid:
  Won't Fix

Bug description:
  Just installed 15.04 fresh from the latest ISO (beta2).

  I'm bummed to see my bluetooth headset (Bose Soundlink overear) seems
  to have regressed in functionality.

  In 14.10, I was able to set the output profile either to a2dp or
  hsp/hfp (telephony duplex).

  In 15.04, it only works in telephony duplex mode.  I can't get high
  fidelity sound playback to work at all.

  This thread seems to be related, though the workaround within did not solve 
the problem for me:
  https://bbs.archlinux.org/viewtopic.php?id=194006

  The bug is still present in 16.04 LTS and 16.10.

To manage notifications about this bug go to:
https://bugs.launchpad.net/pulseaudio/+bug/1438510/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1665018] Re: client tools ignore -h option without port number

2017-03-22 Thread Louis Bouchard
** Tags removed: sts-sru
** Tags added: sts-sru-done

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1665018

Title:
  client tools ignore -h option without port number

Status in cups package in Ubuntu:
  Fix Released
Status in cups source package in Trusty:
  Fix Released

Bug description:
  [Impact]
  Prevents the user from overriding the default print server on the command
line if they are not aware that this bug can be worked around by giving a port
number.

  [Test Case]
  1. Setup 2 cups servers with a shared printer set as default destination: 
server1, server2.
  2. On a trusty client try:
  export CUPS_SERVER=server1
  lpstat -h server2 -H
  3. Expected result:
  server2:631
  4. Actual result:
  server1:631
  (server given by CUPS_SERVER is used instead of the one given by -h option).

  [Regression Potential]
  Minimal. ipp_port is initialized to default value if not given explicitly. 
Fix is a backport from Xenial version and already present in upstream release.

  [Other Info]
   
  * Original bug description:

  Some command-line tools (e.g. lp, lpstat) ignore the -h option if no port
number is given.
  This affects Trusty with cups-client 1.7.2-0ubuntu1.7; Xenial works fine.

  Test to reproduce:
  1. Setup 2 cups servers with a shared printer set as default destination: 
server1, server2.
  2. On a trusty client try:
  export CUPS_SERVER=server1
  lpstat -h server2 -H

  3. Expected result:
  server2:631

  4. Actual result:
  server1:631
  (server given by CUPS_SERVER is used instead of the one given by -h option).

  If a port number is given (e.g. server2:631) the commands work as
  expected.
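
  Put together, the behaviour and the workaround on an affected Trusty client
  look like this (server1/server2 as in the test case above):

  $ export CUPS_SERVER=server1
  $ lpstat -h server2 -H        # affected: CUPS_SERVER still wins
  server1:631
  $ lpstat -h server2:631 -H    # workaround: give the port so -h takes effect
  server2:631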

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1665018/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1666207] Re: Unable to create VPN connection on Zesty

2017-02-20 Thread Louis Bouchard
This is not a bug, but it seems to happen when some required fields are not
yet filled. While a missing 'remote' field can be obvious, it is much less
evident when the interface refuses to allow the configuration to be saved
because the provided certificate requires a password and the password field
has been left blank.

The UI should be more explicit about why it does not allow the
configuration to be saved.

Marking the bug as invalid
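
As a side note, the profile can usually be created from the command line
instead, where nmcli reports explicitly which setting is missing; a rough
sketch (the file name and the resulting connection id are placeholders):

 $ nmcli connection import type openvpn file client.ovpn
 $ nmcli --ask connection up id client   # --ask prompts for the certificate password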

** Changed in: network-manager (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to network-manager in Ubuntu.
https://bugs.launchpad.net/bugs/1666207

Title:
  Unable to create VPN connection on Zesty

Status in network-manager package in Ubuntu:
  Invalid

Bug description:
  When trying to create a new VPN connection with nm-connection-editor,
  the new connection cannot be saved and the editor issues the following
  error :

  LC_ALL="C" nm-connection-editor
  Gtk-Message: GtkDialog mapped without a transient parent. This is discouraged.
  ** Message: Cannot save connection due to error: Editor initializing...
  ** Message: Cannot save connection due to error: Invalid setting VPN: remote

  dpkg -l | grep network-manager-gnome
  network-manager-gnome   1.4.2-1ubuntu2

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1666207/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1666207] [NEW] Unable to create VPN connection on Zesty

2017-02-20 Thread Louis Bouchard
Public bug reported:

When trying to create a new VPN connection with nm-connection-editor,
the new connection cannot be saved and the editor issues the following
error :

LC_ALL="C" nm-connection-editor
Gtk-Message: GtkDialog mapped without a transient parent. This is discouraged.
** Message: Cannot save connection due to error: Editor initializing...
** Message: Cannot save connection due to error: Invalid setting VPN: remote

dpkg -l | grep network-manager-gnome
network-manager-gnome   1.4.2-1ubuntu2

** Affects: network-manager (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to network-manager in Ubuntu.
https://bugs.launchpad.net/bugs/1666207

Title:
  Unable to create VPN connection on Zesty

Status in network-manager package in Ubuntu:
  New

Bug description:
  When trying to create a new VPN connection with nm-connection-editor,
  the new connection cannot be saved and the editor issues the following
  error :

  LC_ALL="C" nm-connection-editor
  Gtk-Message: GtkDialog mapped without a transient parent. This is discouraged.
  ** Message: Cannot save connection due to error: Editor initializing...
  ** Message: Cannot save connection due to error: Invalid setting VPN: remote

  dpkg -l | grep network-manager-gnome
  network-manager-gnome   1.4.2-1ubuntu2

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/1666207/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1665018] Re: client tools ignore -h option without port number

2017-02-16 Thread Louis Bouchard
** Also affects: cups (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Tags added: sts-sru

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1665018

Title:
  client tools ignore -h option without port number

Status in cups package in Ubuntu:
  New
Status in cups source package in Trusty:
  New

Bug description:
  [Impact]
  Prevents the user from overriding the default print server on the command
line if they are not aware that this bug can be worked around by giving a port
number.

  [Test Case]
  1. Setup 2 cups servers with a shared printer set as default destination: 
server1, server2.
  2. On a trusty client try:
  export CUPS_SERVER=server1
  lpstat -h server2 -H
  3. Expected result:
  server2:631
  4. Actual result:
  server1:631
  (server given by CUPS_SERVER is used instead of the one given by -h option).

  [Regression Potential]
  Minimal. ipp_port is initialized to default value if not given explicitly. 
Fix is a backport from Xenial version and already present in upstream release.

  [Other Info]
   
  * Original bug description:

  Some command-line tools (e.g. lp, lpstat) ignore the -h option if no port
number is given.
  This affects Trusty with cups-client 1.7.2-0ubuntu1.7; Xenial works fine.

  Test to reproduce:
  1. Setup 2 cups servers with a shared printer set as default destination: 
server1, server2.
  2. On a trusty client try:
  export CUPS_SERVER=server1
  lpstat -h server2 -H

  3. Expected result:
  server2:631

  4. Actual result:
  server1:631
  (server given by CUPS_SERVER is used instead of the one given by -h option).

  If a port number is given (e.g. server2:631) the commands work as
  expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1665018/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1550983] Re: Fails to start with "Couldn't open libGL.so.1" (missing dependency?)

2017-01-23 Thread Louis Bouchard
** Also affects: gtk+3.0 (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: gtk+3.0 (Ubuntu Xenial)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to gtk+3.0 in Ubuntu.
https://bugs.launchpad.net/bugs/1550983

Title:
  Fails to start with "Couldn't open libGL.so.1" (missing dependency?)

Status in One Hundred Papercuts:
  Confirmed
Status in gtk+3.0 package in Ubuntu:
  Fix Released
Status in gtk+3.0 source package in Xenial:
  New
Status in gtk+3.0 source package in Yakkety:
  New
Status in gtk+3.0 package in Debian:
  Fix Released

Bug description:
  [Impact] 
  There are some unlinked calls to libGL.so.1 that go undetected in the build
process because libepoxy is used. Running an application that does not depend
on libgl1 (directly or indirectly) may abort the process with an undefined
reference to libGL.so.1.

  [Test Case]
  1. Deploy a server / cloud image of Xenial or Yakkety.
  2. Use a Windows or a Mac client with Cygwin/X and ssh -XY to Ubuntu machine.
  3. sudo apt install firefox; firefox

  Expected result:
  firefox is launched on the client machine.

  Actual result:
  "Couldn't open libGL.so.1" message is printed.

  [Regression Potential] 
  Minimal

  [Other Info]
  Original bug description:

  virt-manager fails to start:

  $ virt-manager --debug --no-fork
  [Sun, 28 Feb 2016 19:18:22 virt-manager 7592] DEBUG (cli:256) Launched with 
command line: /usr/share/virt-manager/virt-manager --debug --no-fork
  [Sun, 28 Feb 2016 19:18:22 virt-manager 7592] DEBUG (virt-manager:143) 
virt-manager version: 1.3.2
  [Sun, 28 Feb 2016 19:18:22 virt-manager 7592] DEBUG (virt-manager:144) 
virtManager import: 
  Couldn't open libGL.so.1: libGL.so.1: cannot open shared object file: No such 
file or directory
  $

  Installing the 'libgl1-mesa-glx' package resolves the issue.
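
  A quick way to confirm the missing library and apply that workaround:

  $ ldconfig -p | grep libGL.so.1         # empty output: the library is absent
  $ sudo apt-get install libgl1-mesa-glx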

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: virt-manager 1:1.3.2-0ubuntu1
  ProcVersionSignature: Ubuntu 4.4.0-8.23-generic 4.4.2
  Uname: Linux 4.4.0-8-generic x86_64
  ApportVersion: 2.20-0ubuntu3
  Architecture: amd64
  Date: Sun Feb 28 19:19:27 2016
  InstallationDate: Installed on 2016-02-27 (0 days ago)
  InstallationMedia: Ubuntu-Server 16.04 LTS "Xenial Xerus" - Alpha amd64 
(20160206)
  PackageArchitecture: all
  SourcePackage: virt-manager
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/hundredpapercuts/+bug/1550983/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1554004] Re: Segfault on X startup with VX900

2016-11-09 Thread Louis Bouchard
** Tags removed: sts-sru

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to xserver-xorg-video-openchrome in
Ubuntu.
https://bugs.launchpad.net/bugs/1554004

Title:
  Segfault on X startup with VX900

Status in openchrome:
  Fix Released
Status in xserver-xorg-video-openchrome package in Ubuntu:
  Fix Released
Status in xserver-xorg-video-openchrome source package in Trusty:
  Fix Released
Status in xserver-xorg-video-openchrome source package in Wily:
  Fix Released

Bug description:
  [Impact]

   * Prevents X from being used at all with some VIA chipsets - a segfault
  occurs and is logged in Xorg.log

  [Test Case]

   * Start X on an affected hw (e.g. VX900).

   * Examine Xorg.log after crash.
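
   For instance, the crash signature can usually be pulled out of the log
   with something like:

   $ grep -B 5 -A 20 -i 'segmentation fault\|backtrace' /var/log/Xorg.0.log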

  [Regression Potential]

   * This is a bug fixed upstream
  (https://cgit.freedesktop.org/openchrome/xf86-video-
  openchrome/commit/?id=ecb1695ac2de1d840c036f64b5b71602e0f522a4).

   * The fix is a one-liner with minimal impact.

  [Other Info]

   * Original bug description:

  There is a segfault in Xorg.log visible when starting X on Trusty
  14.04 with the following hardware:

  00:01.0 VGA compatible controller [0300]: VIA Technologies, Inc. VX900 
Graphics [Chrome9 HD] [1106:7122] (prog-if 00 [VGA controller])
  Subsystem: Gigabyte Technology Co., Ltd Device [1458:d000]
  Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx-
  Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-

To manage notifications about this bug go to:
https://bugs.launchpad.net/openchrome/+bug/1554004/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1567578] Re: libnl should be updated to support up to 63 VFs per single PF

2016-11-09 Thread Louis Bouchard
** Tags removed: sts-sru

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to libnl3 in Ubuntu.
https://bugs.launchpad.net/bugs/1567578

Title:
   libnl should be updated to support up to 63 VFs per single PF

Status in libnl3 package in Ubuntu:
  Fix Released
Status in libnl3 source package in Trusty:
  Fix Released

Bug description:
  [Impact]

  libnl can only enable up to 30 VFs even if the PF supports up to 63
  VFs in an Openstack SRIOV configuration.

  As already documented in https://bugs.launchpad.net/mos/+bug/1501738
  there is a bug in the default libnl library release installed on
  Ubuntu 14.04.4.

  When trying to enable a guest with more than 30 VFs attached, the
  following error is returned:

  error: Failed to start domain guest1
  error: internal error: missing IFLA_VF_INFO in netlink response

  [Test Case]

   1) Edit /etc/default/grub.

  GRUB_CMDLINE_LINUX="intel_iommu=on ixgbe.max_vfs=63"

   2) Update grub and reboot the machine.

  $ sudo update-grub

   3) Check that the virtual functions are available.

  $ sudo lspci|grep -i eth | grep -i virtual | wc -l
  126

   4) Create a KVM guest.

  $ sudo uvt-kvm create guest1 release=trusty

   5) List the VF devices.

  $ sudo lspci|grep -i eth | grep -i virtual | awk '{print $1}' | sed
  's/\:/\_/g' | sed 's/\./\_/g' > devices.txt

   6) Get the libvirt node device.

  $ sudo for device in $(cat ./devices.txt); do virsh nodedev-list |
  grep $device; done > pci_devices.txt

   7) Generate the XML config for each device.

  $ sudo mkdir devices && for d in $(cat pci_devices.txt); do virsh
  nodedev-dumpxml $d > devices/$d.xml; done

   8) Save and Run the following script.
  (http://pastebin.ubuntu.com/23374186/)

  $ sudo python generate-interfaces.py |grep address | wc -l

   9) Finally attach the devices to the guest.

  $ sudo for i in $(seq 0 63); do virsh attach-device guest1 
./interfaces/$i.xml --config; done
  Device attached successfully
  [...]

  Device attached successfully
  Device attached successfully

   10) Then destroy/start the guest again, at this point the error is
  reproduced.

  $ sudo virsh destroy guest1
  Domain guest1 destroyed

  $ sudo virsh start guest1

  error: Failed to start domain guest1
  error: internal error: missing IFLA_VF_INFO in netlink response

  [Regression Potential]

   * None identified.

  [Other Info]

   * Redhat Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1040626

   * A workaround is to install a newer library release.

  $ wget 
https://launchpad.net/ubuntu/+archive/primary/+files/libnl-3-200_3.2.24-2_amd64.deb
  $ wget 
https://launchpad.net/ubuntu/+archive/primary/+files/libnl-genl-3-200_3.2.24-2_amd64.deb
  $ wget 
https://launchpad.net/ubuntu/+archive/primary/+files/libnl-route-3-200_3.2.24-2_amd64.deb
  $ dpkg -i libnl-3-200_3.2.24-2_amd64.deb
  $ dpkg -i libnl-genl-3-200_3.2.24-2_amd64.deb
  $ dpkg -i libnl-route-3-200_3.2.24-2_amd64.deb
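
  After installing, the library version can be checked to confirm the
  workaround took effect:

  $ dpkg -l | grep libnl-3-200   # should now list 3.2.24-2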

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libnl3/+bug/1567578/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1632357] Re: remove llvm-toolchain-3.6 from zesty

2016-11-02 Thread Louis Bouchard
The Debian clamav development team has dropped the llvm-3.6 dependency from
the latest clamav (soon to be merged):

  * Drop llvm support for now. The bytecode will be interpreted by clamav
instead of llvm's JIT - there is loss in functionality. It will come back
once we have llvm support again (Closes: #839850).

So my suggestion is to drop it once the latest clamav is merged

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to llvm-toolchain-3.6 in Ubuntu.
https://bugs.launchpad.net/bugs/1632357

Title:
  remove llvm-toolchain-3.6 from zesty

Status in llvm-toolchain-3.6 package in Ubuntu:
  Incomplete

Bug description:
  Lets track llvm-toolchain-3.6 removal

  reverse-depends -r zesty -b src:llvm-toolchain-3.6

  * clamav(for llvm-3.6-dev) -> needs merge
  * indicator-location(for clang-format-3.6) ->  tracked in bug 
#1637128
  * julia (for llvm-3.6-dev) -> false positive
  * lightspark(for llvm-3.6-dev) -> broken, I don't think 
we want it in the archive (see Debian status)
  * oclgrind  (for libclang-3.6-dev) -> needs some 
decruft/removals
  * qtcreator (for libclang-3.6-dev) -> needs merge

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/llvm-toolchain-3.6/+bug/1632357/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1567578] Re: libnl should be updated to support up to 63 VFs per single PF

2016-10-24 Thread Louis Bouchard
** Tags added: sts-sru

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to libnl3 in Ubuntu.
https://bugs.launchpad.net/bugs/1567578

Title:
   libnl should be updated to support up to 63 VFs per single PF

Status in libnl3 package in Ubuntu:
  Fix Released
Status in libnl3 source package in Precise:
  New
Status in libnl3 source package in Trusty:
  In Progress

Bug description:
  [Description]

  Ubuntu 14.04.4 and SRIOV settings.

  As already documented in https://bugs.launchpad.net/mos/+bug/1501738
  there is a bug in the default libnl library release installed on
  Ubuntu 14.04.4

  When trying to enable a guest with more than 30 VFs attached, the
  following error is returned:

  error: Failed to start domain guest1
  error: internal error: missing IFLA_VF_INFO in netlink response

  [Impact]

  The library release is 3.2.21-1, and the bug limits the maximum number of
  VFs that can be enabled (to 30) even though the PF supports up to 63 VFs
  in an OpenStack SRIOV configuration.

  [Test Case]

  The sequence to reproduce this bug is:

  1) Edit /etc/default/grub

  GRUB_CMDLINE_LINUX="intel_iommu=on ixgbe.max_vfs=63"

  2) $ sudo update-grub

  ### Reboot the machine.

  3) Check that the virtual functions are available:

  $ sudo lspci|grep -i eth | grep -i virtual | wc -l
  126

  4) Create a KVM guest

  $ sudo uvt-kvm create guest1 release=trusty

  5) List the VF devices :

  $ sudo lspci|grep -i eth | grep -i virtual | awk '{print $1}' | sed
  's/\:/\_/g' | sed 's/\./\_/g' > devices.txt

  6) Get the libvirt node device:

  $ sudo for device in $(cat ./devices.txt); do virsh nodedev-list |
  grep $device; done > pci_devices.txt

  7) Generate the XML config for each device:

  $ sudo mkdir devices && for d in $(cat pci_devices.txt); do virsh
  nodedev-dumpxml $d > devices/$d.xml; done

  8) Save and Run the following script
  (http://pastebin.ubuntu.com/23374186/)

  $ sudo python generate-interfaces.py |grep address | wc -l

  9) Finally attach the devices to the guest.

  $ sudo for i in $(seq 0 63); do virsh attach-device guest1 
./interfaces/$i.xml --config; done
  Device attached successfully
  [...]

  Device attached successfully
  Device attached successfully

  10) Then destroy/start the guest again, at this point the error is
  reproduced.

  $ sudo virsh destroy guest1
  Domain guest1 destroyed

  $ sudo virsh start guest1

  error: Failed to start domain guest1
  error: internal error: missing IFLA_VF_INFO in netlink response

  [Regression Potential]

   ** None identified.

  [Other Info]

  - Redhat Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1040626

  [Workaround]

  The workaround is to install a newer library release, the 3.2.24-2:

  wget 
https://launchpad.net/ubuntu/+archive/primary/+files/libnl-3-200_3.2.24-2_amd64.deb
  wget 
https://launchpad.net/ubuntu/+archive/primary/+files/libnl-genl-3-200_3.2.24-2_amd64.deb
  wget 
https://launchpad.net/ubuntu/+archive/primary/+files/libnl-route-3-200_3.2.24-2_amd64.deb
  dpkg -i libnl-3-200_3.2.24-2_amd64.deb
  dpkg -i libnl-genl-3-200_3.2.24-2_amd64.deb
  dpkg -i libnl-route-3-200_3.2.24-2_amd64.deb

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libnl3/+bug/1567578/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1567578] Re: libnl should be updated to support up to 63 VFs per single PF

2016-10-21 Thread Louis Bouchard
** Also affects: libnl3 (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Also affects: libnl3 (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to libnl3 in Ubuntu.
https://bugs.launchpad.net/bugs/1567578

Title:
   libnl should be updated to support up to 63 VFs per single PF

Status in libnl3 package in Ubuntu:
  Fix Released
Status in libnl3 source package in Precise:
  New
Status in libnl3 source package in Trusty:
  New

Bug description:
  Ubuntu 14.04.4 and SRIOV settings.

  As already documented in https://bugs.launchpad.net/mos/+bug/1501738
  there is a bug in the default libnl library release installed on
  Ubuntu 14.04.4

  The library release is 3.2.21-1, and the bug limits the maximum number of
  VFs that can be enabled (to 30) even though the PF supports up to 63 VFs
  in an OpenStack SRIOV configuration.

  The workaround is to install a newer library release, the 3.2.24-2:

  wget 
https://launchpad.net/ubuntu/+archive/primary/+files/libnl-3-200_3.2.24-2_amd64.deb
  wget 
https://launchpad.net/ubuntu/+archive/primary/+files/libnl-genl-3-200_3.2.24-2_amd64.deb
  wget 
https://launchpad.net/ubuntu/+archive/primary/+files/libnl-route-3-200_3.2.24-2_amd64.deb
  dpkg -i libnl-3-200_3.2.24-2_amd64.deb
  dpkg -i libnl-genl-3-200_3.2.24-2_amd64.deb
  dpkg -i libnl-route-3-200_3.2.24-2_amd64.deb

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libnl3/+bug/1567578/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1438510] Re: [REGRESSION] bluetooth headset no longer supports a2dp

2016-06-14 Thread Louis Bouchard
Testing with a JBL SB400 BT on a Lenovo Thinkpad T450S with the intel
ibt driver on F/W version 37081001103110e23 works correctly.

Doing a parallel test on my laptop (HP evo 850), which has:

[12383.666088] Bluetooth: hci0: read Intel version: 370710018002030d00
[12383.666095] Bluetooth: hci0: Intel Bluetooth firmware file: 
intel/ibt-hw-37.7.10-fw-1.80.2.3.d.bseq

Fails according to the bug report.

So it may be firmware related.
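
For anyone comparing results, the firmware the kernel loaded can be read from
the log the same way (the pattern matches the lines quoted above):

 $ dmesg | grep -i 'Bluetooth: hci0'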

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to pulseaudio in Ubuntu.
https://bugs.launchpad.net/bugs/1438510

Title:
  [REGRESSION] bluetooth headset no longer supports a2dp

Status in bluez package in Ubuntu:
  Confirmed
Status in pulseaudio package in Ubuntu:
  Confirmed
Status in bluez source package in Vivid:
  Confirmed
Status in pulseaudio source package in Vivid:
  Confirmed

Bug description:
  Just installed 15.04 fresh from the latest ISO (beta2).

  I'm bummed to see my bluetooth headset (Bose Soundlink overear) seems
  to have regressed in functionality.

  In 14.10, I was able to set the output profile either to a2dp or
  hsp/hfp (telephony duplex).

  In 15.04, it only works in telephony duplex mode.  I can't get high
  fidelity sound playback to work at all.

  This thread seems to be related, though the workaround within did not solve 
the problem for me:
  https://bbs.archlinux.org/viewtopic.php?id=194006

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/bluez/+bug/1438510/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1554004] Re: Segfault on X startup with VX900

2016-03-14 Thread Louis Bouchard
** Changed in: xserver-xorg-video-openchrome (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: xserver-xorg-video-openchrome (Ubuntu Wily)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to xserver-xorg-video-openchrome in
Ubuntu.
https://bugs.launchpad.net/bugs/1554004

Title:
  Segfault on X startup with VX900

Status in openchrome:
  Fix Released
Status in xserver-xorg-video-openchrome package in Ubuntu:
  Fix Released
Status in xserver-xorg-video-openchrome source package in Trusty:
  In Progress
Status in xserver-xorg-video-openchrome source package in Wily:
  In Progress

Bug description:
  [Impact]

   * Prevents X from being used at all with some VIA chipsets - a segfault
  occurs and is logged in Xorg.log

  [Test Case]

   * Start X on an affected hw (e.g. VX900).

   * Examine Xorg.log after crash.

  [Regression Potential]

   * This is a bug fixed upstream
  (https://cgit.freedesktop.org/openchrome/xf86-video-
  openchrome/commit/?id=ecb1695ac2de1d840c036f64b5b71602e0f522a4).

   * The fix is a one-liner with minimal impact.

  [Other Info]

   * Original bug description:

  There is a segfault in Xorg.log visible when starting X on Trusty
  14.04 with the following hardware:

  00:01.0 VGA compatible controller [0300]: VIA Technologies, Inc. VX900 
Graphics [Chrome9 HD] [1106:7122] (prog-if 00 [VGA controller])
  Subsystem: Gigabyte Technology Co., Ltd Device [1458:d000]
  Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx-
  Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-

To manage notifications about this bug go to:
https://bugs.launchpad.net/openchrome/+bug/1554004/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1554004] Re: Segfault on X startup with VX900

2016-03-10 Thread Louis Bouchard
** Also affects: xserver-xorg-video-openchrome (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: xserver-xorg-video-openchrome (Ubuntu Wily)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to xserver-xorg-video-openchrome in
Ubuntu.
https://bugs.launchpad.net/bugs/1554004

Title:
  Segfault on X startup with VX900

Status in openchrome:
  Confirmed
Status in xserver-xorg-video-openchrome package in Ubuntu:
  Fix Released
Status in xserver-xorg-video-openchrome source package in Trusty:
  New
Status in xserver-xorg-video-openchrome source package in Wily:
  New

Bug description:
  [Impact]

   * Prevents X from being used at all with some VIA chipsets - a segfault
  occurs and is logged in Xorg.log

  [Test Case]

   * Start X on an affected hw (e.g. VX900).

   * Examine Xorg.log after crash.

  [Regression Potential]

   * This is a bug fixed upstream
  (https://cgit.freedesktop.org/openchrome/xf86-video-
  openchrome/commit/?id=ecb1695ac2de1d840c036f64b5b71602e0f522a4).

   * The fix is a one-liner with minimal impact.

  [Other Info]

   * Original bug description:

  There is a segfault in Xorg.log visible when starting X on Trusty
  14.04 with the following hardware:

  00:01.0 VGA compatible controller [0300]: VIA Technologies, Inc. VX900 
Graphics [Chrome9 HD] [1106:7122] (prog-if 00 [VGA controller])
  Subsystem: Gigabyte Technology Co., Ltd Device [1458:d000]
  Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx-
  Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-

To manage notifications about this bug go to:
https://bugs.launchpad.net/openchrome/+bug/1554004/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1538724] Re: GraphicsCriticalError: |[0][GFX1]: Unknown cairo format 3

2016-02-01 Thread Louis Bouchard
** Also affects: firefox (Ubuntu Precise)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to firefox in Ubuntu.
https://bugs.launchpad.net/bugs/1538724

Title:
  GraphicsCriticalError: |[0][GFX1]: Unknown cairo format 3

Status in Mozilla Firefox:
  Fix Released
Status in firefox package in Ubuntu:
  Triaged
Status in firefox source package in Precise:
  New
Status in firefox source package in Trusty:
  Triaged
Status in firefox source package in Wily:
  Triaged

Bug description:
  [Impact]

   * In some configurations (like a VM connected via VNC), Firefox cannot be
  used at all (it crashes on launch).

  [Test Case]

   * Setup an Ubuntu VM with a VNC server
   * Connect to it with a vnc client
   * Launch the default Firefox version (44.0+build3)

  Expected result: Firefox is launched
  Actual result: Firefox crashes and Mozilla crash report tool appears

  [Regression Potential]

   * Very low - backport of upstream fix.

  [Other Info]
   
   * Original bug description:

  After installing Firefox, I'm receiving the following crash:

  Add-ons: 
ubufox%40ubuntu.com:3.2,%7B972ce4c6-7e08-4474-a285-3208198ce6fd%7D:44.0
  BuildID: 20160126223146
  CrashTime: 1453922984
  EMCheckCompatibility: true
  EventLoopNestingLevel: 1
  FramePoisonBase: 70dea000
  FramePoisonSize: 4096
  GraphicsCriticalError: |[0][GFX1]: Unknown cairo format 3
  InstallTime: 1453922031
  Notes: OpenGL: Mesa project: www.mesa3d.org -- Mesa GLX Indirect -- 1.3 Mesa 
4.0.4 -- texture_from_pixmap

  ProductID: {ec8030f7-c20a-464f-9b0e-13a3a9e97384}
  ProductName: Firefox
  ReleaseChannel: release
  SafeMode: 0
  SecondsSinceLastCrash: 285
  StartupTime: 1453922983
  TelemetryEnvironment: 
{"build":{"applicationId":"{ec8030f7-c20a-464f-9b0e-13a3a9e97384}","applicationName":"Firefox","architecture":"x86-64","buildId":"20160126223146","version":"44.0","vendor":"Mozilla","platformVersion":"44.0","xpcomAbi":"x86_64-gcc3","hotfixVersion":null},"partner":{"distributionId":"canonical","distributionVersion":"1.0","partnerId":null,"distributor":null,"distributorChannel":null,"partnerNames":[]},"system":{"memoryMB":64454,"virtualMaxMB":null,"cpu":{"count":8,"cores":4,"vendor":"GenuineIntel","family":6,"model":62,"stepping":4,"l2cacheKB":null,"l3cacheKB":10240,"speedMHz":null,"extensions":["hasMMX","hasSSE","hasSSE2","hasSSE3","hasSSSE3","hasSSE4_1","hasSSE4_2"]},"os":{"name":"Linux","version":"3.14.32--grs-ipv6-64","locale":"en-GB"},"hdd":{"profile":{"model":null,"revision":null},"binary":{"model":null,"revision":null},"system":{"model":null,"revision":null}},"gfx":{"D2DEnabled":null,"DWriteEnabled":null,"adapters":[{"description":"Mesa
 project: www.mesa3d.org -- Mesa GLX Indirect","vendorID":"Mesa project: 
www.mesa3d.org","deviceID":"Mesa GLX 
Indirect","subsysID":null,"RAM":null,"driver":null,"driverVersion":"1.3 Mesa 
4.0.4","driverDate":null,"GPUActive":true}],"monitors":[],"features":{"compositor":"none"}}},"settings":{"blocklistEnabled":true,"e10sEnabled":false,"telemetryEnabled":false,"isInOptoutSample":false,"locale":"en-US","update":{"channel":"release","enabled":true,"autoDownload":true},"userPrefs":{"browser.newtabpage.enhanced":true},"addonCompatibilityCheckEnabled":true,"isDefaultBrowser":false},"profile":{"creationDate":16827},"addons":{"activeAddons":{"ubu...@ubuntu.com":{"blocklisted":false,"description":"Ubuntu
 modifications for Firefox","name":"Ubuntu 
Modifications","userDisabled":false,"appDisabled":false,"version":"3.2","scope":8,"type":"extension","foreignInstall":true,"hasBinaryComponents":false,"installDay":16696,"updateDay":16696,"signedState":2}},"theme":{"id":"{972ce4c6-7e08-4474-a285-3208198ce6fd}","blocklisted":false,"description":"The
 default 
theme.","name":"Default","userDisabled":false,"appDisabled":false,"version":"44.0","scope":4,"foreignInstall":false,"hasBinaryComponents":false,"installDay":16827,"updateDay":16827},"activePlugins":[{"name":"iTunes
 Application Detector","version":"","description":"This plug-in detects the 
presence of iTunes when opening iTunes Store URLs in a web page with 
Firefo","blocklisted":false,"disabled":false,"clicktoplay":true,"mimeTypes":["application/itunes-plugin"],"updateDay":16188}],"activeGMPlugins":{"gmp-gmpopenh264":{"version":null,"userDisabled":false,"applyBackgroundUpdates":1}},"activeExperiment":{},"persona":null}}
  Theme: classic/1.0
  Throttleable: 1
  Vendor: Mozilla
  Version: 44.0
  useragent_locale: en-US

  This report also contains technical information about the state of the
  application when it crashed.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: firefox 44.0+build3-0ubuntu0.14.04.1
  Uname: Linux 3.14.32--grs-ipv6-64 x86_64
  AddonCompatCheckDisabled: False
  ApportVersion: 2.14.1-0ubuntu3.19
  Architecture: amd64
  BuildID: 20160126223146
  CRDA: Error: command ['iw', 'reg', 'get'] failed with exit code 1: nl80211 
not 

[Desktop-packages] [Bug 1510824] Re: PolkitAgentSession ignores multiline output (with pam_vas)

2015-10-28 Thread Louis Bouchard
** Also affects: policykit-1 (Ubuntu Wily)
   Importance: Undecided
   Status: New

** Also affects: policykit-1 (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Also affects: policykit-1 (Ubuntu Vivid)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to policykit-1 in Ubuntu.
https://bugs.launchpad.net/bugs/1510824

Title:
  PolkitAgentSession ignores multiline output (with pam_vas)

Status in policykit-1 package in Ubuntu:
  New
Status in policykit-1 source package in Trusty:
  New
Status in policykit-1 source package in Vivid:
  New
Status in policykit-1 source package in Wily:
  New

Bug description:
  There is an error observed when Ubuntu is configured to perform
  authentication via pam_vas (Vintela Authentication Services by Dell)
  in a disconnected mode (using cached authentication).

  Steps to reproduce:
  1. Configure pam_vas client authenticating to a remote server.
  2. Perform authentication to cache the credentials.
  3. Disconnect from the network where the server is reachable (to force using 
cached information).
  4. Perform an action requiring polkit authentication.

  Expected result:
  Authentication succeeds accompanied by the following message "You have logged 
in using cached account information.  Some network services will be 
unavailable".

  Actual result:
  Authentication fails accompanied by the following message "You have logged in 
using cached account information.  Some network services will be unavailable".

  Probable cause:
  The PolkitAgentSession part of polkit is designed to interpret only 1-line
output, while interaction with pam_vas in the above scenario triggers the
helper to produce the following 2-line output:
  PAM_TEXT_INFO You have logged in using cached account information.  Some 
network services will be unavailable
  SUCCESS

  The 'SUCCESS' part is never read, so the authentication never ends
  successfully.
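
  As a rough illustration (not polkit's actual code), consuming only the first
  line of such a reply is enough to lose the final status token:

  $ printf 'PAM_TEXT_INFO cached login message\nSUCCESS\n' | head -n 1
  PAM_TEXT_INFO cached login message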

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: policykit-1 0.105-4ubuntu2.14.04.1
  ProcVersionSignature: Ubuntu 3.16.0-52.71~14.04.1-generic 3.16.7-ckt18
  Uname: Linux 3.16.0-52-generic x86_64
  NonfreeKernelModules: nvidia zfs zunicode zcommon znvpair zavl
  ApportVersion: 2.14.1-0ubuntu3.18
  Architecture: amd64
  CurrentDesktop: Unity
  Date: Wed Oct 28 09:01:37 2015
  InstallationDate: Installed on 2015-04-13 (197 days ago)
  InstallationMedia: Ubuntu 14.04.2 LTS "Trusty Tahr" - Release amd64 
(20150218.1)
  SourcePackage: policykit-1
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/policykit-1/+bug/1510824/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1244578] Re: lightdm-session runs xrdb with -nocpp option

2015-06-15 Thread Louis Bouchard
** Also affects: lightdm (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Also affects: lightdm (Ubuntu Utopic)
   Importance: Undecided
   Status: New

** Also affects: lightdm (Ubuntu Trusty)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to lightdm in Ubuntu.
https://bugs.launchpad.net/bugs/1244578

Title:
  lightdm-session runs xrdb with -nocpp option

Status in lightdm package in Ubuntu:
  Confirmed
Status in lightdm source package in Trusty:
  In Progress
Status in lightdm source package in Utopic:
  In Progress
Status in lightdm source package in Vivid:
  In Progress

Bug description:
  lightdm-session runs xrdb on the .Xresources file with the -nocpp option
  (lines 37 and 43), which prevents xrdb from preprocessing the
  .Xresources file. Many configurations, like the popular solarized color
  theme (https://github.com/solarized/xresources/blob/master/solarized),
  use this, and you can find complaints about it on the internet:

  https://bbs.archlinux.org/viewtopic.php?id=164108
  https://bugs.launchpad.net/ubuntu/+source/unity/+bug/1163129
  
http://superuser.com/questions/655857/urxvt-uses-pink-instead-of-solarized-until-i-run-xrdb-xresources/656213

  I don't see a reason for not using the preprocessor, and neither did the
  author of Xsession (the option is not used in
  /etc/X11/Xsession.d/30x11-common_xresources).
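
  To illustrate, with an .Xresources that relies on the preprocessor (as the
  solarized theme does), only the cpp-enabled invocation expands the defines;
  colour values below are placeholders:

  # ~/.Xresources fragment:
  #   #define S_base03 #002b36
  #   URxvt*background: S_base03

  $ xrdb -merge ~/.Xresources          # define expanded: background becomes #002b36
  $ xrdb -nocpp -merge ~/.Xresources   # define not expanded: background stays the literal S_base03
  $ xrdb -query | grep background      # shows which value was actually loaded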

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/1244578/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1462356] [NEW] broadband connection establishes then drop after a few seconds

2015-06-05 Thread Louis Bouchard
Public bug reported:

When connecting a cell phone to share the cell network connection, the
link gets established then drops after a few seconds. Doing the same on
Trusty works correctly.

Here is the end of a successful connection on Trusty :
==
Jun  5 14:13:54 trusty NetworkManager[850]: info Activation (usb0) 
successful, device activated.
Jun  5 14:13:54 trusty dbus[756]: [system] Activating service 
name='org.freedesktop.nm_dispatcher' (using servicehelper)
Jun  5 14:13:54 trusty dbus[756]: [system] Successfully activated service 
'org.freedesktop.nm_dispatcher'
Jun  5 14:14:05 trusty ModemManager[769]: info  Creating modem with plugin 
'Generic' and '2' ports
Jun  5 14:14:05 trusty ModemManager[769]: info  Modem for device at 
'/sys/devices/pci:00/:00:1d.0/usb2/2-1/2-1.4' successfully created
Jun  5 14:14:09 trusty ModemManager[769]: warn  couldn't load Manufacturer: 
'Serial command timed out'
Jun  5 14:14:11 trusty NetworkManager[850]: info (usb0): IP6 addrconf timed 
out or failed.
Jun  5 14:14:11 trusty NetworkManager[850]: info Activation (usb0) Stage 4 of 
5 (IPv6 Configure Timeout) scheduled...
Jun  5 14:14:11 trusty NetworkManager[850]: info Activation (usb0) Stage 4 of 
5 (IPv6 Configure Timeout) started...
Jun  5 14:14:11 trusty NetworkManager[850]: info Activation (usb0) Stage 4 of 
5 (IPv6 Configure Timeout) complete.
Jun  5 14:14:12 trusty ModemManager[769]: warn  couldn't load Model: 'Serial 
command timed out'
Jun  5 14:14:13 trusty ntpdate[2828]: no server suitable for synchronization 
found
Jun  5 14:14:15 trusty ModemManager[769]: warn  couldn't load Revision: 
'Serial command timed out'
Jun  5 14:14:18 trusty ModemManager[769]: warn  couldn't load Equipment 
Identifier: 'Serial command timed out'
Jun  5 14:14:22 trusty ModemManager[769]: warn  couldn't load IMEI: 'Serial 
command timed out'
Jun  5 14:14:22 trusty ModemManager[769]: info  Modem: state changed (unknown 
- disabled)
Jun  5 14:14:22 trusty NetworkManager[850]: warn (ttyACM0): failed to look up 
interface index
Jun  5 14:14:22 trusty NetworkManager[850]: info (ttyACM0): new Broadband 
device (driver: 'cdc_acm, cdc_ether' ifindex: 0)
Jun  5 14:14:22 trusty NetworkManager[850]: info (ttyACM0): exported as 
/org/freedesktop/NetworkManager/Devices/3
Jun  5 14:14:22 trusty NetworkManager[850]: info (ttyACM0): device state 
change: unmanaged - unavailable (reason 'managed') [10 20 2]
Jun  5 14:14:22 trusty NetworkManager[850]: info (ttyACM0): deactivating 
device (reason 'managed') [2]
Jun  5 14:14:22 trusty NetworkManager[850]: info (ttyACM0): device state 
change: unavailable - disconnected (reason 'none') [20 30 0]
Jun  5 14:14:58 trusty wpa_supplicant[1063]: wlan0: CTRL-EVENT-SCAN-STARTED   
Jun  5 14:15:02 trusty wpa_supplicant[1063]: nl80211: 
send_and_recv-nl_recvmsgs failed: -33

A connection using the same cell phone on Wily has this log :

Jun  5 14:20:30 wily NetworkManager[836]: info Activation (usb0) successful, 
device activated.
Jun  5 14:20:30 wily dbus[897]: [system] Activating via systemd: service 
name='org.freedesktop.nm_dispatcher' 
unit='dbus-org.freedesktop.nm-dispatcher.service'
Jun  5 14:20:30 wily systemd[1]: Starting Network Manager Script Dispatcher 
Service...
Jun  5 14:20:30 wily NetworkManager[836]: warn dnsmasq appeared on DBus: :1.81
Jun  5 14:20:30 wily NetworkManager[836]: info Writing DNS information to 
/sbin/resolvconf
Jun  5 14:20:30 wily dnsmasq[2869]: configuration des serveurs amonts à partir 
de DBus
Jun  5 14:20:30 wily dnsmasq[2869]: utilise le serveur de nom 192.168.42.129#53
Jun  5 14:20:30 wily dbus[897]: [system] Successfully activated service 
'org.freedesktop.nm_dispatcher'
Jun  5 14:20:30 wily systemd[1]: Started Network Manager Script Dispatcher 
Service.
Jun  5 14:20:30 wily nm-dispatcher: Dispatching action 'up' for usb0   
Jun  5 14:20:30 wily NetworkManager[836]: status: Impossible de se connecter à 
Upstart: Failed to connect to socket /com/ubuntu/upstart: Connexion refusée
Jun  5 14:20:30 wily systemd[1]: Stopping LSB: Start NTP daemon... 
Jun  5 14:20:30 wily ntp[3063]: * Stopping NTP server ntpd 
Jun  5 14:20:30 wily ntpd[1045]: ntpd exiting on signal 15 
Jun  5 14:20:30 wily ntp[3063]: ...done.   
Jun  5 14:20:30 wily systemd[1]: Stopped LSB: Start NTP daemon.
Jun  5 14:20:30 wily systemd[1]: Reloaded OpenBSD Secure Shell server. 
Jun  5 14:20:30 wily ntpdate[3075]: name server cannot be used: Temporary 
failure in name resolution (-3)
Jun  5 14:20:30 wily systemd[1]: Starting LSB: Start NTP daemon... 
Jun  5 14:20:30 wily ntp[3138]: * Starting NTP server ntpd 
Jun  5 14:20:30 wily ntpd[3149]: ntpd 4.2.6p5@1.2349-o Mon Apr 13 17:00:14 UTC 
2015 (1)
Jun  5 14:20:30 wily ntpd[3158]: proto: precision = 0.171 usec   

[Desktop-packages] [Bug 1462356] Re: broadband connection establishes then drop after a few seconds

2015-06-05 Thread Louis Bouchard
** Changed in: network-manager (Ubuntu)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to network-manager in Ubuntu.
https://bugs.launchpad.net/bugs/1462356

Title:
  broadband connection establishes then drop after a few seconds

Status in network-manager package in Ubuntu:
  Triaged

Bug description:
  When connecting a cell phone to share the cell network connection, the
  link gets established then drops after a few seconds. Doing the same
  on Trusty works correctly.

  Here is the end of a successful connection on Trusty :
  ==
  Jun  5 14:13:54 trusty NetworkManager[850]: info Activation (usb0) 
successful, device activated.
  Jun  5 14:13:54 trusty dbus[756]: [system] Activating service 
name='org.freedesktop.nm_dispatcher' (using servicehelper)
  Jun  5 14:13:54 trusty dbus[756]: [system] Successfully activated service 
'org.freedesktop.nm_dispatcher'
  Jun  5 14:14:05 trusty ModemManager[769]: info  Creating modem with plugin 
'Generic' and '2' ports
  Jun  5 14:14:05 trusty ModemManager[769]: info  Modem for device at 
'/sys/devices/pci:00/:00:1d.0/usb2/2-1/2-1.4' successfully created
  Jun  5 14:14:09 trusty ModemManager[769]: warn  couldn't load Manufacturer: 
'Serial command timed out'
  Jun  5 14:14:11 trusty NetworkManager[850]: info (usb0): IP6 addrconf timed 
out or failed.
  Jun  5 14:14:11 trusty NetworkManager[850]: info Activation (usb0) Stage 4 
of 5 (IPv6 Configure Timeout) scheduled...
  Jun  5 14:14:11 trusty NetworkManager[850]: info Activation (usb0) Stage 4 
of 5 (IPv6 Configure Timeout) started...
  Jun  5 14:14:11 trusty NetworkManager[850]: info Activation (usb0) Stage 4 
of 5 (IPv6 Configure Timeout) complete.
  Jun  5 14:14:12 trusty ModemManager[769]: warn  couldn't load Model: 
'Serial command timed out'
  Jun  5 14:14:13 trusty ntpdate[2828]: no server suitable for synchronization 
found
  Jun  5 14:14:15 trusty ModemManager[769]: warn  couldn't load Revision: 
'Serial command timed out'
  Jun  5 14:14:18 trusty ModemManager[769]: warn  couldn't load Equipment 
Identifier: 'Serial command timed out'
  Jun  5 14:14:22 trusty ModemManager[769]: warn  couldn't load IMEI: 'Serial 
command timed out'
  Jun  5 14:14:22 trusty ModemManager[769]: info  Modem: state changed 
(unknown - disabled)
  Jun  5 14:14:22 trusty NetworkManager[850]: warn (ttyACM0): failed to look 
up interface index
  Jun  5 14:14:22 trusty NetworkManager[850]: info (ttyACM0): new Broadband 
device (driver: 'cdc_acm, cdc_ether' ifindex: 0)
  Jun  5 14:14:22 trusty NetworkManager[850]: info (ttyACM0): exported as 
/org/freedesktop/NetworkManager/Devices/3
  Jun  5 14:14:22 trusty NetworkManager[850]: info (ttyACM0): device state 
change: unmanaged - unavailable (reason 'managed') [10 20 2]
  Jun  5 14:14:22 trusty NetworkManager[850]: info (ttyACM0): deactivating 
device (reason 'managed') [2]
  Jun  5 14:14:22 trusty NetworkManager[850]: info (ttyACM0): device state 
change: unavailable - disconnected (reason 'none') [20 30 0]
  Jun  5 14:14:58 trusty wpa_supplicant[1063]: wlan0: CTRL-EVENT-SCAN-STARTED   
  Jun  5 14:15:02 trusty wpa_supplicant[1063]: nl80211: 
send_and_recv-nl_recvmsgs failed: -33

  A connection using the same cell phone on Wily has this log :
  
  Jun  5 14:20:30 wily NetworkManager[836]: info Activation (usb0) 
successful, device activated.
  Jun  5 14:20:30 wily dbus[897]: [system] Activating via systemd: service 
name='org.freedesktop.nm_dispatcher' 
unit='dbus-org.freedesktop.nm-dispatcher.service'
  Jun  5 14:20:30 wily systemd[1]: Starting Network Manager Script Dispatcher 
Service...
  Jun  5 14:20:30 wily NetworkManager[836]: warn dnsmasq appeared on DBus: 
:1.81
  Jun  5 14:20:30 wily NetworkManager[836]: info Writing DNS information to 
/sbin/resolvconf
  Jun  5 14:20:30 wily dnsmasq[2869]: configuration des serveurs amonts à 
partir de DBus
  Jun  5 14:20:30 wily dnsmasq[2869]: utilise le serveur de nom 
192.168.42.129#53
  Jun  5 14:20:30 wily dbus[897]: [system] Successfully activated service 
'org.freedesktop.nm_dispatcher'
  Jun  5 14:20:30 wily systemd[1]: Started Network Manager Script Dispatcher 
Service.
  Jun  5 14:20:30 wily nm-dispatcher: Dispatching action 'up' for usb0  
 
  Jun  5 14:20:30 wily NetworkManager[836]: status: Impossible de se connecter 
à Upstart: Failed to connect to socket /com/ubuntu/upstart: Connexion refusée
  Jun  5 14:20:30 wily systemd[1]: Stopping LSB: Start NTP daemon...
 
  Jun  5 14:20:30 wily ntp[3063]: * Stopping NTP server ntpd
 
  Jun  5 14:20:30 wily ntpd[1045]: ntpd exiting on signal 15
 
  Jun  5 14:20:30 wily ntp[3063]: ...done.  
 
  Jun  5 14:20:30 wily systemd[1]: Stopped LSB: Start NTP daemon.   
 
  Jun  5 14:20:30 wily systemd[1]: 

[Desktop-packages] [Bug 1424652] [NEW] deja-dup 32.0-0ubuntu3 fails to build following change in Build-dep

2015-02-23 Thread Louis Bouchard
Public bug reported:

the package fails to build with the following :

95/95 Test  #3: validate-deja-dup.appdata.xml ***Failed  252.85 sec
THIS TOOL IS *DEPRECATED* AND WILL BE REMOVED SOON.
Please use 'apstream-util validate' in appstream-glib.

/build/buildd/deja-dup-32.0/obj-x86_64-linux-gnu/deja-dup/deja-dup.appdata.xml 
2 problems detected:
? url-not-found : screenshot url not found 
[https://launchpad.net/deja-dup/32/32.0/+download/screenshot-1.png]
? url-not-found : screenshot url not found 
[https://launchpad.net/deja-dup/32/32.0/+download/screenshot-2.png]


99% tests passed, 1 tests failed out of 95

apstream-util validate is deprecated (though it still runs) and is
replaced by appstream-util validate-relax
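
For the record, the replacement invocation on the generated file would be
something like (path shortened here):

 $ appstream-util validate-relax deja-dup.appdata.xml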

** Affects: deja-dup (Ubuntu)
 Importance: Low
 Status: Confirmed

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to deja-dup in Ubuntu.
https://bugs.launchpad.net/bugs/1424652

Title:
  deja-dup 32.0-0ubuntu3 fails to build following change in Build-dep

Status in deja-dup package in Ubuntu:
  Confirmed

Bug description:
  the package fails to build with the following :

  95/95 Test  #3: validate-deja-dup.appdata.xml ***Failed  252.85 
sec
  THIS TOOL IS *DEPRECATED* AND WILL BE REMOVED SOON.
  Please use 'apstream-util validate' in appstream-glib.

  
/build/buildd/deja-dup-32.0/obj-x86_64-linux-gnu/deja-dup/deja-dup.appdata.xml 
2 problems detected:
  ? url-not-found : screenshot url not found 
[https://launchpad.net/deja-dup/32/32.0/+download/screenshot-1.png]
  ? url-not-found : screenshot url not found 
[https://launchpad.net/deja-dup/32/32.0/+download/screenshot-2.png]

  
  99% tests passed, 1 tests failed out of 95

  apstream-util validate is deprecated (though it still runs) and is
  replaced by appstream-util validate-relax

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1424652/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1424652] Re: deja-dup 32.0-0ubuntu3 fails to build following change in Build-dep

2015-02-23 Thread Louis Bouchard
** Patch added: lp1424652_appdata_ftbs.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1424652/+attachment/4325491/+files/lp1424652_appdata_ftbs.debdiff

** Changed in: deja-dup (Ubuntu)
   Status: New => Confirmed

** Changed in: deja-dup (Ubuntu)
   Importance: Undecided => Low

** Description changed:

  the package fails to build with the following :
  
  95/95 Test  #3: validate-deja-dup.appdata.xml ***Failed  252.85 
sec
  THIS TOOL IS *DEPRECATED* AND WILL BE REMOVED SOON.
  Please use 'apstream-util validate' in appstream-glib.
  
  
/build/buildd/deja-dup-32.0/obj-x86_64-linux-gnu/deja-dup/deja-dup.appdata.xml 
2 problems detected:
  ? url-not-found : screenshot url not found 
[https://launchpad.net/deja-dup/32/32.0/+download/screenshot-1.png]
  ? url-not-found : screenshot url not found 
[https://launchpad.net/deja-dup/32/32.0/+download/screenshot-2.png]
  
- 
  99% tests passed, 1 tests failed out of 95
  
- apstream-util validate is deprecated (though it still runs) and is
- replaced by appstream-util validate-relax
+ appdata-validate is deprecated (though it still runs) and is replaced by
+ appstream-util validate-relax

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to deja-dup in Ubuntu.
https://bugs.launchpad.net/bugs/1424652

Title:
  deja-dup 32.0-0ubuntu3 fails to build following change in Build-dep

Status in deja-dup package in Ubuntu:
  Confirmed

Bug description:
  the package fails to build with the following :

  95/95 Test  #3: validate-deja-dup.appdata.xml ***Failed  252.85 
sec
  THIS TOOL IS *DEPRECATED* AND WILL BE REMOVED SOON.
  Please use 'apstream-util validate' in appstream-glib.

  
/build/buildd/deja-dup-32.0/obj-x86_64-linux-gnu/deja-dup/deja-dup.appdata.xml 
2 problems detected:
  ? url-not-found : screenshot url not found 
[https://launchpad.net/deja-dup/32/32.0/+download/screenshot-1.png]
  ? url-not-found : screenshot url not found 
[https://launchpad.net/deja-dup/32/32.0/+download/screenshot-2.png]

  99% tests passed, 1 tests failed out of 95

  appdata-validate is deprecated (though it still runs) and is replaced
  by appstream-util validate-relax

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1424652/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1424652] Re: deja-dup 32.0-0ubuntu3 fails to build following change in Build-dep

2015-02-23 Thread Louis Bouchard
For some obscure reason, a local build and/or a PPA build works, but the
package fails to build (FTBFS) in the archive.

The following debdiff should fix the test for good and use the new
appstream-util validate-relax syntax as outlined here:

http://blogs.gnome.org/hughsie/2014/10/30/appdata-tools-is-dead/
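
For reference, the replacement validator can also be run by hand against
the generated appdata file (a sketch only: the path is the one from the
failing test output above, and the exact invocation may differ):

  $ appstream-util validate-relax \
      /build/buildd/deja-dup-32.0/obj-x86_64-linux-gnu/deja-dup/deja-dup.appdata.xml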

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to deja-dup in Ubuntu.
https://bugs.launchpad.net/bugs/1424652

Title:
  deja-dup 32.0-0ubuntu3 fails to build following change in Build-dep

Status in deja-dup package in Ubuntu:
  Confirmed

Bug description:
  the package fails to build with the following :

  95/95 Test  #3: validate-deja-dup.appdata.xml ***Failed  252.85 
sec
  THIS TOOL IS *DEPRECATED* AND WILL BE REMOVED SOON.
  Please use 'apstream-util validate' in appstream-glib.

  
/build/buildd/deja-dup-32.0/obj-x86_64-linux-gnu/deja-dup/deja-dup.appdata.xml 
2 problems detected:
  ? url-not-found : screenshot url not found 
[https://launchpad.net/deja-dup/32/32.0/+download/screenshot-1.png]
  ? url-not-found : screenshot url not found 
[https://launchpad.net/deja-dup/32/32.0/+download/screenshot-2.png]

  99% tests passed, 1 tests failed out of 95

  appdata-validate is deprecated (though it still runs) and is replaced
  by appstream-util validate-relax

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1424652/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1352809] Re: /usr/bin/lp on Trusty using -h option doesn't work as expected

2015-02-12 Thread Louis Bouchard
Verified for both Utopic and Trusty. Thanks

** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1352809

Title:
  /usr/bin/lp on Trusty using -h option doesn't work as expected

Status in cups package in Ubuntu:
  Fix Released
Status in cups source package in Trusty:
  Fix Committed
Status in cups source package in Utopic:
  Fix Committed

Bug description:
  [SRU justification]
  The -h option should override the value stored in the CUPS_SERVER environment 
variable

  [Impact]
  Without this fix, user is unable to define the printserver to use with the -h 
option

  [Fix]
  Verify if the -h option has defined the printserver before using CUPS_SERVER 
to define it.

  [Test Case]
  $ export CUPS_SERVER=blah
  $ lp -h {valid printserver} -d {printqueue} /etc/hosts

  Without the fix :
  lp: No such file or directory

  With the fix :
  request id is {printqueue}-37 (1 file(s))

  [Regression]
  None expected. New code path (from upstream patch) is only triggered if
  -h option is used.

  [Original description of the problem]

  1) The release of Ubuntu you are using, via 'lsb_release -rd' or
  System - About Ubuntu

  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty

  2) The version of the package you are using, via 'apt-cache policy
  pkgname' or by checking in Software Center

  cups-client:
    Installed: 1.7.2-0ubuntu1.1

  3) What you expected to happen

  When using lp -h to send a print job to a printer, I expected the
  job to be sent to the printer and to override any env variables set or
  conf file setting

  4) What happened instead

  To summarise the behaviour I'm seeing

  1) with CUPS_SERVER env variable unset and no ServerName set in 
/etc/cups/client.conf, we see that using lp -h with hostnames gives us error 
lp: Error - add '/version=1.1' to server name.
  Using IPs sends the job to the printer as expected.

  2) with CUPS_SERVER env variable set using hostname with/without port
  and no ServerName set in /etc/cups/client.conf, we see that the only
  time it sends a job is when an IP and port are used. Otherwise it
  seems to error. I believe it's using the env variable, even though -h
  is being used.

  3) with CUPS_SERVER env variable set using IP address with/without
  port and no ServerName set in /etc/cups/client.conf we see that the
  job is always sent using lp -h but it is always sent to the value in
  env variable, thus ignoring the command line.

  ===

  with CUPS_SERVER env variable unset and no ServerName option set in
  /etc/cups/client.conf

  $ lp test2.txt
  lp: Error - scheduler not responding. (expected as no server is set or 
specified)

  Using hostname
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  request id is PDF-54 (1 file(s))

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-55 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server1

  $ lp test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using hostname
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-56 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server:631

  Same results as above

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8:631

  $ lp -h 555.555.555.555 test2.txt
  request id is PDF-66 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8

  Same results as above

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1307413] Re: apport-unpack requires too much main memory to run

2015-02-09 Thread Louis Bouchard
** Also affects: apport (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Changed in: apport (Ubuntu Precise)
   Status: New => Confirmed

** Changed in: apport (Ubuntu Precise)
   Importance: Undecided => Medium

** Changed in: apport (Ubuntu Precise)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to apport in Ubuntu.
https://bugs.launchpad.net/bugs/1307413

Title:
  apport-unpack requires too much main memory to run

Status in apport package in Ubuntu:
  Fix Released
Status in apport source package in Precise:
  Invalid

Bug description:
  when running apport-unpack on large apport reports (linux-image kernel
  dumps is a good example), it requires an enormous amount of main
  memory to run.

  An example is a 1.3Gb apport report that runs for more than 24 hours
  with more than 4Gb of RSS.

  The command should require less memory to extract those big reports.
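
  As an illustration only (the file name below is a placeholder), the
  top-level fields of such a report can at least be listed without
  unpacking it, assuming the usual one-'Key:'-entry-per-field layout of
  .crash files:

  $ ls -lh /var/crash/big-report.crash
  $ grep -E '^[A-Za-z0-9._-]+:' /var/crash/big-report.crash | cut -d: -f1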

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apport/+bug/1307413/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1307413] Re: apport-unpack requires too much main memory to run

2015-02-09 Thread Louis Bouchard
After discussion with upstream, doing the SRU to Precise for such a
corner case is a waste of time. The Vivid package is available if the
change is _really_ needed.

Marking the precise task invalid

** Changed in: apport (Ubuntu Precise)
   Status: Confirmed => Invalid

** Changed in: apport (Ubuntu Precise)
 Assignee: Louis Bouchard (louis-bouchard) => (unassigned)

** Changed in: apport (Ubuntu)
 Assignee: Louis Bouchard (louis-bouchard) => (unassigned)

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to apport in Ubuntu.
https://bugs.launchpad.net/bugs/1307413

Title:
  apport-unpack requires too much main memory to run

Status in apport package in Ubuntu:
  Fix Released
Status in apport source package in Precise:
  Invalid

Bug description:
  when running apport-unpack on large apport reports (linux-image kernel
  dumps is a good example), it requires an enormous amount of main
  memory to run.

  An example is a 1.3Gb apport report that runs for more than 24 hours
  with more than 4Gb of RSS.

  The command should require less memory to extract those big reports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apport/+bug/1307413/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1352809] Re: /usr/bin/lp on Trusty using -h option doesn't work as expected

2015-02-09 Thread Louis Bouchard
** Changed in: cups (Ubuntu Trusty)
   Status: Confirmed => In Progress

** Changed in: cups (Ubuntu Utopic)
   Status: Confirmed => In Progress

** Changed in: cups (Ubuntu Trusty)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Changed in: cups (Ubuntu Utopic)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1352809

Title:
  /usr/bin/lp on Trusty using -h option doesn't work as expected

Status in cups package in Ubuntu:
  Fix Released
Status in cups source package in Trusty:
  In Progress
Status in cups source package in Utopic:
  In Progress

Bug description:
  1) The release of Ubuntu you are using, via 'lsb_release -rd' or
  System - About Ubuntu

  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty

  
  2) The version of the package you are using, via 'apt-cache policy pkgname' 
or by checking in Software Center

  cups-client:
Installed: 1.7.2-0ubuntu1.1

  3) What you expected to happen

  When using lp -h to send a print job to a printer, I expected the
  job to be sent to the printer and to override any env variables set or
  conf file setting

  4) What happened instead

  To summarise the behaviour I'm seeing

  1) with CUPS_SERVER env variable unset and no ServerName set in 
/etc/cups/client.conf, we see that using lp -h with hostnames gives us error 
lp: Error - add '/version=1.1' to server name.
  Using IPs sends the job to the printer as expected.

  2) with CUPS_SERVER env variable set using hostname with/without port
  and no ServerName set in /etc/cups/client.conf, we see that the only
  time it sends a job is when an IP and port are used. Otherwise it
  seems to error. I believe it's using the env variable, even though -h
  is being used.

  3) with CUPS_SERVER env variable set using IP address with/without
  port and no ServerName set in /etc/cups/client.conf we see that the
  job is always sent using lp -h but it is always sent to the value in
  env variable, thus ignoring the command line.

  ===

  with CUPS_SERVER env variable unset and no ServerName option set in
  /etc/cups/client.conf

  $ lp test2.txt
  lp: Error - scheduler not responding. (expected as no server is set or 
specified)

  Using hostname
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  request id is PDF-54 (1 file(s))

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-55 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server1

  $ lp test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using hostname
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-56 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server:631

  Same results as above

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8:631

  $ lp -h 555.555.555.555 test2.txt
  request id is PDF-66 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8

  Same results as above

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1352809] Re: /usr/bin/lp on Trusty using -h option doesn't work as expected

2015-02-09 Thread Louis Bouchard
debdiff for trusty's SRU

** Patch added: lp1352809_option_override_trusty.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+attachment/4315357/+files/lp1352809_option_override_trusty.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1352809

Title:
  /usr/bin/lp on Trusty using -h option doesn't work as expected

Status in cups package in Ubuntu:
  Fix Released
Status in cups source package in Trusty:
  In Progress
Status in cups source package in Utopic:
  In Progress

Bug description:
  [SRU justification]
  The -h option should override the value stored in the CUPS_SERVER environment 
variable

  [Impact]
  Without this fix, user is unable to define the printserver to use with the -h 
option

  [Fix]
  Verify if the -h option has defined the printserver before using CUPS_SERVER 
to define it.

  [Test Case]
  $ export CUPS_SERVER=blah
  $ lp -h {valid printserver} -d {printqueue} /etc/hosts

  Without the fix :
  lp: No such file or directory

  With the fix :
  request id is {printqueue}-37 (1 file(s))

  [Regression]
  None expected. New code path (from upstream patch) is only triggered if
  -h option is used.

  [Original description of the problem]

  1) The release of Ubuntu you are using, via 'lsb_release -rd' or
  System - About Ubuntu

  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty

  2) The version of the package you are using, via 'apt-cache policy
  pkgname' or by checking in Software Center

  cups-client:
    Installed: 1.7.2-0ubuntu1.1

  3) What you expected to happen

  When using lp -h to send a print job to a printer, I expected the
  job to be sent to the printer and to override any env variables set or
  conf file setting

  4) What happened instead

  To summarise the behaviour I'm seeing

  1) with CUPS_SERVER env variable unset and no ServerName set in 
/etc/cups/client.conf, we see that using lp -h with hostnames gives us error 
lp: Error - add '/version=1.1' to server name.
  Using IPs sends the job to the printer as expected.

  2) with CUPS_SERVER env variable set using hostname with/without port
  and no ServerName set in /etc/cups/client.conf, we see that the only
  time it sends a job is when an IP and port are used. Otherwise it
  seems to error. I believe it's using the env variable, even though -h
  is being used.

  3) with CUPS_SERVER env variable set using IP address with/without
  port and no ServerName set in /etc/cups/client.conf we see that the
  job is always sent using lp -h but it is always sent to the value in
  env variable, thus ignoring the command line.

  ===

  with CUPS_SERVER env variable unset and no ServerName option set in
  /etc/cups/client.conf

  $ lp test2.txt
  lp: Error - scheduler not responding. (expected as no server is set or 
specified)

  Using hostname
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  request id is PDF-54 (1 file(s))

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-55 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server1

  $ lp test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using hostname
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-56 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server:631

  Same results as above

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8:631

  $ lp -h 555.555.555.555 test2.txt
  request id is PDF-66 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8

  Same results as above

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1352809] Re: /usr/bin/lp on Trusty using -h option doesn't work as expected

2015-02-09 Thread Louis Bouchard
debdiff for Utopic

** Patch added: lp1352809_option_override_utopic.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+attachment/4315358/+files/lp1352809_option_override_utopic.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1352809

Title:
  /usr/bin/lp on Trusty using -h option doesn't work as expected

Status in cups package in Ubuntu:
  Fix Released
Status in cups source package in Trusty:
  In Progress
Status in cups source package in Utopic:
  In Progress

Bug description:
  [SRU justification]
  The -h option should override the value stored in the CUPS_SERVER environment 
variable

  [Impact]
  Without this fix, user is unable to define the printserver to use with the -h 
option

  [Fix]
  Verify if the -h option has defined the printserver before using CUPS_SERVER 
to define it.

  [Test Case]
  $ export CUPS_SERVER=blah
  $ lp -h {valid printserver} -d {printqueue} /etc/hosts

  Without the fix :
  lp: No such file or directory

  With the fix :
  request id is {printqueue}-37 (1 file(s))

  [Regression]
  None expected. New code path (from upstream patch) is only triggered if
  -h option is used.

  [Original description of the problem]

  1) The release of Ubuntu you are using, via 'lsb_release -rd' or
  System - About Ubuntu

  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty

  2) The version of the package you are using, via 'apt-cache policy
  pkgname' or by checking in Software Center

  cups-client:
    Installed: 1.7.2-0ubuntu1.1

  3) What you expected to happen

  When using lp -h to send a print job to a printer, I expected the
  job to be sent to the printer and to override any env variables set or
  conf file setting

  4) What happened instead

  To summarise the behaviour I'm seeing

  1) with CUPS_SERVER env variable unset and no ServerName set in 
/etc/cups/client.conf, we see that using lp -h with hostnames gives us error 
lp: Error - add '/version=1.1' to server name.
  Using IPs sends the job to the printer as expected.

  2) with CUPS_SERVER env variable set using hostname with/without port
  and no ServerName set in /etc/cups/client.conf, we see that the only
  time it sends a job is when an IP and port are used. Otherwise it
  seems to error. I believe it's using the env variable, even though -h
  is being used.

  3) with CUPS_SERVER env variable set using IP address with/without
  port and no ServerName set in /etc/cups/client.conf we see that the
  job is always sent using lp -h but it is always sent to the value in
  env variable, thus ignoring the command line.

  ===

  with CUPS_SERVER env variable unset and no ServerName option set in
  /etc/cups/client.conf

  $ lp test2.txt
  lp: Error - scheduler not responding. (expected as no server is set or 
specified)

  Using hostname
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  request id is PDF-54 (1 file(s))

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-55 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server1

  $ lp test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using hostname
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-56 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server:631

  Same results as above

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8:631

  $ lp -h 555.555.555.555 test2.txt
  request id is PDF-66 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8

  Same results as above

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1352809] Re: /usr/bin/lp on Trusty using -h option doesn't work as expected

2015-02-09 Thread Louis Bouchard
** Description changed:

+ [SRU justification]
+ The -h option should override the value stored in the CUPS_SERVER environment 
variable
+ 
+ [Impact]
+ Without this fix, user is unable to define the printserver to use with the -h 
option
+ 
+ [Fix]
+ Verify if the -h option has defined the printserver before using CUPS_SERVER 
to define it.
+ 
+ [Test Case]
+ $ export CUPS_SERVER=blah
+ $ lp -h {valid printserver} -d {printqueue} /etc/hosts
+ 
+ Without the fix :
+ lp: No such file or directory
+ 
+ With the fix :
+ request id is {printqueue}-37 (1 file(s))
+ 
+ [Regression]
+ None expected. New code path (from upstream patch) is only triggered if
+ -h option is used.
+ 
+ [Original description of the problem]
+ 
  1) The release of Ubuntu you are using, via 'lsb_release -rd' or System
  - About Ubuntu
  
  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty
  
- 
- 2) The version of the package you are using, via 'apt-cache policy pkgname' 
or by checking in Software Center
+ 2) The version of the package you are using, via 'apt-cache policy
+ pkgname' or by checking in Software Center
  
  cups-client:
-   Installed: 1.7.2-0ubuntu1.1
+   Installed: 1.7.2-0ubuntu1.1
  
  3) What you expected to happen
  
  When using lp -h to send a print job to a printer, I expected to the job
  to be sent to the printer and to override any env variables set or conf
  file setting
  
  4) What happened instead
  
  To summarise the behaviour I'm seeing
  
  1) with CUPS_SERVER env variable unset and no ServerName set in 
/etc/cups/client.conf, we see that using lp -h with hostnames gives us error 
lp: Error - add '/version=1.1' to server name.
  Using IPs sends the job to the printer as expected.
  
  2) with CUPS_SERVER env variable set using hostname with/without port
  and no ServerName set in /etc/cups/client.conf, we see that the only
  time it sends a job is when an IP and port are used. Otherwise it seems
  to error. I believe it's using the env variable, even though -h is being
  used.
  
  3) with CUPS_SERVER env variable set using IP address with/without port
  and no ServerName set in /etc/cups/client.conf we see that the job is
  always sent using lp -h but it is always sent to the value in env
  variable, thus ignoring the command line.
  
  ===
  
  with CUPS_SERVER env variable unset and no ServerName option set in
  /etc/cups/client.conf
  
  $ lp test2.txt
  lp: Error - scheduler not responding. (expected as no server is set or 
specified)
  
  Using hostname
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.
  
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.
  
  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  request id is PDF-54 (1 file(s))
  
  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-55 (1 file(s))
  
  ===
  
  With CUPS_SERVER env variable *set CUPS_SERVER=server1
  
  $ lp test2.txt
  lp: Error - add '/version=1.1' to server name.
  
  Using hostname
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.
  
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.
  
  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  lp: Error - add '/version=1.1' to server name.
  
  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-56 (1 file(s))
  
  ===
  
  With CUPS_SERVER env variable *set CUPS_SERVER=server:631
  
  Same results as above
  
  ===
  
  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8:631
  
  $ lp -h 555.555.555.555 test2.txt
  request id is PDF-66 (1 file(s))
  
  ===
  
  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8
  
  Same results as above

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1352809

Title:
  /usr/bin/lp on Trusty using -h option doesn't work as expected

Status in cups package in Ubuntu:
  Fix Released
Status in cups source package in Trusty:
  In Progress
Status in cups source package in Utopic:
  In Progress

Bug description:
  [SRU justification]
  The -h option should override the value stored in the CUPS_SERVER environment 
variable

  [Impact]
  Without this fix, user is unable to define the printserver to use with the -h 
option

  [Fix]
  Verify if the -h option has defined the printserver before using CUPS_SERVER 
to define it.

  [Test Case]
  $ export CUPS_SERVER=blah
  $ lp -h {valid printserver} -d {printqueue} /etc/hosts

  Without the fix :
  lp: No such file or directory

  With the fix :
  request id is {printqueue}-37 (1 file(s))

  [Regression]
  None expected. New code path (from upstream patch) is only triggered if
  -h option is used.

  [Original description of the problem]

  1) The release of Ubuntu you are using, via 'lsb_release -rd' or
  System - About Ubuntu

  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty

  2) The 

[Desktop-packages] [Bug 1352809] Re: /usr/bin/lp on Trusty using -h option doesn't work as expected

2015-02-04 Thread Louis Bouchard
Analysis of the upstream patch that fixes the problem

The new _cupsSetDefaults() in CUPS v2.1 has been refactored to use a
_cups_client_conf_t structure that holds the user-defined values. This structure
gets populated first by cups_init_client_conf(), which reads
/etc/cups/client.conf and ~/.cups/client.conf if present.

It then runs cups_finalize_client_conf(), which reads the environment
variables into the _cups_client_conf_t structure.

A new check is added to verify whether a value already exists in cg->server
before calling cupsSetServer() on the value collected from the environment
variable, which is stored in the structure.

The original patch is:

  if (!cg->server[0] || !cg->ipp_port)
    cupsSetServer(cc.server_name);

  if (!cg->ipp_port)
  {
    const char  *ipp_port;          /* IPP_PORT environment variable */

    if ((ipp_port = getenv("IPP_PORT")) != NULL)
    {
      if ((cg->ipp_port = atoi(ipp_port)) <= 0)
        cg->ipp_port = CUPS_DEFAULT_IPP_PORT;
    }
    else
      cg->ipp_port = CUPS_DEFAULT_IPP_PORT;
  }

This portion of code is found in 1.7.2 in cups_read_client_conf(),
starting at line 1027.  The server name given to cupsSetServer() is
conditional on the value set in cups_server:

  if ((!cg->server[0] || !cg->ipp_port) && cups_server)
    cupsSetServer(cups_server);

So in order to correctly trigger the server name definition, the
following patch is needed :

Index: cups-1.7.5/cups/usersys.c
===
--- cups-1.7.5.orig/cups/usersys.c  2015-02-04 12:58:39.0 +0100
+++ cups-1.7.5/cups/usersys.c   2015-02-04 13:10:54.062431647 +0100
@@ -891,6 +891,12 @@
 }

/*
+* Check if values have been provided as CLI options
+*/
+if (cg->server[0])
+  cups_server = cg->server;
+
+   /*
 * Read the configuration file and apply any environment variables; both
 * functions handle NULL cups_file_t pointers...
 */
~
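
With a patch along those lines applied, a quick manual check mirroring the
SRU test case should behave as follows (the host and queue names are
placeholders and the request id is only illustrative):

  $ export CUPS_SERVER=blah
  $ lp -h cups.example.com -d myqueue /etc/hosts
  request id is myqueue-1 (1 file(s))

Without the patch, the bogus CUPS_SERVER value still wins and lp fails
instead of contacting the server given with -h.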

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1352809

Title:
  /usr/bin/lp on Trusty using -h option doesn't work as expected

Status in cups package in Ubuntu:
  In Progress
Status in cups source package in Trusty:
  Confirmed
Status in cups source package in Utopic:
  Confirmed

Bug description:
  1) The release of Ubuntu you are using, via 'lsb_release -rd' or
  System - About Ubuntu

  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty

  
  2) The version of the package you are using, via 'apt-cache policy pkgname' 
or by checking in Software Center

  cups-client:
Installed: 1.7.2-0ubuntu1.1

  3) What you expected to happen

  When using lp -h to send a print job to a printer, I expected the
  job to be sent to the printer and to override any env variables set or
  conf file setting

  4) What happened instead

  To summarise the behaviour I'm seeing

  1) with CUPS_SERVER env variable unset and no ServerName set in 
/etc/cups/client.conf, we see that using lp -h with hostnames gives us error 
lp: Error - add '/version=1.1' to server name.
  Using IPs sends the job to the printer as expected.

  2) with CUPS_SERVER env variable set using hostname with/without port
  and no ServerName set in /etc/cups/client.conf, we see that the only
  time it sends a job is when an IP and port are used. Otherwise it
  seems to error. I believe it's using the env variable, even though -h
  is being used.

  3) with CUPS_SERVER env variable set using IP address with/without
  port and no ServerName set in /etc/cups/client.conf we see that the
  job is always sent using lp -h but it is always sent to the value in
  env variable, thus ignoring the command line.

  ===

  with CUPS_SERVER env variable unset and no ServerName option set in
  /etc/cups/client.conf

  $ lp test2.txt
  lp: Error - scheduler not responding. (expected as no server is set or 
specified)

  Using hostname
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  request id is PDF-54 (1 file(s))

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-55 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server1

  $ lp test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using hostname
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-56 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server:631

  Same results as above

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8:631

  $ 

[Desktop-packages] [Bug 1352809] Re: /usr/bin/lp on Trusty using -h option doesn't work as expected

2015-02-04 Thread Louis Bouchard
debdiff for vivid

** Patch added: lp1352809_option_override_vivid.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+attachment/4312133/+files/lp1352809_option_override_vivid.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1352809

Title:
  /usr/bin/lp on Trusty using -h option doesn't work as expected

Status in cups package in Ubuntu:
  In Progress
Status in cups source package in Trusty:
  Confirmed
Status in cups source package in Utopic:
  Confirmed

Bug description:
  1) The release of Ubuntu you are using, via 'lsb_release -rd' or
  System - About Ubuntu

  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty

  
  2) The version of the package you are using, via 'apt-cache policy pkgname' 
or by checking in Software Center

  cups-client:
Installed: 1.7.2-0ubuntu1.1

  3) What you expected to happen

  When using lp -h to send a print job to a printer, I expected the
  job to be sent to the printer and to override any env variables set or
  conf file setting

  4) What happened instead

  To summarise the behaviour I'm seeing

  1) with CUPS_SERVER env variable unset and no ServerName set in 
/etc/cups/client.conf, we see that using lp -h with hostnames gives us error 
lp: Error - add '/version=1.1' to server name.
  Using IPs sends the job to the printer as expected.

  2) with CUPS_SERVER env variable set using hostname with/without port
  and no ServerName set in /etc/cups/client.conf, we see that the only
  time it sends a job is when an IP and port are used. Otherwise it
  seems to error. I believe it's using the env variable, even though -h
  is being used.

  3) with CUPS_SERVER env variable set using IP address with/without
  port and no ServerName set in /etc/cups/client.conf we see that the
  job is always sent using lp -h but it is always sent to the value in
  env variable, thus ignoring the command line.

  ===

  with CUPS_SERVER env variable unset and no ServerName option set in
  /etc/cups/client.conf

  $ lp test2.txt
  lp: Error - scheduler not responding. (expected as no server is set or 
specified)

  Using hostname
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  request id is PDF-54 (1 file(s))

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-55 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server1

  $ lp test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using hostname
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-56 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server:631

  Same results as above

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8:631

  $ lp -h 555.555.555.555 test2.txt
  request id is PDF-66 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8

  Same results as above

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1352809] Re: /usr/bin/lp on Trusty using -h option doesn't work as expected

2015-02-04 Thread Louis Bouchard
The upstream bug (L4561) has been identified as a duplicate of the
following bug :

 https://www.cups.org/str.php?L4528+P-1+S-2+C0+I0+E0+Q

This one provides a fix for CUPS v2.1 which encompasses much more than our
current issue. I'm working on backporting the fix for the CUPS_SERVER
issue.

I should be able to propose a fix for Vivid soon.

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1352809

Title:
  /usr/bin/lp on Trusty using -h option doesn't work as expected

Status in cups package in Ubuntu:
  In Progress
Status in cups source package in Trusty:
  Confirmed
Status in cups source package in Utopic:
  Confirmed

Bug description:
  1) The release of Ubuntu you are using, via 'lsb_release -rd' or
  System - About Ubuntu

  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty

  
  2) The version of the package you are using, via 'apt-cache policy pkgname' 
or by checking in Software Center

  cups-client:
Installed: 1.7.2-0ubuntu1.1

  3) What you expected to happen

  When using lp -h to send a print job to a printer, I expected the
  job to be sent to the printer and to override any env variables set or
  conf file setting

  4) What happened instead

  To summarise the behaviour I'm seeing

  1) with CUPS_SERVER env variable unset and no ServerName set in 
/etc/cups/client.conf, we see that using lp -h with hostnames gives us error 
lp: Error - add '/version=1.1' to server name.
  Using IPs sends the job to the printer as expected.

  2) with CUPS_SERVER env variable set using hostname with/without port
  and no ServerName set in /etc/cups/client.conf, we see that the only
  time it sends a job is when an IP and port are used. Otherwise it
  seems to error. I believe it's using the env variable, even though -h
  is being used.

  3) with CUPS_SERVER env variable set using IP address with/without
  port and no ServerName set in /etc/cups/client.conf we see that the
  job is always sent using lp -h but it is always sent to the value in
  env variable, thus ignoring the command line.

  ===

  with CUPS_SERVER env variable unset and no ServerName option set in
  /etc/cups/client.conf

  $ lp test2.txt
  lp: Error - scheduler not responding. (expected as no server is set or 
specified)

  Using hostname
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  request id is PDF-54 (1 file(s))

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-55 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server1

  $ lp test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using hostname
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-56 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server:631

  Same results as above

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8:631

  $ lp -h 555.555.555.555 test2.txt
  request id is PDF-66 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8

  Same results as above

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1352809] Re: /usr/bin/lp on Trusty using -h option doesn't work as expected

2015-01-13 Thread Louis Bouchard
Hello,

I have proposed a patch upstream and am waiting for their answer. I will
let you know as soon as I hear from them.

** Changed in: cups (Ubuntu)
   Status: Confirmed => In Progress

** Also affects: cups (Ubuntu Utopic)
   Importance: Undecided
   Status: New

** Also affects: cups (Ubuntu Trusty)
   Importance: Undecided
   Status: New

** Changed in: cups (Ubuntu Trusty)
   Importance: Undecided => Medium

** Changed in: cups (Ubuntu Utopic)
   Importance: Undecided => Medium

** Changed in: cups (Ubuntu Trusty)
   Status: New => Confirmed

** Changed in: cups (Ubuntu Utopic)
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1352809

Title:
  /usr/bin/lp on Trusty using -h option doesn't work as expected

Status in cups package in Ubuntu:
  In Progress
Status in cups source package in Trusty:
  Confirmed
Status in cups source package in Utopic:
  Confirmed

Bug description:
  1) The release of Ubuntu you are using, via 'lsb_release -rd' or
  System - About Ubuntu

  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty

  
  2) The version of the package you are using, via 'apt-cache policy pkgname' 
or by checking in Software Center

  cups-client:
Installed: 1.7.2-0ubuntu1.1

  3) What you expected to happen

  When using lp -h to send a print job to a printer, I expected the
  job to be sent to the printer and to override any env variables set or
  conf file setting

  4) What happened instead

  To summarise the behaviour I'm seeing

  1) with CUPS_SERVER env variable unset and no ServerName set in 
/etc/cups/client.conf, we see that using lp -h with hostnames gives us error 
lp: Error - add '/version=1.1' to server name.
  Using IPs sends the job to the printer as expected.

  2) with CUPS_SERVER env variable set using hostname with/without port
  and no ServerName set in /etc/cups/client.conf, we see that the only
  time it sends a job is when an IP and port are used. Otherwise it
  seems to error. I believe it's using the env variable, even though -h
  is being used.

  3) with CUPS_SERVER env variable set using IP address with/without
  port and no ServerName set in /etc/cups/client.conf we see that the
  job is always sent using lp -h but it is always sent to the value in
  env variable, thus ignoring the command line.

  ===

  with CUPS_SERVER env variable unset and no ServerName option set in
  /etc/cups/client.conf

  $ lp test2.txt
  lp: Error - scheduler not responding. (expected as no server is set or 
specified)

  Using hostname
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  request id is PDF-54 (1 file(s))

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-55 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server1

  $ lp test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using hostname
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-56 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server:631

  Same results as above

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8:631

  $ lp -h 555.555.555.555 test2.txt
  request id is PDF-66 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8

  Same results as above

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1352809] Re: /usr/bin/lp on Trusty using -h option doesn't work as expected

2015-01-12 Thread Louis Bouchard
Opened upstream bug on the matter :
https://www.cups.org/str.php?L4561+P-1+S-2+C0+I0+E0+Q4561

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1352809

Title:
  /usr/bin/lp on Trusty using -h option doesn't work as expected

Status in cups package in Ubuntu:
  Confirmed

Bug description:
  1) The release of Ubuntu you are using, via 'lsb_release -rd' or
  System - About Ubuntu

  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty

  
  2) The version of the package you are using, via 'apt-cache policy pkgname' 
or by checking in Software Center

  cups-client:
Installed: 1.7.2-0ubuntu1.1

  3) What you expected to happen

  When using lp -h to send a print job to a printer, I expected the
  job to be sent to the printer and to override any env variables set or
  conf file setting

  4) What happened instead

  To summarise the behaviour I'm seeing

  1) with CUPS_SERVER env variable unset and no ServerName set in 
/etc/cups/client.conf, we see that using lp -h with hostnames gives us error 
lp: Error - add '/version=1.1' to server name.
  Using IPs sends the job to the printer as expected.

  2) with CUPS_SERVER env variable set using hostname with/without port
  and no ServerName set in /etc/cups/client.conf, we see that the only
  time it sends a job is when an IP and port are used. Otherwise it
  seems to error. I believe it's using the env variable, even though -h
  is being used.

  3) with CUPS_SERVER env variable set using IP address with/without
  port and no ServerName set in /etc/cups/client.conf we see that the
  job is always sent using lp -h but it is always sent to the value in
  env variable, thus ignoring the command line.

  ===

  with CUPS_SERVER env variable unset and no ServerName option set in
  /etc/cups/client.conf

  $ lp test2.txt
  lp: Error - scheduler not responding. (expected as no server is set or 
specified)

  Using hostname
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  request id is PDF-54 (1 file(s))

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-55 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server1

  $ lp test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using hostname
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-56 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server:631

  Same results as above

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8:631

  $ lp -h 555.555.555.555 test2.txt
  request id is PDF-66 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8

  Same results as above

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1352809] Re: /usr/bin/lp on Trusty using -h option doesn't work as expected

2014-12-18 Thread Louis Bouchard
** Changed in: cups (Ubuntu)
   Importance: Undecided => Medium

** Changed in: cups (Ubuntu)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Tags added: cts

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to cups in Ubuntu.
https://bugs.launchpad.net/bugs/1352809

Title:
  /usr/bin/lp on Trusty using -h option doesn't work as expected

Status in cups package in Ubuntu:
  Confirmed

Bug description:
  1) The release of Ubuntu you are using, via 'lsb_release -rd' or
  System - About Ubuntu

  Description:  Ubuntu 14.04 LTS
  Release:  14.04
  Codename: trusty

  
  2) The version of the package you are using, via 'apt-cache policy pkgname' 
or by checking in Software Center

  cups-client:
Installed: 1.7.2-0ubuntu1.1

  3) What you expected to happen

  When using lp -h to send a print job to a printer, I expected the
  job to be sent to the printer and to override any env variables set or
  conf file setting

  4) What happened instead

  To summarise the behaviour I'm seeing

  1) with CUPS_SERVER env variable unset and no ServerName set in 
/etc/cups/client.conf, we see that using lp -h with hostnames gives us error 
lp: Error - add '/version=1.1' to server name.
  Using IPs sends the job to the printer as expected.

  2) with CUPS_SERVER env variable set using hostname with/without port
  and no ServerName set in /etc/cups/client.conf, we see that the only
  time it sends a job is when an IP and port are used. Otherwise it
  seems to error. I believe it's using the env variable, even though -h
  is being used.

  3) with CUPS_SERVER env variable set using IP address with/without
  port and no ServerName set in /etc/cups/client.conf we see that the
  job is always sent using lp -h but it is always sent to the value in
  env variable, thus ignoring the command line.

  ===

  with CUPS_SERVER env variable unset and no ServerName option set in
  /etc/cups/client.conf

  $ lp test2.txt
  lp: Error - scheduler not responding. (expected as no server is set or 
specified)

  Using hostname
  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  request id is PDF-54 (1 file(s))

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-55 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server1

  $ lp test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using hostname
  $ lp -h server1 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h server1:631 test2.txt
  lp: Error - add '/version=1.1' to server name.

  Using IP address
  $ lp -h 192.168.254.8 test2.txt
  lp: Error - add '/version=1.1' to server name.

  $ lp -h 192.168.254.8:631 test2.txt
  request id is PDF-56 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=server:631

  Same results as above

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8:631

  $ lp -h 555.555.555.555 test2.txt
  request id is PDF-66 (1 file(s))

  ===

  With CUPS_SERVER env variable *set CUPS_SERVER=192.168.254.8

  Same results as above

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cups/+bug/1352809/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266753] Re: Boto backend removes local cache if connection cannot be made

2014-04-23 Thread Louis Bouchard
** Changed in: duplicity (Ubuntu Quantal)
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266753

Title:
  Boto backend removes local cache if connection cannot be made

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  Fix Released
Status in “duplicity” source package in Quantal:
  Won't Fix
Status in “duplicity” source package in Saucy:
  Fix Released

Bug description:
  N.B. This should not be released until after deja-dup - bug 1281066.

  SRU Justification
  [Impact]
   * When there is no connection to the S3 backend, the local cache files are 
deleted.

  [Test Case]
   1. disable the connection to S3
   2. run a collection-status (basically I run 'duply X status')

  [Regression Potential]
   * Already fixed in latest duplicity. Needs to be fixed in lockstep with 
deja-dup as it Breaks: deja-dup (<< 27.3.1-0ubuntu2).

  --

  When there is no connection to the S3 backend, the local cache files
  are deleted. To reproduce:

  1. disable the connection to S3
  2. run a collection-status (basically I run 'duply X status')

  You'll get a bunch of these:

  Deleting local 
/srv/duply-cache/duply_srv/duplicity-inc.20140106T010002Z.to.20140107T010002Z.manifest
 (not authoritative at backend).
  Deleting local 
/srv/duply-cache/duply_srv/duplicity-new-signatures.20131211T124323Z.to.20131211T124519Z.sigtar.gz
 (not authoritative at backend).

  This is fatal if you run it in a configuration using GPG and having
  only the public key for encryption as well as a separate signing key.
  Then you cannot backup any more, as the decrypted local cache has been
  deleted and the files on the S3 are encrypted.

  Probable reason:

  There is no check if the connection to the backend could be
  established

  Workaround:

  If you replace at

  http://bazaar.launchpad.net/~duplicity-
  team/duplicity/0.6-series/view/head:/duplicity/backends/_boto_single.py#L270

  the line

  return []

  with

  return None

  Then duplicity will crash instead of deleting the local files. Not the
  proper solution but at least you can do a backup when the connection
  comes back up.
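
  For illustration only (this is not duplicity's actual code), a minimal
  sketch of the failure mode described above: if the backend listing
  quietly returns an empty list when the connection fails, the sync step
  concludes that every cached file is spurious and deletes it, whereas
  returning None makes the caller crash instead of deleting anything.

    # Simplified stand-in for the backend listing and the sync logic.
    import os

    def list_remote_files(connected):
        if not connected:
            return []      # buggy path: an unreachable backend looks empty
            # return None  # workaround: callers crash instead of deleting
        return ["duplicity-full.20140101T000000Z.manifest"]

    def sync_archive(cache_dir, connected):
        remote = list_remote_files(connected)
        for name in os.listdir(cache_dir):
            if name not in remote:   # with [] every cached file is "spurious"
                print("Deleting local", os.path.join(cache_dir, name),
                      "(not authoritative at backend).")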

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266753/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1309535] Re: lightdm: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so

2014-04-22 Thread Louis Bouchard
*** This bug is a duplicate of bug 1293058 ***
https://bugs.launchpad.net/bugs/1293058

** This bug has been marked a duplicate of bug 1293058
   compiz: PAM unable to dlopen(pam_usb.so): /lib/security/pam_usb.so:  cannot 
open shared object file: No such file or directory

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to lightdm in Ubuntu.
https://bugs.launchpad.net/bugs/1309535

Title:
  lightdm: PAM unable to dlopen(pam_kwallet.so):
  /lib/security/pam_kwallet.so

Status in “lightdm” package in Ubuntu:
  Confirmed

Bug description:
  After upgrading to lightdm 1.10.0-0ubuntu2 I started to see this error
  in auth.log:

  Apr 10 14:34:54 simon-laptop lightdm: PAM unable to dlopen(pam_kwallet.so): 
/lib/security/pam_kwallet.so: cannot open shared object file: No such file or 
directory
  Apr 10 14:34:54 simon-laptop lightdm: PAM adding faulty module: pam_kwallet.so

  This seems like a regression because with lightdm 1.10.0-0ubuntu1 or
  before I didn't have this error showing. FYI, I don't have the pam-
  kwallet package installed.
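
  For context, the message is normally triggered by a PAM stack entry that
  names the module. A hypothetical fragment of /etc/pam.d/lightdm of the
  kind involved (illustrative only, not the exact shipped file):

    # If pam_kwallet.so is referenced here but no installed package
    # provides it, PAM logs the "unable to dlopen" / "adding faulty
    # module" pair at every login.
    auth     optional   pam_kwallet.so
    session  optional   pam_kwallet.so auto_start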

  
  $ lsb_release -rd
  Description:  Ubuntu 14.04 LTS
  Release:  14.04

  $ apt-cache policy lightdm pam-kwallet
  lightdm:
Installed: 1.10.0-0ubuntu3
Candidate: 1.10.0-0ubuntu3
Version table:
   *** 1.10.0-0ubuntu3 0
  500 http://archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
  100 /var/lib/dpkg/status

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: lightdm 1.10.0-0ubuntu3
  ProcVersionSignature: Ubuntu 3.13.0-24.46-generic 3.13.9
  Uname: Linux 3.13.0-24-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3
  Architecture: amd64
  CurrentDesktop: Unity
  Date: Fri Apr 18 09:12:37 2014
  InstallationDate: Installed on 2014-01-26 (81 days ago)
  InstallationMedia: Ubuntu 14.04 LTS Trusty Tahr - Alpha amd64 (20140124)
  SourcePackage: lightdm
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/1309535/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1309535] Re: lightdm: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so

2014-04-22 Thread Louis Bouchard
** This bug is no longer a duplicate of bug 1293058
   compiz: PAM unable to dlopen(pam_usb.so): /lib/security/pam_usb.so:  cannot 
open shared object file: No such file or directory

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to lightdm in Ubuntu.
https://bugs.launchpad.net/bugs/1309535

Title:
  lightdm: PAM unable to dlopen(pam_kwallet.so):
  /lib/security/pam_kwallet.so

Status in “lightdm” package in Ubuntu:
  Confirmed

Bug description:
  After upgrading to lightdm 1.10.0-0ubuntu2 I started to see this error
  in auth.log:

  Apr 10 14:34:54 simon-laptop lightdm: PAM unable to dlopen(pam_kwallet.so): 
/lib/security/pam_kwallet.so: cannot open shared object file: No such file or 
directory
  Apr 10 14:34:54 simon-laptop lightdm: PAM adding faulty module: pam_kwallet.so

  This seems like a regression because with lightdm 1.10.0-0ubuntu1 or
  before I didn't have this error showing. FYI, I don't have the pam-
  kwallet package installed.

  
  $ lsb_release -rd
  Description:  Ubuntu 14.04 LTS
  Release:  14.04

  $ apt-cache policy lightdm pam-kwallet
  lightdm:
Installed: 1.10.0-0ubuntu3
Candidate: 1.10.0-0ubuntu3
Version table:
   *** 1.10.0-0ubuntu3 0
  500 http://archive.ubuntu.com/ubuntu/ trusty/main amd64 Packages
  100 /var/lib/dpkg/status

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: lightdm 1.10.0-0ubuntu3
  ProcVersionSignature: Ubuntu 3.13.0-24.46-generic 3.13.9
  Uname: Linux 3.13.0-24-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3
  Architecture: amd64
  CurrentDesktop: Unity
  Date: Fri Apr 18 09:12:37 2014
  InstallationDate: Installed on 2014-01-26 (81 days ago)
  InstallationMedia: Ubuntu 14.04 LTS Trusty Tahr - Alpha amd64 (20140124)
  SourcePackage: lightdm
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/1309535/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1307413] [NEW] apport-unpack requires too much main memory to run

2014-04-14 Thread Louis Bouchard
Public bug reported:

when running apport-unpack on large apport reports (linux-image kernel
dumps are a good example), it requires an enormous amount of main memory
to run.

An example is a 1.3 GB apport report that runs for more than 24 hours
with more than 4 GB of RSS.

The command should require less memory to extract such big reports.
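
One possible direction, sketched here purely as an illustration (the
"Key: base64" layout of indented base64 lines is an assumption about the
.crash format, and the decoded stream may still be gzip-compressed):

    # Stream one base64-encoded field of a report straight to disk, one
    # line at a time, instead of materialising the whole value in memory.
    import base64

    def extract_field(report_path, key, out_path):
        marker = (key + ": base64").encode()
        in_block = False
        with open(report_path, "rb") as src, open(out_path, "wb") as dst:
            for line in src:
                if line.startswith(marker):
                    in_block = True
                    continue
                if in_block:
                    if not line.startswith(b" "):   # next key ends the block
                        break
                    dst.write(base64.b64decode(line.strip()))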

** Affects: apport (Ubuntu)
 Importance: Medium
 Assignee: Louis Bouchard (louis-bouchard)
 Status: In Progress

** Changed in: apport (Ubuntu)
   Status: New => In Progress

** Changed in: apport (Ubuntu)
   Importance: Undecided => Medium

** Changed in: apport (Ubuntu)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to apport in Ubuntu.
https://bugs.launchpad.net/bugs/1307413

Title:
  apport-unpack requires too much main memory to run

Status in “apport” package in Ubuntu:
  In Progress

Bug description:
  when running apport-unpack on large apport reports (linux-image kernel
  dumps are a good example), it requires an enormous amount of main
  memory to run.

  An example is a 1.3 GB apport report that runs for more than 24 hours
  with more than 4 GB of RSS.

  The command should require less memory to extract such big reports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apport/+bug/1307413/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-04-11 Thread Louis Bouchard
** Changed in: duplicity (Ubuntu Precise)
 Assignee: Louis Bouchard (louis-bouchard) => (unassigned)

** Changed in: duplicity (Ubuntu Saucy)
 Assignee: Louis Bouchard (louis-bouchard) => (unassigned)

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  Fix Released
Status in “duplicity” source package in Quantal:
  Won't Fix
Status in “duplicity” source package in Raring:
  Won't Fix
Status in “duplicity” source package in Saucy:
  Fix Released
Status in “duplicity” source package in Trusty:
  Fix Released

Bug description:
  N.B. This should not be released until after deja-dup bug 1281066.

  SRU justification : Race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance when the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of spurious interruption of duplicity, the lockfile
  will remain in .cache/duplicity which can prevent future use of duplicity. 
The cache
  directory will have to be cleaned as outlined in the error message

  Original description of the problem :

  When running duply X status while running duply X backup (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This delete seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely but the status command did
  not see it a few seconds ago.
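
  As an illustration of the locking mechanism described above (a sketch
  only, not duplicity's implementation; the filename matches the one shown
  in the error message):

    # Refuse to run if <archive_dir>/lockfile.lock already exists.
    import os
    import sys

    def acquire_lock(archive_dir):
        path = os.path.join(archive_dir, "lockfile.lock")
        try:
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except FileExistsError:
            sys.exit("Another instance is already running with this archive "
                     "directory.\nIf you are sure it is the only instance, "
                     "delete the lockfile and run the command again:\n  " + path)
        os.write(fd, str(os.getpid()).encode())
        os.close(fd)
        return path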

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 946988] Re: GnuPG passphrase error after failed backup session

2014-04-11 Thread Louis Bouchard
** Changed in: duplicity (Ubuntu Precise)
 Assignee: Louis Bouchard (louis-bouchard) = (unassigned)

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/946988

Title:
  GnuPG passphrase error after failed backup session

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  Fix Released

Bug description:
  After a failed backup session, duplicity cannot resume the volume
  creation, and seems to fail with a bad passphrase. It is, however, the
  same passphrase that is always used.

  The backup set thus seems to be unusable, and the only solution I've
  found is to simply create a new backup set in a new directory, losing
  incremental history.

  Full output attached as duplicity-error-incremental-log-v9.

  Also reported as Debian bug #659009, but it has gotten no attention
  for a month.

  Duplicity 0.6.17
  Python 2.7.2+
  Debian Sid
  Linux

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/946988/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1281066] Re: Needs to backport trusty's fix to handle duplicity's lockfile

2014-04-09 Thread Louis Bouchard
** Tags removed: verification-needed
** Tags added: verification-done-precise verification-needed-saucy

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to deja-dup in Ubuntu.
https://bugs.launchpad.net/bugs/1281066

Title:
  Needs to backport trusty's fix to handle duplicity's lockfile

Status in “deja-dup” package in Ubuntu:
  Fix Committed
Status in “deja-dup” source package in Precise:
  Fix Committed
Status in “deja-dup” source package in Saucy:
  Fix Committed

Bug description:
  SRU justification :

  Duplicity implements a new lockfile that impacts deja-dup
  functionality. A fix to remove the lockfile is already committed
  upstream and in Trusty. The SRU is to make the fix available before
  duplicity's SRU is completed

  Impact :

  Without this SRU, duplicity's SRU for LP: #1266763 will cause errors
  when backing up after having previously cancelled a backup.

  Fix :

  Remove the lockfile since concurrency is handled in DBUS in deja-dup

  Test Case :
  1) Start a backup
  2) Cancel it once it begins backing up
  3) Start the backup again

  If you get an error right at the beginning of (3) about lockfiles,
  this bug isn't fixed.  If the backup proceeds, it's fixed!

  Regression :

  None expected as it removes a file that is only present with newer
  versions of duplicity

  Description of the problem :

  See justification
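
  A minimal sketch of the behaviour described under "Fix" above
  (illustrative only; deja-dup itself is not written in Python, and the
  path is the one duplicity's error message reports):

    # Drop duplicity's per-archive lockfile before starting a job, since
    # deja-dup already serialises its operations over D-Bus.
    import os

    def remove_stale_lockfile(archive_dir):
        lock = os.path.join(archive_dir, "lockfile.lock")
        try:
            os.remove(lock)
        except FileNotFoundError:
            pass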

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1281066/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1281066] Re: Needs to backport trusty's fix to handle duplicity's lockfile

2014-04-09 Thread Louis Bouchard
Verified for both Saucy and Precise

** Tags removed: verification-done-precise verification-needed-saucy
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to deja-dup in Ubuntu.
https://bugs.launchpad.net/bugs/1281066

Title:
  Needs to backport trusty's fix to handle duplicity's lockfile

Status in “deja-dup” package in Ubuntu:
  Fix Committed
Status in “deja-dup” source package in Precise:
  Fix Committed
Status in “deja-dup” source package in Saucy:
  Fix Committed

Bug description:
  SRU justification :

  Duplicity implements a new lockfile that impacts deja-dup
  functionality. A fix to remove the lockfile is already committed
  upstream and in Trusty. The SRU is to make the fix available before
  duplicity's SRU is completed

  Impact :

  Without this SRU, duplicity's SRU for LP: #1266763 will cause errors
  when backing up after having previously cancelled a backup.

  Fix :

  Remove the lockfile since concurrency is handled in DBUS in deja-dup

  Test Case :
  1) Start a backup
  2) Cancel it once it begins backing up
  3) Start the backup again

  If you get an error right at the beginning of (3) about lockfiles,
  this bug isn't fixed.  If the backup proceeds, it's fixed!

  Regression :

  None expected as it removes a file that is only present with newer
  versions of duplicity

  Description of the problem :

  See justification

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1281066/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-03-31 Thread Louis Bouchard
** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  Fix Committed
Status in “duplicity” source package in Quantal:
  Won't Fix
Status in “duplicity” source package in Raring:
  Won't Fix
Status in “duplicity” source package in Saucy:
  Fix Committed
Status in “duplicity” source package in Trusty:
  Fix Released

Bug description:
  N.B. This should not be released until after deja-dup bug 1281066.

  SRU justification : Race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance when the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of spurious interruption of duplicity, the lockfile
  will remain in .cache/duplicity which can prevent future use of duplicity. 
The cache
  directory will have to be cleaned as outlined in the error message

  Original description of the problem :

  When running duply X status while running duply X backup (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This delete seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely but the status command did
  not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266753] Re: Boto backend removes local cache if connection cannot be made

2014-03-31 Thread Louis Bouchard
** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266753

Title:
  Boto backend removes local cache if connection cannot be made

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  Fix Committed
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Saucy:
  Fix Committed

Bug description:
  N.B. This should not be released until after deja-dup - bug 1281066.

  SRU Justification
  [Impact]
   * When there is no connection to the S3 backend, the local cache files are 
deleted.

  [Test Case]
   1. disable the connection to S3
   2. run a collection-status (basically I run 'duply X status')

  [Regression Potential]
   * Already fixed in latest duplicity. Needs to be fixed in lockstep with
deja-dup as it Breaks: deja-dup (<< 27.3.1-0ubuntu2).

  --

  When there is no connection to the S3 backend, the local cache files
  are deleted. To reproduce:

  1. disable the connection to S3
  2. run a collection-status (basically I run 'duply X status')

  You'll get a bunch of these:

  Deleting local 
/srv/duply-cache/duply_srv/duplicity-inc.20140106T010002Z.to.20140107T010002Z.manifest
 (not authoritative at backend).
  Deleting local 
/srv/duply-cache/duply_srv/duplicity-new-signatures.20131211T124323Z.to.20131211T124519Z.sigtar.gz
 (not authoritative at backend).

  This is fatal if you run it in a configuration using GPG and having
  only the public key for encryption as well as a separate signing key.
  Then you cannot backup any more, as the decrypted local cache has been
  deleted and the files on the S3 are encrypted.

  Probable reason:

  There is no check if the connection to the backend could be
  established

  Workaround:

  If you replace at

  http://bazaar.launchpad.net/~duplicity-
  team/duplicity/0.6-series/view/head:/duplicity/backends/_boto_single.py#L270

  the line

  return []

  with

  return None

  Then duplicity will crash instead of deleting the local files. Not the
  proper solution but at least you can do a backup when the connection
  comes back up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266753/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 946988] Re: GnuPG passphrase error after failed backup session

2014-03-31 Thread Louis Bouchard
** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/946988

Title:
  GnuPG passphrase error after failed backup session

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  Fix Committed

Bug description:
  After a failed backup session, duplicity cannot resume the volume
  creation, and seems to fail with a bad passphrase. It is, however, the
  same passphrase that is always used.

  The backup set thus seems to be unusable, and the only solution I've
  found is to simply create a new backup set in a new directory, losing
  incremental history.

  Full output attached as duplicity-error-incremental-log-v9.

  Also reported as Debian bug #659009, but it has gotten no attention
  for a month.

  Duplicity 0.6.17
  Python 2.7.2+
  Debian Sid
  Linux

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/946988/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266753] Re: Boto backend removes local cache if connection cannot be made

2014-03-26 Thread Louis Bouchard
** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266753

Title:
  Boto backend removes local cache if connection cannot be made

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Saucy:
  Fix Committed

Bug description:
  SRU Justification
  [Impact] 
   * When there is no connection to the S3 backend, the local cache files are 
deleted.

  [Test Case]
   1. disable the connection to S3
   2. run a collection-status (basically I run 'duply X status')

  [Regression Potential] 
   * Already fixed in latest duplicity. Needs to be fixed in lockstep with
deja-dup as it Breaks: deja-dup (<< 27.3.1-0ubuntu2).

  --

  When there is no connection to the S3 backend, the local cache files
  are deleted. To reproduce:

  1. disable the connection to S3
  2. run a collection-status (basically I run 'duply X status')

  You'll get a bunch of these:

  Deleting local 
/srv/duply-cache/duply_srv/duplicity-inc.20140106T010002Z.to.20140107T010002Z.manifest
 (not authoritative at backend).
  Deleting local 
/srv/duply-cache/duply_srv/duplicity-new-signatures.20131211T124323Z.to.20131211T124519Z.sigtar.gz
 (not authoritative at backend).

  This is fatal if you run it in a configuration using GPG and having
  only the public key for encryption as well as a separate signing key.
  Then you cannot backup any more, as the decrypted local cache has been
  deleted and the files on the S3 are encrypted.

  Probable reason:

  There is no check if the connection to the backend could be
  established

  Workaround:

  If you replace at

  http://bazaar.launchpad.net/~duplicity-
  team/duplicity/0.6-series/view/head:/duplicity/backends/_boto_single.py#L270

  the line

  return []

  with

  return None

  Then duplicity will crash instead of deleting the local files. Not the
  proper solution but at least you can do a backup when the connection
  comes back up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266753/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-03-19 Thread Louis Bouchard
Any reason why the package for precise has not been uploaded?

** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  Won't Fix
Status in “duplicity” source package in Raring:
  Won't Fix
Status in “duplicity” source package in Saucy:
  Fix Committed
Status in “duplicity” source package in Trusty:
  Fix Released

Bug description:
  SRU justification : Race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance when the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of spurious interruption of duplicity, the lockfile
  will remain in .cache/duplicity which can prevent future use of duplicity. 
The cache
  directory will have to be cleaned as outlined in the error message

  Original description of the problem :

  When running duply X status while running duply X backup (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This delete seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely but the status command did
  not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1288903] Re: Cannot login to lightdm after upgrade to trusty

2014-03-11 Thread Louis Bouchard
** Changed in: lightdm (Ubuntu)
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to lightdm in Ubuntu.
https://bugs.launchpad.net/bugs/1288903

Title:
  Cannot login to lightdm after upgrade to trusty

Status in “lightdm” package in Ubuntu:
  Invalid

Bug description:
  Even after applying the update for lightdm in -proposed I still cannot
  login to lightdm after the upgrade.

  /var/log/lightdm/lightdm.log provides the following :

  [+11.69s] DEBUG: Session pid=1932: Prompt greeter with 1 message(s)
  [+14.82s] DEBUG: User /org/freedesktop/Accounts/User1000 changed
  [+17.48s] DEBUG: Session pid=1932: Continue authentication
  [+17.51s] DEBUG: Session pid=2856: Authentication complete with return value 
0: Success
  [+17.51s] DEBUG: Session pid=1932: Authenticate result for user caribou: 
Success
  [+17.51s] DEBUG: Session pid=1932: User caribou authorized
  [+17.53s] DEBUG: Session pid=1932: Greeter requests session ubuntu
  [+17.53s] DEBUG: Seat: Failed to find session configuration ubuntu
  [+17.53s] DEBUG: Seat: Can't find session 'ubuntu'

  Content of /var/log/lightdm will be attached to the bug
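
  For context, the "Failed to find session configuration ubuntu" lines mean
  lightdm could not find a session file for the requested session under
  /usr/share/xsessions/. A hypothetical minimal ubuntu.desktop of the kind
  normally shipped by ubuntu-session (values illustrative only):

    [Desktop Entry]
    Name=Ubuntu
    Comment=This session logs you in to Ubuntu (Unity)
    Exec=gnome-session --session=ubuntu
    Type=Application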

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/1288903/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1288903] Re: Cannot login to lightdm after upgrade to trusty

2014-03-07 Thread Louis Bouchard
Hi seb128,

Yes, I used dist-upgrade -d

Indeed, for some obscure reason, ubuntu-session got removed during the
upgrade. Here is what it brought in :

Reading package lists…
Building dependency tree…
Reading state information…
The following packages were automatically installed and are no longer
required:
  libexpat1-dev libjson0:i386 libpython3-dev libpython3.3-dev libpython3.4-dev
  libtasn1-3:i386 libtommath0 linux-headers-3.11.0-14
  linux-headers-3.11.0-14-generic linux-image-3.11.0-14-generic
  linux-image-extra-3.11.0-14-generic linux-tools-3.11.0-14
  linux-tools-3.11.0-14-generic python3-dev python3.3-dev python3.4-dev
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
  compiz-core compiz-gnome compiz-plugins-default gsettings-ubuntu-schemas
  libcompizconfig0 libdecoration0 libglew1.10 libglewmx1.10 libnux-4.0-0
  libnux-4.0-common libprotobuf8 libtimezonemap1 libunity-control-center1
  libunity-core-6.0-9 libunity-protocol-private0 libunity9 ubuntu-settings
  unity unity-control-center unity-services unity-settings-daemon
Suggested packages:
  glew-utils
The following packages will be REMOVED:
  libunity-core-6.0-8
The following NEW packages will be installed:
  gsettings-ubuntu-schemas libglew1.10 libglewmx1.10 libprotobuf8
  libunity-control-center1 libunity-core-6.0-9 ubuntu-session
  unity-control-center unity-settings-daemon
The following packages will be upgraded:
  compiz-core compiz-gnome compiz-plugins-default libcompizconfig0
  libdecoration0 libnux-4.0-0 libnux-4.0-common libtimezonemap1
  libunity-protocol-private0 libunity9 ubuntu-settings unity unity-services
13 upgraded, 9 newly installed, 1 to remove and 1973 not upgraded.
Need to get 810 kB/7,661 kB of archives.
After this operation, 10.8 MB of additional disk space will be used.
Do you want to continue [Y/n]? Get:1
http://archive.ubuntu.com/ubuntu/ trusty/main libunity-control-center1 amd64
14.04.3+14.04.20140305.1-0ubuntu1 [81.0 kB]
Get:2 http://archive.ubuntu.com/ubuntu/ trusty/main ubuntu-settings
all 14.04.5 [4,422 B]
Get:3 http://archive.ubuntu.com/ubuntu/ trusty/main
unity-control-center amd64 14.04.3+14.04.20140305.1-0ubuntu1 [725 kB]
Committing to: /etc/
modified apt/sources.list
modified cups/printers.conf
Committed revision 296.
Fetched 810 kB in 0s (934 kB/s)
Selecting previously unselected package libprotobuf8:amd64.
(Reading database ... 598383 files and directories currently installed.)
Preparing to unpack .../libprotobuf8_2.5.0-9ubuntu1_amd64.deb ...
Unpacking libprotobuf8:amd64 (2.5.0-9ubuntu1) ...
Preparing to unpack 
.../libcompizconfig0_1%3a0.9.11+14.04.20140303-0ubuntu1_amd64.deb ...
Unpacking libcompizconfig0 (1:0.9.11+14.04.20140303-0ubuntu1) over 
(1:0.9.10+13.10.20131011-0ubuntu1) ...
Preparing to unpack 
.../compiz-gnome_1%3a0.9.11+14.04.20140303-0ubuntu1_amd64.deb ...
Unpacking compiz-gnome (1:0.9.11+14.04.20140303-0ubuntu1) over 
(1:0.9.10+13.10.20131011-0ubuntu1) ...
Preparing to unpack 
.../compiz-plugins-default_1%3a0.9.11+14.04.20140303-0ubuntu1_amd64.deb ...
Unpacking compiz-plugins-default (1:0.9.11+14.04.20140303-0ubuntu1) over 
(1:0.9.10+13.10.20131011-0ubuntu1) ...
Preparing to unpack 
.../libdecoration0_1%3a0.9.11+14.04.20140303-0ubuntu1_amd64.deb ...
Unpacking libdecoration0 (1:0.9.11+14.04.20140303-0ubuntu1) over 
(1:0.9.10+13.10.20131011-0ubuntu1) ...
Preparing to unpack .../unity_7.1.2+14.04.20140303-0ubuntu1_amd64.deb ...
Unpacking unity (7.1.2+14.04.20140303-0ubuntu1) over 
(7.1.2+13.10.20131014.1-0ubuntu1) ...
Preparing to unpack 
.../compiz-core_1%3a0.9.11+14.04.20140303-0ubuntu1_amd64.deb ...
Unpacking compiz-core (1:0.9.11+14.04.20140303-0ubuntu1) over 
(1:0.9.10+13.10.20131011-0ubuntu1) ...
Selecting previously unselected package libglew1.10:amd64.
Preparing to unpack .../libglew1.10_1.10.0-3_amd64.deb ...
Unpacking libglew1.10:amd64 (1.10.0-3) ...
Preparing to unpack .../libnux-4.0-0_4.0.5+14.04.20140226-0ubuntu1_amd64.deb ...
Unpacking libnux-4.0-0 (4.0.5+14.04.20140226-0ubuntu1) over 

[Desktop-packages] [Bug 1288903] Re: Cannot login to lightdm after upgrade to trusty

2014-03-07 Thread Louis Bouchard
yeah, it was a clean install from saucy.

Going back in the log, I did

 $ do-release-upgrade -d

At the end, there were a few broken installs that needed to be fixed
with apt-get -f install

I must be honest, it's the first time I've upgraded to the dev release; I
may have totally screwed things up!

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to lightdm in Ubuntu.
https://bugs.launchpad.net/bugs/1288903

Title:
  Cannot login to lightdm after upgrade to trusty

Status in “lightdm” package in Ubuntu:
  Incomplete

Bug description:
  Even after applying the update for lightdm in -proposed I still cannot
  login to lightdm after the upgrade.

  /var/log/lightdm/lightdm.log provides the following :

  [+11.69s] DEBUG: Session pid=1932: Prompt greeter with 1 message(s)
  [+14.82s] DEBUG: User /org/freedesktop/Accounts/User1000 changed
  [+17.48s] DEBUG: Session pid=1932: Continue authentication
  [+17.51s] DEBUG: Session pid=2856: Authentication complete with return value 
0: Success
  [+17.51s] DEBUG: Session pid=1932: Authenticate result for user caribou: 
Success
  [+17.51s] DEBUG: Session pid=1932: User caribou authorized
  [+17.53s] DEBUG: Session pid=1932: Greeter requests session ubuntu
  [+17.53s] DEBUG: Seat: Failed to find session configuration ubuntu
  [+17.53s] DEBUG: Seat: Can't find session 'ubuntu'

  Content of /var/log/lightdm will be attached to the bug

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/1288903/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1288903] [NEW] Cannot login to lightdm after upgrade to trusty

2014-03-06 Thread Louis Bouchard
Public bug reported:

Even after applying the update for lightdm in -proposed I still cannot
login to lightdm after the upgrade.

/var/log/lightdm/lightdm.log provides the following :

[+11.69s] DEBUG: Session pid=1932: Prompt greeter with 1 message(s)
[+14.82s] DEBUG: User /org/freedesktop/Accounts/User1000 changed
[+17.48s] DEBUG: Session pid=1932: Continue authentication
[+17.51s] DEBUG: Session pid=2856: Authentication complete with return value 0: 
Success
[+17.51s] DEBUG: Session pid=1932: Authenticate result for user caribou: Success
[+17.51s] DEBUG: Session pid=1932: User caribou authorized
[+17.53s] DEBUG: Session pid=1932: Greeter requests session ubuntu
[+17.53s] DEBUG: Seat: Failed to find session configuration ubuntu
[+17.53s] DEBUG: Seat: Can't find session 'ubuntu'

Content of /var/log/lightdm will be attached to the bug

** Affects: lightdm (Ubuntu)
 Importance: Undecided
 Status: New

** Attachment added: lightdm.tgz
   
https://bugs.launchpad.net/bugs/1288903/+attachment/4010163/+files/lightdm.tgz

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to lightdm in Ubuntu.
https://bugs.launchpad.net/bugs/1288903

Title:
  Cannot login to lightdm after upgrade to trusty

Status in “lightdm” package in Ubuntu:
  New

Bug description:
  Even after applying the update for lightdm in -proposed I still cannot
  login to lightdm after the upgrade.

  /var/log/lightdm/lightdm.log provides the following :

  [+11.69s] DEBUG: Session pid=1932: Prompt greeter with 1 message(s)
  [+14.82s] DEBUG: User /org/freedesktop/Accounts/User1000 changed
  [+17.48s] DEBUG: Session pid=1932: Continue authentication
  [+17.51s] DEBUG: Session pid=2856: Authentication complete with return value 
0: Success
  [+17.51s] DEBUG: Session pid=1932: Authenticate result for user caribou: 
Success
  [+17.51s] DEBUG: Session pid=1932: User caribou authorized
  [+17.53s] DEBUG: Session pid=1932: Greeter requests session ubuntu
  [+17.53s] DEBUG: Seat: Failed to find session configuration ubuntu
  [+17.53s] DEBUG: Seat: Can't find session 'ubuntu'

  Content of /var/log/lightdm will be attached to the bug

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/1288903/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1288903] Re: Cannot login to lightdm after upgrade to trusty

2014-03-06 Thread Louis Bouchard
Forgot to mention that I had seen the following bug but it doesn't seem
to have anything to do with it :

https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/1279428

The upgrade was from a clean install of Saucy to Trusty

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to lightdm in Ubuntu.
https://bugs.launchpad.net/bugs/1288903

Title:
  Cannot login to lightdm after upgrade to trusty

Status in “lightdm” package in Ubuntu:
  New

Bug description:
  Even after applying the update for lightdm in -proposed I still cannot
  login to lightdm after the upgrade.

  /var/log/lightdm/lightdm.log provides the following :

  [+11.69s] DEBUG: Session pid=1932: Prompt greeter with 1 message(s)
  [+14.82s] DEBUG: User /org/freedesktop/Accounts/User1000 changed
  [+17.48s] DEBUG: Session pid=1932: Continue authentication
  [+17.51s] DEBUG: Session pid=2856: Authentication complete with return value 
0: Success
  [+17.51s] DEBUG: Session pid=1932: Authenticate result for user caribou: 
Success
  [+17.51s] DEBUG: Session pid=1932: User caribou authorized
  [+17.53s] DEBUG: Session pid=1932: Greeter requests session ubuntu
  [+17.53s] DEBUG: Seat: Failed to find session configuration ubuntu
  [+17.53s] DEBUG: Seat: Can't find session 'ubuntu'

  Content of /var/log/lightdm will be attached to the bug

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/lightdm/+bug/1288903/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1281066] Re: Needs to backport trusty's fix to handle duplicity's lockfile

2014-02-27 Thread Louis Bouchard
I forgot to mention one important thing for sponsors :

I was not able to build deja-dup from the source pkg with or _WITHOUT_
the patches using sbuild.  pbuilder worked fine, as well as building as
a PPA. Here is the resulting PPA for reference :

https://launchpad.net/~louis-bouchard/+archive/testdup

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to deja-dup in Ubuntu.
https://bugs.launchpad.net/bugs/1281066

Title:
  Needs to backport trusty's fix to handle duplicity's lockfile

Status in “deja-dup” package in Ubuntu:
  Fix Committed
Status in “deja-dup” source package in Precise:
  In Progress
Status in “deja-dup” source package in Saucy:
  In Progress

Bug description:
  SRU justification :

  Duplicity implements a new lockfile that impacts deja-dup
  functionality. A fix to remove the lockfile is already committed
  upstream and in Trusty. The SRU is to make the fix available before
  duplicity's SRU is completed

  Impact :

  Without this SRU, duplicity's SRU for LP: #1266763 cannot be completed
  or it may result in regression issues

  Fix :

  Remove the lockfile since concurrency is handled in DBUS in deja-dup

  Test Case : 
  TBD

  Regression :

  None expected as it removes a file that is only present with newer
  versions of duplicity

  Description of the problem :

  See justification

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1281066/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-02-20 Thread Louis Bouchard
Equivalent debdiff with Breaks: statement added

** Changed in: duplicity (Ubuntu Quantal)
   Status: In Progress => Won't Fix

** Changed in: duplicity (Ubuntu Raring)
   Status: In Progress => Won't Fix

** Changed in: duplicity (Ubuntu Quantal)
 Assignee: Louis Bouchard (louis-bouchard) => (unassigned)

** Changed in: duplicity (Ubuntu Raring)
 Assignee: Louis Bouchard (louis-bouchard) => (unassigned)

** Patch added: lp1266763_s3_locking_saucy_with_breaks.debdiff
   
https://bugs.launchpad.net/duplicity/+bug/1266763/+attachment/3988011/+files/lp1266763_s3_locking_saucy_with_breaks.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  Won't Fix
Status in “duplicity” source package in Raring:
  Won't Fix
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  Fix Released

Bug description:
  SRU justification : Race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance when the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of spurious interruption of duplicity, the lockfile
  will remain in .cache/duplicity which can prevent future use of duplicity. 
The cache
  directory will have to be cleaned as outlined in the error message

  Original description of the problem :

  When running duply X status while running duply X backup (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This delete seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely but the status command did
  not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-02-20 Thread Louis Bouchard
Equivalent debdiff with Breaks: statement added

** Patch added: lp1266763_s3_locking_gpg_precise_with_breaks.debdiff
   
https://bugs.launchpad.net/duplicity/+bug/1266763/+attachment/3988012/+files/lp1266763_s3_locking_gpg_precise_with_breaks.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  Won't Fix
Status in “duplicity” source package in Raring:
  Won't Fix
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  Fix Released

Bug description:
  SRU justification : Race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance when the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of spurious interruption of duplicity, the lockfile
  will remain in .cache/duplicity which can prevent future use of duplicity. 
The cache
  directory will have to be cleaned as outlined in the error message

  Original description of the problem :

  When running duply X status while running duply X backup (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This delete seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely but the status command did
  not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-02-19 Thread Louis Bouchard
To potential sponsor:

I am preparing a slight modification (add Breaks: statement). debdiff
should be there shortly

** Patch removed: lp1266763_s3_locking_trusty.debdiff
   
https://bugs.launchpad.net/duplicity/+bug/1266763/+attachment/3955924/+files/lp1266763_s3_locking_trusty.debdiff

** Patch removed: lp1266763_s3_locking_saucy.debdiff
   
https://bugs.launchpad.net/duplicity/+bug/1266763/+attachment/3955923/+files/lp1266763_s3_locking_saucy.debdiff

** Patch removed: lp1266763_s3_locking_raring.debdiff
   
https://bugs.launchpad.net/duplicity/+bug/1266763/+attachment/3955922/+files/lp1266763_s3_locking_raring.debdiff

** Patch removed: lp1266763_s3_locking_gpg_precise.debdiff
   
https://bugs.launchpad.net/duplicity/+bug/1266763/+attachment/3955921/+files/lp1266763_s3_locking_gpg_precise.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : Race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance when the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of spurious interruption of duplicity, the lockfile
  will remain in .cache/duplicity which can prevent future use of duplicity. 
The cache
  directory will have to be cleaned as outlined in the error message

  Original description of the problem :

  When running duply X status while running duply X backup (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This delete seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely but the status command did
  not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-02-19 Thread Louis Bouchard
Same version of the patch with a Breaks: statement in the control file
to signal the dependency on deja-dup

** Patch added: lp1266763_s3_locking_trusty_with_breaks.debdiff
   
https://bugs.launchpad.net/duplicity/+bug/1266763/+attachment/3986714/+files/lp1266763_s3_locking_trusty_with_breaks.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance while the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of a spurious interruption of duplicity, the
  lockfile will remain in .cache/duplicity, which can prevent future use of
  duplicity. The cache directory will have to be cleaned as outlined in the
  error message

  Original description of the problem :

  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This deletion seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely, but the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1281066] Re: Needs to backport trusty's fix to handle duplicity's lockfile

2014-02-18 Thread Louis Bouchard
debdiff for precise

** Patch added: lp1281066-kill-lockfile-precise.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1281066/+attachment/3985343/+files/lp1281066-kill-lockfile-precise.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to deja-dup in Ubuntu.
https://bugs.launchpad.net/bugs/1281066

Title:
  Needs to backport trusty's fix to handle duplicity's lockfile

Status in “deja-dup” package in Ubuntu:
  Fix Committed
Status in “deja-dup” source package in Precise:
  In Progress
Status in “deja-dup” source package in Saucy:
  In Progress

Bug description:
  SRU justification :

  Duplicity implements a new lockfile that impacts deja-dup
  functionality. A fix to remove the lockfile is already committed
  upstream and in Trusty. The SRU is to make the fix available before
  duplicity's SRU is completed

  Impact :

  Without this SRU, duplicity's SRU for LP: #1266763 cannot be completed
  or it may result in regression issues

  Fix :

  Remove the lockfile, since concurrency is handled via DBus in deja-dup

  Test Case : 
  TBD

  Regression :

  None expected, as it removes a file that is only present with newer
  versions of duplicity

  Description of the problem :

  See justification

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1281066/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1281066] Re: Needs to backport trusty's fix to handle duplicity's lockfile

2014-02-18 Thread Louis Bouchard
** Patch added: lp1281066-kill-lockfile-saucy.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1281066/+attachment/3985344/+files/lp1281066-kill-lockfile-saucy.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to deja-dup in Ubuntu.
https://bugs.launchpad.net/bugs/1281066

Title:
  Needs to backport trusty's fix to handle duplicity's lockfile

Status in “deja-dup” package in Ubuntu:
  Fix Committed
Status in “deja-dup” source package in Precise:
  In Progress
Status in “deja-dup” source package in Saucy:
  In Progress

Bug description:
  SRU justification :

  Duplicity implements a new lockfile that impacts deja-dup
  functionality. A fix to remove the lockfile is already committed
  upstream and in Trusty. The SRU is to make the fix available before
  duplicity's SRU is completed

  Impact :

  Without this SRU, duplicity's SRU for LP: #1266763 cannot be completed
  or it may result in regression issues

  Fix :

  Remove the lockfile, since concurrency is handled via DBus in deja-dup

  Test Case : 
  TBD

  Regression :

  None expected, as it removes a file that is only present with newer
  versions of duplicity

  Description of the problem :

  See justification

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1281066/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-02-17 Thread Louis Bouchard
Bug created to SRU trusty's fix to Precise and Saucy

https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1281066

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance while the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of a spurious interruption of duplicity, the
  lockfile will remain in .cache/duplicity, which can prevent future use of
  duplicity. The cache directory will have to be cleaned as outlined in the
  error message

  Original description of the problem :

  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This deletion seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely, but the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1281066] [NEW] Needs to backport trusty's fix to handle duplicity's lockfile

2014-02-17 Thread Louis Bouchard
Public bug reported:

SRU justification :

Duplicity implements a new lockfile that impacts deja-dup functionality.
A fix to remove the lockfile is already committed upstream and in Trusty.
The SRU is to make the fix available before duplicity's SRU is completed

Impact :

Without this SRU, duplicity's SRU for LP: #1266763 cannot be completed
or it may result in regression issues

Fix :

Remove the lockfile, since concurrency is handled via DBus in deja-dup

Test Case : 
TBD

Regression :

None expected, as it removes a file that is only present with newer
versions of duplicity

Description of the problem :

See justification

** Affects: deja-dup (Ubuntu)
 Importance: Medium
 Status: Confirmed

** Changed in: deja-dup (Ubuntu)
   Status: New => Confirmed

** Changed in: deja-dup (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to deja-dup in Ubuntu.
https://bugs.launchpad.net/bugs/1281066

Title:
  Needs to backport trusty's fix to handle duplicity's lockfile

Status in “deja-dup” package in Ubuntu:
  Confirmed

Bug description:
  SRU justification :

  Duplicity implements a new lockfile that impacts deja-dup
  functionality. A fix to remove the lockfile is already committed
  upstream and in Trusty. The SRU is to make the fix available before
  duplicity's SRU is completed

  Impact :

  Without this SRU, duplicity's SRU for LP: #1266763 cannot be completed
  or it may result in regression issues

  Fix :

  Remove the lockfile, since concurrency is handled via DBus in deja-dup

  Test Case : 
  TBD

  Regression :

  None expected, as it removes a file that is only present with newer
  versions of duplicity

  Description of the problem :

  See justification

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1281066/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1281066] Re: Needs to backport trusty's fix to handle duplicity's lockfile

2014-02-17 Thread Louis Bouchard
The patch for the development release has already been uploaded by
mterry and is waiting in -proposed. Marking the dev task Committed.

** Changed in: deja-dup (Ubuntu Precise)
   Status: New => In Progress

** Changed in: deja-dup (Ubuntu Precise)
   Importance: Undecided => Medium

** Changed in: deja-dup (Ubuntu Precise)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Changed in: deja-dup (Ubuntu Saucy)
   Status: New => In Progress

** Changed in: deja-dup (Ubuntu Saucy)
   Importance: Undecided => Medium

** Changed in: deja-dup (Ubuntu Saucy)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Changed in: deja-dup (Ubuntu)
   Status: Confirmed => In Progress

** Changed in: deja-dup (Ubuntu)
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to deja-dup in Ubuntu.
https://bugs.launchpad.net/bugs/1281066

Title:
  Needs to backport trusty's fix to handle duplicity's lockfile

Status in “deja-dup” package in Ubuntu:
  Fix Committed
Status in “deja-dup” source package in Precise:
  In Progress
Status in “deja-dup” source package in Saucy:
  In Progress

Bug description:
  SRU justification :

  Duplicity implements a new lockfile that impacts deja-dup
  functionality. A fix to remove the lockfile is already committed
  upstream and in Trusty. The SRU is to make the fix available before
  duplicity's SRU is completed

  Impact :

  Without this SRU, duplicity's SRU for LP: #1266763 cannot be completed
  or it may result in regression issues

  Fix :

  Remove the lockfile, since concurrency is handled via DBus in deja-dup

  Test Case : 
  TBD

  Regression :

  None expected, as it removes a file that is only present with newer
  versions of duplicity

  Description of the problem :

  See justification

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/deja-dup/+bug/1281066/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-24 Thread Louis Bouchard
** Patch added: lp1266763_s3_locking_saucy.debdiff
   
https://bugs.launchpad.net/duplicity/+bug/1266763/+attachment/3955923/+files/lp1266763_s3_locking_saucy.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance while the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of a spurious interruption of duplicity, the
  lockfile will remain in .cache/duplicity, which can prevent future use of
  duplicity. The cache directory will have to be cleaned as outlined in the
  error message

  Original description of the problem :

  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This deletion seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely, but the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-24 Thread Louis Bouchard
** Patch added: lp1266763_s3_locking_trusty.debdiff
   
https://bugs.launchpad.net/duplicity/+bug/1266763/+attachment/3955924/+files/lp1266763_s3_locking_trusty.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance while the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of a spurious interruption of duplicity, the
  lockfile will remain in .cache/duplicity, which can prevent future use of
  duplicity. The cache directory will have to be cleaned as outlined in the
  error message

  Original description of the problem :

  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This deletion seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely, but the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-24 Thread Louis Bouchard
** Patch added: lp1266763_s3_locking_raring.debdiff
   
https://bugs.launchpad.net/duplicity/+bug/1266763/+attachment/3955922/+files/lp1266763_s3_locking_raring.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance while the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of a spurious interruption of duplicity, the
  lockfile will remain in .cache/duplicity, which can prevent future use of
  duplicity. The cache directory will have to be cleaned as outlined in the
  error message

  Original description of the problem :

  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This deletion seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely, but the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-24 Thread Louis Bouchard
** Patch added: lp1266763_s3_locking_gpg_precise.debdiff
   
https://bugs.launchpad.net/duplicity/+bug/1266763/+attachment/3955921/+files/lp1266763_s3_locking_gpg_precise.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance while the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of a spurious interruption of duplicity, the
  lockfile will remain in .cache/duplicity, which can prevent future use of
  duplicity. The cache directory will have to be cleaned as outlined in the
  error message

  Original description of the problem :

  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This deletion seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely, but the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266753] Re: Boto backend removes local cache if connection cannot be made

2014-01-24 Thread Louis Bouchard
** Changed in: duplicity (Ubuntu Quantal)
   Status: New => In Progress

** Changed in: duplicity (Ubuntu Saucy)
   Status: New => In Progress

** Changed in: duplicity (Ubuntu Quantal)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Changed in: duplicity (Ubuntu Saucy)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Changed in: duplicity (Ubuntu Quantal)
   Importance: Undecided => Medium

** Changed in: duplicity (Ubuntu Saucy)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266753

Title:
  Boto backend removes local cache if connection cannot be made

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress

Bug description:
  When there is no connection to the S3 backend, the local cache files
  are deleted. To reproduce:

  1. disable the connection to S3
  2. run a collection-status (basically I run 'duply X status')

  You'll get a bunch of these:

  Deleting local 
/srv/duply-cache/duply_srv/duplicity-inc.20140106T010002Z.to.20140107T010002Z.manifest
 (not authoritative at backend).
  Deleting local 
/srv/duply-cache/duply_srv/duplicity-new-signatures.20131211T124323Z.to.20131211T124519Z.sigtar.gz
 (not authoritative at backend).

  This is fatal if you run it in a configuration using GPG and having
  only the public key for encryption as well as a separate signing key.
  Then you cannot backup any more, as the decrypted local cache has been
  deleted and the files on the S3 are encrypted.

  Probable reason:

  There is no check if the connection to the backend could be
  established

  Workaround:

  If you replace at

  http://bazaar.launchpad.net/~duplicity-
  team/duplicity/0.6-series/view/head:/duplicity/backends/_boto_single.py#L270

  the line

  return []

  with

  return None

  Then duplicity will crash instead of deleting the local files. Not the
  proper solution but at least you can do a backup when the connection
  comes back up.
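
  A stripped-down illustration of why that one-line change helps (hypothetical
  names, not the real _boto_single.py code) : an empty list returned from a
  failed listing is indistinguishable from a genuinely empty backend, so every
  cached file looks spurious, whereas None makes the caller raise an exception
  before anything is deleted :

   def list_remote(connected):
       if not connected:
           return []      # buggy path : looks like an empty backend
           # return None  # workaround : callers crash instead of deleting
       return ["duplicity-full.manifest", "duplicity-full.sigtar.gz"]

   local_cache = ["duplicity-full.manifest", "duplicity-full.sigtar.gz"]
   remote = list_remote(connected=False)
   spurious = [f for f in local_cache if f not in remote]
   # With [] : spurious covers the whole cache and it would be wiped.
   # With None : the membership test raises TypeError, so nothing is removed.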

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266753/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 946988] Re: GnuPG passphrase error after failed backup session

2014-01-24 Thread Louis Bouchard
** Changed in: duplicity (Ubuntu Precise)
   Status: New => In Progress

** Changed in: duplicity (Ubuntu Precise)
   Importance: Undecided => Medium

** Changed in: duplicity (Ubuntu Precise)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/946988

Title:
  GnuPG passphrase error after failed backup session

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress

Bug description:
  After a failed backup session, duplicity cannot resume the volume
  creation, and seems to fail with bad passphrase. It is, however, the
  same passphrase that is always used.

  The backup set thus seems to be unusable, and the only solution I've
  found is to simply create a new backup set in a new directory, losing
  incremental history.

  Full output attached as duplicity-error-incremental-log-v9.

  Also reported as Debian bug #659009, but it has gotten no attention
  for a month.

  Duplicity 0.6.17
  Python 2.7.2+
  Debian Sid
  Linux

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/946988/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 946988] Re: GnuPG passphrase error after failed backup session

2014-01-24 Thread Louis Bouchard
The backport of this fix to Precise has been submitted for SRU with
other fixes in Bug: #1266763

** Changed in: duplicity (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/946988

Title:
  GnuPG passphrase error after failed backup session

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress

Bug description:
  After a failed backup session, duplicity cannot resume the volume
  creation, and seems to fail with bad passphrase. It is, however, the
  same passphrase that is always used.

  The backup set thus seems to be unusable, and the only solution I've
  found is to simply create a new backup set in a new directory, losing
  incremental history.

  Full output attached as duplicity-error-incremental-log-v9.

  Also reported as Debian bug #659009, but it has gotten no attention
  for a month.

  Duplicity 0.6.17
  Python 2.7.2+
  Debian Sid
  Linux

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/946988/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266753] Re: Boto backend removes local cache if connection cannot be made

2014-01-24 Thread Louis Bouchard
The backport of this fix to Precise, Raring, Saucy and Trusty has been
submitted for SRU with other fixes in LP: #1266763

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266753

Title:
  Boto backend removes local cache if connection cannot be made

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress

Bug description:
  When there is no connection to the S3 backend, the local cache files
  are deleted. To reproduce:

  1. disable the connection to S3
  2. run a collection-status (basically I run 'duply X status')

  You'll get a bunch of these:

  Deleting local 
/srv/duply-cache/duply_srv/duplicity-inc.20140106T010002Z.to.20140107T010002Z.manifest
 (not authoritative at backend).
  Deleting local 
/srv/duply-cache/duply_srv/duplicity-new-signatures.20131211T124323Z.to.20131211T124519Z.sigtar.gz
 (not authoritative at backend).

  This is fatal if you run it in a configuration using GPG and having
  only the public key for encryption as well as a separate signing key.
  Then you cannot backup any more, as the decrypted local cache has been
  deleted and the files on the S3 are encrypted.

  Probable reason:

  There is no check if the connection to the backend could be
  established

  Workaround:

  If you replace at

  http://bazaar.launchpad.net/~duplicity-
  team/duplicity/0.6-series/view/head:/duplicity/backends/_boto_single.py#L270

  the line

  return []

  with

  return None

  Then duplicity will crash instead of deleting the local files. Not the
  proper solution but at least you can do a backup when the connection
  comes back up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266753/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-22 Thread Louis Bouchard
** Changed in: duplicity
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance while the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of a spurious interruption of duplicity, the
  lockfile will remain in .cache/duplicity, which can prevent future use of
  duplicity. The cache directory will have to be cleaned as outlined in the
  error message

  Original description of the problem :

  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This deletion seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely, but the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266753] Re: Boto backend removes local cache if connection cannot be made

2014-01-22 Thread Louis Bouchard
** Branch linked: lp:~louis-bouchard/ubuntu/precise/duplicity/fix-s3-and-add-locking

** Branch linked: lp:~louis-bouchard/ubuntu/raring/duplicity/fix-s3-and-add-locking

** Branch linked: lp:~louis-bouchard/ubuntu/saucy/duplicity/fix-s3-and-add-locking

** Branch linked: lp:~louis-bouchard/ubuntu/trusty/duplicity/fix-s3-and-add-locking

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266753

Title:
  Boto backend removes local cache if connection cannot be made

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress

Bug description:
  When there is no connection to the S3 backend, the local cache files
  are deleted. To reproduce:

  1. disable the connection to S3
  2. run a collection-status (basically I run 'duply X status')

  You'll get a bunch of these:

  Deleting local 
/srv/duply-cache/duply_srv/duplicity-inc.20140106T010002Z.to.20140107T010002Z.manifest
 (not authoritative at backend).
  Deleting local 
/srv/duply-cache/duply_srv/duplicity-new-signatures.20131211T124323Z.to.20131211T124519Z.sigtar.gz
 (not authoritative at backend).

  This is fatal if you run it in a configuration using GPG and having
  only the public key for encryption as well as a separate signing key.
  Then you cannot backup any more, as the decrypted local cache has been
  deleted and the files on the S3 are encrypted.

  Probable reason:

  There is no check if the connection to the backend could be
  established

  Workaround:

  If you replace at

  http://bazaar.launchpad.net/~duplicity-
  team/duplicity/0.6-series/view/head:/duplicity/backends/_boto_single.py#L270

  the line

  return []

  with

  return None

  Then duplicity will crash instead of deleting the local files. Not the
  proper solution but at least you can do a backup when the connection
  comes back up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266753/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-21 Thread Louis Bouchard
Unsubscribing SRU Team and sponsors for now : there is ongoing discussion
with upstream on the validity of the --allow-concurrency option, so
please WAIT AND DO NOT MERGE for now

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance while the lockfile is present

  Test Case : 
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory

  To disable the locking mechanism, the --allow-concurrency option can be used :
  $ sudo duplicity --allow-concurrency collection-status file:///tmp/backup 

  Local and Remote metadata are synchronized, no sync needed.
  Last full backup date: Mon Jan 20 17:11:39 2014
  Collection Status
  -
  Connecting with backend: LocalBackend
  Archive dir: /home/ubuntu/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75

  Found 0 secondary backup chains.

  Found primary backup chain with matching signature chain:
  -
  Chain start time: Mon Jan 20 17:11:39 2014
  Chain end time: Mon Jan 20 17:11:39 2014
  Number of contained backup sets: 1
  Total number of contained volumes: 3
   Type of backup set:Time:  Num volumes:
  Full Mon Jan 20 17:11:39 2014 3
  -
  No orphaned or incomplete backup sets found.

  Regression : In the case of a spurious interruption of duplicity, the
  lockfile will remain in .cache/duplicity, which can prevent future use of
  duplicity. The cache directory will have to be cleaned or the
  --allow-concurrency option will be needed

  Original description of the problem :

  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This deletion seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely, but the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-21 Thread Louis Bouchard
The result of our email discussion is that the --allow-concurrency
option needs to be removed but the locking mechanism stays. A note will
be added to the error message to indicate the lockfile that needs to be
removed to override the locking if it has been left behind from a failed
session.

I switched back the upstream status to In Progress to highlight the fact
that the solution that was merged is now incomplete. I'm starting to work
on a new patch that should come in shortly.

** Changed in: duplicity
   Status: Fix Committed => In Progress

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  In Progress
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance while the lockfile is present

  Test Case : 
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory

  To disable the locking mechanism, the --allow-concurrency option can be used :
  $ sudo duplicity --allow-concurrency collection-status file:///tmp/backup 

  Local and Remote metadata are synchronized, no sync needed.
  Last full backup date: Mon Jan 20 17:11:39 2014
  Collection Status
  -
  Connecting with backend: LocalBackend
  Archive dir: /home/ubuntu/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75

  Found 0 secondary backup chains.

  Found primary backup chain with matching signature chain:
  -
  Chain start time: Mon Jan 20 17:11:39 2014
  Chain end time: Mon Jan 20 17:11:39 2014
  Number of contained backup sets: 1
  Total number of contained volumes: 3
   Type of backup set:Time:  Num volumes:
  Full Mon Jan 20 17:11:39 2014 3
  -
  No orphaned or incomplete backup sets found.

  Regression : In the case of a spurious interruption of duplicity, the
  lockfile will remain in .cache/duplicity, which can prevent future use of
  duplicity. The cache directory will have to be cleaned or the
  --allow-concurrency option will be needed

  Original description of the problem :

  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This deletion seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already be present remotely, but the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-21 Thread Louis Bouchard
** Description changed:

  SRU justification : Race condition exist when two instances of duplicity run 
in the same
  cache directory (.cache/duplicity)
  
  Impact : Potential corruption of the local cache
  
  Fix : Add a lockfile in the local cache  prevent execution of a second 
instance in the
  case of the presence of the lockfile
  
- Test Case : 
+ Test Case :
  1) Run one instance of duplicity :
-  $ sudo mkdir /tmp/backup
-  $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup
+  $ sudo mkdir /tmp/backup
+  $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup
  
  While this command is running execute the following in a separate console :
-  $ sudo duplicity collection-status file:///tmp/backup
+  $ sudo duplicity collection-status file:///tmp/backup
  
  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
- 
- To disable the locking mechanism, the --allow-concurrency option can be used :
- $ sudo duplicity --allow-concurrency collection-status file:///tmp/backup 

- Local and Remote metadata are synchronized, no sync needed.
- Last full backup date: Mon Jan 20 17:11:39 2014
- Collection Status
- -
- Connecting with backend: LocalBackend
- Archive dir: /home/ubuntu/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75
- 
- Found 0 secondary backup chains.
- 
- Found primary backup chain with matching signature chain:
- -
- Chain start time: Mon Jan 20 17:11:39 2014
- Chain end time: Mon Jan 20 17:11:39 2014
- Number of contained backup sets: 1
- Total number of contained volumes: 3
-  Type of backup set:Time:  Num volumes:
- Full Mon Jan 20 17:11:39 2014 3
- -
- No orphaned or incomplete backup sets found.
+ If you are sure that this is the  only instance running you may delete
+ the following lockfile and run the command again :
+/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile
  
  Regression : In the case of spurrious interruption of duplicity, the lockfile
  will remain in .cache/duplicity which can prevent future use of duplicity. 
The cache
- directory will have to be cleaned or the --allow-concurrency option will be 
needed
+ directory will have to be cleaned as outlined in the error message
  
  Original description of the problem :
  
  When runnining duply X status while running duply X backup (sorry, I
  don't know which duplicity commands are created by duply) due to a race-
  condition the code of 'sync_archive' might happend to append newly
  created meta-data files to 'local_spurious' and subsequently delete
  them. This delete seems to have been the reason that triggered bug
  1216921.
  
  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and remote
  files independently over a larger time span. This means that a local
  file might already been remote but the status command did not see it a
  few seconds ago.

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  In Progress
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance while the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
 /home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile

  Regression : In the case of 

[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-21 Thread Louis Bouchard
** Description changed:

  SRU justification : Race condition exist when two instances of duplicity run 
in the same
  cache directory (.cache/duplicity)
  
  Impact : Potential corruption of the local cache
  
  Fix : Add a lockfile in the local cache  prevent execution of a second 
instance in the
  case of the presence of the lockfile
  
  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup
  
  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup
  
  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the  only instance running you may delete
  the following lockfile and run the command again :
-/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile
+    
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock
  
  Regression : In the case of spurrious interruption of duplicity, the lockfile
  will remain in .cache/duplicity which can prevent future use of duplicity. 
The cache
  directory will have to be cleaned as outlined in the error message
  
  Original description of the problem :
  
  When runnining duply X status while running duply X backup (sorry, I
  don't know which duplicity commands are created by duply) due to a race-
  condition the code of 'sync_archive' might happend to append newly
  created meta-data files to 'local_spurious' and subsequently delete
  them. This delete seems to have been the reason that triggered bug
  1216921.
  
  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and remote
  files independently over a larger time span. This means that a local
  file might already been remote but the status command did not see it a
  few seconds ago.

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  In Progress
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache and prevent execution of a second
  instance while the lockfile is present

  Test Case :
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory
  If you are sure that this is the only instance running you may delete
  the following lockfile and run the command again :
     
/home/ubuntu/.cache/duplicity/3fe07cc0f71075f95f411fb55ec60120/lockfile.lock

  Regression : In the case of a spurious interruption of duplicity, the lockfile
  will remain in .cache/duplicity, which can prevent future use of duplicity. The
  cache directory will have to be cleaned as outlined in the error message

  Original description of the problem :

  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This delete seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already exist remotely even though the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266753] Re: Boto backend removes local cache if connection cannot be made

2014-01-20 Thread Louis Bouchard
** Changed in: duplicity (Ubuntu Precise)
   Status: Triaged => In Progress

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266753

Title:
  Boto backend removes local cache if connection cannot be made

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  Triaged
Status in “duplicity” source package in Precise:
  In Progress

Bug description:
  When there is no connection to the S3 backend, the local cache files
  are deleted. To reproduce:

  1. disable the connection to S3
  2. run a collection-status (basically I run 'duply X status')

  You'll get a bunch of these:

  Deleting local 
/srv/duply-cache/duply_srv/duplicity-inc.20140106T010002Z.to.20140107T010002Z.manifest
 (not authoritative at backend).
  Deleting local 
/srv/duply-cache/duply_srv/duplicity-new-signatures.20131211T124323Z.to.20131211T124519Z.sigtar.gz
 (not authoritative at backend).

  This is fatal if you run it in a configuration using GPG and having
  only the public key for encryption as well as a separate signing key.
  Then you cannot back up any more, as the decrypted local cache has been
  deleted and the files on S3 are encrypted.

  Probable reason:

  There is no check whether the connection to the backend could be
  established

  Workaround:

  If you replace at

  http://bazaar.launchpad.net/~duplicity-
  team/duplicity/0.6-series/view/head:/duplicity/backends/_boto_single.py#L270

  the line

  return []

  with

  return None

  Then duplicity will crash instead of deleting the local files. Not the
  proper solution but at least you can do a backup when the connection
  comes back up.
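
  As a minimal sketch of what the reporter is asking for, the backend could
  fail loudly when it never connected instead of handing back an empty
  listing. The class and exception names below are illustrative assumptions
  and do not claim to match duplicity's actual boto backend code:

  class BackendException(Exception):
      pass

  class S3ListingSketch(object):
      def __init__(self, bucket):
          # bucket stays None when the connection could not be established
          self.bucket = bucket

      def list(self):
          if self.bucket is None:
              # Raising here means the caller cannot mistake "backend
              # unreachable" for "backend is empty" and purge the cache.
              raise BackendException("no connection to the S3 backend")
          return [key.name for key in self.bucket.list()]

  # S3ListingSketch(None).list() raises instead of returning [], so the
  # sync logic never sees a misleading empty remote listing.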

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266753/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-20 Thread Louis Bouchard
** Description changed:

+ SRU justification : A race condition exists when two instances of duplicity
+ run in the same cache directory (.cache/duplicity)
+ 
+ Impact : Potential corruption of the local cache
+ 
+ Fix : Add a lockfile in the local cache to prevent execution of a second
+ instance while the lockfile is present
+ 
+ Test Case : 
+ 1) Run one instance of duplicity :
+  $ sudo mkdir /tmp/backup
+  $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup
+ 
+ While this command is running execute the following in a separate console :
+  $ sudo duplicity collection-status file:///tmp/backup
+ 
+ With the new locking mechanism you will see the following :
+ $ sudo duplicity collection-status file:///tmp/backup
+ Another instance is already running with this archive directory
+ 
+ To disable the locking mechanism, the --allow-concurrency option can be used :
+ $ sudo duplicity --allow-concurrency collection-status file:///tmp/backup 

+ Local and Remote metadata are synchronized, no sync needed.
+ Last full backup date: Mon Jan 20 17:11:39 2014
+ Collection Status
+ -------------------------
+ Connecting with backend: LocalBackend
+ Archive dir: /home/ubuntu/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75
+ 
+ Found 0 secondary backup chains.
+ 
+ Found primary backup chain with matching signature chain:
+ -------------------------
+ Chain start time: Mon Jan 20 17:11:39 2014
+ Chain end time: Mon Jan 20 17:11:39 2014
+ Number of contained backup sets: 1
+ Total number of contained volumes: 3
+  Type of backup set:                            Time:      Num volumes:
+                 Full         Mon Jan 20 17:11:39 2014                 3
+ -------------------------
+ No orphaned or incomplete backup sets found.
+ 
+ Regression : In the case of a spurious interruption of duplicity, the lockfile
+ will remain in .cache/duplicity, which can prevent future use of duplicity.
+ The cache directory will have to be cleaned or the --allow-concurrency option
+ will be needed
+ 
+ Original description of the problem :
+ 
  When running 'duply X status' while running 'duply X backup' (sorry, I
  don't know which duplicity commands are created by duply), due to a race
  condition the code of 'sync_archive' might happen to append newly
  created meta-data files to 'local_spurious' and subsequently delete
  them. This delete seems to have been the reason that triggered bug
  1216921.
  
  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and remote
  files independently over a larger time span. This means that a local
  file might already exist remotely even though the status command did not
  see it a few seconds ago.
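
  As a toy illustration of that race (purely hypothetical code, with
  made-up names that only mimic the 'local_spurious' idea in
  sync_archive):

  def find_local_spurious(local_files, remote_snapshot):
      # A local metadata file is treated as spurious, and deleted, when the
      # remote listing does not contain it.
      return sorted(f for f in local_files if f not in remote_snapshot)

  # The status command takes its snapshot of the remote listing first ...
  remote_snapshot = {"duplicity-full.T1.manifest"}

  # ... while the concurrent backup keeps writing and uploading new
  # metadata, so the snapshot is already stale by the time it is used.
  local_now = {"duplicity-full.T1.manifest", "duplicity-inc.T2.manifest"}

  print(find_local_spurious(local_now, remote_snapshot))
  # ['duplicity-inc.T2.manifest']  -- a perfectly valid file gets flagged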

** Changed in: duplicity (Ubuntu Quantal)
   Status: Triaged => In Progress

** Changed in: duplicity (Ubuntu Raring)
   Status: Triaged => In Progress

** Changed in: duplicity (Ubuntu Saucy)
   Status: Triaged => In Progress

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress
Status in “duplicity” source package in Saucy:
  In Progress
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  SRU justification : A race condition exists when two instances of duplicity
  run in the same cache directory (.cache/duplicity)

  Impact : Potential corruption of the local cache

  Fix : Add a lockfile in the local cache to prevent execution of a second
  instance while the lockfile is present

  Test Case : 
  1) Run one instance of duplicity :
   $ sudo mkdir /tmp/backup
   $ sudo duplicity --exclude=/proc --exclude=/sys --exclude=/tmp / 
file:///tmp/backup

  While this command is running execute the following in a separate console :
   $ sudo duplicity collection-status file:///tmp/backup

  With the new locking mechanism you will see the following :
  $ sudo duplicity collection-status file:///tmp/backup
  Another instance is already running with this archive directory

  To disable the locking mechanism, the --allow-concurrency option can be used :
  $ sudo duplicity --allow-concurrency collection-status file:///tmp/backup 

  Local and Remote metadata are synchronized, no sync needed.
  Last full backup date: Mon Jan 20 17:11:39 2014
  Collection Status
  -
  Connecting with backend: LocalBackend
  Archive dir: 

[Desktop-packages] [Bug 1266753] Re: Boto backend removes local cache if connection cannot be made

2014-01-20 Thread Louis Bouchard
** Changed in: duplicity (Ubuntu)
   Status: Triaged => In Progress

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266753

Title:
  Boto backend removes local cache if connection cannot be made

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress

Bug description:
  When there is no connection to the S3 backend, the local cache files
  are deleted. To reproduce:

  1. disable the connection to S3
  2. run a collection-status (basically I run 'duply X status')

  You'll get a bunch of these:

  Deleting local 
/srv/duply-cache/duply_srv/duplicity-inc.20140106T010002Z.to.20140107T010002Z.manifest
 (not authoritative at backend).
  Deleting local 
/srv/duply-cache/duply_srv/duplicity-new-signatures.20131211T124323Z.to.20131211T124519Z.sigtar.gz
 (not authoritative at backend).

  This is fatal if you run it in a configuration using GPG and having
  only the public key for encryption as well as a separate signing key.
  Then you cannot backup any more, as the decrypted local cache has been
  deleted and the files on the S3 are encrypted.

  Probable reason:

  There is no check if the connection to the backend could be
  established

  Workaround:

  If you replace at

  http://bazaar.launchpad.net/~duplicity-
  team/duplicity/0.6-series/view/head:/duplicity/backends/_boto_single.py#L270

  the line

  return []

  with

  return None

  Then duplicity will crash instead of deleting the local files. Not the
  proper solution but at least you can do a backup when the connection
  comes back up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266753/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-18 Thread Louis Bouchard
** Changed in: duplicity (Ubuntu Quantal)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Changed in: duplicity (Ubuntu Raring)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Changed in: duplicity (Ubuntu Saucy)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Changed in: duplicity (Ubuntu Quantal)
   Status: New => Triaged

** Changed in: duplicity (Ubuntu Raring)
   Status: New => Triaged

** Changed in: duplicity (Ubuntu Saucy)
   Status: New => Triaged

** Changed in: duplicity (Ubuntu Precise)
   Importance: Undecided => Medium

** Changed in: duplicity (Ubuntu Quantal)
   Importance: Undecided => Medium

** Changed in: duplicity (Ubuntu Raring)
   Importance: Undecided => Medium

** Changed in: duplicity (Ubuntu Saucy)
   Importance: Undecided => Medium

** Changed in: duplicity (Ubuntu Trusty)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  Triaged
Status in “duplicity” source package in Raring:
  Triaged
Status in “duplicity” source package in Saucy:
  Triaged
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This delete seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already exist remotely even though the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266763] Re: Race condition between status and backup

2014-01-17 Thread Louis Bouchard
** Also affects: duplicity (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: duplicity (Ubuntu)
   Status: New => In Progress

** Changed in: duplicity (Ubuntu)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266763

Title:
  Race condition between status and backup

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  In Progress
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  New
Status in “duplicity” source package in Raring:
  New
Status in “duplicity” source package in Saucy:
  New
Status in “duplicity” source package in Trusty:
  In Progress

Bug description:
  When running 'duply X status' while running 'duply X backup' (sorry,
  I don't know which duplicity commands are created by duply), due to a
  race condition the code of 'sync_archive' might happen to append
  newly created meta-data files to 'local_spurious' and subsequently
  delete them. This delete seems to have been the reason that triggered
  bug 1216921.

  The race condition is that the backup command constantly creates meta-
  data files while the status command queries the list of local and
  remote files independently over a larger time span. This means that a
  local file might already exist remotely even though the status command
  did not see it a few seconds ago.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266763/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1266753] Re: Boto backend removes local cache if connection cannot be made

2014-01-15 Thread Louis Bouchard
I will do the required SRU to precise

** Also affects: duplicity (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: duplicity (Ubuntu)
   Status: New => Triaged

** Changed in: duplicity (Ubuntu)
   Importance: Undecided => Medium

** Changed in: duplicity (Ubuntu)
 Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1266753

Title:
  Boto backend removes local cache if connection cannot be made

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  Triaged

Bug description:
  When there is no connection to the S3 backend, the local cache files
  are deleted. To reproduce:

  1. disable the connection to S3
  2. run a collection-status (basically I run 'duply X status')

  You'll get a bunch of these:

  Deleting local 
/srv/duply-cache/duply_srv/duplicity-inc.20140106T010002Z.to.20140107T010002Z.manifest
 (not authoritative at backend).
  Deleting local 
/srv/duply-cache/duply_srv/duplicity-new-signatures.20131211T124323Z.to.20131211T124519Z.sigtar.gz
 (not authoritative at backend).

  This is fatal if you run it in a configuration using GPG and having
  only the public key for encryption as well as a separate signing key.
  Then you cannot backup any more, as the decrypted local cache has been
  deleted and the files on the S3 are encrypted.

  Probable reason:

  There is no check if the connection to the backend could be
  established

  Workaround:

  If you replace at

  http://bazaar.launchpad.net/~duplicity-
  team/duplicity/0.6-series/view/head:/duplicity/backends/_boto_single.py#L270

  the line

  return []

  with

  return None

  Then duplicity will crash instead of deleting the local files. Not the
  proper solution but at least you can do a backup when the connection
  comes back up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1266753/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1216921] Re: util.ignore_missing() does not work

2013-11-12 Thread Louis Bouchard
** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1216921

Title:
  util.ignore_missing() does not work

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  Fix Committed
Status in “duplicity” source package in Quantal:
  Fix Committed
Status in “duplicity” source package in Raring:
  Fix Committed

Bug description:
  SRU justification :

  Without this fix, there is a potential for a crash during execution of
  duplicity

  Impact :

  Renders duplicity potentially unusable due to spurious crashes

  Fix :

  Backport upstream fix for this problem merged in 
  https://code.launchpad.net/~mterry/duplicity/ignore-missing

  Test Case :

  A session must be run within the python debugger to systematically
  reproduce the context.

  1) Run a duplicity session as outlined in comment #8 inside the debugger
  2) break at duplicity/path:568 instead of 567
  3) When the program breaks, manually remove the file that ends in 
...manifest.gpg
  4) continue execution

  Without the fix, duplicity will crash with the outlined backtrace.
  With the fix, duplicity will terminate normally.

  Regression :

  Minimal, as the modification changes exception handling only for a
  function used in just two places to delete files/directories (path.py and
  tempdir.py)

  Description of the problem :

  Duplicity can potentially crash while attempting to delete a file that
  no longer exists.

  Original description

  duplicity version: 0.6.18-0ubuntu3
  python version: 2.7.3
  Distro: ubuntu precise 12.04
  target file system: ftp

  I happen to encounter failed backups with tracebacks like this:

  Traceback (most recent call last):
    File "/usr/bin/duplicity", line 1403, in <module>
      with_tempdir(main)
    File "/usr/bin/duplicity", line 1396, in with_tempdir
      fn()
    File "/usr/bin/duplicity", line 1366, in main
      full_backup(col_stats)
    File "/usr/bin/duplicity", line 504, in full_backup
      sig_outfp.to_remote()
    File "/usr/lib/python2.7/dist-packages/duplicity/dup_temp.py", line 184, in to_remote
      globals.backend.move(tgt) #@UndefinedVariable
    File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line 364, in move
      source_path.delete()
    File "/usr/lib/python2.7/dist-packages/duplicity/path.py", line 567, in delete
      util.ignore_missing(os.unlink, self.name)
    File "/usr/lib/python2.7/dist-packages/duplicity/util.py", line 116, in ignore_missing
      fn(filename)
  OSError: [Errno 2] No such file or directory: '/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'

  Now running test code like

  #!/usr/bin/env python
  #
  # Do what util.ignore_missing(os.unlink, self.name) tries to do and
  # fails to achieve

  import os
  import sys
  import errno

  if __name__ == "__main__":
      try:
          os.unlink("/tmp/doesnotexist")
      except Exception:
          # type is a reserved keyword, replaced with mytype
          mytype, value, tb = sys.exc_info()
          print "-" * 78
          print "mytype: ", mytype
          print "value: ", value
          print "value[0]:", value[0]
          print "errno.ENOENT: ", errno.ENOENT
          print "isinstance(mytype, OSError): ", isinstance(mytype, OSError)
          print "-" * 78
          if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
              print "Gotcha!"
              pass
          print "Ooops, missed it ..."
          raise

  will always raise the exception and not ignore it, because
  isinstance(mytype, OSError) is always False.

  What I expect ignore_missing to look like is:

  def ignore_missing(fn, filename):
      """
      Execute fn on filename.  Ignore ENOENT errors, otherwise raise exception.

      @param fn: callable
      @param filename: string
      """
      try:
          fn(filename)
      except OSError, ex:
          if ex.errno == errno.ENOENT:
              pass
          else:
              raise
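
  The test script above already points at the root cause: sys.exc_info()[0]
  is the exception class, not an instance, so isinstance(mytype, OSError)
  can never be true; issubclass() is the check that matches, and catching
  the exception instance directly avoids the problem entirely. A small,
  self-contained sketch of that (illustrative only, not claiming to be the
  exact upstream patch):

  import errno
  import os
  import sys

  def ignore_missing_sketch(fn, filename):
      # Inspect the exception *instance*: ignore ENOENT, re-raise the rest.
      try:
          fn(filename)
      except OSError as e:
          if e.errno != errno.ENOENT:
              raise

  try:
      os.unlink("/tmp/doesnotexist")
  except Exception:
      exc_type, exc_value, _ = sys.exc_info()
      print(isinstance(exc_type, OSError))   # False: exc_type is a class
      print(issubclass(exc_type, OSError))   # True: the intended test

  ignore_missing_sketch(os.unlink, "/tmp/doesnotexist")  # silently ignored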

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1216921/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1216921] Re: util.ignore_missing() does not work

2013-10-25 Thread Louis Bouchard
Updated quantal debdiff with typo fixed

** Patch added: lp1216921_ignoremissing_quantal.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/1216921/+attachment/3891187/+files/lp1216921_ignoremissing_quantal.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1216921

Title:
  util.ignore_missing() does not work

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress

Bug description:
  SRU justification :

  Without this fix, there is a potential for crash during execution of
  duplicity

  Impact :

  Renders duplicity potentially unusable dues to spurious crashes

  Fix :

  Backport upstream fix for this problem merged in 
  https://code.launchpad.net/~mterry/duplicity/ignore-missing

  Test Case :

  A session must be run within the python debugger to systematically
  reproduce the context.

  1) Run a duplicity session as outlined in comment #8 inside the debugger
  2) break at duplicity/path:568 instead of 567
  3) When the program breaks, manually remove the file that ends in 
...manifest.gpg
  4) continue execution

  Without the fix, duplicity will crash with the outlined backtrace.
  With the fix, duplicity will terminate normally.

  Regression :

  Minimal as the modification changes exception handling only for a
  function only used twice to delete files/directories (path.py 
  tempdir.py)

  Description of the problem :

  Duplicity can potentially crash while attempting to delete a file that
  no longer exists.

  Original description

  duplicity version: 0.6.18-0ubuntu3
  python version: 2.7.3
  Distro: ubuntu precise 12.04
  target file system: ftp

  I happen to encounter failed backups with tracebacks like this:

  Traceback (most recent call last):
    File /usr/bin/duplicity, line 1403, in module
   with_tempdir(main)
    File /usr/bin/duplicity, line 1396, in with_tempdir
   fn()
     File /usr/bin/duplicity, line 1366, in main
   full_backup(col_stats)
     File /usr/bin/duplicity, line 504, in full_backup
   sig_outfp.to_remote()
     File /usr/lib/python2.7/dist-packages/duplicity/dup_temp.py, line 184, 
in to_remote
   globals.backend.move(tgt) #@UndefinedVariable
     File /usr/lib/python2.7/dist-packages/duplicity/backend.py, line 364, in 
move
   source_path.delete()
     File /usr/lib/python2.7/dist-packages/duplicity/path.py, line 567, in 
delete
   util.ignore_missing(os.unlink, self.name)
     File /usr/lib/python2.7/dist-packages/duplicity/util.py, line 116, in 
ignore_missing
   fn(filename)
   OSError: [Errno 2] No such file or directory: 
'/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'

  Now running test code like

  #!/usr/bin/env python
  #
  # Do what util.ignore_missing(os.unlink, self.name) tries to do and
  # fails to achieve

  import os
  import sys
  import errno

  if __name__ == __main__:
  try:
  os.unlink(/tmp/doesnotexist)
  except Exception:
  # type is a reserved keyword, replaced with mytype
  mytype, value, tb = sys.exc_info()
  print - * 78
  print mytype: , mytype
  print value: , value
  print value[0]:, value[0]
  print errno.ENOENT: , errno.ENOENT
  print isinstance(mytype, OSError): , isinstance(mytype, 
OSError)
  print - * 78
  if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
  print Gotcha!
  pass
  print Ooops, missed it ...
  raise

  will always raise the exception and not ignore it, because
  isinstance(mytype, OSError) is always False.

  What I expect ignore_missing to look like is:

  def ignore_missing(fn, filename):
  
  Execute fn on filename.  Ignore ENOENT errors, otherwise raise exception.

  @param fn: callable
  @param filename: string
  
  try:
  fn(filename)
  except OSError, ex:
  if ex.errno == errno.ENOENT:
      pass
      else:
      raise
      else:
      raise

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1216921/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1216921] Re: util.ignore_missing() does not work

2013-10-25 Thread Louis Bouchard
Updated raring debdiff with typo fixed

** Patch added: lp1216921_ignoremissing_raring.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/1216921/+attachment/3891188/+files/lp1216921_ignoremissing_raring.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1216921

Title:
  util.ignore_missing() does not work

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress

Bug description:
  SRU justification :

  Without this fix, there is a potential for crash during execution of
  duplicity

  Impact :

  Renders duplicity potentially unusable dues to spurious crashes

  Fix :

  Backport upstream fix for this problem merged in 
  https://code.launchpad.net/~mterry/duplicity/ignore-missing

  Test Case :

  A session must be run within the python debugger to systematically
  reproduce the context.

  1) Run a duplicity session as outlined in comment #8 inside the debugger
  2) break at duplicity/path:568 instead of 567
  3) When the program breaks, manually remove the file that ends in 
...manifest.gpg
  4) continue execution

  Without the fix, duplicity will crash with the outlined backtrace.
  With the fix, duplicity will terminate normally.

  Regression :

  Minimal as the modification changes exception handling only for a
  function only used twice to delete files/directories (path.py 
  tempdir.py)

  Description of the problem :

  Duplicity can potentially crash while attempting to delete a file that
  no longer exists.

  Original description

  duplicity version: 0.6.18-0ubuntu3
  python version: 2.7.3
  Distro: ubuntu precise 12.04
  target file system: ftp

  I happen to encounter failed backups with tracebacks like this:

  Traceback (most recent call last):
    File /usr/bin/duplicity, line 1403, in module
   with_tempdir(main)
    File /usr/bin/duplicity, line 1396, in with_tempdir
   fn()
     File /usr/bin/duplicity, line 1366, in main
   full_backup(col_stats)
     File /usr/bin/duplicity, line 504, in full_backup
   sig_outfp.to_remote()
     File /usr/lib/python2.7/dist-packages/duplicity/dup_temp.py, line 184, 
in to_remote
   globals.backend.move(tgt) #@UndefinedVariable
     File /usr/lib/python2.7/dist-packages/duplicity/backend.py, line 364, in 
move
   source_path.delete()
     File /usr/lib/python2.7/dist-packages/duplicity/path.py, line 567, in 
delete
   util.ignore_missing(os.unlink, self.name)
     File /usr/lib/python2.7/dist-packages/duplicity/util.py, line 116, in 
ignore_missing
   fn(filename)
   OSError: [Errno 2] No such file or directory: 
'/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'

  Now running test code like

  #!/usr/bin/env python
  #
  # Do what util.ignore_missing(os.unlink, self.name) tries to do and
  # fails to achieve

  import os
  import sys
  import errno

  if __name__ == __main__:
  try:
  os.unlink(/tmp/doesnotexist)
  except Exception:
  # type is a reserved keyword, replaced with mytype
  mytype, value, tb = sys.exc_info()
  print - * 78
  print mytype: , mytype
  print value: , value
  print value[0]:, value[0]
  print errno.ENOENT: , errno.ENOENT
  print isinstance(mytype, OSError): , isinstance(mytype, 
OSError)
  print - * 78
  if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
  print Gotcha!
  pass
  print Ooops, missed it ...
  raise

  will always raise the exception and not ignore it, because
  isinstance(mytype, OSError) is always False.

  What I expect ignore_missing to look like is:

  def ignore_missing(fn, filename):
  
  Execute fn on filename.  Ignore ENOENT errors, otherwise raise exception.

  @param fn: callable
  @param filename: string
  
  try:
  fn(filename)
  except OSError, ex:
  if ex.errno == errno.ENOENT:
      pass
      else:
      raise
      else:
      raise

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1216921/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1216921] Re: util.ignore_missing() does not work

2013-10-25 Thread Louis Bouchard
Updated precise debdiff with typo fixed

** Patch removed: lp1216921_ignoremissing_precise.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/1216921/+attachment/3838074/+files/lp1216921_ignoremissing_precise.debdiff

** Patch removed: lp1216921_ignoremissing_quantal.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/1216921/+attachment/3838075/+files/lp1216921_ignoremissing_quantal.debdiff

** Patch removed: lp1216921_ignoremissing_raring.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/1216921/+attachment/3838082/+files/lp1216921_ignoremissing_raring.debdiff

** Patch added: lp1216921_ignoremissing_precise.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/1216921/+attachment/3891186/+files/lp1216921_ignoremissing_precise.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1216921

Title:
  util.ignore_missing() does not work

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress

Bug description:
  SRU justification :

  Without this fix, there is a potential for crash during execution of
  duplicity

  Impact :

  Renders duplicity potentially unusable dues to spurious crashes

  Fix :

  Backport upstream fix for this problem merged in 
  https://code.launchpad.net/~mterry/duplicity/ignore-missing

  Test Case :

  A session must be run within the python debugger to systematically
  reproduce the context.

  1) Run a duplicity session as outlined in comment #8 inside the debugger
  2) break at duplicity/path:568 instead of 567
  3) When the program breaks, manually remove the file that ends in 
...manifest.gpg
  4) continue execution

  Without the fix, duplicity will crash with the outlined backtrace.
  With the fix, duplicity will terminate normally.

  Regression :

  Minimal as the modification changes exception handling only for a
  function only used twice to delete files/directories (path.py 
  tempdir.py)

  Description of the problem :

  Duplicity can potentially crash while attempting to delete a file that
  no longer exists.

  Original description

  duplicity version: 0.6.18-0ubuntu3
  python version: 2.7.3
  Distro: ubuntu precise 12.04
  target file system: ftp

  I happen to encounter failed backups with tracebacks like this:

  Traceback (most recent call last):
    File /usr/bin/duplicity, line 1403, in module
   with_tempdir(main)
    File /usr/bin/duplicity, line 1396, in with_tempdir
   fn()
     File /usr/bin/duplicity, line 1366, in main
   full_backup(col_stats)
     File /usr/bin/duplicity, line 504, in full_backup
   sig_outfp.to_remote()
     File /usr/lib/python2.7/dist-packages/duplicity/dup_temp.py, line 184, 
in to_remote
   globals.backend.move(tgt) #@UndefinedVariable
     File /usr/lib/python2.7/dist-packages/duplicity/backend.py, line 364, in 
move
   source_path.delete()
     File /usr/lib/python2.7/dist-packages/duplicity/path.py, line 567, in 
delete
   util.ignore_missing(os.unlink, self.name)
     File /usr/lib/python2.7/dist-packages/duplicity/util.py, line 116, in 
ignore_missing
   fn(filename)
   OSError: [Errno 2] No such file or directory: 
'/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'

  Now running test code like

  #!/usr/bin/env python
  #
  # Do what util.ignore_missing(os.unlink, self.name) tries to do and
  # fails to achieve

  import os
  import sys
  import errno

  if __name__ == __main__:
  try:
  os.unlink(/tmp/doesnotexist)
  except Exception:
  # type is a reserved keyword, replaced with mytype
  mytype, value, tb = sys.exc_info()
  print - * 78
  print mytype: , mytype
  print value: , value
  print value[0]:, value[0]
  print errno.ENOENT: , errno.ENOENT
  print isinstance(mytype, OSError): , isinstance(mytype, 
OSError)
  print - * 78
  if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
  print Gotcha!
  pass
  print Ooops, missed it ...
  raise

  will always raise the exception and not ignore it, because
  isinstance(mytype, OSError) is always False.

  What I expect ignore_missing to look like is:

  def ignore_missing(fn, filename):
  
  Execute fn on filename.  Ignore ENOENT errors, otherwise raise exception.

  @param fn: callable
  @param filename: string
  
  try:
  fn(filename)
  except OSError, 

[Desktop-packages] [Bug 1216921] Re: util.ignore_missing() does not work

2013-09-24 Thread Louis Bouchard
debdiff for precise

** Description changed:

- duplicity version: 0.6.18-0ubuntu3 
+ SRU justification :
+ 
+ Without this fix, there is a potential for crash during execution of
+ duplicity
+ 
+ Impact :
+ 
+ Renders duplicity potentially unusable dues to spurious crashes
+ 
+ Fix :
+ 
+ Backport upstream fix for this problem merged in 
+ https://code.launchpad.net/~mterry/duplicity/ignore-missing
+ 
+ Test Case :
+ 
+ A session must be run within the python debugger to systematically
+ reproduce the context.
+ 
+ 1) Run a duplicity session as outlined in comment #8 inside the debugger
+ 2) break at duplicity/path:568 instead of 567
+ 3) When the program breaks, manually remove the file that ends in 
...manifest.gpg
+ 4) continue execution
+ 
+ Without the fix, duplicity will crash with the outlined backtrace. With
+ the fix, duplicity will terminate normally.
+ 
+ Regression :
+ 
+ Minimal as the modification changes exception handling only for a
+ function only used twice to delete files/directories (path.py 
+ tempdir.py)
+ 
+ Description of the problem :
+ 
+ Duplicity can potentially crash while attempting to delete a file that
+ no longer exists.
+ 
+ Original description
+ 
+ duplicity version: 0.6.18-0ubuntu3
  python version: 2.7.3
  Distro: ubuntu precise 12.04
  target file system: ftp
  
  I happen to encounter failed backups with tracebacks like this:
  
  Traceback (most recent call last):
-   File /usr/bin/duplicity, line 1403, in module
-  with_tempdir(main)
-   File /usr/bin/duplicity, line 1396, in with_tempdir
-  fn()
-File /usr/bin/duplicity, line 1366, in main
-  full_backup(col_stats)
-File /usr/bin/duplicity, line 504, in full_backup
-  sig_outfp.to_remote()
-File /usr/lib/python2.7/dist-packages/duplicity/dup_temp.py, line 184, 
in to_remote
-  globals.backend.move(tgt) #@UndefinedVariable
-File /usr/lib/python2.7/dist-packages/duplicity/backend.py, line 364, in 
move
-  source_path.delete()
-File /usr/lib/python2.7/dist-packages/duplicity/path.py, line 567, in 
delete
-  util.ignore_missing(os.unlink, self.name)
-File /usr/lib/python2.7/dist-packages/duplicity/util.py, line 116, in 
ignore_missing
-  fn(filename)
-  OSError: [Errno 2] No such file or directory: 
'/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'
+   File /usr/bin/duplicity, line 1403, in module
+  with_tempdir(main)
+   File /usr/bin/duplicity, line 1396, in with_tempdir
+  fn()
+    File /usr/bin/duplicity, line 1366, in main
+  full_backup(col_stats)
+    File /usr/bin/duplicity, line 504, in full_backup
+  sig_outfp.to_remote()
+    File /usr/lib/python2.7/dist-packages/duplicity/dup_temp.py, line 184, 
in to_remote
+  globals.backend.move(tgt) #@UndefinedVariable
+    File /usr/lib/python2.7/dist-packages/duplicity/backend.py, line 364, in 
move
+  source_path.delete()
+    File /usr/lib/python2.7/dist-packages/duplicity/path.py, line 567, in 
delete
+  util.ignore_missing(os.unlink, self.name)
+    File /usr/lib/python2.7/dist-packages/duplicity/util.py, line 116, in 
ignore_missing
+  fn(filename)
+  OSError: [Errno 2] No such file or directory: 
'/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'
  
  Now running test code like
  
  #!/usr/bin/env python
  #
  # Do what util.ignore_missing(os.unlink, self.name) tries to do and
  # fails to achieve
  
  import os
  import sys
  import errno
  
  if __name__ == __main__:
- try:
- os.unlink(/tmp/doesnotexist)
- except Exception:
- # type is a reserved keyword, replaced with mytype
- mytype, value, tb = sys.exc_info()
- print - * 78
- print mytype: , mytype
- print value: , value
- print value[0]:, value[0]
- print errno.ENOENT: , errno.ENOENT
- print isinstance(mytype, OSError): , isinstance(mytype, 
OSError)
- print - * 78
- if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
- print Gotcha!
- pass
- print Ooops, missed it ...
- raise
+ try:
+ os.unlink(/tmp/doesnotexist)
+ except Exception:
+ # type is a reserved keyword, replaced with mytype
+ mytype, value, tb = sys.exc_info()
+ print - * 78
+ print mytype: , mytype
+ print value: , value
+ print value[0]:, value[0]
+ print errno.ENOENT: , errno.ENOENT
+ print isinstance(mytype, OSError): , isinstance(mytype, 
OSError)
+ print - * 78
+ if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
+ print Gotcha!
+ pass
+ 

[Desktop-packages] [Bug 1216921] Re: util.ignore_missing() does not work

2013-09-24 Thread Louis Bouchard
debdiff for raring

** Patch added: lp1216921_ignoremissing_raring.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/1216921/+attachment/3838082/+files/lp1216921_ignoremissing_raring.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1216921

Title:
  util.ignore_missing() does not work

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress

Bug description:
  SRU justification :

  Without this fix, there is a potential for crash during execution of
  duplicity

  Impact :

  Renders duplicity potentially unusable dues to spurious crashes

  Fix :

  Backport upstream fix for this problem merged in 
  https://code.launchpad.net/~mterry/duplicity/ignore-missing

  Test Case :

  A session must be run within the python debugger to systematically
  reproduce the context.

  1) Run a duplicity session as outlined in comment #8 inside the debugger
  2) break at duplicity/path:568 instead of 567
  3) When the program breaks, manually remove the file that ends in 
...manifest.gpg
  4) continue execution

  Without the fix, duplicity will crash with the outlined backtrace.
  With the fix, duplicity will terminate normally.

  Regression :

  Minimal as the modification changes exception handling only for a
  function only used twice to delete files/directories (path.py 
  tempdir.py)

  Description of the problem :

  Duplicity can potentially crash while attempting to delete a file that
  no longer exists.

  Original description

  duplicity version: 0.6.18-0ubuntu3
  python version: 2.7.3
  Distro: ubuntu precise 12.04
  target file system: ftp

  I happen to encounter failed backups with tracebacks like this:

  Traceback (most recent call last):
    File /usr/bin/duplicity, line 1403, in module
   with_tempdir(main)
    File /usr/bin/duplicity, line 1396, in with_tempdir
   fn()
     File /usr/bin/duplicity, line 1366, in main
   full_backup(col_stats)
     File /usr/bin/duplicity, line 504, in full_backup
   sig_outfp.to_remote()
     File /usr/lib/python2.7/dist-packages/duplicity/dup_temp.py, line 184, 
in to_remote
   globals.backend.move(tgt) #@UndefinedVariable
     File /usr/lib/python2.7/dist-packages/duplicity/backend.py, line 364, in 
move
   source_path.delete()
     File /usr/lib/python2.7/dist-packages/duplicity/path.py, line 567, in 
delete
   util.ignore_missing(os.unlink, self.name)
     File /usr/lib/python2.7/dist-packages/duplicity/util.py, line 116, in 
ignore_missing
   fn(filename)
   OSError: [Errno 2] No such file or directory: 
'/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'

  Now running test code like

  #!/usr/bin/env python
  #
  # Do what util.ignore_missing(os.unlink, self.name) tries to do and
  # fails to achieve

  import os
  import sys
  import errno

  if __name__ == __main__:
  try:
  os.unlink(/tmp/doesnotexist)
  except Exception:
  # type is a reserved keyword, replaced with mytype
  mytype, value, tb = sys.exc_info()
  print - * 78
  print mytype: , mytype
  print value: , value
  print value[0]:, value[0]
  print errno.ENOENT: , errno.ENOENT
  print isinstance(mytype, OSError): , isinstance(mytype, 
OSError)
  print - * 78
  if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
  print Gotcha!
  pass
  print Ooops, missed it ...
  raise

  will always raise the exception and not ignore it, because
  isinstance(mytype, OSError) is always False.

  What I expect ignore_missing to look like is:

  def ignore_missing(fn, filename):
  
  Execute fn on filename.  Ignore ENOENT errors, otherwise raise exception.

  @param fn: callable
  @param filename: string
  
  try:
  fn(filename)
  except OSError, ex:
  if ex.errno == errno.ENOENT:
      pass
      else:
      raise
      else:
      raise

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1216921/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1216921] Re: util.ignore_missing() does not work

2013-09-24 Thread Louis Bouchard
debdiff for quantal

** Patch added: lp1216921_ignoremissing_quantal.debdiff
   
https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/1216921/+attachment/3838075/+files/lp1216921_ignoremissing_quantal.debdiff

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1216921

Title:
  util.ignore_missing() does not work

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress

Bug description:
  SRU justification :

  Without this fix, there is a potential for crash during execution of
  duplicity

  Impact :

  Renders duplicity potentially unusable dues to spurious crashes

  Fix :

  Backport upstream fix for this problem merged in 
  https://code.launchpad.net/~mterry/duplicity/ignore-missing

  Test Case :

  A session must be run within the python debugger to systematically
  reproduce the context.

  1) Run a duplicity session as outlined in comment #8 inside the debugger
  2) break at duplicity/path:568 instead of 567
  3) When the program breaks, manually remove the file that ends in 
...manifest.gpg
  4) continue execution

  Without the fix, duplicity will crash with the outlined backtrace.
  With the fix, duplicity will terminate normally.

  Regression :

  Minimal as the modification changes exception handling only for a
  function only used twice to delete files/directories (path.py 
  tempdir.py)

  Description of the problem :

  Duplicity can potentially crash while attempting to delete a file that
  no longer exists.

  Original description

  duplicity version: 0.6.18-0ubuntu3
  python version: 2.7.3
  Distro: ubuntu precise 12.04
  target file system: ftp

  I happen to encounter failed backups with tracebacks like this:

  Traceback (most recent call last):
    File /usr/bin/duplicity, line 1403, in module
   with_tempdir(main)
    File /usr/bin/duplicity, line 1396, in with_tempdir
   fn()
     File /usr/bin/duplicity, line 1366, in main
   full_backup(col_stats)
     File /usr/bin/duplicity, line 504, in full_backup
   sig_outfp.to_remote()
     File /usr/lib/python2.7/dist-packages/duplicity/dup_temp.py, line 184, 
in to_remote
   globals.backend.move(tgt) #@UndefinedVariable
     File /usr/lib/python2.7/dist-packages/duplicity/backend.py, line 364, in 
move
   source_path.delete()
     File /usr/lib/python2.7/dist-packages/duplicity/path.py, line 567, in 
delete
   util.ignore_missing(os.unlink, self.name)
     File /usr/lib/python2.7/dist-packages/duplicity/util.py, line 116, in 
ignore_missing
   fn(filename)
   OSError: [Errno 2] No such file or directory: 
'/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'

  Now running test code like

  #!/usr/bin/env python
  #
  # Do what util.ignore_missing(os.unlink, self.name) tries to do and
  # fails to achieve

  import os
  import sys
  import errno

  if __name__ == __main__:
  try:
  os.unlink(/tmp/doesnotexist)
  except Exception:
  # type is a reserved keyword, replaced with mytype
  mytype, value, tb = sys.exc_info()
  print - * 78
  print mytype: , mytype
  print value: , value
  print value[0]:, value[0]
  print errno.ENOENT: , errno.ENOENT
  print isinstance(mytype, OSError): , isinstance(mytype, 
OSError)
  print - * 78
  if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
  print Gotcha!
  pass
  print Ooops, missed it ...
  raise

  will always raise the exception and not ignore it, because
  isinstance(mytype, OSError) is always False.

  What I expect ignore_missing to look like is:

  def ignore_missing(fn, filename):
  
  Execute fn on filename.  Ignore ENOENT errors, otherwise raise exception.

  @param fn: callable
  @param filename: string
  
  try:
  fn(filename)
  except OSError, ex:
  if ex.errno == errno.ENOENT:
      pass
      else:
      raise
      else:
      raise

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1216921/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1216921] Re: util.ignore_missing() does not work

2013-09-13 Thread Louis Bouchard
@mathias:

I have run over 75 iterations of a full backup to an FTP target without
any kind of Python issue.

Are there more specific steps to be used in order to reproduce the
issue? If so, please share them with me so I can use them for the SRU

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1216921

Title:
  util.ignore_missing() does not work

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Confirmed
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  Triaged
Status in “duplicity” source package in Quantal:
  Triaged
Status in “duplicity” source package in Raring:
  Triaged

Bug description:
  duplicity version: 0.6.18-0ubuntu3 
  python version: 2.7.3
  Distro: ubuntu precise 12.04
  target file system: ftp

  I happen to encounter failed backups with tracebacks like this:

  Traceback (most recent call last):
File /usr/bin/duplicity, line 1403, in module
   with_tempdir(main)
File /usr/bin/duplicity, line 1396, in with_tempdir
   fn()
 File /usr/bin/duplicity, line 1366, in main
   full_backup(col_stats)
 File /usr/bin/duplicity, line 504, in full_backup
   sig_outfp.to_remote()
 File /usr/lib/python2.7/dist-packages/duplicity/dup_temp.py, line 184, 
in to_remote
   globals.backend.move(tgt) #@UndefinedVariable
 File /usr/lib/python2.7/dist-packages/duplicity/backend.py, line 364, in 
move
   source_path.delete()
 File /usr/lib/python2.7/dist-packages/duplicity/path.py, line 567, in 
delete
   util.ignore_missing(os.unlink, self.name)
 File /usr/lib/python2.7/dist-packages/duplicity/util.py, line 116, in 
ignore_missing
   fn(filename)
   OSError: [Errno 2] No such file or directory: 
'/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'

  Now running test code like

  #!/usr/bin/env python
  #
  # Do what util.ignore_missing(os.unlink, self.name) tries to do and
  # fails to achieve

  import os
  import sys
  import errno

  if __name__ == __main__:
  try:
  os.unlink(/tmp/doesnotexist)
  except Exception:
  # type is a reserved keyword, replaced with mytype
  mytype, value, tb = sys.exc_info()
  print - * 78
  print mytype: , mytype
  print value: , value
  print value[0]:, value[0]
  print errno.ENOENT: , errno.ENOENT
  print isinstance(mytype, OSError): , isinstance(mytype, 
OSError)
  print - * 78
  if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
  print Gotcha!
  pass
  print Ooops, missed it ...
  raise

  will always raise the exception and not ignore it, because
  isinstance(mytype, OSError) is always False.

  What I expect ignore_missing to look like is:

  def ignore_missing(fn, filename):
  
  Execute fn on filename.  Ignore ENOENT errors, otherwise raise exception.

  @param fn: callable
  @param filename: string
  
  try:
  fn(filename)
  except OSError, ex:
  if ex.errno == errno.ENOENT:
      pass
      else:
      raise
      else:
      raise

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1216921/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1216921] Re: util.ignore_missing() does not work

2013-09-13 Thread Louis Bouchard
** Changed in: duplicity (Ubuntu Precise)
   Status: Triaged => In Progress

** Changed in: duplicity (Ubuntu Quantal)
   Status: Triaged => In Progress

** Changed in: duplicity (Ubuntu Raring)
   Status: Triaged => In Progress

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1216921

Title:
  util.ignore_missing() does not work

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Committed
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress
Status in “duplicity” source package in Quantal:
  In Progress
Status in “duplicity” source package in Raring:
  In Progress

Bug description:
  duplicity version: 0.6.18-0ubuntu3 
  python version: 2.7.3
  Distro: ubuntu precise 12.04
  target file system: ftp

  I happen to encounter failed backups with tracebacks like this:

  Traceback (most recent call last):
File /usr/bin/duplicity, line 1403, in module
   with_tempdir(main)
File /usr/bin/duplicity, line 1396, in with_tempdir
   fn()
 File /usr/bin/duplicity, line 1366, in main
   full_backup(col_stats)
 File /usr/bin/duplicity, line 504, in full_backup
   sig_outfp.to_remote()
 File /usr/lib/python2.7/dist-packages/duplicity/dup_temp.py, line 184, 
in to_remote
   globals.backend.move(tgt) #@UndefinedVariable
 File /usr/lib/python2.7/dist-packages/duplicity/backend.py, line 364, in 
move
   source_path.delete()
 File /usr/lib/python2.7/dist-packages/duplicity/path.py, line 567, in 
delete
   util.ignore_missing(os.unlink, self.name)
 File /usr/lib/python2.7/dist-packages/duplicity/util.py, line 116, in 
ignore_missing
   fn(filename)
   OSError: [Errno 2] No such file or directory: 
'/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'

  Now running test code like

  #!/usr/bin/env python
  #
  # Do what util.ignore_missing(os.unlink, self.name) tries to do and
  # fails to achieve

  import os
  import sys
  import errno

  if __name__ == __main__:
  try:
  os.unlink(/tmp/doesnotexist)
  except Exception:
  # type is a reserved keyword, replaced with mytype
  mytype, value, tb = sys.exc_info()
  print - * 78
  print mytype: , mytype
  print value: , value
  print value[0]:, value[0]
  print errno.ENOENT: , errno.ENOENT
  print isinstance(mytype, OSError): , isinstance(mytype, 
OSError)
  print - * 78
  if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
  print Gotcha!
  pass
  print Ooops, missed it ...
  raise

  will always raise the exception and not ignore it, because
  isinstance(mytype, OSError) is always False.

  What I expect ignore_missing to look like is:

  def ignore_missing(fn, filename):
      """
      Execute fn on filename.  Ignore ENOENT errors, otherwise raise exception.

      @param fn: callable
      @param filename: string
      """
      try:
          fn(filename)
      except OSError, ex:
          if ex.errno == errno.ENOENT:
              pass
          else:
              raise
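
  A short usage sketch of the helper above (the signature file name is
  hypothetical): deleting a file that may already be gone returns silently,
  while any other OSError is still raised:

  import errno
  import os

  ignore_missing(os.unlink, "/tmp/duplicity-full-signatures.example.sigtar.gpg")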

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1216921/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1216921] Re: util.ignore_missing() does not work

2013-09-12 Thread Louis Bouchard
** Project changed: duplicity => duplicity (Ubuntu)

** Changed in: duplicity (Ubuntu)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1216921

Title:
  util.ignore_missing() does not work

Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  Triaged
Status in “duplicity” source package in Quantal:
  Triaged
Status in “duplicity” source package in Raring:
  Triaged

Bug description:
  duplicity version: 0.6.18-0ubuntu3 
  python version: 2.7.3
  Distro: ubuntu precise 12.04
  target file system: ftp

  I happen to encounter failed backups with tracebacks like this:

  Traceback (most recent call last):
    File "/usr/bin/duplicity", line 1403, in <module>
      with_tempdir(main)
    File "/usr/bin/duplicity", line 1396, in with_tempdir
      fn()
    File "/usr/bin/duplicity", line 1366, in main
      full_backup(col_stats)
    File "/usr/bin/duplicity", line 504, in full_backup
      sig_outfp.to_remote()
    File "/usr/lib/python2.7/dist-packages/duplicity/dup_temp.py", line 184, in to_remote
      globals.backend.move(tgt) #@UndefinedVariable
    File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line 364, in move
      source_path.delete()
    File "/usr/lib/python2.7/dist-packages/duplicity/path.py", line 567, in delete
      util.ignore_missing(os.unlink, self.name)
    File "/usr/lib/python2.7/dist-packages/duplicity/util.py", line 116, in ignore_missing
      fn(filename)
  OSError: [Errno 2] No such file or directory: '/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'

  Now running test code like

  #!/usr/bin/env python
  #
  # Do what util.ignore_missing(os.unlink, self.name) tries to do and
  # fails to achieve

  import os
  import sys
  import errno

  if __name__ == "__main__":
      try:
          os.unlink("/tmp/doesnotexist")
      except Exception:
          # type is a reserved keyword, replaced with mytype
          mytype, value, tb = sys.exc_info()
          print "-" * 78
          print "mytype: ", mytype
          print "value: ", value
          print "value[0]:", value[0]
          print "errno.ENOENT: ", errno.ENOENT
          print "isinstance(mytype, OSError): ", isinstance(mytype, OSError)
          print "-" * 78
          if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
              print "Gotcha!"
              pass
          print "Ooops, missed it ..."
          raise

  will always raise the exception and not ignore it, because
  isinstance(mytype, OSError) is always False.

  What I expect ignore_missing to look like is:

  def ignore_missing(fn, filename):
      """
      Execute fn on filename.  Ignore ENOENT errors, otherwise raise exception.

      @param fn: callable
      @param filename: string
      """
      try:
          fn(filename)
      except OSError, ex:
          if ex.errno == errno.ENOENT:
              pass
          else:
              raise

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/1216921/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1216921] Re: util.ignore_missing() does not work

2013-09-12 Thread Louis Bouchard
** Changed in: duplicity (Ubuntu Precise)
       Status: New => Confirmed

** Changed in: duplicity (Ubuntu Precise)
       Status: Confirmed => Triaged

** Changed in: duplicity (Ubuntu Quantal)
       Status: New => Triaged

** Changed in: duplicity (Ubuntu Raring)
       Status: New => Triaged

** Also affects: duplicity
   Importance: Undecided
       Status: New

** Changed in: duplicity
       Status: New => Confirmed

** Changed in: duplicity (Ubuntu)
       Status: In Progress => Triaged

** Changed in: duplicity (Ubuntu Precise)
     Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Changed in: duplicity (Ubuntu Quantal)
     Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Changed in: duplicity (Ubuntu Raring)
     Assignee: (unassigned) => Louis Bouchard (louis-bouchard)

** Changed in: duplicity (Ubuntu Precise)
   Importance: Undecided => High

** Changed in: duplicity (Ubuntu Quantal)
   Importance: Undecided => High

** Changed in: duplicity (Ubuntu Raring)
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1216921

Title:
  util.ignore_missing() does not work

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Confirmed
Status in “duplicity” package in Ubuntu:
  In Progress
Status in “duplicity” source package in Precise:
  Triaged
Status in “duplicity” source package in Quantal:
  Triaged
Status in “duplicity” source package in Raring:
  Triaged

Bug description:
  duplicity version: 0.6.18-0ubuntu3 
  python version: 2.7.3
  Distro: ubuntu precise 12.04
  target file system: ftp

  I happen to encounter failed backups with tracebacks like this:

  Traceback (most recent call last):
    File "/usr/bin/duplicity", line 1403, in <module>
      with_tempdir(main)
    File "/usr/bin/duplicity", line 1396, in with_tempdir
      fn()
    File "/usr/bin/duplicity", line 1366, in main
      full_backup(col_stats)
    File "/usr/bin/duplicity", line 504, in full_backup
      sig_outfp.to_remote()
    File "/usr/lib/python2.7/dist-packages/duplicity/dup_temp.py", line 184, in to_remote
      globals.backend.move(tgt) #@UndefinedVariable
    File "/usr/lib/python2.7/dist-packages/duplicity/backend.py", line 364, in move
      source_path.delete()
    File "/usr/lib/python2.7/dist-packages/duplicity/path.py", line 567, in delete
      util.ignore_missing(os.unlink, self.name)
    File "/usr/lib/python2.7/dist-packages/duplicity/util.py", line 116, in ignore_missing
      fn(filename)
  OSError: [Errno 2] No such file or directory: '/BACKUP/.duplycache/duply_foo/duplicity-full-signatures.20130825T140002Z.sigtar.gpg'

  Now running test code like

  #!/usr/bin/env python
  #
  # Do what util.ignore_missing(os.unlink, self.name) tries to do and
  # fails to achieve

  import os
  import sys
  import errno

  if __name__ == "__main__":
      try:
          os.unlink("/tmp/doesnotexist")
      except Exception:
          # type is a reserved keyword, replaced with mytype
          mytype, value, tb = sys.exc_info()
          print "-" * 78
          print "mytype: ", mytype
          print "value: ", value
          print "value[0]:", value[0]
          print "errno.ENOENT: ", errno.ENOENT
          print "isinstance(mytype, OSError): ", isinstance(mytype, OSError)
          print "-" * 78
          if isinstance(mytype, OSError) and value[0] == errno.ENOENT:
              print "Gotcha!"
              pass
          print "Ooops, missed it ..."
          raise

  will always raise the exception and not ignore it, because
  isinstance(mytype, OSError) is always False.

  What I expect ignore_missing to look like is:

  def ignore_missing(fn, filename):
      """
      Execute fn on filename.  Ignore ENOENT errors, otherwise raise exception.

      @param fn: callable
      @param filename: string
      """
      try:
          fn(filename)
      except OSError, ex:
          if ex.errno == errno.ENOENT:
              pass
          else:
              raise

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1216921/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1005901] Re: cannot change temp dir

2013-08-29 Thread Louis Bouchard
Tested ok with reproduction steps above.

** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1005901

Title:
  cannot change temp dir

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  Fix Committed

Bug description:
  SRU justification :

  Duplicity does not honor the TMPDIR, TEMP, TMP or --tempdir
  redirection as expected.

  Impact :

  If the available size in /tmp is too small, restore may fail to
  complete

  Fix :

  Backport upstream fix applied in 0.6.21 (Merge proposal from the
  upstream task of this bug)

  Test Case :
  Note: an incremental backup is necessary so that both the full & difftar
  are required in /tmp, which will require more than 50 MB

  1) Mount a 50Mb file system under /tmp
  ~# df -h /tmp
  Filesystem  Size  Used Avail Use% Mounted on
  /dev/sda1    48M  794K   45M   2% /tmp

  2) Create a 60Mb file to be backed up under /srv called data
  dd if=/proc/kcore of=/srv/data bs=1M count=60

  3) Do a full backup of /srv into /backup

  duplicity full --name test --encrypt-key A6C785C2 --sign-key A6C785C2
  --volsize 25 /srv file:///backup/duply

  4) Modify the 60Mb file so it can be picked up by the incremental backup
  dd if=/proc/kcore of=/srv/data bs=1M count=10 conv=notrunc oflag=append

  5) Do an incremental backup
  duplicity incr --name 'duply_test' --encrypt-key A6C785C2 --sign-key A6C785C2 
--volsize 25 /srv file:///backup/duply

  6) Restore the file using TMPDIR :
  TEMPDIR=/mytemp duplicity --name 'duply_test' --encrypt-key A6C785C2 --sign-key A6C785C2 --verbosity '4' --volsize 25 -t now file:///backup/duply /restore

  With the patch, the command will succeed.
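
  As an aside, a minimal Python 2 sketch (directory name hypothetical) of how
  the TMPDIR environment variable is normally honoured by Python's standard
  tempfile module, which duplicity's temp handling is assumed to build on;
  the variable has to be set before the first temporary file is created:

  import os
  import tempfile

  os.environ["TMPDIR"] = "/mytemp"   # hypothetical scratch dir with enough space
  tempfile.tempdir = None            # drop any cached default so TMPDIR is re-read
  print tempfile.gettempdir()        # prints /mytemp if the directory is writable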

  Regression :

  None expected, this code is used in the version currently available in
  Raring

  Description of the problem :

  When /tmp is too small, duplicity is sometimes unable to do a restore.
  Using the TMPDIR variable or --tempdir does not work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1005901/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1005901] Re: cannot change temp dir

2013-08-14 Thread Louis Bouchard
Any chance of having this debdiff sponsored for upload ?

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to duplicity in Ubuntu.
https://bugs.launchpad.net/bugs/1005901

Title:
  cannot change temp dir

Status in Duplicity - Bandwidth Efficient Encrypted Backup:
  Fix Released
Status in “duplicity” package in Ubuntu:
  Fix Released
Status in “duplicity” source package in Precise:
  In Progress

Bug description:
  SRU justification :

  Duplicity does not honor the TMPDIR, TEMP, TMP or --tempdir
  redirection as expected.

  Impact :

  If the available size in /tmp is too small, restore may fail to
  complete

  Fix :

  Backport upstream fix applied in 0.6.21 (Merge proposal from the
  upstream task of this bug)

  Test Case :
  Note: an incremental backup is necessary so that both the full & difftar
  are required in /tmp, which will require more than 50 MB

  1) Mount a 50Mb file system under /tmp
  ~# df -h /tmp
  Filesystem  Size  Used Avail Use% Mounted on
  /dev/sda1    48M  794K   45M   2% /tmp

  2) Create a 60Mb file to be backed up under /srv called data
  dd if=/proc/kcore of=/srv/data bs=1M count=60

  3) Do a full backup of /srv into /backup

  duplicity full --name test --encrypt-key A6C785C2 --sign-key A6C785C2
  --volsize 25 /srv file:///backup/duply

  4) Modify the 60Mb file so it can be picked up by the incremental backup
  dd if=/proc/kcore of=/srv/data bs=1M count=10 conv=notrunc oflag=append

  5) Do an incremental backup
  duplicity incr --name 'duply_test' --encrypt-key A6C785C2 --sign-key A6C785C2 
--volsize 25 /srv file:///backup/duply

  6) Restore the file using TMPDIR :
  TEMPDIR=/mytemp duplicity --name 'duply_test' --encrypt-key A6C785C2 --sign-key A6C785C2 --verbosity '4' --volsize 25 -t now file:///backup/duply /restore

  With the patch, the command will succeed.

  Regression :

  None expected, this code is used in the version currently available in
  Raring

  Description of the problem :

  When /tmp is too small, duplicity is sometimes unable to do a restore.
  Using the TMPDIR variable or --tempdir does not work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/duplicity/+bug/1005901/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1063706] Re: LibreOffice writer becomes inaccessible after cut and paste from PDF viewer

2012-10-19 Thread Louis Bouchard
Hi Christopher,

Just wanted to let you know that I didn't forget you but I'm busy
dealing with another bug. I will provide what you asked as soon as I get
a minute.

Thanks for your help,

..Louis

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to libreoffice in Ubuntu.
https://bugs.launchpad.net/bugs/1063706

Title:
  LibreOffice writer becomes inaccessible after cut and paste from PDF
  viewer

Status in “libreoffice” package in Ubuntu:
  Incomplete

Bug description:
  This seems to happen systematically when text is cut from a PDF file
  viewed in evince and pasted into Writer.  In the session collected by
  apport, LO calculator was still accessible but Writer was no longer
  accessible.

  ProblemType: Bug
  DistroRelease: Ubuntu 12.04
  Package: libreoffice (not installed)
  ProcVersionSignature: Ubuntu 3.2.0-31.50-generic 3.2.28
  Uname: Linux 3.2.0-31-generic x86_64
  ApportVersion: 2.0.1-0ubuntu13
  Architecture: amd64
  Date: Mon Oct  8 13:37:41 2012
  EcryptfsInUse: Yes
  InstallationMedia: Ubuntu 11.10 Oneiric Ocelot - Release amd64 (20111012)
  SourcePackage: libreoffice
  UpgradeStatus: Upgraded to precise on 2012-06-16 (114 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libreoffice/+bug/1063706/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1063706] [NEW] LibreOffice writer becomes inaccessible after cut and paste from PDF viewer

2012-10-08 Thread Louis Bouchard
Public bug reported:

This seems to happen systematically when text is cut from a PDF file
viewed in evince and pasted into Writer.  In the session collected by
apport, LO calculator was still accessible but Writer was no longer
accessible.

ProblemType: Bug
DistroRelease: Ubuntu 12.04
Package: libreoffice (not installed)
ProcVersionSignature: Ubuntu 3.2.0-31.50-generic 3.2.28
Uname: Linux 3.2.0-31-generic x86_64
ApportVersion: 2.0.1-0ubuntu13
Architecture: amd64
Date: Mon Oct  8 13:37:41 2012
EcryptfsInUse: Yes
InstallationMedia: Ubuntu 11.10 Oneiric Ocelot - Release amd64 (20111012)
SourcePackage: libreoffice
UpgradeStatus: Upgraded to precise on 2012-06-16 (114 days ago)

** Affects: libreoffice (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug precise running-unity

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to libreoffice in Ubuntu.
https://bugs.launchpad.net/bugs/1063706

Title:
  LibreOffice writer becomes inaccessible after cut and paste from PDF
  viewer

Status in “libreoffice” package in Ubuntu:
  New

Bug description:
  This seems to happen systematically when text is cut from a PDF file
  viewed in evince and pasted into Writer.  In the session collected by
  apport, LO calculator was still accessible but Writer was no longer
  accessible.

  ProblemType: Bug
  DistroRelease: Ubuntu 12.04
  Package: libreoffice (not installed)
  ProcVersionSignature: Ubuntu 3.2.0-31.50-generic 3.2.28
  Uname: Linux 3.2.0-31-generic x86_64
  ApportVersion: 2.0.1-0ubuntu13
  Architecture: amd64
  Date: Mon Oct  8 13:37:41 2012
  EcryptfsInUse: Yes
  InstallationMedia: Ubuntu 11.10 Oneiric Ocelot - Release amd64 (20111012)
  SourcePackage: libreoffice
  UpgradeStatus: Upgraded to precise on 2012-06-16 (114 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libreoffice/+bug/1063706/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1063706] Re: LibreOffice writer becomes inaccessible after cut and paste from PDF viewer

2012-10-08 Thread Louis Bouchard
-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to libreoffice in Ubuntu.
https://bugs.launchpad.net/bugs/1063706

Title:
  LibreOffice writer becomes inaccessible after cut and paste from PDF
  viewer

Status in “libreoffice” package in Ubuntu:
  New

Bug description:
  This seems to happen systematically when text is cut from a PDF file
  viewed in evince and pasted into Writer.  In the session collected by
  apport, LO calculator was still accessible but Writer was no longer
  accessible.

  ProblemType: Bug
  DistroRelease: Ubuntu 12.04
  Package: libreoffice (not installed)
  ProcVersionSignature: Ubuntu 3.2.0-31.50-generic 3.2.28
  Uname: Linux 3.2.0-31-generic x86_64
  ApportVersion: 2.0.1-0ubuntu13
  Architecture: amd64
  Date: Mon Oct  8 13:37:41 2012
  EcryptfsInUse: Yes
  InstallationMedia: Ubuntu 11.10 Oneiric Ocelot - Release amd64 (20111012)
  SourcePackage: libreoffice
  UpgradeStatus: Upgraded to precise on 2012-06-16 (114 days ago)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/libreoffice/+bug/1063706/+subscriptions

-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


  1   2   >