[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-28 Thread Launchpad Bug Tracker
This bug was fixed in the package gvfs - 1.28.2-1ubuntu1~16.04.2

---
gvfs (1.28.2-1ubuntu1~16.04.2) xenial; urgency=high

  * Greatly improve file manager performance and fix Nautilus crash by
backporting four patches (debian/patches/lp1133477-*) from upstream
(LP: #1133477).

 -- Simon Quigley   Thu, 03 Aug 2017 23:40:48 -0500

** Changed in: gvfs (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to nautilus in Ubuntu.
https://bugs.launchpad.net/bugs/1133477

Title:
  [SRU] cut-n-paste move files got stuck forever

Status in caja package in Ubuntu:
  Invalid
Status in gvfs package in Ubuntu:
  Fix Released
Status in nautilus package in Ubuntu:
  Invalid
Status in pcmanfm package in Ubuntu:
  Invalid
Status in caja source package in Xenial:
  Invalid
Status in gvfs source package in Xenial:
  Fix Released
Status in nautilus source package in Xenial:
  Invalid
Status in pcmanfm source package in Xenial:
  Invalid

Bug description:
  [Impact]
  Without these fixes, copying or moving more than a few files in
  several file managers (including but not limited to Nautilus and
  Caja) either freezes the file manager completely (and if this happens
  during a Cut and Paste, it destroys user data by leaving files half
  moved) or slows it to a crawl. These bugs do not appear when using
  the mv command in the terminal, which moves the same files in a
  couple of seconds at most. This is a nightmare for people who need to
  Cut and Paste a lot of files quickly.

  [Test Case]
  Either find some files that you would be OK losing (or have a backup
  of), or create the files yourself. You can run the following command
  to generate 1024 zero-filled 1 MB files (do this in an empty
  directory where you have at least 1 GB of free space), which is a
  valid way to reproduce this bug:

  i=1; while [ "$i" -lt 1025 ]; do
      dd if=/dev/zero of="$i" bs=4k iflag=fullblock,count_bytes count=1M
      i=$((i + 1))
  done

  (Credit to https://stackoverflow.com/a/11779492/4123313 and
  http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_09_02.html. Note
  that neither truncate nor fallocate is used here, because the bug
  cannot be reproduced with files created by those tools. My theory is
  that dd actually writes the zeroes, whereas those tools tell the
  filesystem the zeroes were written without actually writing them,
  although I could be wrong.)

  If you open Nautilus or Caja and Cut and Paste these 1024 files
  somewhere else, the transfer will slow to a crawl; in one instance,
  where I tried this with 1024 zero-filled 512 MB files, Nautilus
  actually crashed. This is very different from doing the same move
  with mv in the terminal beforehand, which transfers these files
  flawlessly.

  Once this fix is applied and I recreate these files, the transfer in
  Nautilus and Caja is almost as fast as using mv (I'd say within a
  second), which is what should happen.

  [Breakdown of the patches and the regression potential for each]
  First, I'd like to thank levien for providing the patch links to fix
  this. Without that, it would have been a bit more work to hunt down
  which patches are applicable.

  lp1133477-metadata-use-sequences-instead-of-lists.patch:
  Patch Description: The metabuilder stores files and data in sorted
  lists. An insertion into a sorted list is done in linear time, which
  highly affects efficiency. In order to fix this, use GSequence
  instead, which does insertion and deletion in logarithmic time.

  This patch is included because it helps with a large number of files.
  Regardless of whether the files are 1 MB or 10 GB, sorting the files
  to be moved/copied is inefficient. This addresses part of the problem
  by making sure large numbers of files can be handled efficiently. The
  regression potential is low; the only case I can see where a package
  would regress is a file manager that depends on (not 100% sure on
  this theory, but it's a possibility) monitoring the state of the list
  at a given time and is incompatible with using GSequence. In that
  case, the package would have something peculiar hardcoded, and would
  most likely already have an upstream fix that could be backported,
  because this fix is in the gvfs release that is already in 17.04 and
  later (so they should have fixed it already). I can't think of a
  package that does this (from what I can tell, it's not sane code if
  it isn't compatible with this change), but if something does regress,
  it should be trivial to fix.
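
  For illustration only (this is made-up example code, not code from
  gvfs), here is a minimal, self-contained sketch of the pattern the
  patch describes: every insertion into a sorted GList walks the list,
  while GSequence (a balanced tree underneath) finds the insertion
  point in logarithmic time.

  /* sequence-vs-list.c
   * Build: gcc sequence-vs-list.c $(pkg-config --cflags --libs glib-2.0)
   */
  #include <glib.h>

  /* Two-argument comparator for g_list_insert_sorted(). */
  static gint
  cmp_int (gconstpointer a, gconstpointer b)
  {
    int x = GPOINTER_TO_INT (a), y = GPOINTER_TO_INT (b);
    return (x > y) - (x < y);
  }

  /* Three-argument wrapper for g_sequence_insert_sorted(). */
  static gint
  cmp_int_data (gconstpointer a, gconstpointer b, gpointer user_data)
  {
    return cmp_int (a, b);
  }

  int
  main (void)
  {
    const int n = 100000;

    /* Old pattern: each insertion is O(n), so the loop is O(n^2). */
    GList *list = NULL;
    for (int i = 0; i < n; i++)
      list = g_list_insert_sorted (list,
                                   GINT_TO_POINTER (g_random_int_range (0, n)),
                                   cmp_int);
    g_list_free (list);

    /* New pattern: each insertion is O(log n), so the loop is
     * O(n log n). */
    GSequence *seq = g_sequence_new (NULL);
    for (int i = 0; i < n; i++)
      g_sequence_insert_sorted (seq,
                                GINT_TO_POINTER (g_random_int_range (0, n)),
                                cmp_int_data, NULL);
    g_sequence_free (seq);
    return 0;
  }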

  lp1133477-metadata-use-queues-instead-of-lists.patch:
  Patch Description: Some of the lists used by the metabuilder have
  items appended to them. Appending to a list is done in linear time,
  and this highly affects efficiency. In order to fix this, use GQueue,
  which supports appending in constant time.

  This patch is extremely similar in function to the last one, but this
  one addresses items appended to a list instead of files and data in
  sorted lists. As such, the regression potential is similar (low)
  because it would involve a package hardcoding some things that are
  incompatible with GQueue, and would most likely have an upstream fix
  that could be backported because this fix is in the gvfs release that
  is already in 17.04 and later (so they should have fixed it already).
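
  Again purely as an illustration (made-up example code, not from
  gvfs): g_list_append() has to walk to the tail of the list on every
  call, while GQueue keeps a tail pointer, so pushing to the tail takes
  constant time.

  /* queue-vs-list.c
   * Build: gcc queue-vs-list.c $(pkg-config --cflags --libs glib-2.0)
   */
  #include <glib.h>

  int
  main (void)
  {
    const int n = 100000;

    /* Old pattern: each g_list_append() walks the whole list, O(n),
     * so the loop is O(n^2). */
    GList *list = NULL;
    for (int i = 0; i < n; i++)
      list = g_list_append (list, GINT_TO_POINTER (i));
    g_list_free (list);

    /* New pattern: GQueue tracks head and tail, so each push is O(1)
     * and the loop is O(n). */
    GQueue *queue = g_queue_new ();
    for (int i = 0; i < n; i++)
      g_queue_push_tail (queue, GINT_TO_POINTER (i));
    g_queue_free (queue);
    return 0;
  }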

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-23 Thread Ferux
yes yes YES YES YES!!

Finally fixed on Ubuntu 16.04 with the proposed package.

All credit to Brian and the other people who finally took this
seriously and got it solved.

Thank you!

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-23 Thread Peter Passchier
Ubuntu 16.04.3 here, bug confirmed. With gvfs from proposed, the bug is
fixed.

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-20 Thread Simon Quigley
This has been verification-done since Comment #61, ignore this. :)

** Tags removed: verification-needed
** Tags added: verification-done

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-19 Thread schiemanski
Ubuntu MATE 16.04.3

Followed the test case on the Ubuntu MATE forum too:

  * https://ubuntu-mate.community/t/help-test-a-critical-bug-fix-in-ubuntu-mate-16-04/14624

I confirm that this update resolves the issue. Thanks!

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-19 Thread Bill_MI
Upgraded the following packages:
gvfs (1.28.2-1ubuntu1~16.04.1) to 1.28.2-1ubuntu1~16.04.2
Plus 6 dependents

This fixes the problem on Ubuntu MATE 16.04.
Tested on MATE 1.12.1 (an original install).
Tested on MATE 1.16.2 (Martin's PPA).

Bravo!

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-18 Thread Randy Noseworthy
Ditto:

Confirming that Simon Quigley's packages here:
https://launchpad.net/ubuntu/+source/gvfs/1.28.2-1ubuntu1~16.04.2

Resolve the issue (using the 1024 file transfer method described in the
OP).

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-18 Thread TenLeftFingers
Ubuntu 16.04 64-bit.

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-18 Thread TenLeftFingers
Confirming that Simon Quigley's packages here:
https://launchpad.net/ubuntu/+source/gvfs/1.28.2-1ubuntu1~16.04.2

Resolve the issue (using the 1024 file transfer method described in the
OP).

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-18 Thread Michael Dooley
Caja bug confirmed on Ubuntu MATE 16.04.3 64-bit. After following the
test and fix described in https://ubuntu-mate.community/t/help-test-a-critical-bug-fix-in-ubuntu-mate-16-04/14624,
the issue is resolved.

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-18 Thread Stephen Cook
Bug confirmed for me on Ubuntu MATE 16.04 LTS 64-bit.

A fix that works can be found here:

https://ubuntu-mate.community/t/help-test-a-critical-bug-fix-in-ubuntu-mate-16-04/14624

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-18 Thread Gabriel Dubatti
Ubuntu MATE 16.04.3

I followed the instructions from Martin Wimpress and they resolve the
issue for me too. I'll continue testing it.


[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-18 Thread Martin Wimpress
Ubuntu MATE 16.04.3 also.

Followed the test case on the Ubuntu MATE forum:

  * https://ubuntu-mate.community/t/help-test-a-critical-bug-fix-in-ubuntu-mate-16-04/14624

I confirm that this update resolves the issue. I've actually been
running the fixed version of gvfs on another Ubuntu MATE 16.04.3
computer for about a week. In general use I've encountered no
regressions.


[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-18 Thread Etienne Papegnies
Ubuntu MATE 16.04.3

Reproduced the issue in caja using the test case described by tsimonq2.
Installed gvfs/1.28.2-1ubuntu1~16.04.2 (gvfs/xenial-proposed)
Ran killall caja to reload caja and its components.
Attempted to reproduce the bug after reload: could not reproduce the bug.

The proposed package fixes the bug.

** Tags removed: verification-needed-xenial
** Tags added: verification-done-xenial
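
For reference, the verification run above can be scripted; here is a
sketch (the dd invocation is a simplified equivalent of the loop in the
test case, writing 1 MiB of zeroes per file, and it assumes the fixed
gvfs from xenial-proposed is already installed):

  # Generate the 1024 test files in an empty directory with ~1 GiB free:
  for i in $(seq 1 1024); do dd if=/dev/zero of=$i bs=1M count=1 status=none; done
  # Restart Caja so it and its components pick up the new gvfs:
  killall caja
  # Cut and Paste the files in Caja: without the fix the move stalls or
  # crawls, with it the transfer completes almost immediately.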


[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-17 Thread Brian Murray
Hello Teo, or anyone else affected,

Accepted gvfs into xenial-proposed. The package will build now and be
available at
https://launchpad.net/ubuntu/+source/gvfs/1.28.2-1ubuntu1~16.04.2 in a
few hours, and then in the -proposed repository.

Please help us by testing this new package. See
https://wiki.ubuntu.com/Testing/EnableProposed for documentation on how
to enable and use -proposed. Your feedback will aid us in getting this
update out to other Ubuntu users.

If this package fixes the bug for you, please add a comment to this bug,
mentioning the version of the package you tested and change the tag from
verification-needed-xenial to verification-done-xenial. If it does not
fix the bug for you, please add a comment stating that, and change the
tag to verification-failed-xenial. In either case, details of your
testing will help us make a better decision.

Further information regarding the verification process can be found at
https://wiki.ubuntu.com/QATeam/PerformingSRUVerification .  Thank you in
advance!
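
For testers unfamiliar with -proposed, enabling it for this one package
boils down to something like the following (a sketch of the approach
documented on the wiki page above; the mirror URL is an assumption,
substitute your own):

  # Add the xenial-proposed pocket:
  echo "deb http://archive.ubuntu.com/ubuntu/ xenial-proposed main universe" | \
      sudo tee /etc/apt/sources.list.d/xenial-proposed.list
  sudo apt-get update
  # Install only gvfs from -proposed rather than upgrading everything:
  sudo apt-get install -t xenial-proposed gvfs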

** Changed in: gvfs (Ubuntu Xenial)
   Status: In Progress => Fix Committed

** Tags added: verification-needed verification-needed-xenial


[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-17 Thread Robert Ancell
Given that the patches are all upstream commits, this looks sane to me. +1.
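
One way to spot-check that claim is against the source package; a sketch
(assumes deb-src entries are enabled, and the cgit URL is an assumption
about where gvfs was hosted at the time):

  # Fetch the packaged source and list the backported patches:
  apt-get source gvfs
  ls gvfs-*/debian/patches/lp1133477-*
  # Each patch can then be diffed against its upstream commit (e.g. via
  # https://git.gnome.org/browse/gvfs) to confirm it was taken verbatim.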


[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-17 Thread Brian Murray
** Changed in: gvfs (Ubuntu Xenial)
   Status: Fix Committed => In Progress


[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-09 Thread Robie Basak
15:25  tsimonq2: the gvfs fix looks good to me superficially,
though I think the surgery involved in the fix deserves a closer review
which will take me more time. I'll look again later. Thank you again for
driving both of these.


[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-09 Thread Simon Quigley
** Description changed:

  [Breakdown of the patches and the regression potential for each]
- First, I'd like to thank levien for providing the bug report and the patch links to fix this. Without that, it would have been a bit more work to hunt down which patches are applicable.
+ First, I'd like to thank levien for providing the patch links to fix this. Without that, it would have been a bit more work to hunt down which patches are applicable.

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-08 Thread Martin Wimpress
** Changed in: gvfs (Ubuntu Xenial)
   Importance: Undecided => Critical


[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-05 Thread Amr Ibrahim
Simon, you're right. I double-checked; it has only one commit out of the
four.


[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-04 Thread Simon Quigley
> 1.28.4 includes the fixing commit.

I have just checked, and the new upstream release only contains
lp1133477-client-use-sync-methods-instead-of-flushing-dbus.patch, which
fixes the Nautilus crash but not the entire bug: file moving is still
extremely slow with more than a couple of files.

So yes, while it would be helpful to make sure people don't lose data, I
still think *all* of these patches are needed.
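
That kind of inventory check can be scripted; a sketch (the
download.gnome.org path and the metabuilder.c location are assumptions
for illustration):

  # Fetch and unpack the upstream 1.28.4 tarball:
  wget https://download.gnome.org/sources/gvfs/1.28/gvfs-1.28.4.tar.xz
  tar xf gvfs-1.28.4.tar.xz
  # If the GSequence/GQueue patches had landed, the metabuilder would use them:
  grep -c -E 'GSequence|GQueue' gvfs-1.28.4/metadata/metabuilder.c
  # A count of 0 means the metadata performance fixes are still absent.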


[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-04 Thread Amr Ibrahim
1.28.4 includes the fixing commit.

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-04 Thread Amr Ibrahim
Update to 1.28.4 in Xenial LP: #1579505.

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-04 Thread LocutusOfBorg
** Changed in: pcmanfm (Ubuntu)
   Status: Incomplete => Invalid

** Changed in: pcmanfm (Ubuntu Xenial)
   Status: New => Invalid

** Changed in: nautilus (Ubuntu)
   Status: Confirmed => Invalid

** Changed in: nautilus (Ubuntu Xenial)
   Status: New => Invalid

** Changed in: caja (Ubuntu)
   Status: Confirmed => Invalid

** Changed in: caja (Ubuntu Xenial)
   Status: New => Invalid

** Changed in: gvfs (Ubuntu)
 Assignee: Simon Quigley (tsimonq2) => (unassigned)

** Changed in: gvfs (Ubuntu Xenial)
 Assignee: (unassigned) => Simon Quigley (tsimonq2)

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-04 Thread LocutusOfBorg
** Changed in: gvfs (Ubuntu)
   Status: In Progress => Fix Released

** Also affects: nautilus (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: pcmanfm (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: gvfs (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: caja (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: gvfs (Ubuntu Xenial)
   Status: New => Fix Committed

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-04 Thread LocutusOfBorg
Sponsored.

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-04 Thread Simon Quigley
** Changed in: nautilus (Ubuntu)
   Status: Incomplete => Confirmed

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-04 Thread Simon Quigley
I have also uploaded my patch to ppa:tsimonq2/universe-upload-testing
and have tested it myself; it works 100% as intended in my testing.
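
For anyone who wants to test it, the PPA can be enabled with the usual
steps, something like the following (generic PPA usage, not
instructions specific to this upload):

  sudo add-apt-repository ppa:tsimonq2/universe-upload-testing
  sudo apt update
  sudo apt install --only-upgrade gvfs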

[Desktop-packages] [Bug 1133477] Re: [SRU] cut-n-paste move files got stuck forever

2017-08-04 Thread Launchpad Bug Tracker
Status changed to 'Confirmed' because the bug affects multiple users.

** Changed in: caja (Ubuntu)
   Status: New => Confirmed
