[coreutils] [PATCH] maint: remove a redundant sort parameter from a test

2010-05-24 Thread Pádraig Brady
I just pushed this trivial change in case
others used this test as a template for other scripts.

diff --git a/tests/misc/sort-month b/tests/misc/sort-month
index aee5215..8a8e4fa 100755
--- a/tests/misc/sort-month
+++ b/tests/misc/sort-month
@@ -30,7 +30,7 @@ locale --version >/dev/null 2>&1 ||
 for LOC in $LOCALE_FR $LOCALE_FR_UTF8 ja_JP.utf8; do
   mon=$(LC_ALL=$LOC locale abmon 2>/dev/null);
   smon=$(LC_ALL=$LOC locale abmon 2>/dev/null |
-  tr ';' '\n' | shuf | nl | LC_ALL=$LOC sort -b -k2,2M |
+  tr ';' '\n' | shuf | nl | LC_ALL=$LOC sort -k2,2M |
   cut -f2 | tr '\n' ';')
  test "$mon" = "$smon" || { fail=1; break; }
 done
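Since a month key skips leading blanks on its own, -b changes nothing here. A quick sanity check, as a sketch assuming GNU sort in the C locale (the sample lines are arbitrary):

```shell
# Both pipelines order the lines Jan, Feb, Mar; -b is redundant with -k2,2M.
with_b=$(printf '2 Mar\n1 Jan\n3 Feb\n' | LC_ALL=C sort -b -k2,2M)
without_b=$(printf '2 Mar\n1 Jan\n3 Feb\n' | LC_ALL=C sort -k2,2M)
```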



Re: [coreutils] considering uncrustify for automatic indentation: prepare

2010-05-31 Thread Pádraig Brady
On 31/05/10 10:12, Jim Meyering wrote:
   [PATCH 1/4] stat: use gnulib's alignof module
   [PATCH 2/4] maint: correct indentation of case_GETOPT_* macro uses
   [PATCH 3/4] maint: replace each for (;;) with while (true)
   [PATCH 4/4] maint: make spacing around = consistent, even in IF_LINT
 
 I've been considering using uncrustify to automatically
 format coreutils

How does it compare to GNU indent?
I was surprised there was no comparison on the site.
For reference, GNU indent sometimes works for coreutils
with these settings:

-Toff_t
-Twchar_t
-Tsize_t
-TFILE
-Tuintmax_t
-Tintmax_t
-nut
-l80

cheers,
Pádraig.



Re: [coreutils] How to check uniqueness on one or multiple columns in a table and print the duplicated rows?

2010-06-17 Thread Pádraig Brady
On 17/06/10 21:53, Peng Yu wrote:
 Hi,
 
 Uniq doesn't have an option to check the uniqueness for a given column
 (although it can exclude the first few columns). It will need 'cut'
 to check uniqueness on a given number of columns and print the
 duplicated rows. This is not convenient. I'm wondering if there is a
 better way to do so.

Not at the moment, no.
It's on the list of tasks to add field support to uniq.
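In the meantime awk can key on a single field; a sketch (the field number 2 and the inline sample data are arbitrary examples, not from the original question):

```shell
# Print rows whose 2nd field was already seen, i.e. the later duplicates
# (the first occurrence of each key is not printed).
dups=$(printf 'a x\nb y\nc x\n' | awk 'seen[$2]++')
```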



bug#6627: Build failure when pthread.h isn’t available

2010-07-14 Thread Pádraig Brady
 From: ludo at gnu.org (Ludovic Courtès)
 Date: Tue, 13 Jul 2010 18:16:30 +0200

 Hello,
 Coreutils fails to build when pthread.h isn’t available:

Thanks Ludo, I thought the fixed version would have built
before you noticed. I'll CC you the next time.
http://git.savannah.gnu.org/gitweb/?p=gnulib.git;a=commit;h=7f2ece89

cheers,
Pádraig.

p.s. I'm responding manually, having noticed the message
in the bug tracker but not on the mailing list after 1 hour





Re: [coreutils] [PATCH] fadvise: new module providing a simpler interface to posix_fadvise

2010-07-20 Thread Pádraig Brady
To get the same benefits noticed with `sort`
http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=commit;h=dae35bac
I'm applying the same hint to all appropriate utils in the attached.

$ grep -l SEQUENTIAL *.c
base64.c
cat.c
cksum.c
comm.c
cut.c
expand.c
fmt.c
fold.c
join.c
md5sum.c
nl.c
paste.c
pr.c
ptx.c
shuf.c
sort.c
sum.c
tee.c
tr.c
tsort.c
unexpand.c
uniq.c
wc.c

Linux does quite well without this hint, and is getting better:
  http://article.gmane.org/gmane.linux.kernel.mm/43753
  http://lkml.org/lkml/2010/2/3/27
However I saw a small benefit on my system with
fast flash devices (around 5% less run time) and perhaps
the kernel can use this info in more sophisticated ways in future.

cheers,
Pádraig.
diff --git a/src/base64.c b/src/base64.c
index fddb61c..1a36c91 100644
--- a/src/base64.c
+++ b/src/base64.c
@@ -26,6 +26,7 @@
 
 #include "system.h"
 #include "error.h"
+#include "fadvise.h"
 #include "xstrtol.h"
 #include "quote.h"
 #include "quotearg.h"
@@ -302,6 +303,8 @@ main (int argc, char **argv)
  error (EXIT_FAILURE, errno, "%s", infile);
 }
 
+  fadvise (input_fh, FADVISE_SEQUENTIAL);
+
   if (decode)
 do_decode (input_fh, stdout, ignore_garbage);
   else
diff --git a/src/cat.c b/src/cat.c
index c4a2a9e..47b5053 100644
--- a/src/cat.c
+++ b/src/cat.c
@@ -34,6 +34,7 @@
 
 #include "system.h"
 #include "error.h"
+#include "fadvise.h"
 #include "full-write.h"
 #include "quote.h"
 #include "safe-read.h"
@@ -700,6 +701,8 @@ main (int argc, char **argv)
 }
   insize = io_blksize (stat_buf);
 
+  fdadvise (input_desc, 0, 0, FADVISE_SEQUENTIAL);
+
   /* Compare the device and i-node numbers of this input file with
  the corresponding values of the (output file associated with)
  stdout, and skip this input file if they coincide.  Input
diff --git a/src/cksum.c b/src/cksum.c
index d240eb3..282c777 100644
--- a/src/cksum.c
+++ b/src/cksum.c
@@ -43,6 +43,7 @@
 #include <sys/types.h>
 #include <stdint.h>
 #include "system.h"
+#include "fadvise.h"
 #include "xfreopen.h"
 
 #ifdef CRCTAB
@@ -205,6 +206,8 @@ cksum (const char *file, bool print_name)
 }
 }
 
+  fadvise (fp, FADVISE_SEQUENTIAL);
+
   while ((bytes_read = fread (buf, 1, BUFLEN, fp)) > 0)
 {
   unsigned char *cp = buf;
diff --git a/src/comm.c b/src/comm.c
index ff42802..06b80b0 100644
--- a/src/comm.c
+++ b/src/comm.c
@@ -24,6 +24,7 @@
 #include "system.h"
 #include "linebuffer.h"
 #include "error.h"
+#include "fadvise.h"
 #include "hard-locale.h"
 #include "quote.h"
 #include "stdio--.h"
@@ -273,6 +274,8 @@ compare_files (char **infiles)
   if (!streams[i])
 error (EXIT_FAILURE, errno, "%s", infiles[i]);
 
+  fadvise (streams[i], FADVISE_SEQUENTIAL);
+
   thisline[i] = readlinebuffer (&all_line[i][alt[i][0]], streams[i]);
   if (ferror (streams[i]))
 error (EXIT_FAILURE, errno, "%s", infiles[i]);
diff --git a/src/cut.c b/src/cut.c
index 5fcf074..58776d9 100644
--- a/src/cut.c
+++ b/src/cut.c
@@ -31,6 +31,7 @@
 #include "system.h"
 
 #include "error.h"
+#include "fadvise.h"
 #include "getndelim2.h"
 #include "hash.h"
 #include "quote.h"
@@ -733,6 +734,8 @@ cut_file (char const *file)
 }
 }
 
+  fadvise (stream, FADVISE_SEQUENTIAL);
+
   cut_stream (stream);
 
   if (ferror (stream))
diff --git a/src/expand.c b/src/expand.c
index be50063..249255d 100644
--- a/src/expand.c
+++ b/src/expand.c
@@ -40,6 +40,7 @@
 #include <sys/types.h>
 #include "system.h"
 #include "error.h"
+#include "fadvise.h"
 #include "quote.h"
 #include "xstrndup.h"
 
@@ -243,13 +244,14 @@ next_file (FILE *fp)
   if (STREQ (file, "-"))
 {
   have_read_stdin = true;
-  prev_file = file;
-  return stdin;
+  fp = stdin;
 }
-  fp = fopen (file, "r");
+  else
+    fp = fopen (file, "r");
   if (fp)
 {
   prev_file = file;
+  fadvise (fp, FADVISE_SEQUENTIAL);
   return fp;
 }
  error (0, errno, "%s", file);
diff --git a/src/fmt.c b/src/fmt.c
index 1a268ee..8a5d8bd 100644
--- a/src/fmt.c
+++ b/src/fmt.c
@@ -27,6 +27,7 @@
 
 #include "system.h"
 #include "error.h"
+#include "fadvise.h"
 #include "quote.h"
 #include "xstrtol.h"
 
@@ -463,6 +464,7 @@ set_prefix (char *p)
 static void
 fmt (FILE *f)
 {
+  fadvise (f, FADVISE_SEQUENTIAL);
   tabs = false;
   other_indent = 0;
   next_char = get_prefix (f);
diff --git a/src/fold.c b/src/fold.c
index 9364a03..d585856 100644
--- a/src/fold.c
+++ b/src/fold.c
@@ -24,6 +24,7 @@
 
 #include "system.h"
 #include "error.h"
+#include "fadvise.h"
 #include "quote.h"
 #include "xstrtol.h"
 
@@ -142,6 +143,8 @@ fold_file (char const *filename, size_t width)
   return false;
 }
 
+  fadvise (stdin, FADVISE_SEQUENTIAL);
+
   while ((c = getc (istream)) != EOF)
 {
   if (offset_out + 1 >= allocated_out)
diff --git a/src/join.c b/src/join.c
index c977116..fa18c9d 100644
--- a/src/join.c
+++ b/src/join.c
@@ -24,6 +24,7 @@
 
 #include "system.h"
 #include "error.h"
+#include "fadvise.h"
 #include "hard-locale.h"
 #include 

Re: [coreutils] [PATCH] sort: fix bug with EOF at buffer refill

2010-07-27 Thread Pádraig Brady
On 27/07/10 04:53, Paul Eggert wrote:
 * src/sort.c (fillbuf): Don't append eol unless the line is nonempty.
 This fixes a bug that was partly but not completely fixed by
 the aadc67dfdb47f28bb8d1fa5e0fe0f52e2a8c51bf commit (dated July 15).
 * tests/misc/sort (realloc-buf-2): New test, which catches this
 bug on 64-bit hosts.

Good catch. The extra blank line is output on
my 32 bit system with: printf "%*s\n" 40 | sort -S1
Note even though the *ptrlim++ = eol; line was
restored as it was originally, the extraneous line is
only output since Chen's patch.

cheers,
Pádraig.



Re: [coreutils] [PATCH] sort: -h now handles comparisons such as 6000K vs 5M and 5MiB vs 5MB

2010-07-30 Thread Pádraig Brady
On 30/07/10 08:59, Paul Eggert wrote:
 I discovered a race condition in coreutils sort, when running
 multithreaded, in the check_mixed_SI_IEC function.  Rather than add
 interlocks, I decided to address the underlying problem by fixing
 coreutils to do the right thing with mixed SI and IEC prefixes (modulo
 rounding errors).  To do this I installed the following:

This is more general, but seems slightly overengineered given that coreutils
doesn't produce mixed prefixes or non-uniform abbreviations.
Also it's slower, and now -h can't sort numbers with thousands
separators, which is a common human format. I know we can with -n, but it's
a shame to lose that with -h. Perhaps, since strtold() is so heavyweight anyway,
we could strip commas first?
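The comma-stripping idea can be sketched in shell terms (assuming ',' as the thousands separator and GNU sort's -h; the sample values are arbitrary):

```shell
# Strip thousands separators, then compare human-readable sizes with -h.
sorted=$(printf '1,000K\n5M\n900K\n' | sed 's/,//g' | sort -h)
```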

cheers,
Pádraig.



Re: [coreutils] [PATCH] tests: silence 'make syntax-check'

2010-08-11 Thread Pádraig Brady
On 09/08/10 21:59, Eric Blake wrote:
 * gl/tests/test-rand-isaac.c (main): Avoid warnings from
 syntax-check.
 ---
 
 Committing as obvious.  The set_program_name hack is borrowed
 liberally from test-ino-map.c.
 
  gl/tests/test-rand-isaac.c |3 ++-
  1 files changed, 2 insertions(+), 1 deletions(-)
 
 diff --git a/gl/tests/test-rand-isaac.c b/gl/tests/test-rand-isaac.c
 index 330b18f..c1ad01a 100644
 --- a/gl/tests/test-rand-isaac.c
 +++ b/gl/tests/test-rand-isaac.c
 @@ -576,6 +576,7 @@ static isaac_word const expected[2][ISAAC_WORDS] =
  int
  main (int argc, char **argv)
  {
 +  /* set_program_name (argv[0]); placate overzealous syntax-check test.  */
unsigned int i;
isaac_word r[ISAAC_WORDS];
int iterations;

It would be more consistent to update .x-sc_program_name.
In fact we can do better and exclude all programs in this directory:


commit 41fe1906319d008af5c5f1d8fbeeadf45cde5333
Author: Pádraig Brady p...@draigbrady.com
Date:   Wed Aug 11 10:49:22 2010 +0100

maint: exclude all tests from the set_program_name syntax-check

* .x-sc_program_name: Exclude all current and future
.c files in gl/tests from this check.
* gl/tests/test-di-set.c: Remove the hack to work around
the set_program_name syntax-check.
* gl/tests/test-ino-map.c: Likewise.
* gl/tests/test-rand-isaac.c: Likewise.

diff --git a/.x-sc_program_name b/.x-sc_program_name
index 0667044..86cc5c1 100644
--- a/.x-sc_program_name
+++ b/.x-sc_program_name
@@ -1,4 +1,3 @@
 gl/lib/randint.c
 lib/euidaccess-stat.c
-gl/tests/test-mbsalign.c
-gl/tests/test-fadvise.c
+gl/tests/.*\.c
diff --git a/gl/tests/test-di-set.c b/gl/tests/test-di-set.c
index e5fb6cb..7f02e66 100644
--- a/gl/tests/test-di-set.c
+++ b/gl/tests/test-di-set.c
@@ -39,7 +39,6 @@
 int
 main (void)
 {
-  /* set_program_name (argv[0]); placate overzealous syntax-check test.  */
   struct di_set *dis = di_set_alloc ();
   ASSERT (dis);

diff --git a/gl/tests/test-ino-map.c b/gl/tests/test-ino-map.c
index 2b44602..aa7334a 100644
--- a/gl/tests/test-ino-map.c
+++ b/gl/tests/test-ino-map.c
@@ -39,7 +39,6 @@
 int
 main ()
 {
-  /* set_program_name (argv[0]); placate overzealous syntax-check test.  */
   enum { INO_MAP_INIT = 123 };
   struct ino_map *ino_map = ino_map_alloc (INO_MAP_INIT);
   ASSERT (ino_map != NULL);
diff --git a/gl/tests/test-rand-isaac.c b/gl/tests/test-rand-isaac.c
index c1ad01a..03b004c 100644
--- a/gl/tests/test-rand-isaac.c
+++ b/gl/tests/test-rand-isaac.c
@@ -576,7 +576,6 @@ static isaac_word const expected[2][ISAAC_WORDS] =
 int
 main (int argc, char **argv)
 {
-  /* set_program_name (argv[0]); placate overzealous syntax-check test.  */
   unsigned int i;
   isaac_word r[ISAAC_WORDS];
   int iterations;

cheers,
Pádraig.



Re: [coreutils] [PATCH] sort: -R now uses less memory on long lines with internal NULs

2010-08-11 Thread Pádraig Brady
On 05/08/10 00:12, Paul Eggert wrote:

 @@ -2038,41 +2027,117 @@ static int
  compare_random (char *restrict texta, size_t lena,
  char *restrict textb, size_t lenb)
  {
 -  int diff;
 +  uint32_t dig[2][MD5_DIGEST_SIZE / sizeof (uint32_t)];

I know this is just moved code but it confused me.
Is it uint32_t for alignment (speed)?
Would the following be better on 64 bit while
being immune to hash size?

diff --git a/src/sort.c b/src/sort.c
index a8d0c14..ac913b6 100644
--- a/src/sort.c
+++ b/src/sort.c
@@ -2026,7 +2026,8 @@ compare_random (char *restrict texta, size_t lena,
   char *buf = stackbuf;
   size_t bufsize = sizeof stackbuf;
   void *allocated = NULL;
-  uint32_t dig[2][MD5_DIGEST_SIZE / sizeof (uint32_t)];
+  /* align the digests on a size_t boundary for speed.  */
+  size_t dig[2][(MD5_DIGEST_SIZE + sizeof (size_t) - 1) / sizeof (size_t)];
   struct md5_ctx s[2];
   s[0] = s[1] = random_md5_state;

@@ -2121,7 +2122,7 @@ compare_random (char *restrict texta, size_t lena,
   /* Compute and compare the checksums.  */
   md5_process_bytes (texta, lena, &s[0]); md5_finish_ctx (&s[0], dig[0]);
   md5_process_bytes (textb, lenb, &s[1]); md5_finish_ctx (&s[1], dig[1]);
-  int diff = memcmp (dig[0], dig[1], sizeof dig[0]);
+  int diff = memcmp (dig[0], dig[1], MD5_DIGEST_SIZE);

   /* Fall back on the tiebreaker if the checksums collide.  */
   if (! diff)

 +  struct md5_ctx s[2];
 +  s[0] = s[1] = random_md5_state;
  
 -  if (! hard_LC_COLLATE)
 -diff = cmp_hashes (texta, lena, textb, lenb);
 -  else
 +  if (hard_LC_COLLATE)
  {
 -  /* Transform the text into the basis of comparison, so that byte
 - strings that would otherwise considered to be equal are
 - considered equal here even if their bytes differ.  */
 -
 -  char *buf = NULL;
 -  char stackbuf[4000];
 -  size_t tlena = xmemxfrm (stackbuf, sizeof stackbuf, texta, lena);

Just off hand ideas...
There are a lot of expensive operations done here.
Would it be worth doing a memcmp() first, and only doing
the rest if the bytes differ? That depends on the data, I know,
but given caching, the up-front memcmp may be worth it.
Also I wonder would caching the xfrm() data in the line buffers
be worth it, as the number of comparisons increases?

cheers,
Pádraig.



Re: [coreutils] [PATCH] Fix typo in texinfo annotation.

2010-08-14 Thread Pádraig Brady
On 14/08/10 06:17, Ralf Wildenhues wrote:
 
 Hi coreutils maintainers,
 
 this trivial patch avoids this warning from CVS texinfo's makeinfo:
 
 coreutils.texi:3807: warning: `.' or `,' must follow @xref, not `)'.
 coreutils.texi:3807: warning: for cross-references in parentheses, use @pxref.
 
 Cheers,
 Ralf
 
  doc/coreutils.texi |2 +-
  1 files changed, 1 insertions(+), 1 deletions(-)
 
 diff --git a/doc/coreutils.texi b/doc/coreutils.texi
 index 66309b1..13b6aa5 100644
 --- a/doc/coreutils.texi
 +++ b/doc/coreutils.texi
 @@ -3804,7 +3804,7 @@ converting to floating point.
  @vindex LC_NUMERIC
  Sort numerically, first by numeric sign (negative, zero, or positive);
  then by @acronym{SI} suffix (either empty, or @samp{k} or @samp{K}, or
 -one of @samp{MGTPEZY}, in that order; @xref{Block size}); and finally
 +one of @samp{MGTPEZY}, in that order; @pxref{Block size}); and finally
  by numeric value.  For example, @samp{1023M} sorts before @samp{1G}
  because @samp{M} (mega) precedes @samp{G} (giga) as an @acronym{SI}
  suffix.  This option sorts values that are consistently scaled to the
 -- 1.7.2.1

Thanks Ralf.

I'll apply that.



bug#7042: df --help does not show `-m' option

2010-09-16 Thread Pádraig Brady
On 16/09/10 12:59, Petr Pisar wrote:
 Hello,
 
 I found `df' utility from coreutils-8.5 does not describe `-m' option that is
 mentioned in info page and the program accepts it.
 
 -- Petr

-m was deprecated in 2001
http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=commit;h=d1772031

It is not mentioned in the info page as far as I can see
It's not specified by POSIX
It's not used in solaris
It _is_ used in FreeBSD

So this is much the same argument as supporting `dd bs=1m`
http://lists.gnu.org/archive/html/coreutils/2010-09/msg00028.html

Also there was a previous request for `df -g` and `du -g`
http://lists.gnu.org/archive/html/bug-coreutils/2009-12/msg0.html

The only reason one would support these is for compat with FreeBSD,
and if we did, we wouldn't want to promote their use through documentation.


cheers,
Pádraig.





[coreutils] [PATCH] tests: fix a printf portability issue

2010-09-17 Thread Pádraig Brady
not all printf commands support \xhh

diff --git a/tests/misc/sort-debug-keys b/tests/misc/sort-debug-keys
index 57a52a6..4e8beff 100755
--- a/tests/misc/sort-debug-keys
+++ b/tests/misc/sort-debug-keys
@@ -275,7 +275,7 @@ printf "2.,,3\n2.4\n" | sort -s -k1n --debug
 printf "2,,3\n2.4\n" | sort -s -k1n --debug

 # -z means we convert \0 to \n
-printf "1a\x002b\x00" | sort -s -n -z --debug
+printf "1a\0002b\000" | sort -s -n -z --debug

 # Check that \0 and \t intermix.
printf "\0\ta\n" | sort -s -k2b,2 --debug | tr -d '\0'
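The octal escapes in the rewritten line behave identically where \xHH may not; a quick check of the idea (NUL bytes mapped to '-' so the shell can compare the result):

```shell
# \000 (octal) is the portable spelling of the NUL that \x00 produces
# in printf implementations that support hex escapes.
out=$(printf '1a\0002b\000' | tr '\0' '-')
```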



Re: [coreutils] [PATCH] tests: fix a printf portability issue

2010-09-19 Thread Pádraig Brady
On 19/09/10 07:50, Jim Meyering wrote:
 Pádraig Brady wrote:
 not all printf commands support \xhh

 diff --git a/tests/misc/sort-debug-keys b/tests/misc/sort-debug-keys
 index 57a52a6..4e8beff 100755
 --- a/tests/misc/sort-debug-keys
 +++ b/tests/misc/sort-debug-keys
 @@ -275,7 +275,7 @@ printf "2.,,3\n2.4\n" | sort -s -k1n --debug
  printf "2,,3\n2.4\n" | sort -s -k1n --debug

  # -z means we convert \0 to \n
 -printf "1a\x002b\x00" | sort -s -n -z --debug
 +printf "1a\0002b\000" | sort -s -n -z --debug
 
 That would accommodate an inferior printf builtin function,
 with the implication that all printf uses should do the same.
 It may be hard or at least tedious to audit existing uses --
 then it'd be good (but more work) to enforce the no-\xHH policy.
 
 Since this is the coreutils, we have the option of a more sustainable
 policy: use env to ensure the test runs the printf binary that we've
 just built in this package:
 
 # printf '\xHH' is not portable with some built-ins. FIXME: list an offender
 # Use env to force use of the one from coreutils.
 env printf '1a\x002b\x00' | sort -s -n -z --debug
 
 That also gives printf more test coverage.
 
 Also, please use single quotes in cases like this,
 where nothing is intended to be shell-expanded.

I'll change as you suggest (and also in the join-i18n test).

Note I left the double quotes (and octal escapes)
because that's what vim highlights.
vim's shell highlighting is woeful anyway,
especially if you don't change the default like
this in your ~/.vimrc
  let g:is_posix = 1
I should fix up vim rather than tweaking
scripts to accommodate it.

cheers,
Pádraig.



[coreutils] Re: [PATCH] join: support multi-byte character encodings

2010-09-20 Thread Pádraig Brady
On 20/09/10 01:55, Bruno Haible wrote:
 [CCing bug-libunistring. This is a reply to
  http://lists.gnu.org/archive/html/coreutils/2010-09/msg00032.html.]
 
 Hi Pádraig,
 
 I was doing some performance analysis of the above patch
 and noticed it performed very well usually but not when
 ulc_casecoll or ulc_casecmp were called, when not in a
 UTF-8 locale. In fact it seemed to slow down everything
 by about 5 times?

 $ seq -f%010.0f 3 | tee f1 > f2

 # with ulc_casecmp
 $ time LANG=en_US join -i f1 f2 > /dev/null
 real    0m1.281s

 # with ulc_casecoll
 $ time LANG=en_US join -i f1 f2 > /dev/null
 real    0m2.120s

 # with ulc_casecmp in UTF8 locale
 $ time LANG=en_US.utf8 join -i f1 f2 > /dev/null
 real    0m0.260s

 # with ulc_casecoll in UTF8 locale
 $ time LANG=en_US.utf8 join -i f1 f2 > /dev/null
 real    0m0.437s
 
 Thanks for the test case. I've produced a similar test case that operates
 on the same data, in a way similar to 'join', and profiled it:
   $ valgrind --tool=callgrind ./a.out
   $ kcachegrind callgrind.out.*
 
 # Doing encoding outside gives an indication
 # what the performance should be like:
 # with ulc_casecoll in UTF8 locale
 $ time LANG=en_US.utf8 join -i <(iconv -fiso-8859-1 -tutf8 f1) \
   <(iconv -fiso-8859-1 -t utf8 f2) > /dev/null
 real    0m0.462s
 
 This is an unfair comparison, because it throws overboard the requirement
 that 'join' has to be able to deal with input that 'iconv' would not accept.
 
 A quick callgraph of ulc_casecoll gives:

 ulc_casecoll(s1,s2)
   ulc_casexfrm(s1)
 u8_conv_from_encoding()
   mem_iconveha(from,UTF8,translit=true)
 mem_iconveh()
   iconveh_open()
   mem_cd_iconveh()
 mem_cd_iconveh_internal()
   iconv()
   iconveh_close()
 u8_casexfrm
   u8_ct_casefold()
 u8_casemap()
   u8_conv_to_encoding()
  ...
   memxfrm()
   ulc_casexfrm(s2)
   memcmp(s1,s2)
 
 Unfortunately such a callgraph is not quantitative. I needed a
 profiler to really make sense out of it.
 
 The profiling showed me three things:
   - The same string is being converted to UTF-8, then case-folded
 according to Unicode and locale rules, then converted back to locale
 encoding, and then strxfrm'ed, more than once. CPU time would be
 minimized (at the expense of more memory use) if this was done only
 once for each input line. It is not clear at this point how much
 it must be done in join.c and how much help for this can be added
 to libunistring.

Well my first thought is that libunistring might expose
a version of these conv functions with a context parameter
to reference the state.

   - There are way too many iconv_open calls. In particular, while
 ulc_casecmp and ulc_casecoll have to convert two string arguments
 and could use the same iconv_t objects for both arguments, they
 open fresh iconv_t objects for each argument.
   - Additionally, every call to iconveh_open with target encoding UTF-8
 calls iconv_open twice, where once would be sufficient.

Good stuff.

 So one can see the extra overhead involved when not in UTF-8.
 Seems like I'll have to efficiently convert fields to utf8
 internally first before calling u8_casecoll()?
 
 If there was a guarantee that all strings can be converted to UTF-8, yes.
 But there is no such guarantee for 'join'. The keycmp function in
 http://lists.gnu.org/archive/html/coreutils/2010-09/msg00029.html is
 therefore actually wrong: It does not always satisfy the requirement
 keycmp(A,B) < 0 && keycmp(B,C) < 0  ==>  keycmp(A,C) < 0
 The required handling of input strings that are not valid in the locale
 encoding is more elaborate than I thought. I need to think more about
 this issue.

These are the sort of edge cases I was worrying about
when looking at normalization and punted until later.

cheers,
Pádraig.



[coreutils] tr: case mapping anomaly

2010-09-24 Thread Pádraig Brady
I was just looking at a bug reported to Fedora, where this abort()s:

 $ LC_ALL=en_US tr '[:upper:] ' '[:lower:]'

It stems from the fact that there are 56 upper and 59 lower chars in iso-8859-1.
But I also noticed an anomaly which would affect the fix, which is,
that [:upper:] and [:lower:] are extended in string 2
when there are still characters to match in string 1.
I.E. 0xDE (the last upper char) is output from:

 $ echo _ _ | LC_ALL=en_US ./src/tr '[:lower:] ' '[:upper:]'

That seems quite inconsistent given that other classes
are not allowed in string 2 when translating:

 $ echo ab . | LANG=en_US tr '[:digit:]' '[:alpha:]'
 tr: when translating, the only character classes that may appear in
 string2 are `upper' and `lower'

For consistency I think it better to keep the classes
in string 2 just for case mapping, and do something like:

 $ tr '[:upper:] ' '[:lower:]'
 tr: when not truncating set1, a character class can't be
 the last entity in string2

Note BSD allows extending the above, but that's at least
consistent with any class being allowed in string2.
I.E. this is disallowed by coreutils but Ok on BSD:

 $ echo 1 2 | LC_ALL=en_US.iso-8859-1 tr ' ' '[:alpha:]'
 1A2
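For contrast, the paired-class case conversion, the one use of classes in string2 that everyone accepts, is unaffected by any of this:

```shell
# [:lower:] matched with [:upper:] requests case conversion.
upper=$(echo 'hello, world' | tr '[:lower:]' '[:upper:]')
```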

Is it OK to change tr like this?
I can't see anything depending on that.

cheers,
Pádraig.



Re: [coreutils] tr: case mapping anomaly

2010-09-25 Thread Pádraig Brady
On 25/09/10 00:22, Eric Blake wrote:
 On 09/24/2010 04:47 PM, Pádraig Brady wrote:
 I was just looking at a bug reported to fedora there where this abort()s

   $ LC_ALL=en_US tr '[:upper:] ' '[:lower:]'
 
 Behavior is already unspecified by POSIX when string1 is longer than
 string2.  But given what POSIX does say:
 
 When both the -d and -s options are specified, any of the character
 class names shall be accepted in string2. Otherwise, only character
 class names lower or upper are valid in string2 and then only if the
 corresponding character class ( upper and lower, respectively) is
 specified in the same relative position in string1. Such a specification
 shall be interpreted as a request for case conversion. When [:lower:]
 appears in string1 and [:upper:] appears in string2, the arrays shall
 contain the characters from the toupper mapping in the LC_CTYPE category
 of the current locale. When [:upper:] appears in string1 and [:lower:]
 appears in string2, the arrays shall contain the characters from the
 tolower mapping in the LC_CTYPE category of the current locale. The
 first character from each mapping pair shall be in the array for string1
 and the second character from each mapping pair shall be in the array
 for string2 in the same relative position.
 
 Except for case conversion, the characters specified by a character
 class expression shall be placed in the array in an unspecified order.
 ...
 
 However, in a case conversion, as described previously, such as:
 
 tr -s '[:upper:]' '[:lower:]'
 
 the last operand's array shall contain only those characters defined as
 the second characters in each of the toupper or tolower character pairs,
 as appropriate.
 
 
 
 I interpret this to mean that even though there are 59 lower and 56
 upper in en_US, there are a fixed number of toupper case-mapping pairs,
 and there are likewise a fixed number of tolower case-mapping pairs.
 Therefore, [:upper:] and [:lower:] expand to the same number of array
 entries (whether that is 59 pairs or 56 pairs is irrelevant), and
 mappings like tr '[:lower:] ' '[:upper:]_' must unambiguously convert
 space to underscore and also guarantee that no lower-case letter becomes
 an underscore.

Thanks for digging up the relevant POSIX bits.
Yes I agree that '[:lower:]' '[:upper:]' should
be treated as a unit and not leak into adjacent elements.
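Treated as a unit, the class pair leaves trailing literal characters aligned with each other; a sketch in the C locale (26 mapping pairs):

```shell
# The space after [:lower:] lines up with the _ after [:upper:],
# so ' ' maps to '_' with no leakage from the class expansion.
mapped=$(echo 'a b' | LC_ALL=C tr '[:lower:] ' '[:upper:]_')
```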

 
 Your question is basically what should we do on the unspecified behavior
 of '[:lower:] ' '[:upper:]', where string1 is longer than string2, since
 that falls outside the bounds of POSIX.
 
 I.E. 0xDE (the last upper char) is output from:

   $ echo _ _ | LC_ALL=en_US ./src/tr '[:lower:] ' '[:upper:]'
 
 That matches the behavior we choose in all other instances where string1
 is longer than string2, where GNU tr follows BSD behavior of padding the
 last character of string2 to meet the length of string1.
 
 But, since POSIX is clear that the order of [:upper:] mappings is
 unspecified, I agree that it is not a good guarantee to the user of
 which byte gets duplicated to fill out the conversion, and we are better
 off rejecting that attempted usage.
 

 That seems quite inconsistent given that other classes
 are not allowed in string 2 when translating:

   $ echo ab . | LANG=en_US tr '[:digit:]' '[:alpha:]'
   tr: when translating, the only character classes that may appear in
   string2 are `upper' and `lower'

 For consistency I think it better to keep the classes
 in string 2 just for case mapping, and do something like:

   $ tr '[:upper:] ' '[:lower:]'
   tr: when not truncating set1, a character class can't be
   the last entity in string2
 
 I'd rather see it phrased:
 
 When string2 is shorter than string1, a character class can't be the
 last entity in string2.

OK. That is a bit clearer.

 Note BSD allows extending the above, but that's at least
 consistent with any class being allowed in string2.
 I.E. this is disallowed by coreutils but Ok on BSD:

   $ echo 1 2 | LC_ALL=en_US.iso-8859-1 tr ' ' '[:alpha:]'
   1A2
 
 The BSD behavior violates an explicit POSIX wording; we can't do an
 extension like that without either turning on a POSIXLY_CORRECT check or
 adding a command line option, neither of which I think is necessary.  So
 I see no reason to copy the BSD behavior of allowing any character class.

Yes I agree. I was just pointing out what BSD does here.

 Is it OK to change tr like this?
 I can't see anything depending on that.
 
 Seems reasonable to me, once we decide on the error message wording.

Great, I'll change it as above.

cheers,
Pádraig.



Re: [coreutils] tr: case mapping anomaly

2010-09-29 Thread Pádraig Brady
On 29/09/10 07:45, Jim Meyering wrote:
 Pádraig Brady wrote:
 +# characters in some locales, triggered abort()s and invalid behavior
 +if test "$(LC_ALL=en_US.ISO-8859-1 locale charmap 2>/dev/null)" = "ISO-8859-1";
 +then
 +  (
 
 No big deal, but why run this in a subshell?
 It doesn't seem necessary here.

Yes that is overkill in this case.
I had copied the sub-shell bit from my
locally improved join-i18n test.

I've just pushed that.

cheers,
Pádraig.



Re: [coreutils] [PATCH 1/2] stat: support printing birthtime

2010-09-30 Thread Pádraig Brady
On 01/10/10 00:32, Eric Blake wrote:
 * src/stat.c (print_stat): New %w and %W formats.
 (do_stat): Include %w in verbose format.
 (usage): Document them.
 * doc/coreutils.texi (stat invocation): Likewise.
 * NEWS: Likewise.
 Suggested by Andre Osku Schmidt.
 ---
 
 I've tested that this works on cygwin.  On Fedora 13 with an
 ext4 partition, the birthtime appears to not be returned in
 stat().  If the kernel guys will ever commit to a stable
 xstat() interface, which we can then write gnulib wrappers
 for, we can use that instead.  I'm assuming this will also
 work on BSD systems, although I have not yet tested that.
 
 I wasn't sure how to write a test for this - how to tell if
 a filesystem has birthtime support exposed by the kernel?
 Ideas on that front are welcome.
 
 For an example of what it looks like on a file system with
 all four times:
 
 $ src/stat ChangeLog
   File: `ChangeLog'
   Size: 496660488IO Block: 65536  regular file
Device: 10897b43h/277445443d  Inode: 562949954522488  Links: 1
 Access: (0644/-rw-r--r--)  Uid: ( 1007/  eblake)   Gid: (  513/None)
 Access: 2010-09-30 16:58:48.85900 -0600
 Modify: 2010-09-30 16:52:50.0 -0600
 Change: 2010-09-30 16:58:06.17625 -0600
  Birth: 2010-09-30 16:58:06.098125000 -0600

So by default on Linux we'll get:

  Birth: -

Might it be better to suppress the line if not present,
given its lack of support by file systems?

 +  %x   Time of file birth, or - if unknown\n\
 +  %X   Time of file birth as seconds since Epoch, or - if unknown\n\

s/x/w/

cheers,
Pádraig.



Re: [coreutils] [PATCH 2/2] stat: print timestamps to full resolution

2010-09-30 Thread Pádraig Brady
On 01/10/10 00:32, Eric Blake wrote:
 * src/stat.c (epoch_time): New function.
 (print_stat): Use it for %[WXYZ].
 * NEWS: Document this.
 * tests/touch/60-seconds: Adjust test to match.
 ---
 
 It bugs me that %x has more information than %X in 'stat --format',
 especially, since we don't support any format modifiers for getting
 at the additional information.  We're already incompatible with
 BSD stat(1) format modifiers, and there is no standard for stat(1),
 so I wasn't too worried about changing the meaning of existing
 modifiers rather than burning new letters just for the nanosecond
 portions.  And now that POSIX 2008 requires nanonsecond resolution
 in stat(2), you could argue that we should always be displaying it.

It looks like ext4 at least supports
nanosecond resolution timestamps

$ date --reference=/ext4/ +%s.%N
1285519989.081870491
$ date --reference=/ext3/ +%s.%N
1266874130.0

There is a fair chance that scripts that assume
%X is an integer may break.
I'd be all on for it otherwise.
As it is I'm 60:40 for the change.
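A script written against the integral %X would need to truncate if the change goes in, e.g. (hypothetical timestamp value):

```shell
# Keep only the whole-seconds part of a possibly fractional timestamp.
t='1285519989.081870491'
secs=${t%%.*}
```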

cheers,
Pádraig.

 
  +static char * ATTRIBUTE_WARN_UNUSED_RESULT
  +epoch_time (struct timespec t)
  +{
  +  static char str[INT_STRLEN_BOUND (time_t) + sizeof ".N"];
  +  if (TYPE_SIGNED (time_t))
  +    sprintf (str, "%" PRIdMAX ".%09lu", (intmax_t) t.tv_sec,
  +             (unsigned long) t.tv_nsec);
  +  else
  +    sprintf (str, "%" PRIuMAX ".%09lu", (uintmax_t) t.tv_sec,
  +             (unsigned long) t.tv_nsec);
  +  return str;

time_t can be a float on weird platforms I think?

cheers,
Pádraig.



Re: [coreutils] [PATCH] maint: suppress a bogus used-uninitialized warning in tr.c

2010-10-01 Thread Pádraig Brady
On 01/10/10 10:24, Jim Meyering wrote:
 Without this, I'd get
 
   cc1: warnings being treated as errors
   tr.c: In function 'main':
   tr.c:1400: warning: 'char_to_repeat' may be used uninitialized in this 
 function

There must have been some limitation with my compiler
  gcc (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2)
as it seems fairly obvious a warning should have
been issued here.

Thanks for the cleanup.

Pádraig.



Re: [coreutils] stat and SELinux context

2010-10-01 Thread Pádraig Brady
On 01/10/10 16:49, Eric Blake wrote:
 In working on the birthtime stat support, I noticed that we documented
 that -Z has been removed, on the grounds that you can get at it with
 --format=%C (although NEWS didn't mention the replacement).
 
 Would it make sense to patch the default stat output (with no --format)
 to automatically supply the %C line that Fedora's stat -Z has already
 been patched to provide?  That is, if coreutils is compiled with SELinux
 support, I see no reason why the context information should not be part
 of the default output.  That way, on Fedora, where you now use 'stat -Z'
 because of a distro-specific patch to override -Z being a no-op, you
 would in the future ALWAYS get %C output without having to use -Z.
 
 Besides, now's the perfect time to change the default output to include
 %C where available, given that we're already changing it to add %w
 birthtime ;)

Marginal.

I would vote 60:40 against doing this because
of the non standard format of the context info,
and the less generic nature of SELinux info
(compared to birthtime for example).

cheers,
Pádraig.



Re: [coreutils] [PATCHv2] stat: print SELinux context when available

2010-10-05 Thread Pádraig Brady
The attached fixes a logic inversion issue,
and doesn't print context in file system mode
which is confusing to me at least.

There is also the argument not to print context
in terse mode at all, as we'll have issues
with outputting other extended attributes
like capabilities and ACLs etc?

cheers,
Pádraig.
From fd4ca634513a610309206be3b446a3d11c5b1d3a Mon Sep 17 00:00:00 2001
From: =?utf-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Tue, 5 Oct 2010 08:40:17 +0100
Subject: [PATCH] stat: adjust the printing of SELinux context

* src/stat.c (default_format): Don't print SELinux context
when in file system (-f) mode, as the context is associated
with the file, not the file system.
Fix logic inversion, so that in terse mode, %C is included
only when is_selinux_enabled and not vice versa.
---
 src/stat.c |   26 +-
 1 files changed, 5 insertions(+), 21 deletions(-)

diff --git a/src/stat.c b/src/stat.c
index e13f21f..a82fcb3 100644
--- a/src/stat.c
+++ b/src/stat.c
@@ -1074,12 +1074,7 @@ default_format (bool fs, bool terse, bool device)
   if (fs)
 {
   if (terse)
 -{
 -  if (0 < is_selinux_enabled ())
 -format = xstrdup ("%n %i %l %t %s %S %b %f %a %c %d %C\n");
 -  else
 -format = xstrdup ("%n %i %l %t %s %S %b %f %a %c %d\n");
 -}
 +format = xstrdup ("%n %i %l %t %s %S %b %f %a %c %d\n");
   else
 {
   /* TRANSLATORS: This string uses format specifiers from
@@ -1091,29 +1086,18 @@ Block size: %-10s Fundamental block size: %S\n\
 Blocks: Total: %-10b Free: %-10f Available: %a\n\
 Inodes: Total: %-10c Free: %d\n\
 "));
-
 -  if (0 < is_selinux_enabled ())
 -{
 -  /* TRANSLATORS: This string uses format specifiers from
 - 'stat --help' with --file-system, and NOT from printf.  */
 -  char *temp = format;
 -  format = xasprintf ("%s%s", format, _("\
 -Context: %C\n\
 -"));
-  free (temp);
-}
 }
 }
   else /* ! fs */
 {
   if (terse)
 {
 -  if (0 < is_selinux_enabled ())
 +  if (is_selinux_enabled ())
 format = xstrdup ("%n %s %b %f %u %g %D %i %h %t %T"
 -   " %X %Y %Z %W %o\n");
 +   " %X %Y %Z %W %o %C\n");
   else
 format = xstrdup ("%n %s %b %f %u %g %D %i %h %t %T"
 -   " %X %Y %Z %W %o %C\n");
 +   " %X %Y %Z %W %o\n");
 }
   else
 {
@@ -1152,7 +1136,7 @@ Access: (%04a/%10.10A)  Uid: (%5u/%8U)   Gid: (%5g/%8G)\n\
 ));
   free (temp);
 
 -  if (0 < is_selinux_enabled ())
+  if (is_selinux_enabled ())
 {
   temp = format;
   /* TRANSLATORS: This string uses format specifiers from
-- 
1.6.2.5



Re: [coreutils] [PATCHv2] stat: print SELinux context when available

2010-10-05 Thread Pádraig Brady
On 05/10/10 10:03, Jim Meyering wrote:
 -  if (0 < is_selinux_enabled ())
 +  if (is_selinux_enabled ())
 ...
 -  if (0 < is_selinux_enabled ())
 +  if (is_selinux_enabled ())
 
 However, the changes to the use of is_selinux_enabled look wrong,
 since that function returns -1 upon failure.
 That's why we test 0 < is_selinux_enabled ().
 
From the man page:
 
   DESCRIPTION
  is_selinux_enabled returns 1 if SELinux is running or 0 if it  is
  not.  On error, -1 is returned.

Eep. That man page has changed since Fedora 12.
Fixed, and pushed.

thanks,
Pádraig.



Re: [coreutils] join feature: auto-format

2010-10-06 Thread Pádraig Brady
On 06/10/10 21:41, Assaf Gordon wrote:
 Hello,
 
 I'd like to (re)suggest a feature for the join program - the ability to 
 automatically build an output format line (similar but easier than using 
 -o).
 
 I've previously mentioned it here (but got no favorable responses):
 http://lists.gnu.org/archive/html/bug-coreutils/2009-11/msg00151.html
 
 Several people have been using this option for a year now (on our local 
 servers), so I thought I might try to suggest it again.
 
 The full patch is attached, and also available here:
 http://cancan.cshl.edu/labmembers/gordon/files/join_auto_format_2010_10_06.patch
 
 Here's the common use case:
 
 Given two tabular files, with a common key at first column, and many numeric 
 (or other) values on other columns, the user wants to join them together 
 easily.
 One requirement is that empty/missing values should be populated with 00.
 
 File 1
 ==
 bar 10 13 15 16 11 32
 foo 10 10 11 12 13 14
 
 
 File 2
 ==
 bar 99 91 90 93 91 93
 baz 90 91 99 96 97 95
 
 
 Desired joined output
 ==
 bar 10 13 15 16 11 32 99 91 90 93 91 93
 baz 00 00 00 00 00 00 90 91 99 96 97 95
 foo 10 10 11 12 13 14 00 00 00 00 00 00
 
 There is no technical problem in achieving this, the parameters would be:
 -a1 -a2 -e 00 -o 0,1.2,1.3,1.4,1.5,1.6,1.7,2.2,2.3,2.4,2.5,2.6,2.7
 
 But building the -o parameter is cumbersome, and error-prone (imagine files
 with dozens of columns, which is very common in my case).
 
 The --auto-format feature simply builds the -o format line automatically, 
 based on the number of columns from both input files.

Thanks for persisting with this and presenting a concise example.
I agree that this is useful and can't think of a simple workaround.
Perhaps the interface would be better as:

-o {all (default), padded, FORMAT}

where padded is the functionality you're suggesting?

cheers,
Pádraig.



Re: [coreutils] join feature: auto-format

2010-10-07 Thread Pádraig Brady
On 07/10/10 01:03, Pádraig Brady wrote:
 On 06/10/10 21:41, Assaf Gordon wrote:
 Hello,

 I'd like to (re)suggest a feature for the join program - the ability to 
 automatically build an output format line (similar but easier than using 
 -o).

 I've previously mentioned it here (but got no favorable responses):
 http://lists.gnu.org/archive/html/bug-coreutils/2009-11/msg00151.html

 Several people have been using this option for a year now (on our local 
 servers), so I thought I might try to suggest it again.

 The full patch is attached, and also available here:
 http://cancan.cshl.edu/labmembers/gordon/files/join_auto_format_2010_10_06.patch

 Here's the common use case:

 Given two tabular files, with a common key at first column, and many numeric 
 (or other) values on other columns, the user wants to join them together 
 easily.
 One requirement is that empty/missing values should be populated with 00.

 File 1
 ==
 bar 10 13 15 16 11 32
 foo 10 10 11 12 13 14


 File 2
 ==
 bar 99 91 90 93 91 93
 baz 90 91 99 96 97 95


 Desired joined output
 ==
 bar 10 13 15 16 11 32 99 91 90 93 91 93
 baz 00 00 00 00 00 00 90 91 99 96 97 95
 foo 10 10 11 12 13 14 00 00 00 00 00 00

 There is no technical problem in achieving this, the parameters would be:
 -a1 -a2 -e 00 -o 0,1.2,1.3,1.4,1.5,1.6,1.7,2.2,2.3,2.4,2.5,2.6,2.7

 But building the -o parameter is cumbersome, and error-prone (imagine
 files with dozens of columns, which is very common in my case).

 The --auto-format feature simply builds the -o format line 
 automatically, based on the number of columns from both input files.
 
 Thanks for persisting with this and presenting a concise example.
 I agree that this is useful and can't think of a simple workaround.
 Perhaps the interface would be better as:
 
 -o {all (default), padded, FORMAT}
 
 where padded is the functionality you're suggesting?

Thinking more about it, we mightn't need any new options at all.
Currently -e is redundant if -o is not specified.
So how about changing that so that if -e is specified
we operate as above by auto inserting empty fields?
Also I wouldn't base it on the number of fields in the first line,
instead auto padding to the biggest number of fields
on the current lines under consideration.

cheers,
Pádraig.



[coreutils] [PATCH] split: fix reporting of read errors

2010-10-07 Thread Pádraig Brady
Ok to push this for the imminent release?

cheers,
Pádraig.
From ac6837d1d76e01126b98fc97df6226fc26ea365f Mon Sep 17 00:00:00 2001
From: =?utf-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Thu, 7 Oct 2010 13:12:36 +0100
Subject: [PATCH] split: fix reporting of read errors

The bug was introduced with commit 23f6d41f, 19-02-2003.

* src/split.c (bytes_split, lines_split, line_bytes_split):
Correctly check the return from full_read().
* NEWS: Mention the fix.
---
 NEWS|3 +++
 src/split.c |6 +++---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/NEWS b/NEWS
index 719ac9c..7136c2a 100644
--- a/NEWS
+++ b/NEWS
@@ -15,6 +15,9 @@ GNU coreutils NEWS    -*- outline -*-
   found to be part of a directory cycle.  Before, du would issue a
   NOTIFY YOUR SYSTEM MANAGER diagnostic and fail.
 
+  split now diagnoses read errors rather than silently exiting.
+  [bug introduced in coreutils-4.5.8]
+
   tac would perform a double-free when given an input line longer than 16KiB.
   [bug introduced in coreutils-8.3]
 
diff --git a/src/split.c b/src/split.c
index 5be7207..90e2c76 100644
--- a/src/split.c
+++ b/src/split.c
@@ -229,7 +229,7 @@ bytes_split (uintmax_t n_bytes, char *buf, size_t bufsize)
   do
 {
   n_read = full_read (STDIN_FILENO, buf, bufsize);
-  if (n_read == SAFE_READ_ERROR)
+  if (n_read < bufsize && errno)
 error (EXIT_FAILURE, errno, "%s", infile);
   bp_out = buf;
   to_read = n_read;
@@ -273,7 +273,7 @@ lines_split (uintmax_t n_lines, char *buf, size_t bufsize)
   do
 {
   n_read = full_read (STDIN_FILENO, buf, bufsize);
-  if (n_read == SAFE_READ_ERROR)
+  if (n_read < bufsize && errno)
 error (EXIT_FAILURE, errno, "%s", infile);
   bp = bp_out = buf;
   eob = bp + n_read;
@@ -325,7 +325,7 @@ line_bytes_split (size_t n_bytes)
   /* Fill up the full buffer size from the input file.  */
 
   n_read = full_read (STDIN_FILENO, buf + n_buffered, n_bytes - n_buffered);
-  if (n_read == SAFE_READ_ERROR)
+  if (n_read < (n_bytes - n_buffered) && errno)
 error (EXIT_FAILURE, errno, "%s", infile);
 
   n_buffered += n_read;
-- 
1.6.2.5



Re: [coreutils] join feature: auto-format

2010-10-07 Thread Pádraig Brady
On 07/10/10 18:43, Assaf Gordon wrote:
 Pádraig Brady wrote, On 10/07/2010 06:22 AM:
 On 07/10/10 01:03, Pádraig Brady wrote:
 On 06/10/10 21:41, Assaf Gordon wrote:

 The --auto-format feature simply builds the -o format line 
 automatically, based on the number of columns from both input files.

 Thanks for persisting with this and presenting a concise example.
 I agree that this is useful and can't think of a simple workaround.
 Perhaps the interface would be better as:

 -o {all (default), padded, FORMAT}

 where padded is the functionality you're suggesting?

 Thinking more about it, we mightn't need any new options at all.
 Currently -e is redundant if -o is not specified.
 So how about changing that so that if -e is specified
 we operate as above by auto inserting empty fields?
 Also I wouldn't base it on the number of fields in the first line,
 instead auto padding to the biggest number of fields
 on the current lines under consideration.
 
 My concern is the principle of least surprise - if there are existing 
 scripts/programs that specify -e without -o (doesn't make sense, but 
 still possible) - this change will alter their behavior.
 
 Also, implying/forcing 'auto-format' when -e is used without -o might be 
 a bit confusing.

Well seeing as -e without -o currently does nothing,
I don't think we need to worry too much about changing that behavior.
Also to me, specifying -e EMPTY implicitly means I want
fields missing from one of the files replaced with EMPTY.

Note POSIX is more explicit, and describes our current operation:

-e EMPTY
  Replace empty output fields in the list selected by -o with EMPTY

So changing that would be an extension to POSIX.
But I still think it makes sense.
I'll prepare a patch soon, to do as I describe above,
unless there are objections.
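For reference, the -e/-o behaviour described above can be exercised on a trimmed-down version of Assaf's files (a sketch with made-up 3-column data; with an explicit -o list, -e already pads the missing fields):

```shell
cd "$(mktemp -d)"
printf '%s\n' 'bar 10 13' 'foo 10 11' > file1
printf '%s\n' 'bar 99 91' 'baz 90 99' > file2

join -a1 -a2 -e 00 -o 0,1.2,1.3,2.2,2.3 file1 file2
# bar 10 13 99 91
# baz 00 00 90 99
# foo 10 11 00 00
```

The extension discussed here would let `join -a1 -a2 -e 00 file1 file2` produce the same padded result without hand-writing the -o list.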

cheers,
Pádraig.



Re: [coreutils] [PATCH] split: fix reporting of read errors

2010-10-07 Thread Pádraig Brady
On 07/10/10 13:38, Jim Meyering wrote:
 Pádraig Brady wrote:
 Ok to push this for the imminent release?
 Subject: [PATCH] split: fix reporting of read errors

 The bug was introduced with commit 23f6d41f, 19-02-2003.

 * src/split.c (bytes_split, lines_split, line_bytes_split):
 Correctly check the return from full_read().
 * NEWS: Mention the fix.
 ...
 @@ -325,7 +325,7 @@ line_bytes_split (size_t n_bytes)
/* Fill up the full buffer size from the input file.  */

n_read = full_read (STDIN_FILENO, buf + n_buffered, n_bytes - 
 n_buffered);
 -  if (n_read == SAFE_READ_ERROR)
 +  if (n_read < (n_bytes - n_buffered) && errno)
 
 Yes, thanks!
 Good catch.  How did you find it?

I was testing various failure modes for the split --number patch
and noticed split was silent when passed a directory.

 However, I'd prefer to avoid parentheses like those in the final chunk,
 as well as the duplicated expression -- there are too many n_-prefixed
 names in the vicinity.  What do you think of this instead?
 
 size_t n = n_bytes - n_buffered;
 n_read = full_read (STDIN_FILENO, buf + n_buffered, n);
 if (n_read < n && errno)

Yes I refactored a little, added a test and pushed.

cheers,
Pádraig.



Re: [Coreutils] Sort enhancement request: 1 1K 1M 1G etc.

2010-10-08 Thread Pádraig Brady
On 08/10/10 04:16, Philip Ganchev wrote:
 It would be useful if sort could understand numeric abbreviations, for
 example 1k = 1,000 and 1K = 1024. This need arises for me very often
 when I want a sorted list of human-readable file sizes like the output
 of du -h. Currently, to use sort you have to resort to raw
 numbers, which are hard to read if they have 7 or more digits.  I'm
 sure the feature would useful more generally.
 
 This feature can be implemented as part of the --general-numeric
 option to sort.
 
 Alternatively, maybe coreutils should include a general program to
 parse numbers in text and re-format them, while outputting the rest of
 the text unchanged.

from the NEWS for coreutils 7.5

  sort accepts a new option, --human-numeric-sort (-h): sort numbers
  while honoring human readable suffixes like KiB and MB etc.
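That is, the du -h use case already works by piping through sort -h (a quick sketch with made-up sizes; LC_ALL=C pins the decimal point):

```shell
# Sizes as du -h might print them (sample values only):
printf '%s\n' 900K 2.1G 512 3M | LC_ALL=C sort -h
# 512
# 900K
# 3M
# 2.1G
```

In practice: du -h | sort -h, or sort -hr for largest first.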

cheers,
Pádraig.



Re: [coreutils] new snapshot available: coreutils-8.5.188-9af44

2010-10-11 Thread Pádraig Brady
On 10/10/10 20:57, Jim Meyering wrote:
 Here is a snapshot of the latest coreutils development sources.
 Please build it and run make check on any systems you can, and
 report any problems to bug-coreut...@gnu.org.
 
 I expect to make a stable release in two or three days.

I just noticed tail-2/inotify-hash-abuse hang on my system,
which is due to a 2.6.24 kernel bug where inotify_add_watch()
returns ENOSPC all the time. This causes tail -F to just
wait in vain.

10s fix is:

--- tail.c  2010-09-30 07:47:45.0 +
+++ /home/padraig/tail.c2010-10-11 16:22:37.218039056 +
@@ -1311,7 +1311,8 @@
   /* Map an inotify watch descriptor to the name of the file it's watching.  */
   Hash_table *wd_to_name;

-  bool found_watchable = false;
+  bool found_watchable_file = false;
+  bool found_watchable_dir = false;
   bool writer_is_dead = false;
   int prev_wd;
   size_t evlen = 0;
@@ -1359,6 +1360,7 @@
  quote (f[i].name));
   continue;
 }
+  found_watchable_dir = true;
 }

   f[i].wd = inotify_add_watch (wd, f[i].name, inotify_wd_mask);
@@ -1373,11 +1375,12 @@
   if (hash_insert (wd_to_name, (f[i])) == NULL)
 xalloc_die ();

-  found_watchable = true;
+  found_watchable_file = true;
 }
 }

-  if (follow_mode == Follow_descriptor && !found_watchable)
+  if ((follow_mode == Follow_descriptor && !found_watchable_file)
+  || !found_watchable_dir)
 return;

   prev_wd = f[n_files - 1].wd;


Note the above will also handle this case, where tail also just waits in vain:
  tail -F /missing/dir/file
I might amend the above patch to timeout/recheck on ENOENT,
rather than return.  I'll also fix the test to not busy loop.
Catching the train now...

cheers,
Pádraig.



[coreutils] [PATCH] tail: fix checking of currently unavailable directories

2010-10-11 Thread Pádraig Brady
I'll apply the attached tomorrow at some stage.

cheers,
Pádraig.
From 7730599b6984d670760dd8fcacae54d7ed0a1496 Mon Sep 17 00:00:00 2001
From: =?utf-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Tue, 12 Oct 2010 01:39:58 +0100
Subject: [PATCH] tail: fix checking of currently unavailable directories

* src/tail.c (tail_forever_inotify): Handle the case where
tail --follow=name with inotify, is not able to add a watch on
a specified directory.  This may happen due to inotify resource
limits or if the directory is currently missing or inaccessible.
In all these cases, revert to polling which will try to reopen
the file later.
* tests/tail-2/F-vs-rename: Fix the endless loop triggered by
the above issue.
* tests/tail-2/inotify-hash-abuse: Likewise.
* tests/tail-2/F-vs-missing: A new test for this failure mode
which was until not just triggered on older buggy linux kernels
which returned ENOSPC constantly from inotify_add_watch().
* NEWS: Mention the fix.
---
 NEWS|3 ++
 src/tail.c  |   38 -
 tests/Makefile.am   |1 +
 tests/tail-2/F-vs-missing   |   49 +++
 tests/tail-2/F-vs-rename|   35 ---
 tests/tail-2/inotify-hash-abuse |   27 -
 6 files changed, 110 insertions(+), 43 deletions(-)
 create mode 100755 tests/tail-2/F-vs-missing

diff --git a/NEWS b/NEWS
index 7ee18cd..7cbe7bb 100644
--- a/NEWS
+++ b/NEWS
@@ -21,6 +21,9 @@ GNU coreutils NEWS    -*- outline -*-
   tac would perform a double-free when given an input line longer than 16KiB.
   [bug introduced in coreutils-8.3]
 
+  tail -F again notices later changes to a currently unavailable directory.
+  [bug introduced in coreutils-7.5]
+
   tr now consistently handles case conversion character classes.
   In some locales, valid conversion specifications caused tr to abort,
   while in all locales, some invalid specifications were undiagnosed.
diff --git a/src/tail.c b/src/tail.c
index 75e9d53..14c17de 100644
--- a/src/tail.c
+++ b/src/tail.c
@@ -1300,9 +1300,10 @@ check_fspec (struct File_spec *fspec, int wd, int *prev_wd)
 error (EXIT_FAILURE, errno, _("write error"));
 }
 
-/* Tail N_FILES files forever, or until killed.
-   Check modifications using the inotify events system.  */
-static void
+/* Attempt to tail N_FILES files forever, or until killed.
+   Check modifications using the inotify events system.
+   Return false on error, or true to revert to polling.  */
+static bool
 tail_forever_inotify (int wd, struct File_spec *f, size_t n_files,
   double sleep_interval)
 {
@@ -1311,7 +1312,9 @@ tail_forever_inotify (int wd, struct File_spec *f, size_t n_files,
   /* Map an inotify watch descriptor to the name of the file it's watching.  */
   Hash_table *wd_to_name;
 
-  bool found_watchable = false;
+  bool found_watchable_file = false;
+  bool found_unwatchable_dir = false;
+  bool no_inotify_resources = false;
   bool writer_is_dead = false;
   int prev_wd;
   size_t evlen = 0;
@@ -1357,7 +1360,10 @@ tail_forever_inotify (int wd, struct File_spec *f, size_t n_files,
 {
   error (0, errno, _("cannot watch parent directory of %s"),
  quote (f[i].name));
-  continue;
+  found_unwatchable_dir = true;
+  /* We revert to polling below.  Note invalid uses
+ of the inotify API will still be diagnosed.  */
+  break;
 }
 }
 
@@ -1367,18 +1373,28 @@ tail_forever_inotify (int wd, struct File_spec *f, size_t n_files,
 {
   if (errno != f[i].errnum)
 error (0, errno, _("cannot watch %s"), quote (f[i].name));
+  no_inotify_resources |= (errno == ENOSPC);
   continue;
 }
 
   if (hash_insert (wd_to_name, (f[i])) == NULL)
 xalloc_die ();
 
-  found_watchable = true;
+  found_watchable_file = true;
 }
 }
 
 -  if (follow_mode == Follow_descriptor && !found_watchable)
-return;
+  /* Linux kernel 2.6.24 at least has a bug where eventually, ENOSPC is always
+ returned by inotify_add_watch.  In any case we should revert to polling
+ when there are no inotify resources.  Also a specified directory may not
+ be currently present or accessible, so revert to polling.  */
+  if (no_inotify_resources || found_unwatchable_dir)
+{
+  /* FIXME: release hash and inotify resources allocated above.  */
+  return true;
+}
 +  if (follow_mode == Follow_descriptor && !found_watchable_file)
+return false;
 
   prev_wd = f[n_files - 1].wd;
 
@@ -2157,10 +2173,8 @@ main (int argc, char **argv)
   if (fflush (stdout) != 0)
 error (EXIT_FAILURE, errno, _("write error"));
 
-  tail_forever_inotify 

Re: [coreutils] [PATCH] tail: fix checking of currently unavailable directories

2010-10-11 Thread Pádraig Brady
On 12/10/10 02:18, Pádraig Brady wrote:
 I'll apply the attached tomorrow at some stage.

With this squashed in to replace the confusing
"no space left on device" error with the
"reverting to polling" information.
This has the side effect of tail-2/wait
passing on affected systems.

diff --git a/src/tail.c b/src/tail.c
index 14c17de..1fea8d1 100644
--- a/src/tail.c
+++ b/src/tail.c
@@ -1356,7 +1356,7 @@ tail_forever_inotify (int wd, struct File_spec *f, size_t 
n_files,

   f[i].name[dirlen] = prev;

-  if (f[i].parent_wd < 0)
+  if (f[i].parent_wd < 0 && errno != ENOSPC)
 {
   error (0, errno, _("cannot watch parent directory of %s"),
  quote (f[i].name));
@@ -1371,7 +1371,7 @@ tail_forever_inotify (int wd, struct File_spec *f, size_t 
n_files,

   if (f[i].wd < 0)
 {
-  if (errno != f[i].errnum)
+  if (errno != f[i].errnum && errno != ENOSPC)
 error (0, errno, _("cannot watch %s"), quote (f[i].name));
   no_inotify_resources |= (errno == ENOSPC);
   continue;
@@ -2175,6 +2175,9 @@ main (int argc, char **argv)

   if (!tail_forever_inotify (wd, F, n_files, sleep_interval))
 exit (EXIT_FAILURE);
+  else
+error (0, errno,
+   _("inotify cannot be used, reverting to polling"));
 }
 }
 #endif





Re: [coreutils] [PATCH] tail: fix checking of currently unavailable directories

2010-10-12 Thread Pádraig Brady
On 12/10/10 02:44, Pádraig Brady wrote:
 On 12/10/10 02:18, Pádraig Brady wrote:
 I'll apply the attached tomorrow at some stage.
 
 With this squashed in to replace the confusing
 "no space left on device" error with the
 "reverting to polling" information.
 This has the side effect of tail-2/wait
 passing on affected systems.

OK I pushed that, with the change that,
rather than suppressing the confusing ENOSPC error,
I converted it to "inotify resources exhausted".

cheers,
Pádraig.



Re: [coreutils] Re: [PATCH 2/2] stat: print timestamps to full resolution

2010-10-21 Thread Pádraig Brady
On 21/10/10 15:01, Jim Meyering wrote:
 Andreas Schwab wrote:
 Jim Meyering j...@meyering.net writes:

 And besides, with coreutils-8.6 already released, reverting the
 change is no longer an option.

 Why?  I'm pretty sure more breakage will pop up over time.
 
 Hmm... I see what you mean.
 Anyone using a distribution with coreutils-8.5 or older will
 think it's fine to treat $(stat -c %X) as an integer with no
 decimal or trailing fraction.
 
 Is it worthwhile to create a new format, say %:X (and same for Y and Z)
 that expands to seconds.nanoseconds and to revert %X to the old
 seconds-only semantics?
 
 That would make it so people could use stat's %X %Y %Z portably
 while new scripts can still get nanosecond accuracy when needed.
 
 Here's PoC code:
 
   $ for i in %X %:X %Y %:Y %Z %:Z; do src/stat -c $i .; done
   1256665918
   1256665918.441784225

Or use '.' rather than ':' which also has the
advantage of being backward compat with older stats,



Re: [coreutils] Re: [PATCH 2/2] stat: print timestamps to full resolution

2010-10-21 Thread Pádraig Brady
On 21/10/10 15:18, Pádraig Brady wrote:
 On 21/10/10 15:01, Jim Meyering wrote:
 Andreas Schwab wrote:
 Jim Meyering j...@meyering.net writes:

 And besides, with coreutils-8.6 already released, reverting the
 change is no longer an option.

 Why?  I'm pretty sure more breakage will pop up over time.

 Hmm... I see what you mean.
 Anyone using a distribution with coreutils-8.5 or older will
 think it's fine to treat $(stat -c %X) as an integer with no
 decimal or trailing fraction.

 Is it worthwhile to create a new format, say %:X (and same for Y and Z)
 that expands to seconds.nanoseconds and to revert %X to the old
 seconds-only semantics?

 That would make it so people could use stat's %X %Y %Z portably
 while new scripts can still get nanosecond accuracy when needed.

 Here's PoC code:

   $ for i in %X %:X %Y %:Y %Z %:Z; do src/stat -c $i .; done
   1256665918
   1256665918.441784225
 
 Or use '.' rather than ':' which also has the
 advantage of being backward compat with older stats,

To clarify, coreutils <= 8.5 output these timestamps
using an int format internally, and so ignored any specified precision.

coreutils 8.6 treats these timestamps as strings and
therefore %.X will not output anything, which is a pity,
but if we're considering making 8.6 special in its
handling of %[WXYZ], then perhaps this is OK.
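One portable idiom (an illustrative sketch, not from the thread): scripts that only need whole seconds can strip any fractional part, which copes with both the old integer output and the new string form:

```shell
# Works whether stat -c %X prints "1285519989"
# or "1285519989.081870491":
ts=$(stat -c %X .)
secs=${ts%%.*}
echo "$secs"
```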

cheers,
Pádraig.



Re: [coreutils] Re: [PATCH 2/2] stat: print timestamps to full resolution

2010-10-21 Thread Pádraig Brady
On 21/10/10 15:49, Eric Blake wrote:
 On 10/21/2010 08:42 AM, Pádraig Brady wrote:
 Or use '.' rather than ':' which also has the
 advantage of being backward compat with older stats,

 To clarify, coreutils <= 8.5 output these timestamps
 using an int format internally, and so ignored any specified precision.
 
 Not quite:
 
 $ stat -c%0.20X .
 001287615247

Fair enough, but inconsequential to the special case (%.X) we're talking about.

 
 coreutils 8.6 treats these timestamps as strings and
 therefore %.X will not output anything which is a pity,
 but if we're considering making 8.6 special in it's
 handling of %[WXYZ], then perhaps this is OK.
 
 I'm still wary of special-casing precision like this; should it behave
 more like printf()'s %.d or %.f?

 What you are arguing for is that %X has
 no . or subsecond digits, %.X has nine subsecond digits

Right

 but what about %.*X?

Well that's a separate but related issue.
Currently we're treating it like %.s which is a little confusing
as one might guess first that %.f was being used. Using %s
doesn't allow getting millisecond resolution for example.
Also this is another backwards compat issue as we previously used %.j

 At this point, I'm thinking that %:X is nicer than %.X, to avoid
 these types of confusion, and given that date(1) already supports %:z.

Yep, that avoids the issue, but means one can use %.X to mean:-
get the best resolution timestamp available, and have it work
on all versions of coreutils (except 8.6 which may be deemed special).

Jim's suggestion of splitting the nanoseconds away from %[WXYZ]
altogether, elsewhere in this thread, is the most flexible and compatible,
albeit not as discoverable.

cheers,
Pádraig.



Re: [coreutils] stat: reverting recent %X %Y %Z change

2010-10-22 Thread Pádraig Brady
On 22/10/10 18:43, Jim Meyering wrote:
 Part of reverting this change:
 
 stat now outputs the full sub-second resolution for the atime,
 mtime, and ctime values since the Epoch, when using the %X, %Y, and
 %Z directives of the --format option.  This matches the fact that
 %x, %y, and %z were already doing so for the human-readable variant.
 
 means considering whether to do the same sort of thing with the
 newly-added %W (birth time/crtime).  Note that %W currently expands to -
 on systems with no support (e.g., linux).
 
 I don't particularly like the idea of making stat do this:
 
 $ src/stat -c %W.%:W .
 -.-
 
 Rather, I have a slight preference to make it do this:
 
 $ src/stat -c %W.%:W .
 0.0

I prefer that as it would mean less special
cases in code that uses the output.

 In any case, I think %w should still expand to -.
 
 The alternative is to leave %W separate, with no %:W variant.
 I.e., %W would continue to print floating point seconds.
 In that case, it would be inconsistent with %X, %Y and %Z.

I don't like that.

cheers,
Pádraig.



[coreutils] [PATCH] truncate: fix a very unlikely case for undiagnosed errors

2010-10-24 Thread Pádraig Brady
The attached is more to suppress any errors from
possible future static analysis, than for any
practical reason.
From b1d246341525b9f3e32914bd29764439528620ea Mon Sep 17 00:00:00 2001
From: =?utf-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Sun, 24 Oct 2010 22:44:52 +0100
Subject: [PATCH] truncate: fix a very unlikely case for undiagnosed errors

* src/truncate.c (main): Use a bool to store if an error occurred,
rather than an int, to protect against overflow.
(do_ftruncate): Likewise.  Also change so that 0/false means
failure, and not success.
---
 src/truncate.c |   29 +++--
 1 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/src/truncate.c b/src/truncate.c
index 610787d..faa83f1 100644
--- a/src/truncate.c
+++ b/src/truncate.c
@@ -129,8 +129,8 @@ SIZE may also be prefixed by one of the following modifying characters:\n\
   exit (status);
 }
 
-/* return 1 on error, 0 on success */
-static int
+/* return true on success, false on error.  */
+static bool
 do_ftruncate (int fd, char const *fname, off_t ssize, off_t rsize,
   rel_mode_t rel_mode)
 {
@@ -140,7 +140,7 @@ do_ftruncate (int fd, char const *fname, off_t ssize, off_t rsize,
   if ((block_mode || (rel_mode && rsize < 0)) && fstat (fd, &sb) != 0)
 {
   error (0, errno, _("cannot fstat %s"), quote (fname));
-  return 1;
+  return false;
 }
   if (block_mode)
 {
@@ -152,7 +152,7 @@ do_ftruncate (int fd, char const *fname, off_t ssize, off_t rsize,
" * %" PRIdMAX " byte blocks for file %s"),
  (intmax_t) ssize, (intmax_t) blksize,
  quote (fname));
-  return 1;
+  return false;
 }
   ssize *= blksize;
 }
@@ -165,7 +165,7 @@ do_ftruncate (int fd, char const *fname, off_t ssize, off_t rsize,
   if (!S_ISREG (sb.st_mode) && !S_TYPEISSHM (&sb))
 {
   error (0, 0, _("cannot get the size of %s"), quote (fname));
-  return 1;
+  return false;
 }
   if (sb.st_size < 0)
 {
@@ -173,7 +173,7 @@ do_ftruncate (int fd, char const *fname, off_t ssize, off_t rsize,
  this would ever go negative. */
   error (0, 0, _("%s has unusable, apparently negative size"),
  quote (fname));
-  return 1;
+  return false;
 }
 }
 
@@ -193,7 +193,7 @@ do_ftruncate (int fd, char const *fname, off_t ssize, off_t rsize,
 {
   error (0, 0, _("overflow rounding up size of file %s"),
  quote (fname));
-  return 1;
+  return false;
 }
   nsize = overflow;
 }
@@ -203,7 +203,7 @@ do_ftruncate (int fd, char const *fname, off_t ssize, off_t rsize,
 {
   error (0, 0, _("overflow extending size of file %s"),
  quote (fname));
-  return 1;
+  return false;
 }
   nsize = fsize + ssize;
 }
@@ -218,21 +218,22 @@ do_ftruncate (int fd, char const *fname, off_t ssize, off_t rsize,
   error (0, errno,
 _("failed to truncate %s at %" PRIdMAX " bytes"), quote (fname),
  (intmax_t) nsize);
-  return 1;
+  return false;
 }
 
-  return 0;
+  return true;
 }
 
 int
 main (int argc, char **argv)
 {
   bool got_size = false;
+  bool errors = false;
   off_t size IF_LINT ( = 0);
   off_t rsize = -1;
   rel_mode_t rel_mode = rm_abs;
   mode_t omode;
-  int c, errors = 0, fd = -1, oflags;
+  int c, fd = -1, oflags;
   char const *fname;
 
   initialize_main (argc, argv);
@@ -374,7 +375,7 @@ main (int argc, char **argv)
 {
   error (0, errno, _("cannot open %s for writing"),
  quote (fname));
-  errors++;
+  errors = true;
 }
   continue;
 }
@@ -382,11 +383,11 @@ main (int argc, char **argv)
 
   if (fd != -1)
 {
-  errors += do_ftruncate (fd, fname, size, rsize, rel_mode);
+  errors |= !do_ftruncate (fd, fname, size, rsize, rel_mode);
   if (close (fd) != 0)
 {
   error (0, errno, _("closing %s"), quote (fname));
-  errors++;
+  errors = true;
 }
 }
 }
-- 
1.6.2.5



Re: [coreutils] cp/reflink-perm fails on btrfs (and probably on ocfs2, too)

2010-10-26 Thread Pádraig Brady
On 26/10/10 14:54, Jim Meyering wrote:
 Pádraig Brady wrote:
 On 26/10/10 12:48, Pádraig Brady wrote:
 So in summary error if any of --link, --symbolic-link,
 --reflink or --attributes-only are combined.

 I.E. leave the docs alone and do:
 
 Thanks.  That sounds good.
 Do you feel like writing the NEWS entry, too?

I'll apply the attached this evening some time.

cheers,
Pádraig.
From 9540b44e5dbae1bc8125bd1aeadbb40d8944fe3c Mon Sep 17 00:00:00 2001
From: =?utf-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Tue, 26 Oct 2010 15:25:28 +0100
Subject: [PATCH] cp: disallow combinations of --*link options and --attributes-only

* src/cp.c (main): Disallow combining --reflink with --link or
--symbolic-link as they override --reflink.  Also disallow --attr
in combination as it's only pertinent to the --reflink=auto case,
and even then there is no reason the user would need to specify
both of those options.
* tests/cp/reflink-perm: Verify the combinations are not allowed.
* NEWS: Mention the change in behavior.
---
 NEWS  |6 ++
 src/cp.c  |9 +++--
 tests/cp/reflink-perm |5 +++--
 3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/NEWS b/NEWS
index 7dbbf1f..3c2134e 100644
--- a/NEWS
+++ b/NEWS
@@ -7,6 +7,12 @@ GNU coreutils NEWS-*- outline -*-
   tail -F once again notices changes in a currently unavailable
   remote directory [bug introduced in coreutils-7.5]
 
+** Changes in behavior
+
+  cp now disallows any combination of --reflink, --symbolic-link, --link
+  and --attributes-only.  Previously --attributes-only was overridden by
+  --reflink which in turn was overridden by the other linking modes.
+
 
 * Noteworthy changes in release 8.6 (2010-10-15) [stable]
 
diff --git a/src/cp.c b/src/cp.c
index 5b14f3a..07774fe 100644
--- a/src/cp.c
+++ b/src/cp.c
@@ -1097,9 +1097,14 @@ main (int argc, char **argv)
 }
 }
 
-  if (x.hard_link && x.symbolic_link)
+  if (1 < ((x.reflink_mode != REFLINK_NEVER) + x.hard_link + x.symbolic_link
+  + !x.data_copy_required))
 {
-  error (0, 0, _("cannot make both hard and symbolic links"));
+  error (0, 0, "%s",
+ (x.data_copy_required
+  ? _("cannot combine linking modes")
+  : _("cannot combine linking modes with --attributes-only")));
+
   usage (EXIT_FAILURE);
 }
 
diff --git a/tests/cp/reflink-perm b/tests/cp/reflink-perm
index 77f119f..7f48a24 100755
--- a/tests/cp/reflink-perm
+++ b/tests/cp/reflink-perm
@@ -39,8 +39,9 @@ test $mode = -rwxrwxrwx || fail=1
 
 test copy -nt file && fail=1
 
+# reflink is incompatible with other linking modes and --attributes-only
 echo  file2 # file with data
-cp --reflink=auto --preserve --attributes-only file2 empty_copy || fail=1
-test -s empty_copy && fail=1
+cp --reflink=auto --attributes-only file2 empty_copy && fail=1
+cp --reflink=auto --symbolic-link file2 empty_copy && fail=1
 
 Exit $fail
-- 
1.6.2.5



Re: [coreutils][patch] seq: User specified terminator

2010-10-27 Thread Pádraig Brady
On 27/10/10 18:27, William Plusnick wrote:
 I added a way to specify the string printed after the last number in the
 seq command, as suggested by a comment. I am not sure how helpful this
 option is, because I personally don't use the seq command much. I also
 added documentation to both usage (and, in effect, the man page) and
 coreutils.texi. It was so small that I didn't see any need for a test.
 It went over the ten line limit, but is pretty trivial and the large
 majority is documentation, so I am unsure if I need to sign anything. I
 also forgot to add a NEWS entry, but I am not even sure this will get
 accepted, yet. :-)

Thanks for the patch.
We usually need a pretty strong use case for adding a new option,
but I can't think of a use for this to be honest.
I realise this was a FIXME in the code,
so perhaps someone can think of a use.

cheers,
Pádraig.



Re: [coreutils] [PATCH] Cater for extra strace output when building 32-on-64.

2010-11-01 Thread Pádraig Brady
On 30/10/10 14:20, Nix wrote:
 When building 32-bit coreutils on a 64-bit Linux platform, the
 stat-free-symlinks test fails because the strace output it diffs
 against contains an extra informative line emitted by strace
 of the general form
 
 [ Process PID=28429 runs in 32 bit mode. ]
 
 So dike that line out, if it exists, before running the comparison.
 ---
  tests/ls/stat-free-symlinks |4 
  1 files changed, 4 insertions(+), 0 deletions(-)
 
 diff --git a/tests/ls/stat-free-symlinks b/tests/ls/stat-free-symlinks
 index 6843b97..3f1e732 100755
 --- a/tests/ls/stat-free-symlinks
 +++ b/tests/ls/stat-free-symlinks
 @@ -42,6 +42,10 @@ LS_COLORS='or=0:mi=0:ex=01;32:ln=01;35' \
  # line showing ls had called stat on x.
  grep '^stat(x' err  fail=1
  
 +# Eliminate warnings from 'out' relating to 32-bit mode on 64-bit platforms.
 +grep -v 'Process PID=[1-9][0-9]* runs in 32 bit mode.' < out > out-destrace
 +mv -f out-destrace out
 +
  # Check that output is colorized, as requested, too.
  {
printf '\033[0m\033[01;35mlink-to-x\033...@\n'

That looks like a buglet in strace, patch below.
There doesn't look to be any more cases of
incorrectly outputting to stdout:

I'll apply your workaround. but by directly
filtering through | grep -v Process PID

thanks!
Pádraig.

$ cd /rpmbuild/BUILD/strace-4.5.19
$ find -name '*.[ch]' | xargs grep -E '[^ftsn]printf[ (]'
./ioctlsort.c:  printf("{\"%s\", \"%s\", %#lx},\n",
./linux/ioctlsort.c:printf("\t{\"%s\",\t\"%s\",\t%#lx},\n",
./strace.c: printf("%s -- version %s\n", PACKAGE_NAME, VERSION);
./syscall.c:printf("ptrace_peektext failed: %s\n",
./syscall.c:printf("[ Process PID=%d runs in %s mode. ]\n",

--- syscall.c.orig  2010-11-01 14:46:41.292576453 +0000
+++ syscall.c   2010-11-01 14:47:10.164576378 +0000
@@ -953,7 +953,7 @@

call = ptrace(PTRACE_PEEKTEXT, pid, (char *)rip, (char *)0);
if (errno)
-   printf("ptrace_peektext failed: %s\n",
+   fprintf(stderr, "ptrace_peektext failed: %s\n",
strerror(errno));
switch (call & 0xffff) {
/* x86-64: syscall = 0x0f 0x05 */
@@ -972,7 +972,7 @@
if (currpers != current_personality) {
static const char *const names[] = {"64 bit", "32 bit"};
set_personality(currpers);
-   printf("[ Process PID=%d runs in %s mode. ]\n",
+   fprintf(stderr, "[ Process PID=%d runs in %s mode. ]\n",
pid, names[current_personality]);
}
}



Re: [coreutils] [PATCH] Cater for extra strace output when building 32-on-64.

2010-11-01 Thread Pádraig Brady
On 01/11/10 23:05, Nix wrote:
 On 1 Nov 2010, Pádraig Brady told this:
 
 That looks like a buglet in strace, patch below.
 There doesn't look to be any more cases of
 incorrectly outputting to stdout:
 
 Ah, yeah, oops, it probably was meant to go to stderr, wasn't it. I
 didn't think of that.
 
 I'll apply your workaround. but by directly
 filtering through | grep -v Process PID
 
 That's much better (though I'd use the fuller regex I provided simply
 because 'Process PID' seems a bit short and possible to occur in
 legitimate output to me).
 

For general ls output yes.
For the test which just does ls on 2 files,
anything more complicated is overkill.

cheers,
Pádraig.



[coreutils] Re: tests/backref-multibyte-slow timeout 5s is too short for my imac 400

2010-11-04 Thread Pádraig Brady
On 04/11/10 09:40, Jim Meyering wrote:
 Gilles Espinasse wrote:
 backref-multibyte-slow test fail on my imac 400
 Fail with timeout 5s, pass when timeout is changed to 10s
 I have no faster machine to test on ppc.

 Thanks for the report.
 What is the ratio of the two times when running the
 latest with LC_ALL=en_US.UTF-8 / LC_ALL=C ?
 For me, it's approximately 4x, whereas pre-fix, the LC_ALL=en_US.UTF-8
 run took 29 times as long as the latest with LC_ALL=C.
 
 $ awk 'BEGIN {for (i=0; i<13000; i++) print "aba"}' </dev/null > in
 $ LC_ALL=C   env time --format %e grep -E '^([a-z]).\1$' in > out
 0.06
 $ LC_ALL=en_US.UTF-8 env time --format %e grep -E '^([a-z]).\1$' in > out
 0.23
 $ LC_ALL=en_US.UTF-8 env time --format %e /bin/grep -E '^([a-z]).\1$' in > out
 6.81
 
 Testing that ratio would be better than using
 some arbitrary, absolute limit.
 The question is how to do it.
 
 Using Perl would work...
 I've wanted this sort of ability for other tests (e.g., coreutils'
 tests/rm/ext3-perf), so maybe it's worth adding an option to GNU timeout:
 run a reference command, measuring how long it takes, S, and use F*S
 as the timeout.  This brings up the issue of whether timeout could take
 a floating-point timeout.  I think it should.

Maybe. I was wary about adding sub second support
as it would complicate the implementation somewhat,
while also being less guaranteed by the system.
For what it's worth, my previous shell version
supports sub second resolution:
http://www.pixelbeat.org/scripts/timeout

 Whether this hypothetical version of timeout is installed isn't a big deal.
 grep's test script could simply test for the new option and skip
 the test if it's not available, just as it currently does if timeout
 itself is not installed.  The important thing is that it be installed
 by developers, and we can arrange for that.
 
 Pádraig Brady wrote timeout for coreutils, so I've Cc'd him and that list.
 Pádraig, what do you think?
 
 Now that I ask that, I think the better approach, for now at least,
 is to write a little perl script that does what I'm proposing.

I don't think timeout should be running any reference command.
That can always be done outside with `time`.
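For illustration, the measure-then-scale idea can be done outside timeout(1) in a few lines of shell; the factor of 50 and the sleep commands below are placeholders, not a proposed interface:

```shell
# Time a reference run, then allow the real command a multiple of that.
t0=$(date +%s%N)                        # GNU date, nanosecond resolution
sh -c 'sleep 0.1'                       # stand-in for the reference command
ref_ns=$(( $(date +%s%N) - t0 ))
limit=$(awk -v ns="$ref_ns" 'BEGIN { printf "%d", ns * 50 / 1e9 + 1 }')
timeout "$limit" sh -c 'sleep 0.2' && echo "finished within ${limit}s"
```

A test script could skip itself when the derived limit is implausibly large, just as it would when timeout is missing.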

cheers,
Pádraig.



Re: [coreutils] [patch] Re: Install enhancement request: capabilities

2010-11-04 Thread Pádraig Brady
Thanks for the patch!
I think the feature is worth it.

Currently install does not preserve xattrs
and so loses any previous capabilities
associated with a file.

In any case, capabilities don't need to be implemented
using xattrs, and might not be on tmpfs on Linux
for example when support is eventually added there.

One tricky thing I noticed with capabilities,
is that one needs to set them after setting any ownership,
which you do correctly in the patch.

cheers,
Pádraig.



Re: [coreutils] [patch] Re: Install enhancement request: capabilities

2010-11-04 Thread Pádraig Brady
On 04/11/10 11:08, Pádraig Brady wrote:
 Thanks for the patch!
 I think the feature is worth it.
 
 Currently install does not preserve xattrs
 and so loses any previous capabilities
 associated with a file.
 
 In any case, capabilities don't need to be implemented
 using xattrs, and might not be on tmpfs on Linux
 for example when support is eventually added there.
 
 One tricky thing I noticed with capabilities,
 is that one needs to set them after setting any ownership,
 which you do correctly in the patch.

On the other hand one can always just call
`setcap` after `install` for the few files that require it.
Having `install` support it means you don't need a separate
setcap util, but it also means that one can't just
grep for setcap in a bunch of rpms for example
to see what capabilities are set on the system.
Also using the `setcap` util is slightly more flexible
in failure modes (optionally failing if all/some/none are set)

So I'm back to 55:45 against this one.

cheers,
Pádraig.



[coreutils] test: fix a dash portability problem with redirected symlinked ttys

2010-11-09 Thread Pádraig Brady
Another bug I noticed with dash-0.5.6-2.fc11.i586
is that it doesn't redirect from symlinks correctly
for background processes.

$ dash -c 'tty &' < /dev/stdin
$ dash -c 'tty' < /dev/stdin
/dev/pts/3
$ bash -c 'tty &' < /dev/stdin
/dev/pts/3
$ dash -c 'tty &' < $(readlink -f /dev/stdin)
/dev/pts/3

OK to apply the following?

Note redirecting from /dev/tty works also,
but I don't know if that's better or worse.

--- a/tests/mv/i-3
+++ b/tests/mv/i-3
@@ -34,10 +34,13 @@ chmod 0 g i || framework_failure
 ls /dev/stdin >/dev/null 2>&1 \
   || skip_test_ 'there is no /dev/stdin file'

-test -r /dev/stdin 2>&1 \
+# work around a dash bug redirecting from symlinks
+tty=$(readlink -f /dev/stdin)
+
+test -r "$tty" 2>&1 \
   || skip_test_ '/dev/stdin is not readable'

-mv f g < /dev/stdin > out 2>&1 & pid=$!
+mv f g < "$tty" > out 2>&1 & pid=$!

 # Wait up to 3.1s for the expected prompt
 check_overwrite_prompt()



[coreutils] [PATCH] maint: add a missed fadvise-tests module

2010-11-15 Thread Pádraig Brady
commit b19733bb42e1897afd93f40aeca83e76013f7075
Author: Pádraig Brady p...@draigbrady.com
Date:   Mon Nov 15 09:36:16 2010 +

maint: add a missed fadvise-tests module

* gl/modules/fadvise-tests: Add the module previously missed
in commit 63b5e816, 2010-07-14, "fadvise: new module"
* gl/tests/test-fadvise.c: Add a comment as to why we don't
check return values.

diff --git a/gl/modules/fadvise-tests b/gl/modules/fadvise-tests
new file mode 100644
index 000..61e223d
--- /dev/null
+++ b/gl/modules/fadvise-tests
@@ -0,0 +1,10 @@
+Files:
+tests/test-fadvise.c
+
+Depends-on:
+
+configure.ac:
+
+Makefile.am:
+TESTS += test-fadvise
+check_PROGRAMS += test-fadvise
diff --git a/gl/tests/test-fadvise.c b/gl/tests/test-fadvise.c
index 10b448f..cd2f8bf 100644
--- a/gl/tests/test-fadvise.c
+++ b/gl/tests/test-fadvise.c
@@ -21,6 +21,13 @@

 #include "fadvise.h"

+/* We ignore any errors as these hints are only advisory.
+ * There is the chance one can pass invalid ADVICE, which will
+ * not be indicated, but given the simplicity of the interface
+ * this is unlikely.  Also not returning errors allows the
+ * unconditional passing of descriptors to non standard files,
+ * which will just be ignored if unsupported.  */
+
 int
 main (void)
 {



[coreutils] [PATCH] build: add `patch` as a bootstrap dependency

2010-11-16 Thread Pádraig Brady
FYI.

commit 79adacc43178916392af3dd581ded87f3aea1655
Author: Pádraig Brady p...@draigbrady.com
Date:   Tue Nov 16 07:32:32 2010 +

build: add `patch` as a bootstrap dependency

* bootstrap.conf (buildreq): require `patch` as it's used
by gnulib-tool to apply local diffs to gnulib modules

diff --git a/bootstrap.conf b/bootstrap.conf
index 18ef914..2da0883 100644
--- a/bootstrap.conf
+++ b/bootstrap.conf
git        1.4.4
 gperf  -
 gzip   -
 makeinfo   -
+patch  -
 perl   5.5
 rsync  -
 tar-



Re: [coreutils] [PATCH 1/5] tests: factor out VERBOSE-only --version-printing code

2010-11-17 Thread Pádraig Brady
On 17/11/10 21:31, Jim Meyering wrote:
 FYI, here are 5 change-sets to make a few hundred (mostly automated)
 changes like these:
 
 -test "$VERBOSE" = yes && chown --version
 +print_ver_ chown
 
 -test "$VERBOSE" = yes && { cp --version; mv --version; }
 +print_ver_ cp mv
 
 -test "$VERBOSE" = yes && { env -- pwd --version; readlink --version; }
 +print_ver_ pwd readlink

A nice refactoring. Looks good.

cheers,
Pádraig.



[coreutils] over aggressive threads in sort

2010-11-22 Thread Pádraig Brady
FYI https://bugzilla.redhat.com/show_bug.cgi?id=655096



Re: [coreutils] [PATCH] head: optionally indicate underrun of set limit

2010-11-24 Thread Pádraig Brady
On 23/11/10 16:34, Pádraig Brady wrote:
 On 23/11/10 16:24, Stefan Tomanek wrote:
 Dies schrieb Stefan Tomanek (stefan.toma...@wertarbyte.de):

 It is often convenient to detect whether head has in fact printed the
 requested number of lines or if EOF was reached before reaching that
 point. This patch adds the option --indicate-underrun, which makes
 head exit with a status of 4 instead of 0 if no more data could be
 read from its input.

 Any thoughts about this change? It's a rather small patch, but would
 be quite useful (at least for me).


 
 This does seem useful on the face of it.
 I need to do a little further investigation
 to see if there are existing ways to achieve the same.

I was wondering about the logic in your example BTW.
If there is no input then you'll process an empty chunk
or if the input is an exact multiple of the chunk size
you'll process an empty chunk at the end.

The following addresses both issues and
also uses existing coreutils functionality:

process_part() { echo "processing $(wc -c) bytes"; }
while true; do
  c=$(od -tx1 -An -N1)
  test "$c" || break
  c=$(echo $c) #strip leading ' '
  { printf "\x$c"; head -c9; } | process_part
done
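A more portable sketch of the same read-one-byte-ahead idea uses octal escapes, which POSIX printf does guarantee (the 25 bytes of zeroes below just stand in for real input):

```shell
# Peek one byte with od before each chunk, so EOF is detected without
# ever handing an empty chunk to the processing step.
process_part() { echo "processing $(wc -c) bytes"; }
head -c 25 /dev/zero |
while true; do
  c=$(od -An -to1 -N1)          # read one byte ahead, printed as octal
  test "$c" || break            # empty means EOF: stop cleanly
  c=$(echo $c)                  # strip leading blanks
  { printf "\\$c"; head -c9; } | process_part
done
```

With 25 bytes of input this reports two 10-byte chunks and a final 5-byte one.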

cheers,
Pádraig.

p.s. \x is not a printf standard so the
script probably needs tweaking for dash

p.p.s. I noticed the related ifne util from moreutils



Re: [coreutils] [PATCH] Cater for extra strace output when building 32-on-64.

2010-12-01 Thread Pádraig Brady
On 30/11/10 18:09, Dmitry V. Levin wrote:
 On Mon, Nov 01, 2010 at 02:53:29PM +, Pádraig Brady wrote:
 [...]
 --- syscall.c.orig  2010-11-01 14:46:41.292576453 +
 +++ syscall.c   2010-11-01 14:47:10.164576378 +
 @@ -953,7 +953,7 @@

 call = ptrace(PTRACE_PEEKTEXT, pid, (char *)rip, (char *)0);
 if (errno)
 -   printf("ptrace_peektext failed: %s\n",
 +   fprintf(stderr, "ptrace_peektext failed: %s\n",
 strerror(errno));
 switch (call & 0xffff) {
 /* x86-64: syscall = 0x0f 0x05 */
 
 Yes, this is definitely a bug, thank you.
 
 @@ -972,7 +972,7 @@
 if (currpers != current_personality) {
 static const char *const names[] = {"64 bit", "32 bit"};
 set_personality(currpers);
 -   printf("[ Process PID=%d runs in %s mode. ]\n",
 +   fprintf(stderr, "[ Process PID=%d runs in %s mode. ]\n",
 pid, names[current_personality]);
 }
 }
 
 I'm not quite sure whether this message should go to stdout or stderr.
 If it is a useful output, then it is the first case.  If it is just a
 diagnostics, then it is the second case.

I got the impression that strace doesn't output anything else to stdout,
and hence this was an oversight.

 The message in its current form was introduced along with personality
 switching support more than eight years ago.
 I wonder is there any script that relies on the current behaviour.

Hardly, and they can be easily updated if so.

cheers,
Pádraig.



[coreutils] [PATCH] split: fix a case where --elide-empty-files causes invalid chunking

2010-12-08 Thread Pádraig Brady
This is a fairly esoteric issue I noticed in passing.
No need for a NEWS entry I think.
I was wondering about adding ---io-blksize when writing
the test originally, and now that it can be used to
trigger this bug I think it's warranted?

cheers,
Pádraig.
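For context, the -n l/N mode being fixed here splits by lines into N byte-balanced chunks without ever breaking a line; a quick sketch of the intended behaviour (assumes a split with -n support, i.e. coreutils >= 8.8):

```shell
cd "$(mktemp -d)"
seq 1000 > in                   # 3893 bytes of line-oriented input
split -n l/3 in part.           # three chunks, lines kept whole
wc -c part.*                    # byte counts are roughly equal
cat part.* | cmp -s - in && echo "round trip OK"
```

Adding -e (--elide-empty-files) to the invocation above is what could previously mis-size the chunks.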
From 016fc96d400062193a69299c4e8e89f16c010951 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Wed, 8 Dec 2010 08:33:15 +
Subject: [PATCH] split: fix a case where --elide-empty-files causes invalid chunking

When -n l/N is used and long lines are present that both
span partitions and multiple buffers, one would get
inconsistent chunk sizes.

* src/split.c (main): Add a new undocumented ---io-blksize option
to support full testing with varied buffer sizes.
(cwrite): Refactor most handling of --elide-empty to here.
(bytes_split): Remove handling of --elide-empty.
(lines_chunk_split): Likewise.  The specific issue here
was the first handling of elide_empty_files interfered
with the replenishing of the input buffer.
* tests/misc/split-lchunk: Add -e and the new ---io-blksize
combinations to the test.
---
 src/split.c |   31 +-
 tests/misc/split-lchunk |   65 +-
 2 files changed, 59 insertions(+), 37 deletions(-)

diff --git a/src/split.c b/src/split.c
index 49a7a1c..50d9760 100644
--- a/src/split.c
+++ b/src/split.c
@@ -82,7 +82,8 @@ static bool unbuffered;
non-character as a pseudo short option, starting with CHAR_MAX + 1.  */
 enum
 {
-  VERBOSE_OPTION = CHAR_MAX + 1
+  VERBOSE_OPTION = CHAR_MAX + 1,
+  IO_BLKSIZE_OPTION
 };
 
 static struct option const longopts[] =
@@ -96,6 +97,8 @@ static struct option const longopts[] =
  {"suffix-length", required_argument, NULL, 'a'},
  {"numeric-suffixes", no_argument, NULL, 'd'},
  {"verbose", no_argument, NULL, VERBOSE_OPTION},
+  {"-io-blksize", required_argument, NULL,
+IO_BLKSIZE_OPTION}, /* do not document */
   {GETOPT_HELP_OPTION_DECL},
   {GETOPT_VERSION_OPTION_DECL},
   {NULL, 0, NULL, 0}
@@ -236,7 +239,6 @@ next_file_name (void)
 }
 
 /* Create or truncate a file.  */
-
 static int
 create (const char* name)
 {
@@ -255,6 +257,8 @@ cwrite (bool new_file_flag, const char *bp, size_t bytes)
 {
   if (new_file_flag)
 {
+  if (!bp && bytes == 0 && elide_empty_files)
+return;
   if (output_desc >= 0 && close (output_desc) < 0)
 error (EXIT_FAILURE, errno, "%s", outfile);
   next_file_name ();
@@ -315,7 +319,7 @@ bytes_split (uintmax_t n_bytes, char *buf, size_t bufsize, uintmax_t max_files)
   /* Ensure NUMBER files are created, which truncates
  any existing files or notifies any consumers on fifos.
  FIXME: Should we do this before EXIT_FAILURE?  */
-  while (!elide_empty_files && opened++ < max_files)
+  while (opened++ < max_files)
 cwrite (true, NULL, 0);
 }
 
@@ -506,7 +510,7 @@ lines_chunk_split (uintmax_t k, uintmax_t n, char *buf, size_t bufsize,
 chunk_end = file_size - 1; /* = chunk_size.  */
   else
 chunk_end += chunk_size;
-  if (!elide_empty_files && chunk_end <= n_written - 1)
+  if (chunk_end <= n_written - 1)
 cwrite (true, NULL, 0);
   else
 next = false;
@@ -517,7 +521,7 @@ lines_chunk_split (uintmax_t k, uintmax_t n, char *buf, size_t bufsize,
   /* Ensure NUMBER files are created, which truncates
  any existing files or notifies any consumers on fifos.
  FIXME: Should we do this before EXIT_FAILURE?  */
-  while (!k && !elide_empty_files && chunk_no++ <= n)
+  while (!k && chunk_no++ <= n)
 cwrite (true, NULL, 0);
 }
 
@@ -780,7 +784,7 @@ main (int argc, char **argv)
   type_undef, type_bytes, type_byteslines, type_lines, type_digits,
   type_chunk_bytes, type_chunk_lines, type_rr
 } split_type = type_undef;
-  size_t in_blk_size;		/* optimal block size of input file device */
+  size_t in_blk_size = 0;	/* optimal block size of input file device */
   char *buf;			/* file i/o buffer */
   size_t page_size = getpagesize ();
   uintmax_t k_units = 0;
@@ -941,6 +945,18 @@ main (int argc, char **argv)
   elide_empty_files = true;
   break;
 
+case IO_BLKSIZE_OPTION:
+  {
+uintmax_t tmp_blk_size;
+if (xstrtoumax (optarg, NULL, 10, &tmp_blk_size,
+multipliers) != LONGINT_OK
+|| tmp_blk_size == 0 || SIZE_MAX - page_size < tmp_blk_size)
+  error (0, 0, _("%s: invalid IO block size"), optarg);
+else
+  in_blk_size = tmp_blk_size;
+  }
+  break;
+
 case VERBOSE_OPTION:
   verbose = true;
   break;
@@ -997,7 +1013,8 @@ main (int argc, char **argv)
 
   if (fstat (STDIN_FILENO, &stat_buf) != 0)
 error (EXIT_FAILURE, errno, "%s", infile);
-  in_blk_size = io_blksize (stat_buf);
+  if (in_blk_size == 0)
+in_blk_size = io_blksize (stat_buf);
   file_size = 

Re: [coreutils] [PATCH] sort: fix some --compress reaper bugs

2010-12-14 Thread Pádraig Brady
On 14/12/10 09:55, Jim Meyering wrote:
 Paul Eggert wrote:

 +# Give compression subprocesses time to react when 'sort' exits.
 +# Otherwise, under NFS, when 'sort' unlinks the temp files and they
 +# are renamed to .nfs instead of being removed, the parent cleanup
 +# of this directory will fail because the files are still open.
 +sleep 1
 
 It'd be a shame to make everyone sleep even one second for NFS.
 And on a heavily loaded system, 1 second might not be enough.
 Can you formulate that as a loop that sleeps say 0.1 seconds
 at a time for up to 5 seconds and that checks whether it's
 safe to exit?

is_local_dir_ is available.

The more general sleep loop could be implemented
with retry_delay_ I think?
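A standalone sketch of that sleep-in-a-loop idea (retry_delay_ in the test framework is along these lines; the names below are illustrative):

```shell
# Poll up to 50 x 0.1s = 5s for the cleanup to become observable,
# instead of a single fixed sleep.
tmpfile=$(mktemp)
( sleep 0.3; rm -f "$tmpfile" ) &   # stand-in for the slow NFS cleanup
ok=0
for i in $(seq 50); do
  test -e "$tmpfile" || { ok=1; break; }
  sleep 0.1
done
test "$ok" = 1 && echo "cleanup observed"
```

On a fast machine the loop exits after a few iterations; on a loaded one it simply polls longer, up to the cap.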

If the subprocesses were reparented to the shell,
then one could just `wait`.

cheers,
Pádraig.



Re: [coreutils] [PATCH] sort: fix some --compress reaper bugs

2010-12-14 Thread Pádraig Brady
On 14/12/10 18:20, Paul Eggert wrote:
 On 14/12/10 09:55, Jim Meyering wrote:
 It'd be a shame to make everyone sleep even one second for NFS.
 
 Well, it is marked as an _expensive_ test.  :-)
 
 On 12/14/10 02:07, Pádraig Brady wrote:
 If the subprocesses were reparented to the shell,
 then one could just `wait`.
 
 I don't know how to do reparenting like that.

Me neither. I don't know why I thought
they might be reparented to the grand parent.

  But this raises a
 more general issue: is it OK that when 'sort' is interrupted and
 exits, that it does not kill off its children?  That is, it relies on
 compressors nicely exiting (because 'sort' exits and they read EOF),
 and it relies on decompressors nicely exiting (because they write to a
 dangling pipe and get SIGPIPE or a write failure).  If either
 assumption is false, 'sort' will leave behind orphan processes that
 never exit.

To handle this, you'd have to do something like what `timeout` does.
It creates a process group, so that if you kill it,
you kill everything underneath. This could be handy
in test scripts actually to kill everything in a timely
manner, by just killing a timeout process.
Your test script is fine as is though.
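The effect is easy to observe: when the limit expires, timeout signals the whole group, so every stage of the pipeline below dies, not just the first command:

```shell
# Both sleeps share timeout's process group, so one expiry kills both.
start=$(date +%s)
timeout 1 sh -c 'sleep 100 | sleep 100'
status=$?
echo "elapsed $(( $(date +%s) - start ))s, status $status"  # 124 = timed out
```

This is also why killing the timeout process from a test harness tears down everything underneath it in one step.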

 
 I'm sort of inclined to say that's OK,

Definitely agreed, considering the type of processes invoked.

cheers,
Pádraig.



[coreutils] Re: bug#7597: multi-threaded sort can segfault (unrelated to the sort -u segfault)

2010-12-16 Thread Pádraig Brady
On 16/12/10 22:06, Paul Eggert wrote:
 On 12/12/10 07:41, Jim Meyering wrote:
 Paul Eggert wrote:
 There are also some test cases I need to add for the
 (unrelated) sort-compression bug, which is next on my
 list of coreutils bugs to look at.

 It would be great to fix that for 8.8, too,
 but don't worry if you don't get to it soon.
 
 I've fixed that now, with the patch enclosed

From a quick inspection, it looks good!

All sort tests pass on 32 bit.

$ time (cd tests && make check TESTS=misc/sort-compress-hang 
RUN_VERY_EXPENSIVE_TESTS=yes VERBOSE=yes)
PASS: misc/sort-compress-hang

real1m53.543s
user0m18.475s
sys 1m5.758s

cheers,
Pádraig.



[coreutils] cppcheck

2010-12-17 Thread Pádraig Brady
I just ran cppcheck 1.46 on the latest coreutils.
It ran for a _long_ time and reported these false positives

[coreutils/src/system.h:355]: (error) Resource leak: fd
[coreutils/src/chown-core.c:212]: (error) Resource leak: fd
[coreutils/src/ls.c:1963]: (error) Unusual pointer arithmetic
[coreutils/src/sleep.c:133]: (error) Uninitialized variable: p

I'll see can I fix/report these issues in cppcheck
at some stage.

cheers,
Pádraig.



[coreutils] clang-analyzer

2010-12-17 Thread Pádraig Brady
I just ran this on the latest coreutils:

scan-build -o clang ./configure
scan-build -o clang make

and it flagged a possible problem in wc
where it could spin if it got a read error
on a large file containing file names to process.
I think the following may address this:

diff --git a/src/wc.c b/src/wc.c
index a1922ba..ae29f10 100644
--- a/src/wc.c
+++ b/src/wc.c
@@ -726,8 +726,8 @@ main (int argc, char **argv)
   switch (ai_err)
 {
 case AI_ERR_READ:
-  error (0, errno, _("%s: read error"), quote (files_from));
-  skip_file = true;
+  error (EXIT_FAILURE, errno, _("%s: read error"),
+ quote (files_from));
   continue;
 case AI_ERR_MEM:
   xalloc_die ();

I also notice more warnings and a possible
uninitialized stat buf in cp.c.
I'll have a look at these later...

cheers,
Pádraig.



Re: [coreutils] RE: cp command performance

2010-12-24 Thread Pádraig Brady
Didn't anyone get my previous reply on this.
I'm wondering about my email server now.

Anyway, in summary, what I said was that the
3 commands you presented would perform about the same.

Things to consider.

The deletion/truncation bit will proceed more quickly on some
file system types.

If both files are on the same disk, then cp will
do a lot of disk head seeking between the two during the copy.
Try to put the temporary file on a different disk if possible.

If that is not possible, then try to process/buffer
more at a time. dd may be useful for this, or the `pv`
command which will give you a progress bar as well
as buffer control.
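As an illustration (paths and sizes below are placeholders), dd with a large block size batches the copy into fewer, bigger requests, and pv drops in the same way while adding a progress bar:

```shell
# Larger blocks mean fewer alternating read/write bursts, and so less
# head seeking when source and destination share one spindle.
src=/tmp/cpdemo_src; dst=/tmp/cpdemo_dst
head -c 1048576 /dev/zero > "$src"        # 1 MiB of sample data
dd if="$src" of="$dst" bs=4M 2>/dev/null  # copy with a big buffer
cmp -s "$src" "$dst" && echo "copied OK"
# pv "$src" > "$dst"                      # same idea, with a progress bar
```

Tune bs up or down depending on available memory and the device's behaviour.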

cheers,
Pádraig.



bug#7759: split(1) regression in v8.7-25-gbe10739

2010-12-30 Thread Pádraig Brady
On 30/12/10 11:15, Jim Meyering wrote:
 
 Hi Pádraig
 Thanks for the quick fix.
 Please include the bugzilla URL and credit Sergey and Dmitry
 in the commit log.  It'd be good to list Sergey's
 name/email in THANKS, too.

Done and pushed.

thanks,
Pádraig.





Re: [coreutils] join feature: auto-format

2011-01-07 Thread Pádraig Brady
On 06/01/11 12:05, Pádraig Brady wrote:
 On 07/10/10 19:25, Pádraig Brady wrote:
 On 07/10/10 18:43, Assaf Gordon wrote:
 Pádraig Brady wrote, On 10/07/2010 06:22 AM:
 On 07/10/10 01:03, Pádraig Brady wrote:
 On 06/10/10 21:41, Assaf Gordon wrote:

 The --auto-format feature simply builds the -o format line 
 automatically, based on the number of columns from both input files.

 Thanks for persisting with this and presenting a concise example.
 I agree that this is useful and can't think of a simple workaround.
 Perhaps the interface would be better as:

 -o {all (default), padded, FORMAT}

 where padded is the functionality you're suggesting?

 Thinking more about it, we mightn't need any new options at all.
 Currently -e is redundant if -o is not specified.
 So how about changing that so that if -e is specified
 we operate as above by auto inserting empty fields?
 Also I wouldn't base on the number of fields in the first line,
 instead auto padding to the biggest number of fields
 on the current lines under consideration.

 My concern is the principle of least surprise - if there are existing 
 scripts/programs that specify -e without -o (doesn't make sense, but 
 still possible) - this change will alter their behavior.

 Also, implying/forcing 'auto-format' when -e is used without -o might 
 be a bit confusing.

 Well seeing as -e without -o currently does nothing,
 I don't think we need to worry too much about changing that behavior.
 Also to me, specifying -e EMPTY implicitly means I want
 fields missing from one of the files replaced with EMPTY.

 Note POSIX is more explicit, and describes our current operation:

 -e EMPTY
   Replace empty output fields in the list selected by -o with EMPTY

 So changing that would be an extension to POSIX.
 But I still think it makes sense.
 I'll prepare a patch soon, to do as I describe above,
 unless there are objections.
 
 The attached changes `join` (from what's done on other platforms) so that...
 
 `join -e` will automatically pad missing fields from one file
 so that the same number of fields are output from each file.
 Previously -e was only used for missing fields specified with -o or -j.
 
 With this change join now does:
 
 $ cat file1
 a 1 2
 b 1
 d 1 2
 
 $ cat file2
 a 3 4
 b 3 4
 c 3 4
 
 $ join -a1 -a2 -1 1 -2 1 -e. file1 file2
 a 1 2 3 4
 b 1 . 3 4
 c . . 3 4
 d 1 2 . .
 
 $ join -a1 -a2 -1 1 -2 4 -e. file1 file2
 . . . . a 3 4
 . . . . b 3 4
 . . . . c 3 4
 a 1 2 . .
 b 1 .
 d 1 2 . .
 
 $ join -a1 -a2 -1 4 -2 1 -e. file1 file2
 . a 1 2 . . .
 . b 1 . .
 . d 1 2 . . .
 a . . 3 4
 b . . 3 4
 c . . 3 4
 
 $ join -a1 -a2 -1 4 -2 4 -e. file1 file2
 . a 1 2 a 3 4
 . a 1 2 b 3 4
 . a 1 2 c 3 4
 . b 1 . a 3 4
 . b 1 . b 3 4
 . b 1 . c 3 4
 . d 1 2 a 3 4
 . d 1 2 b 3 4
 . d 1 2 c 3 4
 
 While -e without -o was previously a noop, and so could safely be extended 
 IMHO,
 this will also change the behavior when -e and -j are specified.
 Previously if -j > 1 was specified, and that field was missing,
 then -e would be used in its place, rather than the empty string.
 This still does that, but also does the padding.
 Without the -j issue I'd be 80:20 for just extending -e to auto pad,
 but given -j I'm 50:50.  The alternative is to select this with
 say '-o padded', but that's less discoverable, and complicates
 the interface somewhat.

Considering this more, I think it's safer to auto pad only
when '-o padded' is specified. I notice the plan9 `join` man page
has an example that uses -e '' to explicitly specify the NUL string as filler,
which would have triggered our auto pad if we left it as above.
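As a cross-check of the semantics above, here is a rough Python model of the proposed auto-padding for the first example (joining on field 1, default whitespace separators) — a sketch of the behavior, not the C implementation:

```python
def padded_join(lines1, lines2, empty='.'):
    """Full outer join on field 1 (whitespace-separated), padding fields
    missing from either file with EMPTY, as in the examples above."""
    t1 = {f[0]: f for f in (l.split() for l in lines1)}
    t2 = {f[0]: f for f in (l.split() for l in lines2)}
    w1 = max(len(f) for f in t1.values())   # pad to the widest line per file
    w2 = max(len(f) for f in t2.values())
    out = []
    for key in sorted(set(t1) | set(t2)):
        a = t1.get(key, [key])
        b = t2.get(key, [key])
        a += [empty] * (w1 - len(a))
        b += [empty] * (w2 - len(b))
        out.append(' '.join(a + b[1:]))     # the join key is printed once
    return out

assert padded_join(['a 1 2', 'b 1', 'd 1 2'],
                   ['a 3 4', 'b 3 4', 'c 3 4']) == \
    ['a 1 2 3 4', 'b 1 . 3 4', 'c . . 3 4', 'd 1 2 . .']
```

This reproduces the `join -a1 -a2 -1 1 -2 1 -e. file1 file2` output shown above.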

cheers,
Pádraig.



Re: [coreutils] Re: Bug#609049: du: process_file: Assertion `level == prev_level - 1' failed

2011-01-09 Thread Pádraig Brady
On 09/01/11 22:05, Jim Meyering wrote:
 diff --git a/tests/du/move-dir-while-traversing 
 b/tests/du/move-dir-while-traversing

 +# We use a python-inotify script, so...
 +python -m pyinotify -h > /dev/null \
 +  || skip_ 'python-inotify package not installed'

A small point is that error is a bit fedora specific.
The package is python-pyinotify on debian/ubuntu for example.

 +
 +# Move a directory up while du is processing its sub-directories.
 +# While du is processing a hierarchy .../B/C/D/... this script
 +# detects when du opens D/, and then moves C/ up one level
 +# so that it is a sibling of B/.
 +# Given the inherent race condition, we have to add enough weight
 +# under D/ so that in most cases, the monitor performs the single
 +# rename syscall before du finishes processing the subtree under D/.
 +
 +cat <<'EOF' > inotify-watch-for-dir-access.py
 +#!/usr/bin/env python
 +from pyinotify import *

I generally don't include everything from a module,
to keep the namespace clean.
I'd do instead:

import pyinotify as pn
import os,sys
...


 +dir = sys.argv[1]
 +dest_parent = os.path.dirname(os.path.dirname(dir))
 +dest = os.path.join(dest_parent, os.path.basename(dir))
 +
 +class ProcessDir(ProcessEvent):
 +
 +def process_IN_OPEN(self, event):
 +os.rename(dir, dest)
 +sys.exit(0)
 +
 +def process_default(self, event):
 +pass
 +
 +wm = WatchManager()
 +notifier = Notifier(wm)
 +wm.watch_transient_file(dir, IN_OPEN, ProcessDir)
 +print 'started'

The above print is buffered by default.
As we're using it for synchronization I'd

sys.stdout.write('started\n')
sys.stdout.flush()
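To see why the explicit flush matters when stdout is a pipe (and therefore block-buffered), here is a small self-contained demonstration; the child script below is hypothetical, not the test's actual script:

```python
import subprocess, sys, textwrap

# Without the flush, the 'started' line can sit in the child's block
# buffer (stdout is a pipe here, not a terminal) and the parent would
# block in readline() until the child exits.
child = textwrap.dedent("""
    import sys, time
    sys.stdout.write('started\\n')
    sys.stdout.flush()          # make the sync message visible now
    time.sleep(5)
""")
p = subprocess.Popen([sys.executable, '-c', child],
                     stdout=subprocess.PIPE)
line = p.stdout.readline()      # returns promptly thanks to the flush
p.kill()
p.wait()
```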

 +notifier.loop()
 +EOF
 +chmod a+x inotify-watch-for-dir-access.py
 +
 +long=d/e/f/g/h/i/j/k/l/m/n/o/p/q/r/s/t/u/v/w/x/y/z
 +t=T/U
 +mkdir -p $t/3/a/b/c/$long d2 || framework_failure
 +timeout 6 ./inotify-watch-for-dir-access.py $t/3/a/b > start-msg &
 +
 +# Wait for the watcher to start...
 +nonempty() { test -s start-msg && return 0; sleep $1; }
 +retry_delay_ nonempty .1 5

I think the above may iterate only once?
How about:

nonempty() {
  test -s start-msg || { sleep $1; return 1; }
}
retry_delay_ nonempty .1 5 || framework_failure

 +
 +# The above delay is insufficient in ~50% of my trials.
 +# Sometimes, when under heavy load, a parallel make check would
 +# fail this test when sleeping just 0.1 seconds here.
 +sleep 1

Hopefully this extra sleep is not required now

Nice fix BTW!

cheers,
Pádraig.



[coreutils] [PATCH] maint: trivial system header file cleanups

2011-01-13 Thread Pádraig Brady
From e1aaf8903db97f3240b1551fd6936ccdc652dfc8 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Thu, 13 Jan 2011 09:36:38 +
Subject: [PATCH] maint: trivial system header file cleanups

* src/system.h: Note where it should be included, and
make ordering check portable to GLIBC < 2
* src/copy.c: Move sys/ioctl.h along with other system headers
as is done elsewhere.
* src/install.c: Move sys/wait.h along with other system headers
as is done elsewhere.
* src/ptx.c: Include <regex.h> rather than "regex.h" as
is done elsewhere.  Note <regex.h> is kept after "system.h"
as per commit dba300a0.
---
 src/copy.c|3 +--
 src/install.c |3 +--
 src/ptx.c |2 +-
 src/system.h  |8 ++--
 4 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/src/copy.c b/src/copy.c
index a160952..9a014ad 100644
--- a/src/copy.c
+++ b/src/copy.c
@@ -19,6 +19,7 @@
 #include <config.h>
 #include <stdio.h>
 #include <assert.h>
+#include <sys/ioctl.h>
 #include <sys/types.h>
 #include <selinux/selinux.h>
 
@@ -61,8 +62,6 @@
 # include "verror.h"
 #endif
 
-#include <sys/ioctl.h>
-
 #ifndef HAVE_FCHOWN
 # define HAVE_FCHOWN false
 # define fchown(fd, uid, gid) (-1)
diff --git a/src/install.c b/src/install.c
index c34e4dc..cebb642 100644
--- a/src/install.c
+++ b/src/install.c
@@ -24,6 +24,7 @@
 #include <pwd.h>
 #include <grp.h>
 #include <selinux/selinux.h>
+#include <sys/wait.h>
 
 #include "system.h"
 #include "backupfile.h"
@@ -48,8 +49,6 @@
 
 #define AUTHORS proper_name ("David MacKenzie")
 
-#include <sys/wait.h>
-
 static int selinux_enabled = 0;
 static bool use_default_selinux_context = true;
 
diff --git a/src/ptx.c b/src/ptx.c
index 940f6e8..6465618 100644
--- a/src/ptx.c
+++ b/src/ptx.c
@@ -22,13 +22,13 @@
 #include <getopt.h>
 #include <sys/types.h>
 #include "system.h"
+#include <regex.h>
 #include "argmatch.h"
 #include "diacrit.h"
 #include "error.h"
 #include "fadvise.h"
 #include "quote.h"
 #include "quotearg.h"
-#include "regex.h"
 #include "stdio--.h"
 #include "xstrtol.h"
 
diff --git a/src/system.h b/src/system.h
index 3b32cbd..b86e570 100644
--- a/src/system.h
+++ b/src/system.h
@@ -14,11 +14,15 @@
You should have received a copy of the GNU General Public License
along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
 
+/* Include this file _after_ system headers if possible.  */
+
 #include <alloca.h>
 
-/* Include <sys/types.h> before this file.  */
+/* Include <sys/types.h> before this file.
+   Note this doesn't warn if we're included
+   before all system headers.  */
 
-#if 2 <= __GLIBC__ && 2 <= __GLIBC_MINOR__
+#if 2 < __GLIBC__ || ( 2 == __GLIBC__ && 2 <= __GLIBC_MINOR__ )
 # if ! defined _SYS_TYPES_H
 you must include <sys/types.h> before including this file
 # endif
-- 
1.7.3.4



Re: [coreutils] basename/dirname can't handle stdin?

2011-01-13 Thread Pádraig Brady
On 13/01/11 23:15, Jeff Blaine wrote:
 So then, please review.  One question at the bottom.
 
 # Most basic usage
 basename /foo/bar.txt   = bar.txt
 
 
 # Old/current basename NAME SUFFIX compat-
 # ibility
 basename /foo/bar.txt .txt  = bar
 
 
 # ERROR, one too many operands
 basename /foo/bar.txt /x.txt /y.txt = ERROR
 
 
 # BSD-adopted flag to signify no args are a
 # suffix, process all
 basename -a /foo/bar.txt /x/y.txt   = bar.txt
y.txt
 
 
 # For completeness, showing 3 args with -a
 basename -a /foo/bar.txt /x/y.txt /a/b.txt  = bar.txt
y.txt
b.txt
 
 
 basename -s .txt -a /foo/bar.txt /x/y.txt /a/b.txt  = bar
y
b
 
 
 # No args means read stdin (-f,--filter mode)
 cat filelist.txt | basename = bar.txt
y.txt
b.txt
 
 
 # Only -s arg means read stdin (-f,--filter
 # mode)
 cat filelist.txt | basename -s .txt = bar
y
b
 
 
 # Handle NUL-terminated stdin
 find / -print | basename --file0-from=- = bar.txt
y.txt
b.txt
 
 
 # Handle NUL-terminated stdin with suffix strip
 # (assuming /hh has our 3 files in it and is
 # readable)
 find /hh -print | basename --file0-from=- -s .txt   = bar
y
b
 
 
 # Handle NUL-terminated FILE input
 find / -print | basename --file0-from=FILE  = bar.txt
y.txt
b.txt
 
 etc...
 
 Is -f,--filter necessary?
 
 

It would be nice not to mandate it, but without it
a script containing the following for example could hang:

path=$(basename "$path" 2>/dev/null || echo default_path)

Perhaps we should only support --files0-from
and the normal filtering case can be handled with:

find / | xargs basename -a

In fact thinking more about it we might not need --files0-from either.
It's used in du,wc, and sort as they need to deal with all files
from a single process invocation (to generate totals etc.)
But that's not the case for basename.

So in summary, just implement -a and -s like BSD does?
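The BSD-style -a/-s semantics summarized above can be sketched as follows (a hypothetical Python model, not coreutils code):

```python
import os

def bsd_basename(paths, suffix=None):
    """Model of 'basename -a [-s SUFFIX]': one result per operand,
    optionally stripping SUFFIX (unless it is the whole name)."""
    results = []
    for p in paths:
        # strip trailing slashes; a bare '/' stays '/'
        b = os.path.basename(p.rstrip('/')) or '/'
        if suffix and b != suffix and b.endswith(suffix):
            b = b[:-len(suffix)]
        results.append(b)
    return results

assert bsd_basename(['/foo/bar.txt', '/x/y.txt', '/a/b.txt'],
                    '.txt') == ['bar', 'y', 'b']
assert bsd_basename(['/foo/bar.txt']) == ['bar.txt']
```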

cheers,
Pádraig.



Re: [coreutils] basename/dirname can't handle stdin?

2011-01-13 Thread Pádraig Brady
On 14/01/11 00:02, Eric Blake wrote:
 On 01/13/2011 04:53 PM, Pádraig Brady wrote:
 So in summary, just implement -a and -s like BSD does?
 
 I think you've persuaded me that a filter mode doesn't buy us much (it's
 only useful if unambiguous, but xargs -0 works just as well at giving us
 unambiguous input), but we do need -z in addition to BSD's -a and -s.

ack





Re: [coreutils] [PATCH] uniq: don't continue field processing after end of line

2011-01-17 Thread Pádraig Brady
On 16/01/11 23:53, Sami Kerola wrote:
 Hi,
 
 I noticed uniq -f 'insane_large_number' will keep the process busy for a
 long time without good reason. The attached patch should fix this symptom.

I'd slightly amend that to the following,
to match the other limit checks in the function.

diff --git a/src/uniq.c b/src/uniq.c
index 7bdbc4f..9c7e37c 100644
--- a/src/uniq.c
+++ b/src/uniq.c
@@ -214,7 +214,7 @@ find_field (struct linebuffer const *line)
   size_t size = line->length - 1;
   size_t i = 0;

-  for (count = 0; count < skip_fields; count++)
+  for (count = 0; count < skip_fields && i < size; count++)
     {
       while (i < size && isblank (to_uchar (lp[i])))
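A rough Python model of find_field() shows the effect of the added bound: once the line is exhausted, the loop stops instead of spinning through the remaining (possibly huge) field count:

```python
def find_field(line, skip_fields):
    """Sketch of uniq's find_field(): skip SKIP_FIELDS blank-separated
    fields, never scanning past the end of the line."""
    i, size = 0, len(line)
    count = 0
    # 'i < size' is the added bound: once the line is consumed there is
    # no point iterating for the remaining fields.
    while count < skip_fields and i < size:
        while i < size and line[i] in ' \t':
            i += 1
        while i < size and line[i] not in ' \t':
            i += 1
        count += 1
    return i

assert find_field("a b c", 1) == 1
assert find_field("a", 10**9) == 1   # returns immediately with the bound
```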

 I found the bug after a friend of mine asked why uniq does not allow
 specifying a field separator, the way sort -t does. I could not answer
 anything rational, so I looked at the code and tested how it works. That
 inspired me to send a bug fix, which is an obvious thing to do. But how
 about that -t: do you think it would be a worthwhile addition to
 uniq?

yes. Basically `uniq` should support the same -k and -t
functionality that `sort` does. See also http://debbugs.gnu.org/5832

cheers,
Pádraig.



Re: [coreutils] [PATCH] doc: show how to shred using a single zero-writing pass

2011-01-17 Thread Pádraig Brady
On 17/01/11 10:36, Jim Meyering wrote:
From 7dc6335653afcdad9a3ffa327877571734644285 Mon Sep 17 00:00:00 2001
 From: Jim Meyering meyer...@redhat.com
 Date: Mon, 17 Jan 2011 11:32:35 +0100
 Subject: [PATCH] doc: show how to shred using a single zero-writing pass
 
 * doc/coreutils.texi (shred invocation): Give an example showing how
 to invoke shred in its most basic (fastest) write-only-zeros mode.
 ---
  doc/coreutils.texi |9 +
  1 files changed, 9 insertions(+), 0 deletions(-)
 
 diff --git a/doc/coreutils.texi b/doc/coreutils.texi
 index 9c3e2ed..8fb9f0c 100644
 --- a/doc/coreutils.texi
 +++ b/doc/coreutils.texi
 @@ -8892,6 +8892,15 @@ shred invocation
  shred --verbose /dev/sda5
  @end example
 
 +On modern disks, a single pass that writes only zeros may be enough,
 +and it will be much faster than the default.

Well, only about 3 times faster, due to the disk being the bottleneck
(since we changed to the fast internal PRNG by default).
Also for security, writing random data would probably be more effective.
So I'd reword the above sentence to:

To simply clear a disk

 +Use a command like this to tell @command{shred} to skip all random
 +passes and to perform only a final zero-writing pass:
 +
 +@example
 +shred --verbose -n0 --zero /dev/sda5
 +@end example

It's probably not worth noting the equivalent:
dd conv=fdatasync bs=2M < /dev/zero > /dev/sda5

cheers,
Pádraig.



[coreutils] [PATCH] don't warn about disorder against empty input

2011-01-26 Thread Pádraig Brady
I notice that join doesn't warn about disorder
when there are no mismatches between the two files,
which is a good thing.

$ join -a1 <(printf '1\n0\n') <(printf '1\n0\n')
1
0

However it does warn if there is no input
in one of the files. I'm inclined to think that
the following should be treated as above?

$ join -a1 <(printf '1\n0\n') /dev/null
1
join: file 1 is not in sorted order
0

Doing that would allow one to use join to
extract fields with awk like blank handling,
and reordering support, without needing
to specify --nocheck-order.

$ printf "1 2 3\n" | join -a1 - /dev/null -o 1.3,1.1
3 1

cheers,
Pádraig.
diff --git a/src/join.c b/src/join.c
index afda5a1..8fcd47b 100644
--- a/src/join.c
+++ b/src/join.c
@@ -354,7 +354,7 @@ keycmp (struct line const *line1, struct line const *line2,
 
 /* Check that successive input lines PREV and CURRENT from input file
WHATFILE are presented in order, unless the user may be relying on
-   the GNU extension that input lines may be out of order if no input
+   the extension that input lines may be out of order if no input
lines are unpairable.
 
If the user specified --nocheck-order, the check is not made.
@@ -711,7 +711,7 @@ join (FILE *fp1, FILE *fp2)
 seq2.count = 0;
 }
 
-  /* If the user did not specify --check-order, then we read the
+  /* If the user did not specify --nocheck-order, then we read the
  tail ends of both inputs to verify that they are in order.  We
  skip the rest of the tail once we have issued a warning for that
  file, unless we actually need to print the unpairable lines.  */
@@ -726,7 +726,8 @@ join (FILE *fp1, FILE *fp2)
 {
   if (print_unpairables_1)
 prjoin (seq1.lines[0], uni_blank);
-  seen_unpairable = true;
+  if (seq2.count)
+seen_unpairable = true;
   while (get_line (fp1, line, 1))
 {
   if (print_unpairables_1)
@@ -740,7 +741,8 @@ join (FILE *fp1, FILE *fp2)
 {
   if (print_unpairables_2)
 prjoin (uni_blank, seq2.lines[0]);
-  seen_unpairable = true;
+  if (seq1.count)
+seen_unpairable = true;
   while (get_line (fp2, line, 2))
 {
   if (print_unpairables_2)


[PATCH] cp: use a bigger buffer size when writing zeros

2011-01-31 Thread Pádraig Brady
Another buglet:

commit 4a64fd8efe00629b0424bb59c8ec8d9d3ef4542f
Author: Pádraig Brady p...@draigbrady.com
Date:   Mon Jan 31 22:04:35 2011 +

cp: use a bigger buffer size when writing zeros

* src/copy.c (write_zeros): This bug caused 4 or 8 bytes to
be written at a time which is very inefficient.  One could
trigger the issue with `cp --sparse=never sparse non-sparse`
on a file system that supports fiemap.

diff --git a/src/copy.c b/src/copy.c
index 4e73e1b..65c18f4 100644
--- a/src/copy.c
+++ b/src/copy.c
@@ -291,7 +291,7 @@ write_zeros (int fd, uint64_t n_bytes)

   while (n_bytes)
 {
-  uint64_t n = MIN (sizeof nz, n_bytes);
+  uint64_t n = MIN (nz, n_bytes);
   if ((full_write (fd, zeros, n)) != n)
 return false;
   n_bytes -= n;
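To put numbers on the cost: modelling the write() count for a run of zeros of a given size, with the buggy 8-byte chunk (sizeof a uint64_t variable) versus a full buffer — 128 KiB is an assumed buffer size here, for illustration only:

```python
def writes_needed(n_bytes, chunk):
    """Number of full_write() calls to emit n_bytes in chunk-sized pieces."""
    return (n_bytes + chunk - 1) // chunk       # ceiling division

n = 10 * 1024 * 1024                            # a 10 MiB run of zeros
assert writes_needed(n, 8) == 1310720           # buggy: 8-byte chunks
assert writes_needed(n, 128 * 1024) == 80       # fixed: whole-buffer chunks
```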



Re: RFC: new cp option: --efficient-sparse=HOW

2011-01-31 Thread Pádraig Brady
On 31/01/11 21:46, Jim Meyering wrote:
 Now that we can read sparse files efficiently,
 what if I want to copy a 20PiB sparse file, and yet I want to
 be sure that it does so efficiently.  Few people can afford
 to wait around while a normal processor and storage system process
 that much raw data.  But if it's a sparse file and the src and dest
 file systems have the right support (FIEMAP ioctl), then it'll be
 copied in the time it takes to make a few syscalls.
 
 Currently, when the efficient sparse copy fails, cp falls back
 on the regular, expensive, read-every-byte approach.
 
 This proposal adds an option, --efficient-sparse=required,
 to make cp fail if the initial attempt to read the sparse file fails,
 rather than resorting to the regular (very slow in the above case) copy
 procedure.
 
 The default is --efficient-sparse=auto, and for symmetry,
 I've provided --efficient-sparse=never, in case someone finds
 a reason to want to skip the ioctl.

I'm 50:50 on whether we should add this option.
If a user doesn't want to copy the file if it's going
to be too slow, then what other options do they have?
Also the sparseness of a file can change over time.

Given the fairly specialized need for such an option,
perhaps it's best to foist to other tools, like:

test $(du f) -lt $limit && df -T -t myfs f && cp f f2

or perhaps

timeout $time_limit cp f f2 || { rm f2; echo oops >&2; }

cheers,
Pádraig.



Re: RFC: new cp option: --efficient-sparse=HOW

2011-02-04 Thread Pádraig Brady
On 03/02/11 20:29, Jim Meyering wrote:
 Does anyone know how to determine if a file system (say the one with .)
 supports the FIEMAP ioctl, but without compiling/running a C program?
 I.e., via perl or python?  I've written a tiny C program that works
 and a Perl one that is supposed to be identical (at least the buffer
 arg has identical content), yet does not work.
 The goal is to make the tests more robust by replacing the kludgey
 FS-name-based test with an actual syscall-based test.

This python snippet seems to work.
A caveat with using '.' rather than a file,
is that ext3 returns not supported for directories.

cheers,
Pádraig.

import struct, fcntl, sys, os

def sizeof(t): return struct.calcsize(t)
IOCPARM_MASK = 0x7f
IOC_OUT = 0x40000000
IOC_IN = 0x80000000
IOC_INOUT = (IOC_IN|IOC_OUT)
def _IOWR(x,y,t): return (IOC_INOUT|((sizeof(t)&IOCPARM_MASK)<<16)|((x)<<8)|y)
struct_fiemap = '=qqllll'
FS_IOC_FIEMAP = _IOWR (ord ('f'), 11, struct_fiemap)

buf = struct.pack (struct_fiemap, 0,~0,0,0,0,0)

try:
fd = os.open (len (sys.argv) == 2 and sys.argv[1] or '.', os.O_RDONLY)
fcntl.ioctl (fd, FS_IOC_FIEMAP, buf)
except:
sys.exit (1)



Re: coreutils-8.10 released [stable]

2011-02-07 Thread Pádraig Brady
On 05/02/11 13:59, Jim Meyering wrote:
 Pádraig Brady wrote:
 Yep, just did that, and got a couple of failures for sparse-fiemap
 on ext3 and loopback ext4 file systems.
 On a very quick glance, I think cp is OK and that the filefrag
 matching is a bit brittle.
 Attached are filefrag outputs.
 
 I saw similar differences, but I think they were due to the fact that
 the kernel had not yet forced cp's metadata update to disk when filefrag
 does its FIEMAP ioctl
 
 Uncommenting the sync just after the cp in the sparse-fiemap test
 solved the problem (for me it arose only on rawhide's btrfs) but made
 the test take a lot more time, as mentioned in the comment.

It seems that the sync is needed for ext4 loop back also, on my system.
However for ext3, the extents between src and dst still don't match up.
My systems uses a 4K block size and is 2.6.35.10-72.fc14.i686
For now, I've disabled (indirectly) using ext3 for fiemap-perf and
sparse-fiemap in the attached.

 Instead of that sync, using filefrag's -s option may
 have the same result without the unwanted overhead.

That doesn't actually work, which baffles me.
I got the e2fsprogs source rpm to verify that
FIEMAP_FLAG_SYNC was set, so I guess this is a kernel issue
(on ext4 loop back at least).
Though I was able to get a working sync restricted to the file
by using dd in the attached patch.
Even with that, the test only takes about 10s on my old laptop, so
I didn't bother tagging the test as EXPENSIVE.

cheers,
Pádraig.
From 1da62d67ce4d4c78b98bc947c9367a10f1bdba9d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Fri, 4 Feb 2011 22:05:20 +
Subject: [PATCH] test: improve the fiemap_capable_ check

* tests/cp/fiemap-2: Enable the fiemap check for files, which
will enable the test on ext3.
* tests/cp/fiemap-perf: Comment why we're not enabling for ext3.
* tests/cp/sparse-fiemap: Ditto.
* tests/fiemap-capable: A new python script to determine
if a specified path supports fiemap.
* tests/init.cfg (fiemap_capable_): Use the new python script.
* tests/Makefile.am (EXTRA_DIST): Include the new python script.
---
 tests/Makefile.am  |1 +
 tests/cp/fiemap-2  |3 ++-
 tests/cp/fiemap-perf   |2 ++
 tests/cp/sparse-fiemap |   12 
 tests/fiemap-capable   |   16 
 tests/init.cfg |9 -
 6 files changed, 33 insertions(+), 10 deletions(-)
 create mode 100644 tests/fiemap-capable

diff --git a/tests/Makefile.am b/tests/Makefile.am
index 751b327..8aa56cd 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -11,6 +11,7 @@ EXTRA_DIST =		\
   check.mk		\
   envvar-check		\
   filefrag-extent-compare \
+  fiemap-capable	\
   init.cfg		\
   init.sh		\
   lang-default		\
diff --git a/tests/cp/fiemap-2 b/tests/cp/fiemap-2
index a17076c..691ead2 100755
--- a/tests/cp/fiemap-2
+++ b/tests/cp/fiemap-2
@@ -20,7 +20,8 @@
 print_ver_ cp
 
 # Require a fiemap-enabled FS.
-fiemap_capable_ . \
+touch fiemap_chk # check a file rather than current dir for best coverage
+fiemap_capable_ fiemap_chk \
   || skip_ "this file system lacks FIEMAP support"
 
 # Exercise the code that handles a file ending in a hole.
diff --git a/tests/cp/fiemap-perf b/tests/cp/fiemap-perf
index 7369a7d..dbb2a81 100755
--- a/tests/cp/fiemap-perf
+++ b/tests/cp/fiemap-perf
@@ -20,6 +20,8 @@
 print_ver_ cp
 
 # Require a fiemap-enabled FS.
+# Note we don't check a file here as that could enable
+# the test on ext3 where emulated extent scanning can be slow.
 fiemap_capable_ . \
   || skip_ "this file system lacks FIEMAP support"
 
diff --git a/tests/cp/sparse-fiemap b/tests/cp/sparse-fiemap
index f224b5b..fc27869 100755
--- a/tests/cp/sparse-fiemap
+++ b/tests/cp/sparse-fiemap
@@ -19,6 +19,8 @@
 . ${srcdir=.}/init.sh; path_prepend_ ../src
 print_ver_ cp
 
+# Note we don't check a file here as that could enable
+# the test on ext3 where this test is seen to fail.
 if fiemap_capable_ . ; then
   : # Current dir is on a partition with working extents.  Good!
 else
@@ -66,11 +68,13 @@ for i in $(seq 1 2 21); do
 $PERL -e 'BEGIN { $n = '$i' * 1024; *F = *STDOUT }' \
   -e 'for (1..'$j') { sysseek (*F, $n, 1)' \
   -e '&& syswrite (*F, chr($_)x$n) or die "$!" }' > j1 || fail=1
-# sync
+
+# Note the explicit fdatasync is used here as
+# it was seen that `filefrag -s` (FIEMAP_FLAG_SYNC) was
+# ineffective on ext4 loopback on Linux 2.6.35.10-72.fc14.i686
+dd if=/dev/null of=j1 conv=notrunc,fdatasync
 cp --sparse=always j1 j2 || fail=1
-# sync
-# Technically we may need the 'sync' uses above, but
-# uncommenting them makes this test take much longer.
+dd if=/dev/null of=j2 conv=notrunc,fdatasync
 
 cmp j1 j2 || fail=1
 filefrag -v j1 | grep extent \
diff --git a/tests/fiemap-capable b/tests/fiemap-capable
new file mode 100644
index 000..05c6926
--- /dev/null
+++ b/tests/fiemap-capable
@@ -0,0 +1,16 @@
+import struct, fcntl, sys, os
+
+def sizeof(t

Re: coreutils-8.10 released [stable]

2011-02-08 Thread Pádraig Brady
On 08/02/11 08:42, Jim Meyering wrote:
 Do you think it's worth resorting to the FS-name-based
 test when python is not available?

I don't think so, because df can currently allow the test to proceed 
erroneously:
http://lists.gnu.org/archive/html/coreutils/2011-02/msg00018.html
Also the overlap of fiemap capable systems with python should be good.

cheers,
Pádraig.



[PATCH] copy: adjust fiemap handling of sparse files

2011-02-08 Thread Pádraig Brady
I was looking at adding fallocate() to copy.c,
now that the fiemap code has gone in and
I noticed that if there was allocated space
at the end of a file, not accounted for
in st_size, then any holes would not be detected.
In what other cases does the sparse detection
heuristic fail BTW?

Anyway, we don't need the heuristic with fiemap,
so I changed accordingly in the attached.

cheers,
Pádraig.
From a96d650ae634585ef46f96636d47c44297ce513a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Tue, 8 Feb 2011 19:16:55 +
Subject: [PATCH] copy: adjust fiemap handling of sparse files

Don't depend on heuristics to detect sparse files
if fiemap is available.  Also don't introduce new
holes unless --sparse=always has been specified.

* src/copy.c (extent_copy): Pass the user specified
sparse mode, and handle as described above.
Also a redundant lseek has been suppressed when
there is no hole between two extents.
---
 src/copy.c |   25 ++---
 1 files changed, 10 insertions(+), 15 deletions(-)

diff --git a/src/copy.c b/src/copy.c
index 9182c16..63a7b1e 100644
--- a/src/copy.c
+++ b/src/copy.c
@@ -294,7 +294,7 @@ write_zeros (int fd, uint64_t n_bytes)
return false.  */
 static bool
 extent_copy (int src_fd, int dest_fd, char *buf, size_t buf_size,
- off_t src_total_size, bool make_holes,
+ off_t src_total_size, enum Sparse_type sparse_mode,
  char const *src_name, char const *dst_name,
  bool *require_normal_copy)
 {
@@ -343,7 +343,8 @@ extent_copy (int src_fd, int dest_fd, char *buf, size_t buf_size,
   return false;
 }
 
-  if (make_holes)
+  uint64_t hole_size = ext_start - last_ext_start - last_ext_len;
+  if (hole_size && sparse_mode != SPARSE_NEVER)
 {
   if (lseek (dest_fd, ext_start, SEEK_SET) < 0)
 {
@@ -351,21 +352,15 @@ extent_copy (int src_fd, int dest_fd, char *buf, size_t buf_size,
   goto fail;
 }
 }
-  else
+  else if (hole_size)
 {
   /* When not inducing holes and when there is a hole between
  the end of the previous extent and the beginning of the
  current one, write zeros to the destination file.  */
-  if (last_ext_start + last_ext_len < ext_start)
+  if (! write_zeros (dest_fd, hole_size))
 {
-  uint64_t hole_size = (ext_start
-- last_ext_start
-- last_ext_len);
-  if (! write_zeros (dest_fd, hole_size))
-{
  error (0, errno, _("%s: write failed"), quote (dst_name));
-  goto fail;
-}
+  error (0, errno, _("%s: write failed"), quote (dst_name));
+  goto fail;
 }
 }
 
@@ -374,7 +369,7 @@ extent_copy (int src_fd, int dest_fd, char *buf, size_t buf_size,
 
   off_t n_read;
   if ( ! sparse_copy (src_fd, dest_fd, buf, buf_size,
-  make_holes, src_name, dst_name,
+  sparse_mode == SPARSE_ALWAYS, src_name, dst_name,
  ext_len, &n_read,
  &wrote_hole_at_eof))
 return false;
@@ -397,7 +392,7 @@ extent_copy (int src_fd, int dest_fd, char *buf, size_t buf_size,
  just converted them to a hole in the destination, we must call ftruncate
  here in order to record the proper length in the destination.  */
   if ((dest_pos < src_total_size || wrote_hole_at_eof)
-   && (make_holes
+   && (sparse_mode != SPARSE_NEVER
   ? ftruncate (dest_fd, src_total_size)
   : ! write_zeros (dest_fd, src_total_size - dest_pos)))
 {
@@ -968,7 +963,7 @@ copy_reg (char const *src_name, char const *dst_name,
  '--sparse=never' option is specified, write all data but use
  any extents to read more efficiently.  */
   if (extent_copy (source_desc, dest_desc, buf, buf_size,
-   src_open_sb.st_size, make_holes,
+   src_open_sb.st_size, x->sparse_mode,
 src_name, dst_name, &normal_copy_required))
 goto preserve_metadata;
 
-- 
1.7.3.4



Re: [PATCH] copy: adjust fiemap handling of sparse files

2011-02-10 Thread Pádraig Brady
On 09/02/11 23:35, Pádraig Brady wrote:
 Subject: [PATCH] copy: adjust fiemap handling of sparse files

 Don't depend on heuristics to detect sparse files
 if fiemap is available.  Also don't introduce new
 holes unless --sparse=always has been specified.

 * src/copy.c (extent_copy): Pass the user specified
 sparse mode, and handle as described above.
 Also a redundant lseek has been suppressed when
 there is no hole between two extents.

 Could that be done in two separate patches?

Here is the first patch split out to suppress redundant lseek()s.
It's expanded now to suppress lseek in the source as well as dest.

$ fallocate -l $((1<<29)) /tmp/t.f
$ strace -e _llseek cp-before /tmp/t.f /dev/null 2>&1 | wc -l
180
$ strace -e _llseek cp /tmp/t.f /dev/null 2>&1 | wc -l
0

Hmm, the above suggests to me that we could use the
FIEMAP_EXTENT_UNWRITTEN flag, so as to skip the read()
for these allocated but unwritten (zero) extents.
We could treat such extents like holes, except we
would only generate holes in the dest with SPARSE_ALWAYS.
I'll do patch for that this evening.

Anyway the following small reorg will also simplify
possible future changes to introduce fallocate().

Note the attached mostly changes indenting,
with the significant change being just:

$ git diff -w --no-ext-diff HEAD~

diff --git a/src/copy.c b/src/copy.c
index 9182c16..5b6ffe3 100644
--- a/src/copy.c
+++ b/src/copy.c
@@ -334,7 +334,10 @@ extent_copy (int src_fd, int dest_fd, char *buf, size_t buf_size,
 {
   off_t ext_start = scan.ext_info[i].ext_logical;
   uint64_t ext_len = scan.ext_info[i].ext_length;
+  uint64_t hole_size = ext_start - last_ext_start - last_ext_len;

+  if (hole_size)
+{
   if (lseek (src_fd, ext_start, SEEK_SET) < 0)
 {
   error (0, errno, _(cannot lseek %s), quote (src_name));
@@ -356,11 +359,6 @@ extent_copy (int src_fd, int dest_fd, char *buf, size_t buf_size,
   /* When not inducing holes and when there is a hole between
  the end of the previous extent and the beginning of the
  current one, write zeros to the destination file.  */
-  if (last_ext_start + last_ext_len < ext_start)
-{
-  uint64_t hole_size = (ext_start
-- last_ext_start
-- last_ext_len);
   if (! write_zeros (dest_fd, hole_size))
 {
   error (0, errno, _("%s: write failed"), quote (dst_name));
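The hole-before-extent logic this reorg centralizes can be modeled in a few lines (a toy planner, not the C code): compute the gap before each extent, then either seek past it or write zeros, depending on the sparse mode:

```python
def actions(extents, sparse_mode):
    """Toy planner mirroring the reorganized loop: for each (start, length)
    extent, first handle any hole before it, then copy its data."""
    last_start = last_len = 0
    plan = []
    for start, length in extents:
        hole = start - last_start - last_len
        if hole:
            if sparse_mode != 'never':
                plan.append(('seek', hole))         # leave a hole in the dest
            else:
                plan.append(('write_zeros', hole))  # fill the gap with zeros
        plan.append(('copy', length))
        last_start, last_len = start, length
    return plan

# Two extents with a 4-block gap between them:
assert actions([(0, 4), (8, 4)], 'auto') == \
    [('copy', 4), ('seek', 4), ('copy', 4)]
assert actions([(0, 4), (8, 4)], 'never') == \
    [('copy', 4), ('write_zeros', 4), ('copy', 4)]
```

Note how the no-hole case issues no seek at all, which is exactly the redundant-lseek suppression measured above.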

From b113a93950776fc308985941d8e4825f1f8453dd Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Tue, 8 Feb 2011 19:16:55 +
Subject: [PATCH] copy: suppress redundant lseeks when using fiemap

* src/copy.c (extent_copy): Suppress redundant lseek()s in both
the source and dest files, when there is no hole between extents.
---
 src/copy.c |   40 +++-
 1 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/src/copy.c b/src/copy.c
index 9182c16..5b6ffe3 100644
--- a/src/copy.c
+++ b/src/copy.c
@@ -334,33 +334,31 @@ extent_copy (int src_fd, int dest_fd, char *buf, size_t buf_size,
 {
   off_t ext_start = scan.ext_info[i].ext_logical;
   uint64_t ext_len = scan.ext_info[i].ext_length;
+  uint64_t hole_size = ext_start - last_ext_start - last_ext_len;
 
-  if (lseek (src_fd, ext_start, SEEK_SET) < 0)
+  if (hole_size)
 {
-  error (0, errno, _("cannot lseek %s"), quote (src_name));
-fail:
-  extent_scan_free (scan);
-  return false;
-}
+  if (lseek (src_fd, ext_start, SEEK_SET) < 0)
+{
+  error (0, errno, _("cannot lseek %s"), quote (src_name));
+fail:
+  extent_scan_free (scan);
+  return false;
+}
 
-  if (make_holes)
-{
-  if (lseek (dest_fd, ext_start, SEEK_SET) < 0)
+  if (make_holes)
 {
-  error (0, errno, _("cannot lseek %s"), quote (dst_name));
-  goto fail;
+  if (lseek (dest_fd, ext_start, SEEK_SET) < 0)
+{
+  error (0, errno, _("cannot lseek %s"), quote (dst_name));
+  goto fail;
+}
 }
-}
-  else
-{
-  /* When not inducing holes and when there is a hole between
- the end of the previous extent and the beginning of the
- current one, write zeros to the destination file.  */
-  if (last_ext_start + last_ext_len < ext_start)
+  else
 {
-  uint64_t hole_size = (ext_start
-- last_ext_start

Re: [PATCH] copy: adjust fiemap handling of sparse files

2011-02-10 Thread Pádraig Brady
On 10/02/11 10:53, Jim Meyering wrote:
 While using --sparse=always is a lot less common,
 it looks like there's room for improvement there, too.
 I.e., we should be able to eliminate all of these
 lseek calls on the output descriptor there, too:
 
$ strace -e lseek cp --sparse=always /tmp/t.f /tmp/t2 2>&1|wc -l
 16384

True. So merge the lseeks in the sparse detection in sparse_copy.
I'll have a look at adding that this evening.
Note we'd only get 1 lseek for the fallocated file above
if the mentioned FIEMAP_EXTENT_UNWRITTEN idea works.

cheers,
Pádraig.



Re: [PATCH] copy: adjust fiemap handling of sparse files

2011-02-13 Thread Pádraig Brady
Unfortunately, after checking BTRFS I see that fiemap
behaves differently to EXT4. IMHO the EXT4 operation
seems correct, and gives full info about the structure
of a file, which cp for example can use to efficiently
and accurately reproduce the structure at the destination.

On EXT4 (on kernel-2.6.35.11-83.fc14.i686) there are no extents
returned for holes in a file.

However on btrfs it does return an extent for holes?
So with btrfs there is no way to know an extent
is allocated but unwritten (zero), so one must
read and write all the data, rather than just
fallocating the space in the destination.

One can also see this with the following on btrfs:

$ fallocate -l 100000000 unwritten
$ truncate -s 100000000 sparse
$ dd count=1000 bs=100000 if=/dev/zero of=zeros

$ filefrag -vs *
Filesystem type is: 9123683e
File size of sparse is 100000000 (24415 blocks, blocksize 4096)
 ext logical physical expected length flags
   0       0        0           24415 unwritten,eof
sparse: 1 extent found
File size of unwritten is 100000000 (24415 blocks, blocksize 4096)
 ext logical physical expected length flags
   0       0    68160           12207
   1   12207    92560    80366  12208 eof
unwritten: 2 extents found
File size of zeros is 100000000 (24415 blocks, blocksize 4096)
 ext logical physical expected length flags
   0       0    19360           20678
   1   20678    43760    40037   3737 eof
zeros: 2 extents found

So is this expected?
Has this already been changed to match ext4?

For my own reference, I can probably skip performance
tests on (older) btrfs by checking `filefrag` output.
Also in `cp`, if we see an unwritten extent we should
probably create a hole rather than an empty allocation
by default.  It's better to decrease file allocation
than increase it.

cheers,
Pádraig.



Re: coreutils-8.10 released [stable]

2011-02-13 Thread Pádraig Brady
On 08/02/11 10:14, Pádraig Brady wrote:
 On 08/02/11 08:42, Jim Meyering wrote:
 Do you think it's worth resorting to the FS-name-based
 test when python is not available?
 
 I don't think so, because df can currently allow the test to proceed 
 erroneously:
 http://lists.gnu.org/archive/html/coreutils/2011-02/msg00018.html
 Also the overlap of fiemap capable systems with python should be good.

Here's a further update to the fiemap checking in tests,
which will enable those tests on BTRFS.
I'm holding off on this however until we figure out
exactly what to do about the BTRFS vs EXT4 fiemap differences.

cheers,
Pádraig.
From b0da9315af0b7430eea88429f0e54f43c345e1cf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Sun, 13 Feb 2011 18:56:10 +
Subject: [PATCH] test: support more file systems in the cp fiemap tests

* tests/cp/fiemap-perf: Check for fiemap support against a file
rather than a directory to enable tests on BTRFS for example.
Explicitly disable the test on ext3 or file systems where we
can't determine the type.
* tests/cp/sparse-fiemap: Likewise.
* tests/init.cfg: Comment that BTRFS only supports fiemap
for regular files.
---
 tests/cp/fiemap-perf   |   12 
 tests/cp/sparse-fiemap |9 +
 tests/init.cfg |2 +-
 3 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/tests/cp/fiemap-perf b/tests/cp/fiemap-perf
index dbb2a81..bbe3f0d 100755
--- a/tests/cp/fiemap-perf
+++ b/tests/cp/fiemap-perf
@@ -20,10 +20,14 @@
 print_ver_ cp
 
 # Require a fiemap-enabled FS.
-# Note we don't check a file here as that could enable
-# the test on ext3 where emulated extent scanning can be slow.
-fiemap_capable_ . \
-  || skip_ this file system lacks FIEMAP support
+touch fiemap_chk
+fiemap_capable_ fiemap_chk ||
+  skip_ this file system lacks FIEMAP support
+
+# Exclude ext3 (or unknown fs types)
+# as the emulated extent scanning is slow
+df -t ext3 . >/dev/null &&
+  skip_ ext3 has known slow FIEMAP scanning
 
 # Create a large-but-sparse file.
 timeout 10 truncate -s1T f || framework_failure_
diff --git a/tests/cp/sparse-fiemap b/tests/cp/sparse-fiemap
index fc27869..4eced1d 100755
--- a/tests/cp/sparse-fiemap
+++ b/tests/cp/sparse-fiemap
@@ -19,10 +19,11 @@
 . ${srcdir=.}/init.sh; path_prepend_ ../src
 print_ver_ cp
 
-# Note we don't check a file here as that could enable
-# the test on ext3 where this test is seen to fail.
-if fiemap_capable_ . ; then
-  : # Current dir is on a partition with working extents.  Good!
+# The test was seen to fail on ext3 so exclude that type
+# (or any file system where the type can't be determined)
+touch fiemap_chk
+if fiemap_capable_ fiemap_chk && ! df -t ext3 . >/dev/null; then
+  : # Current partition has working extents.  Good!
 else
   # It's not;  we need to create one, hence we need root access.
   require_root_
diff --git a/tests/init.cfg b/tests/init.cfg
index eb3feaa..5e0f71b 100644
--- a/tests/init.cfg
+++ b/tests/init.cfg
@@ -296,7 +296,7 @@ require_proc_pid_status_()
 }
 
 # Return nonzero if the specified path is on a file system for
-# which FIEMAP support exists.  Note some file systems (like ext3)
+# which FIEMAP support exists.  Note some file systems (like ext3,btrfs)
 # only support FIEMAP for files, not directories.
 fiemap_capable_()
 {
-- 
1.7.4



Re: [PATCH] Replace spaces with tab on tests/Makefile.am

2011-02-19 Thread Pádraig Brady
Pushed.

thanks,
Pádraig.



bug#8001: cp (8.10) sparse handling fails on compressed btrfs (cp/fiemap-2)

2011-02-19 Thread Pádraig Brady
On 19/02/11 18:28, Mike Frysinger wrote:
 based on other threads (which i havent been following too closely), did we 
 settle on this being a btrfs bug ?
 -mike

Nope, cp 8.10 is not absolved yet.
It may be btrfs not honoring FIEMAP_FLAG_SYNC,
and/or it may be cp needing to handle FIEMAP_EXTENT_ENCODED
specially.

It would help if you ran `sync` before the copy,
to exclude that as a possible issue.

Also `filefrag -v` output for the file on
the compressed BTRFS file system would be helpful.

thanks,
Pádraig.





Re: A new solution for sorting du -h

2011-02-22 Thread Pádraig Brady
On 22/02/11 19:51, Alan Evans wrote:
 There are many discussions about sorting the human readable form of du
 for output in reports/cron jobs and the like.

sort got -h to operate on the suffixed numbers directly
in release 7.5 (2009-08-20)
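As a rough illustration of the ordering -h applies (GNU sort's real comparison also handles locale decimal separators and ties; this sketch is an approximation, not the implementation, and the names are mine):

```python
# Approximate the `sort -h` ordering of human-readable sizes,
# as printed by `du -h`: rank by suffix magnitude first,
# then numerically within the same suffix.
SUFFIX_ORDER = "KMGTPE"

def human_key(s):
    """Sort key approximating `sort -h` for values like '512', '10K', '1.5G'."""
    s = s.strip()
    if s and s[-1] in SUFFIX_ORDER:
        return (SUFFIX_ORDER.index(s[-1]) + 1, float(s[:-1]))
    return (0, float(s))

sizes = ["10K", "2M", "1.5G", "512", "3K"]
print(sorted(sizes, key=human_key))  # -> ['512', '3K', '10K', '2M', '1.5G']
```

In practice one would of course just pipe `du -h | sort -h` rather than reimplement this.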

cheers,
Pádraig.



[PATCH] dd: add a flag to discard cached data

2011-02-22 Thread Pádraig Brady
I noticed fadvise(DONTNEED) getting some love in the kernel recently
   http://lkml.org/lkml/2011/2/17/169
Which prompted me to implement this. With the attached one can now:

  # Advise to drop cache for whole file
  dd if=ifile iflag=nocache count=0

  # Ensure drop cache for whole file
  dd of=ofile oflag=nocache conv=notrunc,fdatasync count=0

  # Drop cache for part of file
  dd if=ifile iflag=nocache skip=10 count=10 of=/dev/null

  # Stream data just using readahead cache
  dd if=ifile of=ofile iflag=nocache oflag=nocache

When count=0, i.e. when only manipulating the cache,
we propagate the posix_fadvise() return to the exit status.

Note this will invalidate the cache even if another
process has the file open. That could be avoided with mincore():
   http://insights.oetiker.ch/linux/fadvise.html
However, I don't think that's needed for a tool like dd.
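The page-granularity handling in the patch (cache_round) boils down to shrinking the request to whole pages before advising the kernel, since only complete pages can be dropped. A sketch of the idea in Python, with names of my own choosing (not dd's):

```python
import os

PAGE_SIZE = os.sysconf("SC_PAGESIZE")

def cache_round(offset, length):
    """Shrink (offset, length) to whole pages, as the kernel only drops
    complete pages; returns (aligned_offset, aligned_length), with
    length 0 when no whole page is covered."""
    start = (offset + PAGE_SIZE - 1) // PAGE_SIZE * PAGE_SIZE  # round up
    end = (offset + length) // PAGE_SIZE * PAGE_SIZE           # round down
    return start, max(0, end - start)

def drop_cache(fd, offset=0, length=0):
    """Advise the kernel to discard cached pages for fd.
    length 0 means 'to end of file', mirroring the count=0 case.
    Requires os.posix_fadvise (Unix, Python 3.3+)."""
    if length:
        offset, length = cache_round(offset, length)
        if length == 0:
            return
    os.posix_fadvise(fd, offset, length, os.POSIX_FADV_DONTNEED)
```

The rounding direction matters: the start rounds up and the end rounds down, so a partial request never discards a page that still holds unprocessed neighboring data.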

cheers,
Pádraig.
From 5ebe7618f8d62a81e053a57390132e4013218416 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Tue, 22 Feb 2011 21:14:00 +
Subject: [PATCH] dd: add a flag to discard cached data

* src/dd.c (usage): Add the 'nocache' flag.
(cache_round): A new function to help ignore cache
drop requests less than page_size.
(invalidate_cache): A new function to call posix_fadvise()
with the appropriate offset and length.  Note we don't
use fdadvise() so we can detect errors when count=0.
(dd_copy): Call invalidate_cache() for the processed portions.
(main): Call invalidate_cache for page_size slop or
for full file when count=0.
* cfg.mk (sc_dd_O_FLAGS): Adjust to pass.
* doc/coreutils.texi (dd invocation): Describe the 'nocache' flag,
and give some examples of how it can be used.
* tests/dd/nocache: A new test.
* tests/Makefile.am: Reference the new test.
* NEWS: Mention the new feature.
---
 NEWS   |6 ++
 cfg.mk |4 +-
 doc/coreutils.texi |   25 
 src/dd.c   |  160 +---
 tests/Makefile.am  |1 +
 tests/dd/nocache   |   48 
 6 files changed, 234 insertions(+), 10 deletions(-)
 create mode 100755 tests/dd/nocache

diff --git a/NEWS b/NEWS
index a367d8d..bbb8bd3 100644
--- a/NEWS
+++ b/NEWS
@@ -8,6 +8,12 @@ GNU coreutils NEWS-*- outline -*-
   delimiter and an unbounded range like -f1234567890-.
   [bug introduced in coreutils-5.3.0]
 
+** New features
+
+  dd now accepts the nocache option to iflag and oflag,
+  which will discard any cache associated with the files,
+  or processed portion thereof.
+
 
 * Noteworthy changes in release 8.10 (2011-02-04) [stable]
 
diff --git a/cfg.mk b/cfg.mk
index e4f3faa..c897dc4 100644
--- a/cfg.mk
+++ b/cfg.mk
@@ -39,8 +39,8 @@ _hv_file ?= $(srcdir)/tests/misc/help-version
 dd = $(srcdir)/src/dd.c
 sc_dd_O_FLAGS:
 	@rm -f $@.1 $@.2
-	@{ echo O_FULLBLOCK; perl -nle '/^ +\| (O_\w*)$$/ and print $$1' \
-	  $(dd); } | sort > $@.1
+	@{ echo O_FULLBLOCK; echo O_NOCACHE;\
+	  perl -nle '/^ +\| (O_\w*)$$/ and print $$1' $(dd); } | sort > $@.1
 	@{ echo O_NOFOLLOW; perl -nle '/{[a-z]+,\s*(O_\w+)},/ and print $$1' \
 	  $(dd); } | sort > $@.2
 	@diff -u $@.1 $@.2 || diff=1 || diff=;\
diff --git a/doc/coreutils.texi b/doc/coreutils.texi
index ea35afe..529adab 100644
--- a/doc/coreutils.texi
+++ b/doc/coreutils.texi
@@ -8168,6 +8168,31 @@ last-access and last-modified time) is not necessarily synchronized.
 @cindex synchronized data and metadata I/O
 Use synchronized I/O for both data and metadata.
 
+@item nocache
+@opindex nocache
+@cindex discarding file cache
+Discard the data cache for a file.
+When count=0 all cache is discarded,
+otherwise the cache is dropped for the processed
+portion of the file.  Also when count=0
+failure to discard the cache is diagnosed
+and reflected in the exit status.
+Here are some usage examples:
+
+@example
+# Advise to drop cache for whole file
+dd if=ifile iflag=nocache count=0
+
+# Ensure drop cache for the whole file
+dd of=ofile oflag=nocache conv=notrunc,fdatasync count=0
+
+# Drop cache for part of file
+dd if=ifile iflag=nocache skip=10 count=10 of=/dev/null
+
+# Stream data just using read-ahead cache
+dd if=ifile of=ofile iflag=nocache oflag=nocache
+@end example
+
 @item nonblock
 @opindex nonblock
 @cindex nonblocking I/O
diff --git a/src/dd.c b/src/dd.c
index daddc1e..f62aaed 100644
--- a/src/dd.c
+++ b/src/dd.c
@@ -225,6 +225,9 @@ static sig_atomic_t volatile interrupt_signal;
 /* A count of the number of pending info signals that have been received.  */
 static sig_atomic_t volatile info_signal_count;
 
+/* Whether to discard cache for input or output.  */
+static bool i_nocache, o_nocache;
+
 /* Function used for read (to handle iflag=fullblock parameter).  */
 static ssize_t (*iread_fnc) (int fd, char *buf, size_t size);
 
@@ -259,6 +262,7 @@ static struct symbol_value const conversions[] =
   {, 0}
 };
 
+#define FFS_MASK(x) ((x) ^ ((x) & ((x) - 1)))
 enum
   {
 /* Compute a 

Re: [PATCH] dd: add a flag to discard cached data

2011-02-24 Thread Pádraig Brady
On 24/02/11 07:52, Jim Meyering wrote:
 One quick question: does the test need
 something to make it skip (not fail)
 on systems that lack kernel support for the feature?

Oops right.
Attached is latest version.

thanks for the review,
Pádraig.
From fe067f8f52defc54636ae862df03a2115ac6266f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Tue, 22 Feb 2011 21:14:00 +
Subject: [PATCH] dd: add a flag to discard cached data

* src/dd.c (FFS_MASK): A new macro (Find First Set) refactored
from the following enum as it's now used twice.
(usage): Mention the new 'nocache' flag.
(cache_round): A new function to help ignore cache
drop requests less than page_size.
(invalidate_cache): A new function to call posix_fadvise()
with the appropriate offset and length.  Note we don't
use fdadvise() so we can detect errors when count=0.
(dd_copy): Call invalidate_cache() for the read portions.
(iwrite): Likewise for the written portions.
(main): Call invalidate_cache for page_size slop or
for full file when count=0.
* cfg.mk (sc_dd_O_FLAGS): Adjust to pass.
* doc/coreutils.texi (dd invocation): Describe the 'nocache' flag,
and give some examples of how it can be used.
* tests/dd/nocache: A new test.
* tests/Makefile.am: Reference the new test.
* NEWS: Mention the new feature.
---
 NEWS   |6 ++
 cfg.mk |4 +-
 doc/coreutils.texi |   25 
 src/dd.c   |  160 +---
 tests/Makefile.am  |1 +
 tests/dd/nocache   |   58 +++
 6 files changed, 244 insertions(+), 10 deletions(-)
 create mode 100755 tests/dd/nocache

diff --git a/NEWS b/NEWS
index a367d8d..9ec24a6 100644
--- a/NEWS
+++ b/NEWS
@@ -8,6 +8,12 @@ GNU coreutils NEWS-*- outline -*-
   delimiter and an unbounded range like -f1234567890-.
   [bug introduced in coreutils-5.3.0]
 
+** New features
+
+  dd now accepts the 'nocache' flag to the iflag and oflag options,
+  which will discard any cache associated with the files, or
+  processed portion thereof.
+
 
 * Noteworthy changes in release 8.10 (2011-02-04) [stable]
 
diff --git a/cfg.mk b/cfg.mk
index e4f3faa..c897dc4 100644
--- a/cfg.mk
+++ b/cfg.mk
@@ -39,8 +39,8 @@ _hv_file ?= $(srcdir)/tests/misc/help-version
 dd = $(srcdir)/src/dd.c
 sc_dd_O_FLAGS:
 	@rm -f $@.1 $@.2
-	@{ echo O_FULLBLOCK; perl -nle '/^ +\| (O_\w*)$$/ and print $$1' \
-	  $(dd); } | sort > $@.1
+	@{ echo O_FULLBLOCK; echo O_NOCACHE;\
+	  perl -nle '/^ +\| (O_\w*)$$/ and print $$1' $(dd); } | sort > $@.1
 	@{ echo O_NOFOLLOW; perl -nle '/{[a-z]+,\s*(O_\w+)},/ and print $$1' \
 	  $(dd); } | sort > $@.2
 	@diff -u $@.1 $@.2 || diff=1 || diff=;\
diff --git a/doc/coreutils.texi b/doc/coreutils.texi
index ea35afe..529adab 100644
--- a/doc/coreutils.texi
+++ b/doc/coreutils.texi
@@ -8168,6 +8168,31 @@ last-access and last-modified time) is not necessarily synchronized.
 @cindex synchronized data and metadata I/O
 Use synchronized I/O for both data and metadata.
 
+@item nocache
+@opindex nocache
+@cindex discarding file cache
+Discard the data cache for a file.
+When count=0 all cache is discarded,
+otherwise the cache is dropped for the processed
+portion of the file.  Also when count=0
+failure to discard the cache is diagnosed
+and reflected in the exit status.
+Here are some usage examples:
+
+@example
+# Advise to drop cache for whole file
+dd if=ifile iflag=nocache count=0
+
+# Ensure drop cache for the whole file
+dd of=ofile oflag=nocache conv=notrunc,fdatasync count=0
+
+# Drop cache for part of file
+dd if=ifile iflag=nocache skip=10 count=10 of=/dev/null
+
+# Stream data just using read-ahead cache
+dd if=ifile of=ofile iflag=nocache oflag=nocache
+@end example
+
 @item nonblock
 @opindex nonblock
 @cindex nonblocking I/O
diff --git a/src/dd.c b/src/dd.c
index daddc1e..f62aaed 100644
--- a/src/dd.c
+++ b/src/dd.c
@@ -225,6 +225,9 @@ static sig_atomic_t volatile interrupt_signal;
 /* A count of the number of pending info signals that have been received.  */
 static sig_atomic_t volatile info_signal_count;
 
+/* Whether to discard cache for input or output.  */
+static bool i_nocache, o_nocache;
+
 /* Function used for read (to handle iflag=fullblock parameter).  */
 static ssize_t (*iread_fnc) (int fd, char *buf, size_t size);
 
@@ -259,6 +262,7 @@ static struct symbol_value const conversions[] =
   {, 0}
 };
 
+#define FFS_MASK(x) ((x) ^ ((x) & ((x) - 1)))
 enum
   {
 /* Compute a value that's bitwise disjoint from the union
@@ -278,17 +282,23 @@ enum
   | O_SYNC
   | O_TEXT
   ),
-/* Use its lowest bit.  */
-O_FULLBLOCK = v ^ (v & (v - 1))
+
+/* Use its lowest bits for private flags.  */
+O_FULLBLOCK = FFS_MASK (v),
+v2 = v ^ O_FULLBLOCK,
+
+O_NOCACHE = FFS_MASK (v2)
   };
 
 /* Ensure that we got something.  */
 verify (O_FULLBLOCK != 0);
+verify (O_NOCACHE != 0);
 
#define MULTIPLE_BITS_SET(i) (((i) & ((i) - 1)) != 0)
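The FFS_MASK idiom in the patch, x ^ (x & (x - 1)), isolates the lowest set bit of a value; that is how O_FULLBLOCK and O_NOCACHE are carved out of the bit space left unused by the real O_* open flags. A quick illustration (Python; the flag values here are made up, not dd's actual ones):

```python
def ffs_mask(x):
    """Isolate the lowest set bit of x (0 if x == 0),
    the same identity dd's FFS_MASK macro uses."""
    return x ^ (x & (x - 1)) if x else 0

# Claim two "private" flag bits disjoint from an existing flag set,
# mirroring how O_FULLBLOCK and then O_NOCACHE are chosen.
used = 0b10110110             # stand-in for the union of real O_* flags
free = ~used & 0xFFFFFFFF     # bits not taken by real flags
flag1 = ffs_mask(free)        # lowest free bit
flag2 = ffs_mask(free ^ flag1)  # next free bit after removing flag1
```

Clearing the claimed bit and reapplying the mask yields the next free bit, which is exactly the v, v2 progression in the enum above.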

[PATCH] doc: group dd conv= options that are actually flags

2011-02-25 Thread Pádraig Brady
This requires reorganizing translations a little,
but I think it's worth it.
From db8405e00f30d794dfa91761cd101278f10008dc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Fri, 25 Feb 2011 08:16:23 +
Subject: [PATCH] doc: group dd conv= options that are actually flags

* src/dd.c (usage): Move 'sync' up with other data transformation
options.  Having it alongside 'fsync' and 'fdatasync' is
particularly confusing.  Also the double line description of
the 'sync' option, serves as a visual break from the flag
type options that follow.
* doc/coreutils.texi (dd invocation):  Apply the same grouping
as above, by splitting the conv= table in two.
---
 doc/coreutils.texi |   32 +++-
 src/dd.c   |   12 +---
 2 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/doc/coreutils.texi b/doc/coreutils.texi
index ea35afe..9f6c734 100644
--- a/doc/coreutils.texi
+++ b/doc/coreutils.texi
@@ -8062,22 +8062,29 @@ Swap every pair of input bytes.  @sc{gnu} @command{dd}, unlike others, works
 when an odd number of bytes are read---the last byte is simply copied
 (since there is nothing to swap it with).
 
-@item noerror
-@opindex noerror
-@cindex read errors, ignoring
-Continue after read errors.
+@item sync
+@opindex sync @r{(padding with @acronym{ASCII} @sc{nul}s)}
+Pad every input block to size of @samp{ibs} with trailing zero bytes.
+When used with @samp{block} or @samp{unblock}, pad with spaces instead of
+zero bytes.
 
-@item nocreat
-@opindex nocreat
-@cindex creating output file, avoiding
-Do not create the output file; the output file must already exist.
+@end table
+
+The following ``conversions'' are really file flags
+and don't affect internal processing:
 
+@table @samp
 @item excl
 @opindex excl
 @cindex creating output file, requiring
 Fail if the output file already exists; @command{dd} must create the
 output file itself.
 
+@item nocreat
+@opindex nocreat
+@cindex creating output file, avoiding
+Do not create the output file; the output file must already exist.
+
 The @samp{excl} and @samp{nocreat} conversions are mutually exclusive.
 
 @item notrunc
@@ -8085,11 +8092,10 @@ The @samp{excl} and @samp{nocreat} conversions are mutually exclusive.
 @cindex truncating output file, avoiding
 Do not truncate the output file.
 
-@item sync
-@opindex sync @r{(padding with @acronym{ASCII} @sc{nul}s)}
-Pad every input block to size of @samp{ibs} with trailing zero bytes.
-When used with @samp{block} or @samp{unblock}, pad with spaces instead of
-zero bytes.
+@item noerror
+@opindex noerror
+@cindex read errors, ignoring
+Continue after read errors.
 
 @item fdatasync
 @opindex fdatasync
diff --git a/src/dd.c b/src/dd.c
index daddc1e..6d9db04 100644
--- a/src/dd.c
+++ b/src/dd.c
@@ -499,18 +499,16 @@ Each CONV symbol may be:\n\
   block pad newline-terminated records with spaces to cbs-size\n\
   unblock   replace trailing spaces in cbs-size records with newline\n\
   lcase change upper case to lower case\n\
-), stdout);
-  fputs (_(\
-  nocreat   do not create the output file\n\
-  excl  fail if the output file already exists\n\
-  notrunc   do not truncate the output file\n\
   ucase change lower case to upper case\n\
   swab  swap every pair of input bytes\n\
+  sync  pad every input block with NULs to ibs-size; when used\n\
+with block or unblock, pad with spaces rather than NULs\n\
 ), stdout);
   fputs (_(\
+  excl  fail if the output file already exists\n\
+  nocreat   do not create the output file\n\
+  notrunc   do not truncate the output file\n\
   noerror   continue after read errors\n\
-  sync  pad every input block with NULs to ibs-size; when used\n\
-with block or unblock, pad with spaces rather than NULs\n\
   fdatasync  physically write output file data before finishing\n\
   fsync likewise, but also write metadata\n\
 ), stdout);
-- 
1.7.4



bug#7362: dd strangeness

2011-02-25 Thread Pádraig Brady
On 02/02/11 13:23, Pádraig Brady wrote:
 This looks like another candidate to auto enable fullblock for.
 https://bugzilla.redhat.com/show_bug.cgi?id=614605
 I.E. oflag=direct

Attached is a proposed solution to this.
I'm worried about the last condition though
where we enable 'fullblock' when both count and bs are specified.

For example this would still work:

# Output first 2 parts
$ (echo part1; sleep 1; echo part2; sleep 1; echo discard) |
  dd count=2 obs=1 2>/dev/null
part1
part2

However this would not:

# Output first 2 parts, each being up to 4096 bytes
$ (echo part1; sleep 1; echo part2; sleep 1; echo discard) |
  dd count=2 ibs=4096 obs=1 2>/dev/null
part1
part2
discard

So how contrived is the last example,
given how brittle such a construct is?

cheers,
Pádraig.
diff --git a/doc/coreutils.texi b/doc/coreutils.texi
index ea35afe..9167537 100644
--- a/doc/coreutils.texi
+++ b/doc/coreutils.texi
@@ -8089,7 +8089,7 @@ Do not truncate the output file.
 @opindex sync @r{(padding with @acronym{ASCII} @sc{nul}s)}
 Pad every input block to size of @samp{ibs} with trailing zero bytes.
 When used with @samp{block} or @samp{unblock}, pad with spaces instead of
-zero bytes.
+zero bytes.  This implies the @samp{fullblock} flag.
 
 @item fdatasync
 @opindex fdatasync
@@ -8135,8 +8135,8 @@ output file to be truncated before being appended to.
 @cindex concurrent I/O
 Use concurrent I/O mode for data.  This mode performs direct I/O
 and drops the @acronym{POSIX} requirement to serialize all I/O to the same file.
-A file cannot be opened in CIO mode and with a standard open at the
-same time.
+A file cannot be opened in CIO mode and with a standard open at the same time.
+This implies the @samp{fullblock} flag.
 
 @item direct
 @opindex direct
@@ -8146,6 +8146,7 @@ Note that the kernel may impose restrictions on read or write buffer sizes.
 For example, with an ext4 destination file system and a linux-based kernel,
 using @samp{oflag=direct} will cause writes to fail with @code{EINVAL} if the
 output buffer size is not a multiple of 512.
+This implies the @samp{fullblock} flag.
 
 @item directory
 @opindex directory
diff --git a/src/dd.c b/src/dd.c
index daddc1e..5b56970 100644
--- a/src/dd.c
+++ b/src/dd.c
@@ -1075,6 +1075,19 @@ scanargs (int argc, char *const *argv)
   conversions_mask |= C_TWOBUFS;
 }
 
+  /* Enable 'fullblock' as one wouldn't want random
+ padding applied, when reading from a pipe for example.  */
+  if (conversions_mask & C_SYNC)
+input_flags |= O_FULLBLOCK;
+  /* Enable 'fullblock' with 'direct' or 'cio' as again if reading from
+ a pipe, we're constrained in how we write to output.  */
+  else if ((input_flags | output_flags) & (O_DIRECT | O_CIO))
+input_flags |= O_FULLBLOCK;
+  /* Enable 'fullblock' if we're reading a specific number of blocks,
+ with a specific block size.  */
+  else if (max_records && max_records != (uintmax_t) -1 && input_blocksize)
+input_flags |= O_FULLBLOCK;
+
   if (input_blocksize == 0)
 input_blocksize = DEFAULT_BLOCKSIZE;
   if (output_blocksize == 0)


[PATCH] dd: don't discard data with if=fullblock

2011-02-28 Thread Pádraig Brady
I just noticed that dd may discard read data
in the presence of I/O errors.
Would something like this suffice?

cheers,
Pádraig.

diff --git a/src/dd.c b/src/dd.c
index 1a0e177..a1d47ff 100644
--- a/src/dd.c
+++ b/src/dd.c
@@ -811,12 +811,28 @@ static ssize_t
 iread_fullblock (int fd, char *buf, size_t size)
 {
   ssize_t nread = 0;
+  static int iread_errno;
+
+  /* Return a pending error.  */
+  if (iread_errno)
+{
+  errno = iread_errno;
+  iread_errno = 0;
+  return -1;
+}

   while (0 < size)
 {
   ssize_t ncurr = iread (fd, buf, size);
   if (ncurr < 0)
-return ncurr;
+{
+  if (nread > 0)
+{
+  iread_errno = errno;
+  ncurr = nread;
+}
+  return ncurr;
+}
   if (ncurr == 0)
 break;
   nread += ncurr;

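The idea in the patch (keep the data already read, and replay the error on the next call) can be sketched outside dd; this Python reader class and its names are mine, standing in for iread_fullblock():

```python
class FullBlockReader:
    """Accumulate reads until `size` bytes are gathered; if a read fails
    after some data arrived, return the partial data now and replay the
    error on the next call, so no already-read data is discarded."""

    def __init__(self, read_fn):
        self.read_fn = read_fn      # like iread(): returns bytes, b'' at EOF,
        self.pending_error = None   # or raises OSError

    def read(self, size):
        if self.pending_error is not None:
            err, self.pending_error = self.pending_error, None
            raise err
        buf = b""
        while len(buf) < size:
            try:
                chunk = self.read_fn(size - len(buf))
            except OSError as e:
                if buf:                 # don't discard data already read
                    self.pending_error = e
                    return buf
                raise
            if not chunk:               # EOF
                break
            buf += chunk
        return buf

# Demo: a source that yields some data, then fails, then recovers.
_events = [b"ab", OSError("simulated read error"), b"cd"]

def _fake_read(n):
    ev = _events.pop(0)
    if isinstance(ev, OSError):
        raise ev
    return ev[:n]

reader = FullBlockReader(_fake_read)
first = reader.read(4)        # partial data survives the error
try:
    reader.read(4)            # the deferred error surfaces here
    deferred = False
except OSError:
    deferred = True
third = reader.read(2)        # reading continues afterwards
```

The key property, as in the C patch, is that an error never masks data that was already successfully read in the same full-block request.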


auto merging extents

2011-03-07 Thread Pádraig Brady
I was wondering about adding fallocate() to cp,
to efficiently allocate the destination before writing.
With that in mind, I think it would be beneficial
to auto merge extents, so that fragments in the
source were not propagated to the dest?

This should also be more efficient even without fallocate()
as demonstrated by running the attached on my ext3 file system:

$ dd if=/dev/zero count=50 bs=100 of=file.fa

$ strace -c -e read,write cp-old file.fa file.fa.cp
% time seconds  usecs/call callserrors syscall
-- --- --- - - 
 86.670.017686   7  2363   write
 13.330.002721   1  2372   read
-- --- --- - - 
100.000.020407  4735   total

$ strace -c -e read,write cp-new file.fa file.fa.cp
% time seconds  usecs/call callserrors syscall
-- --- --- - - 
 85.760.019382  13  1535   write
 14.240.003218   2  1544   read
-- --- --- - - 
100.000.022600  3079   total

Hmm, I wonder should we get extent_scan_read() to loop until
all are read, rather than requiring loops outside?
As well as simplifying users, it would allow us to merge
extents whose info spans the 4K buffer provided?
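Stated independently of the C code, the merge rule is simple: coalesce extents that are logically contiguous and carry identical flags. A Python sketch of just that rule (function name is mine):

```python
def merge_extents(extents):
    """Coalesce (logical, length, flags) extents that are adjacent and
    share flags, the merge rule proposed for extent_scan_read()."""
    merged = []
    for logical, length, flags in extents:
        if merged:
            last_logical, last_length, last_flags = merged[-1]
            # Same flags and no gap: extend the previous extent.
            if last_flags == flags and last_logical + last_length == logical:
                merged[-1] = (last_logical, last_length + length, flags)
                continue
        merged.append((logical, length, flags))
    return merged
```

Extents separated by a hole, or differing in flags, are deliberately kept apart, since those distinctions are exactly what cp needs to reproduce the file structure at the destination.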

cheers,
Pádraig.
From 8d7984f20bd26cbbbef6e6aeae5e66e515670422 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Mon, 7 Mar 2011 08:34:35 +
Subject: [PATCH] copy: merge similar extents before processing

* src/extent-scan.c (extent_scan_read):  Merge extents that vary
only in size, so that we may process them more efficiently.
This will be especially useful when we introduce fallocate()
to allocate extents in the destination.
---
 src/extent-scan.c |   26 --
 1 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/src/extent-scan.c b/src/extent-scan.c
index 1ba59db..c416586 100644
--- a/src/extent-scan.c
+++ b/src/extent-scan.c
@@ -85,23 +85,37 @@ extent_scan_read (struct extent_scan *scan)
   scan->ei_count = fiemap->fm_mapped_extents;
   scan->ext_info = xnmalloc (scan->ei_count, sizeof (struct extent_info));
 
-  unsigned int i;
+  unsigned int i, si = 0;
   for (i = 0; i < scan->ei_count; i++)
 {
   assert (fm_extents[i].fe_logical <= OFF_T_MAX);
 
-  scan->ext_info[i].ext_logical = fm_extents[i].fe_logical;
-  scan->ext_info[i].ext_length = fm_extents[i].fe_length;
-  scan->ext_info[i].ext_flags = fm_extents[i].fe_flags;
+  if (si && scan->ext_info[si-1].ext_flags == fm_extents[i].fe_flags
+  && (scan->ext_info[si-1].ext_logical + scan->ext_info[si-1].ext_length
+  == fm_extents[i].fe_logical))
+{
+  /* Merge previous with last.  */
+  scan->ext_info[si-1].ext_length += fm_extents[i].fe_length;
+}
+  else
+{
+  scan->ext_info[si].ext_logical = fm_extents[i].fe_logical;
+  scan->ext_info[si].ext_length = fm_extents[i].fe_length;
+  scan->ext_info[si].ext_flags = fm_extents[i].fe_flags;
+  si++;
+}
 }
 
-  i--;
-  if (scan->ext_info[i].ext_flags & FIEMAP_EXTENT_LAST)
+  scan->ei_count = si;
+
+  si--;
+  if (scan->ext_info[si].ext_flags & FIEMAP_EXTENT_LAST)
 {
   scan->hit_final_extent = true;
   return true;
 }
 
+  i--;
   scan->scan_start = fm_extents[i].fe_logical + fm_extents[i].fe_length;
 
   return true;
-- 
1.7.4



Re: auto merging extents

2011-03-10 Thread Pádraig Brady
On 09/03/11 18:23, Jim Meyering wrote:
 Pádraig Brady wrote:
 -  i--;
 -  if (scan->ext_info[i].ext_flags & FIEMAP_EXTENT_LAST)
 +  scan->ei_count = si;
 +
 +  si--;
 +  if (scan->ext_info[si].ext_flags & FIEMAP_EXTENT_LAST)
  {
    scan->hit_final_extent = true;
    return true;
  }
 
 +  i--;
   scan->scan_start = fm_extents[i].fe_logical + fm_extents[i].fe_length;
 
 The above is all fine, but unless you know that scan-ei_count
 is always positive, seeing i and si used as indices right
 after being decremented may make you think there's a risk
 of accessing some_buffer[-1].
 
 What do you think about adding an assertion, either on scan-ei_count
 before the loop, or on i and/or si after it?

Yes, it's not immediately obvious that i and si >= 1,
but it's guaranteed by the early return when fiemap->fm_mapped_extents==0.
Because of that return, an assert is overkill I think.
Actually I think we can simplify further by just
using a pointer to the last extent_info entry we're updating.

I also noticed that we shouldn't care about
the FIEMAP_EXTENT_LAST flag when merging.

Both changes are in the latest version attached.

cheers,
Pádraig.
From 561c261ea5a9d602b752a837b307bcc397b9d6cc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Mon, 7 Mar 2011 08:34:35 +
Subject: [PATCH] copy: merge similar extents before processing

* src/extent-scan.c (extent_scan_read):  Merge adjacent extents
that vary only in size, so that we may process them more efficiently.
This will be especially useful when we introduce fallocate()
to allocate extents in the destination.
---
 src/extent-scan.c |   35 ---
 1 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/src/extent-scan.c b/src/extent-scan.c
index 1ba59db..d32574a 100644
--- a/src/extent-scan.c
+++ b/src/extent-scan.c
@@ -85,24 +85,37 @@ extent_scan_read (struct extent_scan *scan)
   scan->ei_count = fiemap->fm_mapped_extents;
   scan->ext_info = xnmalloc (scan->ei_count, sizeof (struct extent_info));
 
-  unsigned int i;
+  unsigned int i, si = 0;
+  struct extent_info *last_ei IF_LINT ( = NULL);
+
   for (i = 0; i < scan->ei_count; i++)
 {
   assert (fm_extents[i].fe_logical <= OFF_T_MAX);
 
-  scan->ext_info[i].ext_logical = fm_extents[i].fe_logical;
-  scan->ext_info[i].ext_length = fm_extents[i].fe_length;
-  scan->ext_info[i].ext_flags = fm_extents[i].fe_flags;
+  if (si && last_ei->ext_flags ==
+  (fm_extents[i].fe_flags & ~FIEMAP_EXTENT_LAST)
+  && (last_ei->ext_logical + last_ei->ext_length
+  == fm_extents[i].fe_logical))
+{
+  /* Merge previous with last.  */
+  last_ei->ext_length += fm_extents[i].fe_length;
+}
+  else
+{
+  last_ei = scan->ext_info + si;
+  last_ei->ext_logical = fm_extents[i].fe_logical;
+  last_ei->ext_length = fm_extents[i].fe_length;
+  last_ei->ext_flags = fm_extents[i].fe_flags;
+  si++;
+}
 }
 
-  i--;
-  if (scan->ext_info[i].ext_flags & FIEMAP_EXTENT_LAST)
-{
-  scan->hit_final_extent = true;
-  return true;
-}
+  scan->ei_count = si;
 
-  scan->scan_start = fm_extents[i].fe_logical + fm_extents[i].fe_length;
+  if (last_ei->ext_flags & FIEMAP_EXTENT_LAST)
+scan->hit_final_extent = true;
+  else
+scan->scan_start = last_ei->ext_logical + last_ei->ext_length;
 
   return true;
 }
-- 
1.7.4



Re: auto merging extents

2011-03-10 Thread Pádraig Brady
On 10/03/11 14:13, Jim Meyering wrote:
 Have you thought of adding a test that would check for
 the merged extents in the result, say using filefrag?

That's a bit tricky, at least until I merge fallocate() support,
and even then we're at the behest of the file system as to
what actually happens in the dest.

Perhaps I could do something with strace.
I'll think about it.

cheers,
Pádraig.



[PATCH] maint: use wcswidth from gnulib

2011-03-13 Thread Pádraig Brady
FYI, after a suggestion from Bruno Haible, I applied:

* gl/lib/mbsalign.c (rpl_wcswidth): Remove this in favor
of the equivalent wcswidth replacement in gnulib.
* bootstrap.conf: Depend on the wcswidth module.




Re: parallel sort at fault? [Re: [PATCH] tests: avoid gross inefficiency...

2011-03-14 Thread Pádraig Brady
On 12/03/11 15:34, Jim Meyering wrote:
 Jim Meyering wrote:
 Jim Meyering wrote:
 Jim Meyering wrote:
 Running make -j25 check on a nominal-12-core F14 system would
 cause serious difficulty leading to an OOM kill -- and this is brand new.
 It worked fine yesterday.  I tracked it down to all of the make processes
 working on the built_programs.list (in src/Makefile.am) rule

 built_programs.list:
@echo $(bin_PROGRAMS) $(bin_SCRIPTS) | tr ' ' '\n' \
  | sed -e 's,$(EXEEXT)$$,,' | $(ASSORT) -u | tr '\n' ' '

 Which made me realize we were running that submake over 400 times,
 once per test scripts (including skipped ones).  That's well worth
 avoiding, even if it means a new temporary file.

 I don't know the root cause of the OOM-kill (preceded by interminable
 minutes of a seemingly hung and barely responsive system) or why it started
 happening today (afaics, none of the programs involved was updated),
 but this does fix it...

 FYI,
 I've tracked this down a little further.
 The horrid performance (hung system and eventual OOM-kill)
 are related to the use of sort above.  This is the definition:

 ASSORT = LC_ALL=C sort

 If I revert my earlier patch and instead simply
 insist that sort not do anything in parallel,

 ASSORT = LC_ALL=C sort --parallel=1

 then there is no hang, and things finish in relatively good time.

 I don't have a good stand-alone reproducer yet
 and am out of time for today.
 
 After updating to a new F14 kernel,
 I never managed to reproduce that, but maybe
 that's just a coincidence...
 
 Well, right after writing that, I realized the key to the misbehavior:
 sort was reading from a *pipe*:

 # This finishes right away, reading from input file k:
 seq 99 > k && for i in $(seq 33); do LC_ALL=C timeout 1 sort k > /dev/null &
 done

 # When reading from a pipe, it's a very different story:
 # Without the timeout 7 prefix, the following would render an N-core
 # system (5<N) unusable for many minutes.  As it is, be prepared:
 # my system goes unresponsive after 1 second, and doesn't return until timeout.
 for i in $(seq 33); do seq 88|LC_ALL=C timeout 7 sort --para=5 >/dev/null & done

 Occasionally, the above jobs all complete quickly.

 My first question was why were *any* processes being spawned to handle
 such a small input file.  The first answer is in the first hunk:

 diff --git a/src/sort.c b/src/sort.c
 index 13954cb..b9316e7 100644
 --- a/src/sort.c
 +++ b/src/sort.c
 @@ -112,9 +112,8 @@ struct rlimit { size_t rlim_cur; };
  /* Heuristic value for the number of lines for which it is worth
 creating a subthread, during an internal merge sort, on a machine
 that has processors galore.  Currently this number is just a guess.
 -   This value must be at least 4.  We don't know of any machine where
 -   this number has any practical effect.  */
 -enum { SUBTHREAD_LINES_HEURISTIC = 4 };
 +   This value must be at least 4.  */
 +enum { SUBTHREAD_LINES_HEURISTIC = 32 * 1024 };

  /* The number of threads after which there are
 diminishing performance gains.  */

 The old definition of SUBTHREAD_LINES_HEURISTIC meant that any group
 of 5 or more lines would be split in two and sorted via two (or more)
 separate threads.  Thus, with just 40 lines, you could get the maximum
 of 8 threads working.  That is obviously not efficient, unless lines are
 so pathologically long that the cost of comparing two of them approaches
 the cost of creating a new process.

 With the above, sort would use a more reasonable number.
 Tests on high-end hardware and using very short lines
 suggest that a value like 200,000 would still be conservative.
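(To illustrate the heuristic under discussion: a minimal Python sketch — not coreutils' sort.c, and merge_sort here is an invented stand-in — of an internal merge sort that only spawns a subthread once a half exceeds the line threshold.)

```python
import heapq
import threading

# With fewer lines than this, spawning a thread costs more than it saves.
SUBTHREAD_LINES_HEURISTIC = 128 * 1024

def merge_sort(lines, nthreads):
    """Sort lines, using a subthread only when the input is big enough."""
    if nthreads < 2 or len(lines) < SUBTHREAD_LINES_HEURISTIC:
        return sorted(lines)
    mid = len(lines) // 2
    halves = {}
    def sort_half(key, part):
        halves[key] = merge_sort(part, nthreads // 2)
    t = threading.Thread(target=sort_half, args=("lo", lines[:mid]))
    t.start()                     # one half in a subthread...
    sort_half("hi", lines[mid:])  # ...the other in this thread
    t.join()
    return list(heapq.merge(halves["lo"], halves["hi"]))
```

With the old threshold of 4, even a 16-line input would recurse into subthreads; with 128K the sketch stays single-threaded for anything a human would type.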
 
 Here's a complete patch for that:
 
From b2db5675bfeb3fe7e87bcc12934f34057ee26704 Mon Sep 17 00:00:00 2001
 From: Jim Meyering meyer...@redhat.com
 Date: Thu, 10 Feb 2011 08:48:27 +0100
 Subject: [PATCH] sort: spawn fewer threads for small inputs
 
 * src/sort.c (SUBTHREAD_LINES_HEURISTIC): Do not spawn a new thread
 for every 4 lines.  Increase this from 4 to 128K.  128K lines seems
 appropriate for a 5-year-old dual-core laptop, but it is too low for
 some common combinations of short lines and/or newer systems.
 * NEWS (Bug fixes): Mention it.
 ---
  NEWS   |9 ++---
  src/sort.c |   16 ++--
  2 files changed, 16 insertions(+), 9 deletions(-)
 
 diff --git a/NEWS b/NEWS
 index 3157ef2..5770410 100644
 --- a/NEWS
 +++ b/NEWS
@@ -4,13 +4,16 @@ GNU coreutils NEWS                                  -*- outline -*-
 
  ** Bug fixes
 
 -  du would infloop when given --files0-from=DIR
 -  [bug introduced in coreutils-7.1]
 -
cut could segfault when invoked with a user-specified output
delimiter and an unbounded range like -f1234567890-.
[bug introduced in coreutils-5.3.0]
 
 +  du would infloop when given --files0-from=DIR
 +  [bug introduced in coreutils-7.1]
 +
 +  sort no longer spawns 7 worker threads to sort 16 lines
 +  [bug introduced in coreutils-8.6]
 +
wc would dereference a NULL 

Re: parallel sort at fault? [Re: [PATCH] tests: avoid gross inefficiency...

2011-03-14 Thread Pádraig Brady
On 14/03/11 15:31, Pádraig Brady wrote:
 On 12/03/11 15:34, Jim Meyering wrote:
 Jim Meyering wrote:
 Jim Meyering wrote:
 Jim Meyering wrote:
 Running make -j25 check on a nominal-12-core F14 system would
 cause serious difficulty leading to an OOM kill -- and this is brand new.
 It worked fine yesterday.  I tracked it down to all of the make processes
 working on the built_programs.list (in src/Makefile.am) rule

 built_programs.list:
   @echo $(bin_PROGRAMS) $(bin_SCRIPTS) | tr ' ' '\n' \
 | sed -e 's,$(EXEEXT)$$,,' | $(ASSORT) -u | tr '\n' ' '

 Which made me realize we were running that submake over 400 times,
 once per test scripts (including skipped ones).  That's well worth
 avoiding, even if it means a new temporary file.

 I don't know the root cause of the OOM-kill (preceded by interminable
 minutes of a seemingly hung and barely responsive system) or why it 
 started
 happening today (afaics, none of the programs involved was updated),
 but this does fix it...

 FYI,
 I've tracked this down a little further.
 The horrid performance (hung system and eventual OOM-kill)
 are related to the use of sort above.  This is the definition:

 ASSORT = LC_ALL=C sort

 If I revert my earlier patch and instead simply
 insist that sort not do anything in parallel,

 ASSORT = LC_ALL=C sort --parallel=1

 then there is no hang, and things finish in relatively good time.

 I don't have a good stand-alone reproducer yet
 and am out of time for today.

 After updating to a new F14 kernel,
 I never managed to reproduce that, but maybe
 that's just a coincidence...

 Well, right after writing that, I realized the key to the misbehavior:
 sort was reading from a *pipe*:

 # This finishes right away, reading from input file k:
 seq 99 > k && for i in $(seq 33); do LC_ALL=C timeout 1 sort k > /dev/null & done

 # When reading from a pipe, it's a very different story:
 # Without the timeout 7 prefix, the following would render an N-core
 # system (5<N) unusable for many minutes.  As it is, be prepared:
 # my system goes unresponsive after 1 second, and doesn't return until timeout.
 for i in $(seq 33); do seq 88|LC_ALL=C timeout 7 sort --para=5 >/dev/null & done

I still get bad performance for the above with SUBTHREAD_LINES_HEURISTIC=128K

So as you suggested, the large mem allocation when reading from a pipe is a problem,
and in fact seems to be the main problem.  Now given the memory isn't actually used
it shouldn't be such an issue, but if one has MALLOC_PERTURB_ set, then it is used,
and it has a huge impact. Compare:

$ for i in $(seq 33); do seq 88| MALLOC_PERTURB_=  timeout 2 sort --para=1 >/dev/null & done
$ for i in $(seq 33); do seq 88| MALLOC_PERTURB_=1 timeout 2 sort --para=1 >/dev/null & done

So we should be more conservative in memory allocation in sort,
and be more aligned with CPU cache sizes than RAM sizes I suspect.
This will be an increasing problem as we tend to run more in ||.
It would be interesting I think to sort first by L1 cache size,
then by L2, etc, but as a first pass, a more sensible default
of 8MB or so seems appropriate.

As a general note, MALLOC_PERTURB_ should be unset when benchmarking anything
to do with `sort`.

cheers,
Pádraig.



Re: parallel sort at fault? [Re: [PATCH] tests: avoid gross inefficiency...

2011-03-16 Thread Pádraig Brady
On 16/03/11 12:07, Jim Meyering wrote:
 Pádraig Brady wrote:
 I've not fully analyzed this yet, and I'm not saying it's wrong,
 but the above change seems to have a large effect on thread
 creation when smaller buffers are used (you hinted previously
 that being less aggressive with the amount of mem used by default
 might be appropriate, and I agree).

 Anyway with the above I seem to need a buffer size more
 than 10M to have any threads created at all.

 Testing the original 4 lines heuristic with the following, shows:
 (note I only get > 4 threads after 4M of input, not 7 for 16 lines
 as indicated in NEWS).

 $ for i in $(seq 30); do
   j=$((2**$i))
   yes | head -n$j > t.sort
   strace -c -e clone sort --parallel=16 t.sort -o /dev/null 2>&1 |
 join --nocheck-order -a1 -o1.4,1.5 - /dev/null |
 sed -n "s/\([0-9]*\) clone/$j\t\1/p"
 done
 4   1
 8   2
 16  3
 32  4
 64  4
 128 4
 ...
 1048576 4
 2097152 4
 4194304 8
 8388608 16

 When I restrict the buffer size with '-S 1M', many more threads
 are created (a max of 16 in parallel with the above command)
 4   1
 8   2
 16  3
 32  4
 64  4
 128 4
 256 4
 512 4
 1024    4
 2048    4
 4096    4
 8192    4
 16384   8
 32768   12
 65536   24
 131072  44
 262144  84
 524288  167
 1048576 332
 2097152 660
 4194304 1316
 8388608 2628

 After increasing the heuristic to 128K, I get _no_ threads until -S > 10M
 and this seems to be independent of line length.
 
 Thanks for investigating that.
 Could strace -c -e clone be doing something unexpected?
 When I run this (without my patch), it would use 8 threads:
 
 seq 16 > in; strace -ff -o k ./sort --parallel=16 in -o /dev/null
 
 since it created eight k.PID files:
 
 $ ls -1 k.*|wc -l
 8
 
 Now, for such a small file, it does not call clone at all.
 

Oops, yep I forgot to add -f to strace.
So NEWS is correct.

# SUBTHREAD_LINES_HEURISTIC = 4
$ for i in $(seq 22); do
j=$((2**$i))
yes | head -n$j > t.sort
strace -f -c -e clone ./sort --parallel=16 t.sort -o /dev/null 2>&1 |
join --nocheck-order -a1 -o1.4,1.5 - /dev/null |
sed -n "s/\([0-9]*\) clone/$j\t\1/p"
  done
4   1
8   3
16  7
32  15
64  15
128 15
256 15
512 15
1024    15
2048    15
4096    15
8192    15
16384   15
32768   15
65536   15
131072  15
262144  15
524288  15
1048576 15
2097152 15
4194304 30
8388608 45

# As above, but add -S1M option to sort

4   1
8   3
16  7
32  15
64  15
128 15
256 15
512 15
1024    15
2048    15
4096    15
8192    15
16384   30
32768   45
65536   90
131072  165
262144  315
524288  622
1048576 1245
2097152 2475
4194304 4935
8388608 9855

With SUBTHREAD_LINES_HEURISTIC=128k and -S1M option to sort we get no threads as
nlines never gets above 12787 (there looks to be around 80 bytes overhead per line?).
Only when -S >= 12M do we get nlines high enough to create threads.
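As a back-of-the-envelope check of that figure (my arithmetic, not sort's actual accounting):

```python
# A 1 MiB buffer (-S1M) topping out at 12787 lines of "y\n" (2 bytes each)
# implies roughly 80 bytes of per-line bookkeeping (struct line etc.).
buffer_bytes = 1024 * 1024
max_lines = 12787
line_bytes = 2  # "y\n" as produced by yes(1)
overhead = buffer_bytes / max_lines - line_bytes
print(round(overhead))  # → 80
```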

cheers,
Pádraig.



Re: parallel sort at fault? [Re: [PATCH] tests: avoid gross inefficiency...

2011-03-16 Thread Pádraig Brady
On 16/03/11 15:32, Jim Meyering wrote:
 Pádraig Brady wrote:

 With SUBTHREAD_LINES_HEURISTIC=128k and -S1M option to sort we get no threads as
 nlines never gets above 12787 (there looks to be around 80 bytes overhead per line?).
 Only when -S >= 12M do we get nlines high enough to create threads.
 
 Thanks for pursuing this.
 Here's a proposed patch to address the other problem.
 It doesn't have much of an effect (any?) on your
 issue when using very little memory, but when a sort user
 specifies -S1M, I think they probably want to avoid the
 expense (memory) of going multi-threaded.
 
 What do you think?
 
 -#define INPUT_FILE_SIZE_GUESS (1024 * 1024)
 +#define INPUT_FILE_SIZE_GUESS (128 * 1024)

This does seem a bit like whack-a-mole
but at least we're lining them up better.

The above gives reasonable threading by default,
while reducing the large upfront malloc.

$ for len in 1 79; do
for i in $(seq 22); do
  lines=$((2**$i))
  yes $(printf %${len}s)| head -n$lines > t.sort
  strace -f -c -e clone ./sort --parallel=16 t.sort -o /dev/null 2>&1 |
  join --nocheck-order -a1 -o1.4,1.5 - /dev/null |
  sed -n "s/\([0-9]*\) clone/$lines\t\1/p"
done
  done

#lines  threads (2 byte lines)
--
131072  1
262144  3
524288  7
1048576 15
2097152 15
4194304 15
8388608 15

#lines  threads (80 byte lines)
--
131072  1
262144  3
524288  7
1048576 15
2097152 15
4194304 22
8388608 60

cheers,
Pádraig.



Re: [PATCH] copy: adjust fiemap handling of sparse files

2011-03-16 Thread Pádraig Brady
On 16/03/11 19:18, Mike Frysinger wrote:
 On Wednesday, March 16, 2011 11:26:40 Pádraig Brady wrote:
 On 14/02/11 06:05, Jim Meyering wrote:
 Pádraig Brady wrote:
 For my own reference, I can probably skip performance
 tests on (older) btrfs by checking `filefrag` output.
 Also in `cp`, if we see an unwritten extent we should
 probably create a hole rather than an empty allocation
 by default.  It's better to decrease file allocation
 than increase it.

 Makes sense.

 Thinking a bit more about it, I'm not sure I should do the above,
 as one would be more surprised if cp by default introduced holes into
 a copy of a fully allocated file.

 The previously applied patch changes behavior on BTRFS on Fedora 14,
 in that it will convert a file with holes to a fully allocated one
 with zeros. But that is due to an already fixed bug in BTRFS
 where it reports holes as empty extents.

 I'm inclined to leave this as is, because this stuff is tricky enough,
 without introducing work arounds for buggy file systems.
 There is no data loss in this case, just bigger files
 (which one can avoid with cp --sparse=always).
 Also it will not be common to overlap future coreutils releases,
 with older BTRFS implementations.
 
 well, in the bug report i was working with, we were seeing data loss.  i 
 wonder if it'd be possible to detect the fs/kernel version and error out when 
 versions are found that are known to be broken ?

That was a different issue.

It would be nice, but I don't think it would be practical to try and detect
and work around such file system issues in cp.

There are always going to be teething problems with a large
body of new logic, so I think the best approach with file systems
is to increase trust in them gradually over time.

Personally I'd consider BTRFS for my backup drive at present,
which it should be particularly good at given its snapshot
capabilities.

cheers,
Pádraig.



Re: [PATCH] copy: adjust fiemap handling of sparse files

2011-03-17 Thread Pádraig Brady
On 17/03/11 07:24, Mike Frysinger wrote:
 On Wednesday, March 16, 2011 19:55:56 Pádraig Brady wrote:
 On 16/03/11 19:18, Mike Frysinger wrote:
 well, in the bug report i was working with, we were seeing data loss.  i
 wonder if it'd be possible to detect the fs/kernel version and error out
 when versions are found that are known to be broken ?

 That was a different issue.

 It would be nice, but I don't think it would be practical to try and detect
 and work around such file system issues in cp.

 There are always going to be teething problems with a large
 body of new logic, so I think the best approach with file systems
 is to increase trust in them gradually over time.
 
 while this is true, i'd understand if the issue were bugs in coreutils' cp.
 they'd get fixed and a new release made, no problem.  but we're talking about
 silent errors in the kernel, so anyone running a newer coreutils with a kernel
 from two days ago is going hit issues.  if we were looking at basic read/write
 functionality, i'd understand not bothering, but we're talking purely about
 optimization paths in coreutils' cp.
 
 on the Gentoo side, i guess i'll run with a hack like so:
 
 --- a/src/copy.c
 +++ b/src/copy.c
 @@ -22,6 +22,7 @@
  #include <sys/ioctl.h>
  #include <sys/types.h>
  #include <selinux/selinux.h>
 +#include <sys/utsname.h>
  
  #if HAVE_HURD_H
  # include <hurd.h>
 @@ -930,7 +931,32 @@ copy_reg (char const *src_name, char const *dst_name,
   the file is a hole.  */
    if (x->sparse_mode == SPARSE_AUTO && S_ISREG (src_open_sb.st_mode)
        && ST_NBLOCKS (src_open_sb) < src_open_sb.st_size / ST_NBLOCKSIZE)
 -make_holes = true;
 +{
 +  make_holes = true;
 +
 +# ifdef __linux__
 +  static int safe_kernel = -1;
 +
 +  if (safe_kernel == -1)
 +{
 +  struct utsname name;
 +
 +  safe_kernel = 1;
 +
  +  if (uname (&name) != -1 && strncmp (name.release, "2.6.", 4) == 0)
 +{
 +  int kver = atoi (name.release + 4);
 +
  +  /* ext4 & btrfs are known to be broken */
  +  if (kver < 38)
 +safe_kernel = 0;
 +}
 +}
 +
 +  if (safe_kernel == 0)
 +make_holes = false;
 +# endif
 +}
  #endif
  }
  
 -mike

This is the kind of patch appropriate for a distro,
as they may or may not have a fix backported to their kernel.

I presume you're referring to coreutils bug 8001.
I didn't realise ext4 was also affected.
Is this specific to the compress flag?
What was the specific fix to btrfs /or ext4 in 2.6.38?

Also note the make_holes heuristic variable is no
longer used in extent_copy due to this patch which
came after the 8.10
http://git.sv.gnu.org/gitweb/?p=coreutils.git;a=commitdiff;h=275c831a

cheers,
Pádraig



Re: [PATCH] copy: adjust fiemap handling of sparse files

2011-03-17 Thread Pádraig Brady
On 17/03/11 19:29, Mike Frysinger wrote:
 On Thursday, March 17, 2011 05:32:33 Pádraig Brady wrote:
 This is the kind of patch appropriate for a distro,
 as they may or may not have a fix backported to their kernel.
 
 Gentoo, being a source distribution, has no kernel requirement.  people are 
 free to run on pretty much anything.  we dont have the luxury of saying 
 upgrade your kernel package and reboot.  especially since we also support 
 running Gentoo inside of other distros (often times as non-root user) where 
 people dont have the ability at all to upgrade.
 
 I presume you're referring to coreutils bug 8001.
 I didn't realise ext4 was also affected.
 Is this specific to the compress flag?
 What was the specific fix to btrfs /or ext4 in 2.6.38?
 
 this is the ext4-specific issue i'm avoiding:
 http://www.spinics.net/lists/linux-ext4/msg23430.html

Ah that issue. That's what the syncing required in this test
was working around:
http://git.sv.gnu.org/gitweb/?p=coreutils.git;a=blob;f=tests/cp/sparse-fiemap;h=a2460a08;hb=HEAD
Note Fedora 15 about to be released is using the new fiemap code in cp,
but is also based on 2.6.38 and so will have the fix soon,
if it does not already have it.

 
 Also note the make_holes heuristic variable is no
 longer used in extent_copy due to this patch which
 came after the 8.10
 http://git.sv.gnu.org/gitweb/?p=coreutils.git;a=commitdiff;h=275c831a
 
 i'll worry about it once 8.11 is released ;)
 -mike

You might be safer to just bypass extent_copy for kernels < 2.6.38
I'm 30:70 for doing that in upstream coreutils

cheers,
Pádraig.




Re: [PATCH] copy: adjust fiemap handling of sparse files

2011-03-18 Thread Pádraig Brady
On 17/03/11 23:00, Pádraig Brady wrote:
 On 17/03/11 19:29, Mike Frysinger wrote:
 On Thursday, March 17, 2011 05:32:33 Pádraig Brady wrote:
 This is the kind of patch appropriate for a distro,
 as they may or may not have a fix backported to their kernel.

 Gentoo, being a source distribution, has no kernel requirement.  people are 
 free to run on pretty much anything.  we dont have the luxury of saying 
 upgrade your kernel package and reboot.  especially since we also support 
 running Gentoo inside of other distros (often times as non-root user) where 
 people dont have the ability at all to upgrade.

 I presume you're referring to coreutils bug 8001.
 I didn't realise ext4 was also affected.
 Is this specific to the compress flag?
 What was the specific fix to btrfs /or ext4 in 2.6.38?

 this is the ext4-specific issue i'm avoiding:
 http://www.spinics.net/lists/linux-ext4/msg23430.html
 
 Ah that issue. That's what the syncing required in this test
 was working around:
 http://git.sv.gnu.org/gitweb/?p=coreutils.git;a=blob;f=tests/cp/sparse-fiemap;h=a2460a08;hb=HEAD
 Note Fedora 15 about to be released is using the new fiemap code in cp,
 but is also based on 2.6.38 and so will have the fix soon,
 if it does not already have it.

I also now remember this discussion:
http://debbugs.gnu.org/cgi/bugreport.cgi?bug=6131
where FIEMAP_FLAG_SYNC was introduced in cp,
I think to work around this same bug in BTRFS and EXT4.
This flag is ineffective though on ext4 loopback
and thus I needed to add the syncs to the test as ref'd above.

 Also note the make_holes heuristic variable is no
 longer used in extent_copy due to this patch which
 came after the 8.10
 http://git.sv.gnu.org/gitweb/?p=coreutils.git;a=commitdiff;h=275c831a

 i'll worry about it once 8.11 is released ;)
 -mike
 
 You might be safer to just bypass extent_copy for kernels < 2.6.38
 I'm 30:70 for doing that in upstream coreutils

So am I right in thinking FIEMAP_FLAG_SYNC is no longer needed with 2.6.38.
I'm quite worried about implicit syncing in cp actually, where it might
introduce bad performance regressions.
Maybe we could disable this flag if XFS || kernel >= 2.6.38?
Or maybe we might do as you suggest above and disable extent_copy()
completely, for kernels before 2.6.38.
Not an ideal situation at all :(

cheers,
Pádraig.



Re: [PATCH] copy: adjust fiemap handling of sparse files

2011-03-18 Thread Pádraig Brady
On 18/03/11 13:19, Pádraig Brady wrote:
 Bah humbug. Looks like there is no such issue.
 This actually seems like an issue in a coreutils test script,
 which made it seem like the SYNC done by `filefrag -vs` was ineffective.

Proposed fix attached.
My perl is still weak, so I won't apply until someone has reviewed.

thanks,
Pádraig.
From 7e27d28b799f0399bbf79074854fa9967ff7752e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?P=C3=A1draig=20Brady?= p...@draigbrady.com
Date: Sat, 19 Mar 2011 01:22:37 +
Subject: [PATCH] tests: fix the sparse-fiemap test

* tests/filefrag-extent-compare: Merge adjacent extents in
each list before processing, so we correctly account for
split extents in either list.
* tests/cp/sparse-fiemap: Remove the explicit syncing,
which was only changing way extents were arranged,
and thus working around the extent comparison issue
that was seen on ext4 loop back.
---
 tests/cp/sparse-fiemap|   11 +--
 tests/filefrag-extent-compare |   41 ++---
 2 files changed, 35 insertions(+), 17 deletions(-)

diff --git a/tests/cp/sparse-fiemap b/tests/cp/sparse-fiemap
index a2460a0..5f0beb7 100755
--- a/tests/cp/sparse-fiemap
+++ b/tests/cp/sparse-fiemap
@@ -69,12 +69,11 @@ for i in $(seq 1 2 21); do
   -e 'for (1..'$j') { sysseek (*F, $n, 1)' \
  -e ' syswrite (*F, chr($_)x$n) or die $!}' > j1 || fail=1
 
-# Note the explicit fdatasync is used here as
-# it was seen that `filefrag -s` (FIEMAP_FLAG_SYNC) was
-# ineffective on ext4 loopback on Linux 2.6.35.10-72.fc14.i686
-dd if=/dev/null of=j1 conv=notrunc,fdatasync
+# Note there is an implicit sync performed by cp to
+# work around bugs in EXT4 and BTRFS before Linux 2.6.38
+# Note also the -s parameter to the second filefrag below
+# for the same reasons.
 cp --sparse=always j1 j2 || fail=1
-dd if=/dev/null of=j2 conv=notrunc,fdatasync
 
 cmp j1 j2 || fail=1
 if ! filefrag -v j1 | grep -F extent > /dev/null; then
@@ -98,7 +97,7 @@ for i in $(seq 1 2 21); do
 
   # exclude the physical block numbers; they always differ
   filefrag -v j1 > ff1 || framework_failure
-  filefrag -v j2 > ff2 || framework_failure
+  filefrag -vs j2 > ff2 || framework_failure
   { f ff1; f ff2; } | $PERL $abs_top_srcdir/tests/filefrag-extent-compare ||
 fail=1
 fi
diff --git a/tests/filefrag-extent-compare b/tests/filefrag-extent-compare
index 3c095d5..fa04d9e 100644
--- a/tests/filefrag-extent-compare
+++ b/tests/filefrag-extent-compare
@@ -28,31 +28,50 @@ my @b;
 foreach my $i (0..@A/2-1) { $a[$i] = { L_BLK => $A[2*$i], LEN => $A[2*$i+1] } };
 foreach my $i (0..@B/2-1) { $b[$i] = { L_BLK => $B[2*$i], LEN => $B[2*$i+1] } };
 
+# Merge adjacent extents in passed array
+# Discounted extents have length set to 0
+sub merge_extents($)
+{
+  my @e = @{ $_[0] };
+
+  my $a = 1;
+  my $b = 0;
+  while (1)
+{
+  (!defined $e[$a] || !defined $e[$b])
+and last;
+  ($e[$b]->{L_BLK} + $e[$b]->{LEN} == $e[$a]->{L_BLK})
+and $e[$b]->{LEN} += $e[$a]->{LEN}, $e[$a]->{LEN} = 0, next;
+  $b=$a;
+}
+  continue
+{
+  ++$a;
+}
+}
+
+merge_extents(\@a);
+merge_extents(\@b);
+
 my $i = 0;
 my $j = 0;
 while (1)
   {
+# skip discounted extents
+defined $a[$i] && $a[$i]->{LEN} == 0 and ++$i, next;
+defined $b[$j] && $b[$j]->{LEN} == 0 and ++$j, next;
+
 !defined $a[$i] && !defined $b[$j]
   and exit 0;
 defined $a[$i] && defined $b[$j]
   or die "\@a and \@b have different lengths, even after adjustment\n";
 ($a[$i]->{L_BLK} == $b[$j]->{L_BLK}
   && $a[$i]->{LEN} == $b[$j]->{LEN})
-  and next;
-($a[$i]->{LEN} < $b[$j]->{LEN}
-  && exists $a[$i+1] && $a[$i]->{LEN} + $a[$i+1]->{LEN} == $b[$j]->{LEN})
-  and ++$i, next;
-exists $b[$j+1] && $a[$i]->{LEN} == $b[$i]->{LEN} + $b[$i+1]->{LEN}
-  and ++$j, next;
+  and $i++, $j++, next;
 die "differing extent:\n"
   . "  [$i]=$a[$i]->{L_BLK} $a[$i]->{LEN}\n"
   . "  [$j]=$b[$j]->{L_BLK} $b[$j]->{LEN}\n"
   }
-continue
-  {
-++$i;
-++$j;
-  }
 
 ### Setup GNU style for perl-mode and cperl-mode.
 ## Local Variables:
-- 
1.7.4

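(The merging idea in the patch above, sketched standalone in Python rather than the test's Perl; extents are (logical_block, length) pairs, and merge_extents here is a hypothetical helper that coalesces adjacent ones so an extent split by one filesystem still compares equal to the other's single extent.)

```python
def merge_extents(extents):
    """Coalesce logically adjacent (start, length) extents in order."""
    merged = []
    for start, length in extents:
        # Adjacent when the previous extent ends exactly where this begins.
        if merged and merged[-1][0] + merged[-1][1] == start:
            prev_start, prev_len = merged[-1]
            merged[-1] = (prev_start, prev_len + length)
        else:
            merged.append((start, length))
    return merged
```

Comparing merge_extents(a) against merge_extents(b) then sidesteps the split-extent false mismatch the old comparison logic tried to special-case.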


Re: [PATCH] copy: adjust fiemap handling of sparse files

2011-03-19 Thread Pádraig Brady
On 19/03/11 12:07, Jim Meyering wrote:
 Pádraig Brady wrote:
 On 18/03/11 13:19, Pádraig Brady wrote:
 Bah humbug. Looks like there is no such issue.
 This actually seems like an issue in a coreutils test script,
 which made it seem like the SYNC done by `filefrag -vs` was ineffective.

 Proposed fix attached.
 
 Thanks!  Here's a new version of your patch:
 
 I've adjusted your new function to modify the actual arrays,
 which lets us simplify the caller.
 I've also added two die-message "$ME: " prefixes I'd forgotten.

Cool. I'd considered splice but was worried about side effects,
which I might not notice due to my Perl inexperience.

 Is filefrag's -s option portable enough for us to rely on it?

I'll add a -s param to the first use of filefrag, to catch that.
If FIEMAP_FLAG_SYNC is not implemented by the kernel then the
test will fail anyway as cp won't be doing its internal sync.

thanks for the perl tutorial,
much appreciated!

Pádraig.


