Re: Found a bug in tar-header at "ustar" magic indicator field

2020-10-27 Thread Joerg Schilling
Juha Mäkinen  wrote:

> Hello everyone,
>
> I looked at the contents of the tar-file in hex editor.
> What I noticed is that the tar header is formed incorrectly. It's supposed to
> contain the "ustar" TMAGIC indicator at offset 257 followed by a null. But
> what I get is "ustar" followed by a space (0x20). Then at offset 263, it's
> supposed to be the TVERSION version "00", but I am seeing a space (0x20)
> and then a null (0x00).
>
> Tar creates this kind of header:
> 400  \0   u   s   t   a   r  \0   m   a   k   i   n   e   n
>
> The correct form should be:
> 400  \0   u   s   t   a   r  \0   0   0   \0   m   a   k   i   n   e   n

GNU tar is not POSIX by default, and changing this would cause other tar 
implementations to become unable to adapt to the incompatibilities in the 
archive format created by GNU tar.
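For readers who want to check their own archives: the difference is visible in bytes 257-264 of the header block. POSIX ustar stores the 6-byte magic "ustar\0" followed by the two-byte version "00", while GNU tar's traditional format stores "ustar" plus space, space, NUL, which matches the bytes reported above. A minimal sketch (the function name is mine, not from any tar source):

```python
def classify_magic(header: bytes) -> str:
    """Classify the magic/version bytes of a 512-byte tar header block."""
    magic = header[257:263]    # 6-byte magic field
    version = header[263:265]  # 2-byte version field
    if magic == b"ustar\0" and version == b"00":
        return "posix-ustar"   # POSIX.1-1988 ustar: "ustar" NUL "00"
    if magic == b"ustar " and version == b" \0":
        return "gnu-oldgnu"    # old GNU format: "ustar" space space NUL
    if header[257:265] == b"\0" * 8:
        return "v7"            # pre-1988 V7 format has no magic at all
    return "unknown"

hdr = bytearray(512)
hdr[257:265] = b"ustar  \0"            # the bytes reported in the mail
print(classify_magic(bytes(hdr)))      # gnu-oldgnu
```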

Jörg

-- 
 EMail: jo...@schily.net (home) Jörg Schilling D-13353 Berlin
  Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: Possible file date related bug in modern GNU tar?

2020-10-26 Thread Joerg Schilling
Peter Dyballa  wrote:

> The Apple supplied versions of /usr/bin/gnutar in PPC Tiger (Mac OS X 
> 10.4.11) and PPC Leopard (Mac OS X 10.5.8), 1.14 resp. 1.15.1, show since 
> some time warnings like this
>
>   /usr/bin/gnutar: meson-0.55.3/COPYING: implausibly old time stamp 
> 1970-01-01 01:00:00
>
> that I noticed recently when untarring an archive (meson-0.55.3.tar.gz in 
> this case, but there is about a dozen more TAR files that show this). These 
> "implausibly old time stamps" are also preserved by using means like Python's 
> pip installer.

A timestamp value of 0 is not forbidden. The POSIX.1-1988 tar format supports 
any non-negative value.

The tar you are using to unpack does not support the POSIX.1-2001 tar 
extensions, and the tar program that was used to create the archive could be 
considered buggy, since it encoded the value as 0 even though the right value 
is correctly representable in an 11-digit unsigned octal number.
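For illustration: the ustar mtime field is 12 bytes, holding 11 octal digits plus a terminating space or NUL, so any timestamp from 0 up to 8^11 - 1 is representable. A sketch (the helper name is invented):

```python
def encode_mtime(ts: int) -> bytes:
    """Encode a non-negative timestamp as a ustar mtime field:
    11 octal digits plus a terminating space (12 bytes total)."""
    if not 0 <= ts < 8 ** 11:
        raise ValueError("not representable in 11 octal digits")
    return b"%011o " % ts

print(encode_mtime(0))  # b'00000000000 ': timestamp 0, a perfectly legal value
```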

> Of course up-to-date GNU tar 1.32 shows a reasonable date '2020-08-15 18:27'.

This is because recent GNU tar supports reading the POSIX.1-2001 tar 
extensions.

Jörg




Re: I am so sad.

2020-08-21 Thread Joerg Schilling
Vince Eccles  wrote:

> I have advised UNIX developers in the past on catching stupid things people 
> do. I do stupid things from time to time.  I was a tester for a number of the 
> old Digital Corporation OS developments. For example, they put some simple 
> catches in compilers to disallow certain command requests.
>
> You don't have to do anything, but I just thought you might like to know and 
> think about a warning when -f is used and a file name is not consistent with 
> tarball names.

As mentioned before: there is already long-existing practice on how to avoid 
this kind of problem:

-   The official CLI from tar does not use a dash before options,
but rather calls the characters key letters. Tar does not use
a CLI that is compatible with typical UNIX programs, but with
"ar" and the old "ps" interface from the 1970s.

-   Forbid truncating existing files, i.e. abort if the CLI
is risky and the file exists with a size > 0.

-   When called with a dash in front of the key letters, do not
permit non-boolean key letters to be combined in a single
argument.
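The second rule above can be sketched as follows. This is only an illustration of the check, not star's actual code, and the helper name is invented:

```python
import os

def open_archive_for_create(path, risky_cli):
    """Refuse to clobber an existing non-empty file when the dangerous
    historic CLI (leading-dash key letters) was used."""
    if risky_cli and os.path.exists(path) and os.path.getsize(path) > 0:
        raise SystemExit(path + ": exists and is not empty; refusing to truncate")
    return open(path, "wb")
```

With this guard, mistyping the archive name as an existing source file aborts instead of silently destroying the file.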

Jörg

-- 
 EMail: jo...@schily.net (home) Jörg Schilling D-13353 Berlin
 joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: I am so sad.

2020-08-20 Thread Joerg Schilling
Vince Eccles  wrote:

> Dear sirs,
>
> I worked all day on debugging some important coding in FORTRAN. It was tested 
> and working. I decided it was time to tar up the new code and send it to a 
> backup machine.
>
> I intended to type:
>
> tar -zcf src.tar.gz ./various_*/*.f90 *.f90
>
> which would have places all the fortran codes in a compressed tar file that I 
> would transfer to a new machine.
>
> However, I typed:
>
> tar -zcf ./various_/*.f90 *.f90
>
> and the tar blasted all of my fortran files. I had a backup from two days 
> ago, but the lost effort was horrific.

First, if this really destroyed _all_ f90 files, then there would be a bad bug 
in gtar. I expect only the first f90 file to be destroyed.

In general, this is a result of the way the historical tar from 1977 
implemented command line parsing.

While gtar implements a method that claims to be compatible with that 
historical way, it is still not 100% compatible with a real tar, but on the 
other side it continues to have this CLI parsing problem.

For more than 35 years, star has implemented a new, safe method that does not 
permit certain use cases. If called as "star", this is definitely impossible. 
If called as "tar", star still prevents your problem from happening, since it 
remembers that it has been called with the dangerous historic CLI and thus 
requires the output file either to not exist or to be of zero size.

There are several levels for the security in star:

   tar cf archive ...

uses the official tar CLI and the related compatibility converter contains the
rule mentioned above.

   tar -cf archive ...

is an undocumented CLI that "tar" does not need to support, and for this 
reason the option parser in such a case does not permit combining options in a 
single argument unless they are boolean flags. 'f' does not match that 
category.
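The rule described here, rejecting clustered dash-options that contain a non-boolean key letter, can be sketched like this (the flag set and function are illustrative, not star's actual tables):

```python
BOOLEAN_FLAGS = set("cxtvz")  # illustrative subset; 'f' is absent: it takes an argument

def check_clustered(arg: str) -> None:
    """Reject a clustered dash-option group containing a non-boolean
    key letter (a sketch of the rule, not star's actual parser)."""
    letters = arg.lstrip("-")
    if len(letters) > 1:
        for ch in letters:
            if ch not in BOOLEAN_FLAGS:
                raise SystemExit("-%s takes an argument and may not be "
                                 "combined in %r" % (ch, arg))

check_clustered("-zc")  # fine: both letters are boolean
# check_clustered("-zcf") would abort instead of silently treating the
# next word on the command line as the archive name.
```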

So there is a way to prevent similar problems when using the right software.

Jörg




Re: Unexpected behaviour when creating a tarball with -h: symlinks are replaced by hardlinks

2020-08-13 Thread Joerg Schilling
Joerg Schilling  wrote:

> And finally, there is 
>
> -link-data
>
> see page 35, which is allowed for the POSIX.1-1988 TAR archive format and
> later versions. -link-data tells star to archive the data again for
> hardlinked files.

Sorry for the mistake, star supports -link-data only if the archive format is

-Hexustar

I now seem to remember that permitting link data was added with POSIX.1-2001.

BTW: star is part of the schilytools at:

http://sourceforge.net/projects/schilytools/files/

Jörg




Re: Unexpected behaviour when creating a tarball with -h: symlinks are replaced by hardlinks

2020-08-13 Thread Joerg Schilling
"bug|gnu...@nanl.de"  wrote:

> However what I'm experiencing is - while -h indeed does not preserve the
> symlinks - it replaces them with hard links instead of actually
> de-referencing them and including the original file.
>
> This in particular is an issue, if you want to create a tarball for a
> filesystem which doesn't support any of such link types - e.g. FAT.
>
> So when extracting the tarball (created with -h) on a FAT filesystem,
> I'm experiencing errors like:
>
> tar: dir/target: Cannot hard link to ?dir/origin?: Operation not permitted
>
> And nothing ends up where a symlink was present when creating the archive.

Given that this is a common problem on non-POSIX platforms, 20 years ago star 
introduced support for unpacking archives on non-POSIX platforms by using the 
options:

-copylinks

-copyhardlinks

-copysymlinks

-copydlinks

See the star man page http://schilytools.sourceforge.net/man/man1/star.1.html;
the related parts are currently on page 17.

There is also the option

-hardlinks

see page 33, that tells star to unpack symlinks as hardlinks, which works
on platforms like BeOS or Haiku.

And finally, there is 

-link-data

see page 35, which is allowed for the POSIX.1-1988 TAR archive format and 
later versions. -link-data tells star to archive the data again for hardlinked 
files.

Jörg




Re: GNU tar fails to restore listed-incremental backup

2020-02-14 Thread Joerg Schilling
Deweloper  wrote:

> I'm using GNU tar for incremental backup using --listed-incremental
> option. Unfortunately there seems to be a problem with restoring such
> backup if following operations are performed:
> 1. Rename directory d1 to d3
> 2. Rename directory d2 to d1
> Backup created afterwards contains:
>
> R./d2 T./d1
> R./d1 T./d3
> R./d2 T./d1
>
> Restoring it fails with:
>
> tar: Cannot rename './d2' to './d1': Directory not empty
> tar: Exiting with failure status due to previous errors

This bug has existed since the "incremental feature" was first announced for
GNU tar in ~1992.

The bug has been reported at least 4 times since September 2004, and it will 
probably never be fixed, as this is a problem that results from the basic 
concept for incremental dumps used in GNU tar. So it is impossible to make it 
work without introducing a completely new and different incremental 
concept... 

I recommend using star, which has been verified to correctly restore 
incremental star dumps thousands and thousands of times with real-world data.

You of course need to use star for your backups as well, as star uses a 
different concept for incrementals that was derived from the definitely 
working concept in ufsdump/ufsrestore from BSD Unix from 1981.

Check: http://schilytools.sourceforge.net/man/man1/star.1.html

Incremental dumps are explained starting from approx. page 53

BTW: Incrementals made with star are also smaller than incrementals made with 
gtar.

Jörg




Re: rmt(8) spells "MTIOCOP" of "MTIOCTOP"

2020-02-13 Thread Joerg Schilling
Rick van Rein  wrote:

> Hi,
>
> See subject, I think that the ioctl identifier is misspelled in the rmt
> man page.
>
> In general, I was hoping to find enough information to simulate this
> protocol in a simple daemon over a protected link, but the man page is
> insufficient for that, notably for the I and S commands.  I'll dig deeper.

Be careful: the rmt implementation from gtar has been outdated for approx. 30 
years.

It implements the old protocol version from 1981 instead of the new protocol 
version 1 from 1989.

The old protocol is unable to abstract from binary incompatibilities in the 
MTIOCTOP ioctl() implemented e.g. on Linux, compared to a typical UNIX. This 
typically causes a "tape rewind" command to be mapped to e.g. "erase tape" on 
the remote side, and this usually results in junk values retrieved from the 
"status" command.


The implementation from gtar is also slower than the current enhanced version 1 
implementation.

See:

http://schilytools.sourceforge.net/man/man1/rmt.1.html

for the documentation of the RMT daemon, and:

http://schilytools.sourceforge.net/man/man3/librmt.3.html

http://schilytools.sourceforge.net/man/man3/rmtgetconn.3.html

http://schilytools.sourceforge.net/man/man3/rmtopen.3.html

http://schilytools.sourceforge.net/man/man3/rmtinit.3.html

http://schilytools.sourceforge.net/man/man3/rmtstatus.3.html

for a documentation of the application counterpart in librmt.

The source code is in schilytools:

http://sourceforge.net/projects/schilytools/files/

Jörg




Re: [GNU tar 1.32] testsuite: 91 210 failed

2019-11-13 Thread Joerg Schilling
Sergey Poznyakoff  wrote:

> Hi Lloyd,
>
> Thanks for your report.
>
> The test 91 fails because the shell command 'ulimit -n 10' fails:

This is expected to fail if you want to use a shell afterwards, since the 
shell uses an fd like 19 to save stdin.

Jörg




Re: [Bug-tar] [PATCH] tar.1: minor fixes

2019-03-25 Thread Joerg Schilling
Kir Kolyshkin  wrote:

> 1. Add missing .TP before TAR_SUBCOMMAND, otherwise it is merged with
> the description of the previous option.
>
> 2. Fix typesetting of pax(1) utility (last word was in bold).
>
> 3. Remove .B from examples (some were in bold, some not,
> and by default bold is not used in examples).

Examples in UNIX man pages are usually bold!

BTW: .EX and .EE are not valid UNIX man macros.

Jörg




Re: [Bug-tar] gtar's ACL support is still unusable

2019-03-18 Thread Joerg Schilling
Because of that misimplementation in GNU tar, I reworked the star.4 man page
and added information on when the additional numeric values may be omitted and 
how the ACL entries are restored:

...
  If the user name or group name field is numeric because
  the related user has no entry in the passwd/group data-
  base at the time the archive is created, the additional
  numeric field may be omitted.

  This  is  an   example   of   the   format   used   for
  SCHILY.acl.access  (a space has been inserted after the
  equal sign and lines are broken [marked with '\' ]  for
  readability, additional fields in bold):

  SCHILY.acl.access= user::rwx,user:lisa:r-x:502, \
 group::r-x,group:toolies:rwx:102, \
 mask::rwx,other::r--x

  If and only if the user ID 502 and group ID 102 have no
  passwd/group  entry,  our  example acl entry looks this
  way:

  SCHILY.acl.access= user::rwx,user:502:r-x, \
 group::r-x,group:102:rwx:, \
 mask::rwx,other::r--x

  The added numerical  user  and  group  identifiers  are
  essential  when  restoring  a  system completely from a
  backup, as initially  the  name-to-identifier  mappings
  may  not be available, and then file ownership restora-
  tion would not work.

  When the archive is unpacked and the  ACL  entries  for
  the  files  are  restored, first the additional numeric
  fields are removed and an attempt is  made  to  restore
  the  resulting  ACL  data.   If that fails, the numeric
  fields are extracted and  the  related  user  name  and
  group  name  fields are replaced by the numeric fields,
  before the ACL restore is retried.
...
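The retry procedure described in the excerpt can be sketched as two string transformations (the function names are mine, not star's):

```python
def strip_numeric(acl: str) -> str:
    """Drop the additional 4th (numeric id) field from each entry:
    'user:lisa:r-x:502' -> 'user:lisa:r-x'."""
    out = []
    for entry in acl.split(","):
        fields = entry.split(":")
        out.append(":".join(fields[:3]) if len(fields) == 4 else entry)
    return ",".join(out)

def substitute_numeric(acl: str) -> str:
    """Fallback: replace the name field by the numeric id:
    'user:lisa:r-x:502' -> 'user:502:r-x'."""
    out = []
    for entry in acl.split(","):
        fields = entry.split(":")
        if len(fields) == 4:
            fields[1] = fields[3]  # numeric id replaces the unknown name
            fields = fields[:3]
        out.append(":".join(fields))
    return ",".join(out)
```

A restorer would first try the stripped form with acl_from_text() and, on failure, retry with the numeric substitution, exactly as the man-page text above describes.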

Jörg




Re: [Bug-tar] gtar's ACL support is still unusable

2019-03-15 Thread Joerg Schilling
Pavel Raiskup  wrote:

> Thanks for the report! +cc acl-devel
>
> On Thursday, March 14, 2019 2:51:10 PM CET Joerg Schilling wrote:
> > Trying to unpack the reference archives for the POSIX ACL proposal from
> > 1993 that was withdrawn in 1997 results in something like:
> > 
> > /tmp/tar-1.31/src/tar --acls -xpf acl-test3.tar.gz 
> > /tmp/tar-1.31/src/tar: default/dir2: Warnung: Funktion acl_from_text 
> > fehlgeschlagen
> > /tmp/tar-1.31/src/tar: default/dir3: Warnung: Funktion acl_from_text 
> > fehlgeschlagen
> > /tmp/tar-1.31/src/tar: default: Warnung: Funktion acl_from_text 
> > fehlgeschlagen
> > /tmp/tar-1.31/src/tar: default: Warnung: Funktion acl_from_text 
> > fehlgeschlagen
>
> This is because we use acl_from_text() without pre-filtering, which
> doesn't accept the fourth UID/GID number value in e.g.
> ACL record 'user:joe:rwx:503' (stored in the archive):
>
>$ tar -t -vv --acls -f acl-test5.tar
>...
>drwxrwxr-x+ gruenbacher/assis 0 2001-11-04 04:43 default/dir2/
>  a: user::rwx,user:joe:rwx:503,group::r-x,mask::rwx,other::r-x
>...
>$ tar -xf --acls -f acl-test5.tar
>...
>tar: default/dir2: Warning: Cannot acl_from_text: Invalid argument
>...

This is the way it was negotiated in 2001 with Andreas Gruenbacher from SuSE, 
and how star has implemented it since 2001.

The background is that the Solaris tar implementation missed numeric entries 
and thus could not restore ACLs that refer to named entries that do not exist 
on the platform used for extraction.

This came out of a discussion with Andreas Gruenbacher during summer 2001 on 
how to define a text format that can support the master ACL-on-UFS 
implementation from Solaris that was the base for the withdrawn POSIX 
proposal, as well as the differing Linux implementation and AIX, HP-UX, IRIX, 
Tru64...

The text format for the historical ACL system in star has never changed since 
the implementation was first introduced into SCCS in November 2001.

In order to allow other people to implement a compatible interface, we created 
the reference archives at the same time and made them available via ftp.

After some discussions, Sun even added a related ACL_APPEND_ID flag for
acl_totext() in Spring 2005 to match that star format.

> I did not notice this so far, since we don't add the fourth numeric

Well, it has been in the star.4 man page since October 2003, with bold 
numeric fields. 
See
sccs get -p -A -m -r1.3 star.4

output:
...
1.3 joerg   03/10/07.B SCHILY.acl.access
1.3 joerg   03/10/07(a space has been inserted after the 
equal sign and lines are broken
1.3 joerg   03/10/07[marked with '\e' ] for readability, 
additional fields in bold):
1.3 joerg   03/10/07.sp
1.3 joerg   03/10/07SCHILY.acl.access=  
user::rwx,user:lisa:r\-x:\fB502\fP,\ \e
1.3 joerg   03/10/07
group::r\-x,group:toolies:rwx:\fB102\fP,\ \e
1.3 joerg   03/10/07
mask::rwx,other::r\-\-x
1.3 joerg   03/10/07.sp
1.3 joerg   03/10/07The numerical user and group 
identifiers are essential when restoring a system completely
1.3 joerg   03/10/07from a backup, as initially the 
name-to-identifier mappings may not be available,
1.3 joerg   03/10/07and then file ownership restoration 
would not work.
1.3 joerg   03/10/07.sp
1.3 joerg   03/10/07As the archive format that is used for 
backing up access control lists is compatible
1.3 joerg   03/10/07with the
1.3 joerg   03/10/07.B pax
1.3 joerg   03/10/07archive format, archives created that 
way can be restored by
1.3 joerg   03/10/07.B star
1.3 joerg   03/10/07or a POSIX.1-2001 compliant
1.3 joerg   03/10/07.BR pax .
1.3 joerg   03/10/07Note that programs other than
1.3 joerg   03/10/07.B star
1.3 joerg   03/10/07will ignore the ACL information.
1.1 joerg   03/09/14.TP
1.1 joerg   03/09/14.B SCHILY.acl.default
1.3 joerg   03/10/07The default ACL for a file. See 
1.3 joerg   03/10/07.B SCHILY.acl.access
1.3 joerg   03/10/07for more information.
1.3 joerg   03/10/07.sp
1.3 joerg   03/10/07This is an example of the format used 
for
1.3 joerg   03/10/07.B SCHILY.acl.default
1.3 joerg   03/10/07(a space has been inserted after the 
equal sign and lines are broken
1.3 joerg   03/10/07[marked with 

[Bug-tar] gtar's ACL support is still unusable

2019-03-14 Thread Joerg Schilling
Hi,

I recently mentioned that gtar typically does not support ACLs at all. It e.g. 
does not compile the provided ACL code on any POSIX certified platform.

Now I discovered another problem:

I recently compiled gtar-1.31 on Linux and did some testing. While doing so, I 
discovered that gtar does not unpack any of the ACL reference archives 
correctly.

Trying to unpack the reference archives for the POSIX ACL proposal from 1993
that was withdrawn in 1997 results in something like:

/tmp/tar-1.31/src/tar --acls -xpf acl-test3.tar.gz 
/tmp/tar-1.31/src/tar: default/dir2: Warnung: Funktion acl_from_text 
fehlgeschlagen
/tmp/tar-1.31/src/tar: default/dir3: Warnung: Funktion acl_from_text 
fehlgeschlagen
/tmp/tar-1.31/src/tar: default: Warnung: Funktion acl_from_text fehlgeschlagen
/tmp/tar-1.31/src/tar: default: Warnung: Funktion acl_from_text fehlgeschlagen

If you extract the archive using "star" and then try to create a tar archive 
with ACL support using "gtar", you get an archive that is non-compliant with 
the format definition:

http://schilytools.sourceforge.net/man/man4/star.4.html

The reference archives are here:

http://sf.net/projects/s-tar/files/alpha/acl-test.tar.gz
http://sf.net/projects/s-tar/files/alpha/acl-test2.tar.gz
http://sf.net/projects/s-tar/files/alpha/acl-test3.tar.gz
http://sf.net/projects/s-tar/files/alpha/acl-test4.tar.gz
http://sf.net/projects/s-tar/files/alpha/acl-test5.tar.gz

http://sf.net/projects/s-tar/files/alpha/acl-nfsv4-test.tar.gz
http://sf.net/projects/s-tar/files/alpha/acl-nfsv4-test2.tar.gz
http://sf.net/projects/s-tar/files/alpha/acl-nfsv4-test3.tar.gz
http://sf.net/projects/s-tar/files/alpha/acl-nfsv4-test4.tar.gz
http://sf.net/projects/s-tar/files/alpha/acl-nfsv4-test5.tar.gz

They are also included in the latest schilytools tarball at:

http://sf.net/projects/schilytools/files/schily-2019-03-11.tar.bz2

Jörg




Re: [Bug-tar] Each line in exclude-from=file causes slowdown

2019-02-21 Thread Joerg Schilling
Jørn Skog Odberg  wrote:

> The result are not linear with the numbers of excludes, but it shows 
> that each exclude-line makes a negative impact on performance.

People who are interested in performance usually use star.

The recent version 1.6 is in schilytools.

Jörg




Re: [Bug-tar] rmt filename support make tar vulnerable?

2019-02-05 Thread Joerg Schilling
Sergey Poznyakoff  wrote:

> > Back in January of 2005, Joey Hess pointed out in a bug report against
> > Debian's package of tar that's actually an enhancement request, and as I
>
> Thanks. However, this report is based on a premise that doesn't seem
> valid to me:
>
>   "Anything with a colon will do, though a real rmt volume
>   probably has a path after the colon."
>
> I don't see any reason why the remote archive name must contain an
> absolute file name in it (which, apparently, "path" in the above
> fragment implies). It can quite reasonably refer to a relative one as
> well.

More important issues with gtar & rmt are:

-   The GNU RMT server allows arbitrary names and thus permits using it
as a file transfer protocol for any readable file. The rmt server
from star has had configurable safety filters since 2001.

-   Linux ignores RMTIO command value rules that have existed since 1980,
and since "grmtd" and gtar do not implement RMT protocol version 1,
it is possible to erase a remote tape when you just intend to rewind
it and the OS on the local and remote side are not identical.

It would be nice if gtar could implement modern enhancements...

Jörg




Re: [Bug-tar] 1.31: Test 162 (sparse07.at) fails on FreeBSD due to iconv() differences

2019-01-14 Thread Joerg Schilling
Christian Weisgerber  wrote:

> GNU tar 1.31's regression test #162 (sparse07.at) fails on FreeBSD
> 11 and 12.
>
> This has nothing to do with sparse files.  It concerns unicode file
> names in general.  The underlying problem appears to be a difference
> between GNU iconv() and FreeBSD's iconv().  For a conversion from
> UTF-8 to ASCII, GNU iconv() will return -1 and signal an error if
> the input contains any characters that cannot be represented in
> ASCII.  FreeBSD's iconv() replaces those characters with '?' and
> returns the number of such substitutions.  This latter behavior is
> in agreement with my reading of the POSIX standard on iconv().

Thank you for this hint; it also applies to star.

It will be fixed in the next release, which most likely appears tomorrow 
or a day later in the new schilytools tarball.

The problem seems to be bad wording in the standard that makes it hard to 
see that the return code is > 0 if there are non-identical conversions.

BTW: iconv() from BSD is better than the GNU implementation, which illegally 
reads too deep into the input even though the output size forbids converting 
more characters.
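The two behaviors can be contrasted with a small model. This is a Python stand-in for the C iconv() call, ignoring its stateful, byte-buffer interface:

```python
def iconv_posix(s: str):
    """POSIX-style behavior: substitute '?' for characters that cannot be
    represented in ASCII and return (output, substitution_count)."""
    out, subs = [], 0
    for ch in s:
        if ord(ch) < 128:
            out.append(ch)
        else:
            out.append("?")
            subs += 1
    return "".join(out), subs

def iconv_gnu(s: str):
    """GNU-style behavior as observed in the report: fail outright on the
    first unrepresentable character (models returning (size_t)-1)."""
    for ch in s:
        if ord(ch) >= 128:
            return None, -1
    return s, 0

print(iconv_posix("Mäkinen"))  # ('M?kinen', 1): success with 1 substitution
print(iconv_gnu("Mäkinen"))    # (None, -1): hard failure
```

A caller that only tests for failure, rather than checking for a return value greater than 0, silently accepts the substituted output on BSD while aborting on glibc, which is exactly the test-suite difference reported.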

Jörg




Re: [Bug-tar] 'BZh[1-9]' file in v7 format archive confuses GNU tar

2018-11-27 Thread Joerg Schilling
Michał Górny  wrote:

> Here's another quirk I've found:
>
> $ echo test > BZh5
> $ tar --format=v7 -cf test.tar BZh5
> $ tar -xf test.tar
> bzip2: stdin: compressed data error: bad block header magic
> tar: Child returned status 1
> tar: Error is not recoverable: exiting now
>
> I think the easiest solution would be to copy the trick used
> by libarchive, citing:
>
>   /* After BZh[1-9], there must be either a data block
>* which begins with 0x314159265359 or an end-of-data
>* marker of 0x177245385090. */
>
> However, this wouldn't help with the harder (though less likely to occur
> accidentally) case of:
>
> $ echo test > 'BZh91AY'

An interesting discovery!

I however believe that the format recognition in "star" would not be fooled by 
this and since star first checks for archive formats before it checks for 
compression, this should not be a problem.

BTW: I could not trigger this problem with older gtar versions, when did gtar 
introduce automated compression detection?
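The libarchive heuristic quoted above is easy to sketch: after "BZh" and a digit 1-9, require the six-byte block magic 0x314159265359 or the end-of-stream magic 0x177245385090 before treating the input as bzip2 data:

```python
BLOCK_MAGIC = bytes.fromhex("314159265359")  # start of a compressed data block
EOS_MAGIC = bytes.fromhex("177245385090")    # end-of-stream marker (empty file)

def looks_like_bzip2(data: bytes) -> bool:
    """Apply the libarchive heuristic quoted above to the leading bytes."""
    if len(data) < 10 or not data.startswith(b"BZh"):
        return False
    if not b"1"[0] <= data[3] <= b"9"[0]:
        return False
    return data[4:10] in (BLOCK_MAGIC, EOS_MAGIC)

# A v7 tar header whose member name is "BZh5" no longer looks like bzip2:
assert looks_like_bzip2(b"BZh5" + b"\0" * 508) is False
```

As the report notes, a crafted member name that also embeds those six magic bytes would still slip through; only checking for the archive format before checking for compression, as described above, fully avoids the ambiguity.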

Jörg




Re: [Bug-tar] ACL entries contain comments which may break compatibility

2018-11-26 Thread Joerg Schilling
Michał Górny  wrote:

> Hi,
>
> Thanks for fixing the previous bug I reported.  Sadly, I just managed to
> accidentally find another one.  When ACL mask restricts effective ACL
> entries, getfacl(1) reports the effective permissions as a comment,
> e.g.:
>
> user:nobody:rw-   #effective:r--
>
> It seems that GNU tar writes that comment as part of the pax header,
> and e.g. libarchive does not restore the ACL correctly.
>

Interesting. Before, I thought this code had been created by "looking at" the 
original implementation from "star", but I did not understand why GNU tar 
incorrectly leaves default ACLs on files that have been archived without ACLs.

Now it is obvious that the code was written without looking at the 
original implementation.
Jörg




Re: [Bug-tar] ustar-format archives with 32-char long user/group names are misinterpreted

2018-11-26 Thread Joerg Schilling
Michał Górny  wrote:

> Hello,
>
> I've been tinkering with the tar format limitations and noticed that GNU
> tar (as of v1.30) misbehaves when given ustar-format archive created by
> bsdtar (from libarchive) and using 32-char long user/group name.
>
> The output is:
>
> $ tar -tvf test.tar 
> -rw-r--r-- 
> verylongverylongverylongverylongverylongverylongverylongverylong00 
> /verylongverylongverylongverylong00  5 2018-11-24 09:28 input.txt

Well, this is not a ustar archive, since ustar requires these two strings to 
be null-terminated.

A program, however, should be able to deal with such an incorrect archive.

The standard only permits name, linkname and prefix to be non-null-terminated 
in case the whole string length is used.
Jörg




Re: [Bug-tar] Multi-threaded tar

2018-07-31 Thread Joerg Schilling
"J.R. Heisey"  wrote:

> Greetings,
>
> I was trying to make some processes faster which use tar a bit on large 
> archives.
>
> I was able to use the command line options --use-compress-program pbzip2 and 
> that helped a lot.
>
> I was wondering if anyone has experimented using pthreads in the tar 
> implementation. I don't see any references in version 1.30 of the source or 
> readme I downloaded. Anyone discussed a strategy for using pthreads in 
> previous postings?

Star has forked and run two processes for approx. 30 years.

The first process is for the filesystem I/O and the second process is for the 
archive I/O. Between both, there is a ring buffer with configurable size that 
decouples both tasks. This helps to make star very fast.

Implementing something like this in a TAR implementation that does not yet have 
support for it is highly complex.
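The producer/consumer decoupling can be modeled with two threads and a bounded queue standing in for star's shared-memory ring buffer (star actually uses fork() and shared memory; this sketch only illustrates the idea):

```python
import queue
import threading

def copy_with_fifo(reader, writer, bufsize=8):
    """Decouple 'filesystem' reads from 'archive' writes via a bounded FIFO."""
    fifo = queue.Queue(maxsize=bufsize)  # stands in for star's ring buffer

    def produce():
        for block in reader():           # reader() yields archive blocks
            fifo.put(block)              # blocks when the FIFO is full
        fifo.put(None)                   # end-of-archive marker

    t = threading.Thread(target=produce)
    t.start()
    while True:
        block = fifo.get()
        if block is None:
            break
        writer(block)
    t.join()

out = []
copy_with_fifo(lambda: (b"\0" * 512 for _ in range(4)), out.append)
assert len(out) == 4
```

Because the two sides only meet at the buffer, a slow disk no longer stalls the tape (or vice versa) until the buffer itself fills or drains.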

Jörg




Re: [Bug-tar] paxutils: rmt.c missing va_end()

2018-07-30 Thread Joerg Schilling
Pavel Raiskup  wrote:

> Hi,
>
> please consider fixing rmt_write function:
>
>   rmt_write (const char *fmt, ...)
>   {
> va_list ap;
> va_start (ap, fmt);
> vfprintf (stdout, fmt, ap);
> // missing va_end
> fflush (stdout);
> VDEBUG (10, "S: ", fmt, ap);
> // potential re-use of 'ap' (requires new va_start)
>   }
>
> the first issue is not a problem in GNU/Linux GCC/Glibc ecosystem, but
> still it seems to be good candidate for fix for compatibility.

The biggest problem with the grmt implementation is that it is still just a 
rewrite of the 1981 protocol, and nobody ever tried to implement the current
rmt protocol.

While this protocol was bad in 1981 already, it was not a problem, since 
"everybody" who used it had a VAX-11/780 running UNIX.

Today, we have different CPUs, and it is an anachronism to believe that 
sending raw binary data over the wire still works.

Today, we have Linux, and the Linux people ignored the rules for MTIO opcodes.
As a result, there is no guarantee that the right MTIO opcode is executed on 
the remote side. A simple "rewind" could execute an "erase" on the remote 
side...

If you want to be sure that no strange things happen and that there is a 
working abstraction from CPU and OS, you need to use librmt and the rmt 
command from star.

Jörg




Re: [Bug-tar] use optimal file system block size

2018-07-19 Thread Joerg Schilling
Christian Krause  wrote:

> To clarify: I do not mean to change the **record size**, which would result 
> in an incompatible tar file. I am only interested in the buffer sizes that 
> are used to read from and write to block devices.

This has been noticed.

BTW: for better readability, could you please use a line length of 79 characters
as in the mail RFC? 

> $strace -T -ttt -ff -o tar-1.30-factor-4k.strace tar cbf 4096 data4k.tar data
>
> $ strace-analyzer io tar-1.30-factor-4k.strace.72464 | grep data | column -t
> read   84M  in  1.520   s   (~  55M  /  s)  with  43  ops  (~  2M  /  op,  ~  
> 2M  request  size)  data/blob
> write  86M  in  61.316  ms  (~  1G   /  s)  with  43  ops  (~  2M  /  op,  ~  
> 2M  request  size)  data4k.tar
> ```

Are you mainly interested in the # of "ops" in your output?

>
> Due to changing the **record size**, this creates a different, 
> not-so-compatible tar file:
>
> ```
> $ stat -c %s data.tar data4k.tar
> 88084480
> 90177536
>
> $ md5sum data.tar data4k.tar
> 4477dca65dee41609d43147cd15eea68  data.tar
> 6f4ce17db2bf7beca3665e857cbc2d69  data4k.tar
> ```
>
>
> Please verify: The fact that input buffer and output buffer sizes are the 
> same as the record size is an implementation detail. The input buffer and 
> output buffer sizes could be decoupled from the record size to improve I/O 
> performance without changing the resulting tar file. Decoupling would entail 
> a huge refactoring, like Jörg suggests.

Well, since the related changes were implemented 30 years ago already and
since the star FIFO mode was intentionally made the default from the beginning, 
this is still rock-solid code. It has been tested millions of times and star 
is at least one of the most stable tar implementations, if not the most stable.

If you used the same with star (using default parameters), you would only get
11 "read ops".

If you used "star fs=100m ..." you would only get one read.

If you make performance tests, you'll notice that the IO size reported 
by stat is not the optimum but the smallest size that gives improved 
performance. If you read with even bigger IO sizes, you get better performance 
(see the star results).
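The decoupling being discussed is easy to sketch. The following is a hypothetical illustration, not star's actual FIFO code: data is read in large chunks for throughput but written out in fixed 10 KiB records, so the archive layout stays independent of the read size.

```c
#include <stdio.h>
#include <string.h>

#define RECORD_SIZE (10 * 1024)       /* default tar record: 20 blocks of 512 bytes */
#define READ_SIZE   (2 * 1024 * 1024) /* large reads for throughput */

/* Copy src to dst in RECORD_SIZE units while reading READ_SIZE bytes at a
 * time; the final short record is NUL-padded, as tar requires.  Returns
 * the number of records written. */
static long
copy_records (FILE *src, FILE *dst)
{
  static unsigned char buf[READ_SIZE + RECORD_SIZE];
  size_t pending = 0;
  long records = 0;
  size_t n;

  while ((n = fread (buf + pending, 1, READ_SIZE, src)) > 0)
    {
      pending += n;
      while (pending >= RECORD_SIZE)
        {
          fwrite (buf, 1, RECORD_SIZE, dst);
          memmove (buf, buf + RECORD_SIZE, pending - RECORD_SIZE);
          pending -= RECORD_SIZE;
          records++;
        }
    }
  if (pending > 0)
    {
      memset (buf + pending, 0, RECORD_SIZE - pending);  /* pad last record */
      fwrite (buf, 1, RECORD_SIZE, dst);
      records++;
    }
  return records;
}
```

A real implementation (like star's ring buffer) would run the reader and writer concurrently instead of copying through one flat buffer, but the output bytes are the same.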

> ```
> $ bsdtar --version
> bsdtar 3.2.2 - libarchive 3.2.2 zlib/1.2.8 liblzma/5.0.4 bz2lib/1.0.6
>
> $ strace -T -ttt -ff -o bsdtar-3.2.2-create.strace bsdtar -cf data-bsdtar.tar 
> data
>
> $ strace-analyzer io bsdtar-3.2.2-create.strace.14101 | grep data | column -t
> read   84M  in  388.927  ms  (~  216M  /  s)  with  42ops  (~  2M   /  
> op,  ~  2M   request  size)  data/blob
> write  84M  in  4.854s   (~  17M   /  s)  with  8602  ops  (~  10K  /  
> op,  ~  10K  request  size)  data-bsdtar.tar
> ```

I checked and it seems that "bsdtar" (which differs from "BSD tar") reads 64 KB 
blocks.

This gives slightly better results than gtar but does not give you what you may 
get from star.

Let me give you a simple performance result, run on a FreeBSD 11.1-RELEASE-p10
virtual instance.

I did run all tars several times and reported only the fastest result:

gtar-1.30:

sudo /tmp/tar-1.30/src/tar -cf /dev/zero /usr
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets
42.668127 real 1.901632 user 13.029537 sys 34% cpu 243590+0io 0pf+0w

Note that gtar needs /dev/zero to prevent it from cheating.

bsdtar 3.3.1:

sudo tar -cf /dev/null /usr  
tar: Removing leading '/' from member names
47.094941 real 5.640348 user 13.939351 sys 41% cpu 177123+0io 12pf+0w

star-1.5.4:

sudo star -c -f /dev/null /usr
star: Cannot allocate memory. Cannot lock fifo memory.
star: 175948 blocks + 0 bytes (total of 1801707520 bytes = 1759480.00k).
26.913171 real 1.403554 user 10.688413 sys 44% cpu 174209+0io 5pf+0w

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] use optimal file system block size

2018-07-18 Thread Joerg Schilling
Christian Krause  wrote:

> Dear tar Community,
>
> We are using **tar** at our High-Performance Computing (HPC) at our research 
> institute iDiv. The networked file system serving (scientific) data on our 
> cluster is using a block size of 2 MiB:
>
> ```
> $ mkdir data
> $ dd if=/dev/zero bs=2M count=42 of=data/blob status=none
> $ stat -c %o data/blob
> 2097152
> ```
>
> **tar** does not explicitly use the block size of the file system where the 
> files are located, but, for a reason I don't know (feel free to educate me), 
> 10 KiB:

If you like to stay with the default tape block size, you cannot easily change 
this without a complete rewrite of the gtar source. If you however just change 
the tape block size, this may result in tapes that will not be readable on 
other hardware.

Star did this rewrite nearly 30 years ago; it is done by implementing a 
ring buffer of configurable size. If you have e.g. a fast tape drive, you may 
use the option fs=512m to configure a 512 MB ring buffer or even much more; 
the default size is 8 MB. 

If there is still space in the ring buffer, star may read files with a single
read(2) system call. The tape block size is completely independent from the 
ring buffer size and this is one reason why star is very fast.

 


Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] unmodified files included in incremental tar if link count was changed

2018-07-16 Thread Joerg Schilling
Ralph Corderoy  wrote:

> Hi Peter,
>
> > What if I change the kernel and prevent ctime-changes if only st_nlink
> > was changed?
>
> Wouldn't it be easier, and have less unknown impact, to change the
> program making the hard links to read-link-write ctime, preserving it?

If you changed the kernel to not change ctime in case that a new link
is established to a file, the ctime for the new name would be old as well, since 
there is only one inode behind both file names.

As a result, your incremental backup would not include the new name.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] unmodified files included in incremental tar if link count was changed

2018-07-16 Thread Joerg Schilling
Peter Koch  wrote:

> Dear Joerg:
>
> Thanks for the quick response. In the meantime I have tried
> star, which I had not used before. Wonderful program, but its
> decision on which files will be included in an incremental
> backup seems to be based on the same algorithm that gtar is
> using - i.e. mtime/ctime > date of last backup.
>
> > My guess is that tar does a stat()-call on x/f2 and notices
> > that the st_nlink-value has increased.
>
> This guess was wrong. Neither gtar nor star care about the
> st_nlink count. No wonder I could not find the
> string "st_nlink" in their sources.
>
> So my next guess is: Creating a hardlink will not only change
> the st_nlink value but the ctime value as well. And this makes
> gtar and star believe that the file or its metadata has been
> changed. If ctime has changed, there's no way to find out whether
> this was caused by a real change of the file's metadata (like
> changing the ownership or permissions or acls or whatever) or
> whether just the st_nlink value was changed.

This is how POSIX filesystem semantics is defined.

> What if I change the kernel and prevent ctime-changes if only
> st_nlink was changed?
>
> Would that have any unexpected side-effects?

This would e.g. result in non-working incremental backups.

Maybe I should add that there of course is a reason for not using mtime as a 
base for incremental backups:

Since you can set mtime to any value, you would end up with backups where all 
files that have been extracted by tar or that have been copied with "cp -p" are 
missing from incremental backups.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] unmodified files included in incremental tar if link count was changed

2018-07-13 Thread Joerg Schilling
"bug-...@naev.de"  wrote:

> Dear tar-experts
>
> I observed lots of files that were included in incremental
> tar files without being modified. And it seems that the
> reason is a modified hardlink count of these files.

Not only gtar, but also star adds files in case that the "ctime" of those
files changed.

I am not sure about whether gtar would be able to do so, but at least star is 
theoretically able to archive only the meta data of the file in such a case.

It just has not been enabled because this would require intensive testing.


Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] Problem with gnu tar and ZFS on Linux

2018-06-25 Thread Joerg Schilling
Ralph Corderoy  wrote:

> Hi Jörg,
>
> > Well, the name of the mailing list is "bug-tar" and not "bug-gtar", so
> > people would assume that it is a general tar list and not just a gtar
> > specific list.
>
> But you omitted the address of the mailing list is bug-tar@gnu.org and
> thus most would expect it to be centred on GNU's tar and its bugs.

Well, I repeated a bug report against gnu tar. Let us see whether it results in 
a fix ;-)

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] Problem with gnu tar and ZFS on Linux

2018-06-25 Thread Joerg Schilling
Paul Eggert  wrote:

> The problem with this particular filesystem appears to be that it lies 
> about inode numbers in its snapshots. I suppose GNU tar (and other 
> programs) will have to add a flag to ignore inode numbers too. What a pain.

In case that all inode numbers are different, this filesystem is completely 
unusable. If only the inode number for the root directory is wrong, I cannot 
say without testing whether there is a way to make star deal with that problem,
and I cannot speak for gtar.

This implementation decision is definitely in conflict with the basic ZFS 
design rules, which guarantee that inode numbers > 2147483647 will only be 
created in case that the file system already has 2147483647 files, which 
results in a filesystem size of at least 8 TB and a filesystem that has been 
completely filled up with the smallest possible files.


> By the way, would you mind toning down the ads in your emails? This 
> mailing list is for improving GNU Tar, not for advertising other 
> software. Thanks.

Well, the name of the mailing list is "bug-tar" and not "bug-gtar", so people 
would assume that it is a general tar list and not just a gtar specific list.

Since you asked, I should be more explicit and explain why I have been 
carefully hinting people to use different solutions:

Since 1990, gtar claims that it supports incremental backups and restores but 
it seems that incremental restores never have been tested with the needed care.

When star's incremental support was initially written in September 2004, I did 
run some basic tests in order to check whether star was on the right track and 
in order to compare star with other solutions. It turned out that gtar did not 
pass the basic tests, and I made a bug report against that gtar problem.

The method introduced by ufsdump around 1981 and used as the basic concept for 
star, however, works, and incrementals with star have never caused any problem 
since February 2005, when star incrementals were declared ready for use.

Now, nearly 14 years after the gtar problems were detected and reported,
the basic problem with gtar still exists, and from a quick web search it seems 
that I sent reminders at least in 2011 and 2016 already.

Let me give you a script to reproduce the problem:

/*--*/
if [ "$gtar" ]; then
	#
	# Permit: gtar=/tmp/tar-1.30/src/tar sh gnutarfail.sh
	#
	GT=`"$gtar" --help 2> /dev/null | grep GNU`
else
	GT=`gtar --help 2> /dev/null | grep GNU`
	if [ "$GT" ]; then
		gtar=gtar
	else
		# Some systems have "gtar" installed as "tar"
		GT=`tar --help 2> /dev/null | grep GNU`
		if [ "$GT" ]; then
			gtar=tar
		fi
	fi
fi
if [ -z "$GT" ]; then
	echo No gtar found
	exit 1
fi
echo gtar installed as $gtar
# Preparation complete
#---

cd /tmp
mkdir test.$$
cd test.$$

set -x

mkdir test
mkdir test/dir1
mkdir test/dir2

echo dir1-file > test/dir1/dir1-file
echo dir2-file > test/dir2/dir2-file

$gtar -g/tmp/test.$$/listed-incr -c -f /tmp/test.$$/full.tar test

rm -rf test/dir2
mv test/dir1 test/dir2

$gtar -g/tmp/test.$$/listed-incr -c -f /tmp/test.$$/incremental.tar test

mv test orig

$gtar -x -g/dev/null -f /tmp/test.$$/full.tar
$gtar -x -g/dev/null -f /tmp/test.$$/incremental.tar
/*--*/

If you run that script, you get:

LC_ALL=C sh gnutarfail.sh 
gtar installed as gtar
+ mkdir test 
+ mkdir test/dir1 
+ mkdir test/dir2 
+ echo dir1-file 
+ echo dir2-file 
+ gtar -g/tmp/test.6611/listed-incr -c -f /tmp/test.6611/full.tar test 
+ rm -rf test/dir2 
+ mv test/dir1 test/dir2 
+ gtar -g/tmp/test.6611/listed-incr -c -f /tmp/test.6611/incremental.tar test 
+ mv test orig 
+ gtar -x -g/dev/null -f /tmp/test.6611/full.tar 
+ gtar -x -g/dev/null -f /tmp/test.6611/incremental.tar 
gtar: Cannot rename `test/dir1' to `test/dir2': File exists
gtar: Error exit delayed from previous errors

I am not sure whether it is possible to solve the problem without introducing a 
new incompatible dump format in GNU tar. AFAIK, GNU tar tries to detect and 
understand all changes while creating the archive, using a partial database 
during create. Star detects and understands changes at extract time, using a 
complete database which covers all files.

See e.g.:

https://lists.gnu.org/archive/html/bug-tar/2016-07/msg00026.html

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] Problem with gnu tar and ZFS on Linux

2018-06-22 Thread Joerg Schilling
Pieter Bowman  wrote:

> The problem seems to be caused by the changing of the inode of the
> root of that filesystem.  The inode for the test filesystem's root
> directory is 3, the inode for various snapshots are numbers like:
>
> 28147497177
> 281474976671479
> 281474976673971

So it seems that there is a bug in your specific ZFS snapshot implementation.

The original ZFS snapshot implementation behaves as expected:

stat /pool/home/joerg /pool/home/joerg/.zfs/snapshot/snap

  File: `/pool/home/joerg'
  Size: 149 Blocks: 12 IO Block: 9728   directory
Device: 2d90007h/47775751d  Inode: 4   Links: 62
Access: (0755/drwxr-xr-x)  Uid: (  100/   joerg)   Gid: (0/root)
Access: 2018-06-22 17:53:19.060329104 +0200
Modify: 2018-05-18 11:24:02.368669378 +0200
Change: 2018-05-18 11:24:02.368669378 +0200

  File: `/pool/home/joerg/.zfs/snapshot/snap'
  Size: 149 Blocks: 12 IO Block: 9728   directory
Device: 2d90013h/47775763d  Inode: 4   Links: 62
Access: (0755/drwxr-xr-x)  Uid: (  100/   joerg)   Gid: (0/root)
Access: 2018-06-18 11:09:11.710687095 +0200
Modify: 2018-05-18 11:24:02.368669378 +0200
Change: 2018-05-18 11:24:02.368669378 +0200

You should make a bug report against your ZFS integration.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] Problem with gnu tar and ZFS on Linux

2018-06-22 Thread Joerg Schilling
Andreas Dilger  wrote:

> Keep your old snapshot around after you create the new one, and run "stat(1)"
> on the same file in both snapshots to see if there are any other differences
> besides the device number that may be causing tar to think the file changed.

gtar likes to implement something that is in conflict with a reliable 
filesystem backup. It tries to support directory trees without looking at mount 
point limits. This requires looking at both stat.st_ino and stat.st_dev, since
inodes are unique only on a single filesystem.

Reliable filesystem backups on the other side need to be based on snapshots, and 
snapshots need to create ephemeral st_dev values in order to create the 
appearance of a different filesystem with POSIX semantics.

This does not work well together...

Please have a look at star's incrementals. There are examples that are 
currently starting at page #55. Examples for incrementals are in the section 
INCREMENTAL BACKUPS that currently starts at page #50.


http://schilytools.sourceforge.net/man/man1/star.1.html

Note that star's incrementals are smaller than what you get from gtar:

If you have a full 1TB disk with a single directory tree in its root directory
and rename the top level directory just under the root, you end up with a 10 kB
star incremental.

The same done with gtar needs 1TB of backup space and you would even need a 
2 TB disk to be able to restore the incremental.

BTW: sourceforge has again been telling me for half an hour that it is under 
maintenance that will end in 10 minutes :-(

So be a bit patient when trying to access the man page.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] Problem with gnu tar and ZFS on Linux

2018-06-21 Thread Joerg Schilling
Pieter Bowman  wrote:

> We have been using gnu tar with amanda to backup our ZFS filesystems
> on Solaris (both SPARC and x86) for more than 10 years.  We take a ZFS

Did you ever run a restore operation?

> snapshot (destroying the previous snapshot if it exists), run amanda
> pointing to the snapshot directory.  We are in the process of
> replacing our old Solaris x86 file server with one running CentOS and
> ZFS.  Unfortunately, that project has now stalled because the same
> process that we've been using no longer works.  Every night we end up
> backing up the full filesystem (only three at the moment, but that's
> still hundreds of gigabytes).  I did add the --no-check-device switch,
> but that didn't help.

If the problem is that the stat.st_dev entry of the stat() data differs from
one snapshot to another, I see no problems if you use star(1) for incremental 
backups.

The FreeBSD people use star(1) for incremental ZFS backups since the beginning 
of the ZFS port to FreeBSD.

star only remembers the time stamp of the last and the current incremental and 
decides what to do just based on inode numbers. So in fact star uses the basic 
concept from ufsdump/ufsrestore to find which files have been renamed and which 
files have been removed.

BTW: We did run a daily dump/restore pipeline for 10 years on a larger server 
in order to create a mirrored filesystem and there never has been any problem.

I recommend to use the star source from the latest schilytools.


Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] tar HEAD: difflink.at portability issue

2018-04-27 Thread Joerg Schilling
Christian Weisgerber  wrote:

> I suggest the following fix:
>
> --- /home/naddy/difflink.at   Thu Apr 26 18:58:04 2018
> +++ difflink.at   Thu Apr 26 18:52:59 2018
> @@ -20,7 +20,7 @@
>  mkdir a
>  genfile -f a/x
>  ln -s x a/y
> -ln a/y a/z
> +ln -P a/y a/z

Would you like this to fail on many systems?

The option -P has been added in 2008 and is not yet available everywhere.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-23 Thread Joerg Schilling
Sorry for the resent mail, but it turned out that I accidentally typed "r" 
instead of "R", and I believe this may be of interest for more than just you, 
Paul.

Paul Eggert <egg...@cs.ucla.edu> wrote:

> On 01/22/2018 09:47 AM, Joerg Schilling wrote:
> > we are talking about files that do not change while something like TAR
> > is reading them.
>
> It's reasonable for a file system to reorganize itself while 'tar' is 
> reading files. Even if a file's contents do not change, its space 
> utilization might change. When I use "du" or "df" to find out about 
> space utilization I want the numbers now, not what they were last week, 
> and this is true regardless of whether I have modified the files since 
> last week.

First: "df" output is not related to the stat() data but related only to the 
statvfs() data. 

Then, "du" output on Linux seems to have a tradition of being incorrect. I 
remember that reiserfs returned st_blocks based on a strange "fragment 
handling" that ignored the fact that st_blocks counts in multiples of 
DEV_BSIZE rather than in multiples of the logical block size of reiserfs.

In general, there has been a major change in filesystems since WOFS introduced 
COW 30 years ago: before (when data blocks were always overwritten), no
basic element of a filesystem was allowed to be larger than the sector size, 
because otherwise a system or power crash could leave the filesystem in an 
unrepairable state.

With COW, some of the structures are now allowed to be larger than the sector 
size and since this includes the "inode" equivalent (called "gnode" on WOFS or 
"dnode" on ZFS), this structure may be larger than the disks sector size, as 
it may be written to the background medium before the switch to the next 
stable filesystem state is introduced - given that the related filesystem is
organized in a way that this switch is not done by just writing the "inode" 
equivalent.

On WOFS with its inverted structure, a file is moved to its next state by just
writing the gnode to the next free gnode location. So WOFS does not allow the 
gnode to be larger than the sector size, unless there were an extension allowing 
partially written gnodes to be detected as invalid.

On ZFS, with a "classical" filesystem structure, the file's next state is 
reached by writing the dnode, the directory it is in, and so on up to the 
uberblock. So care only needs to be taken with the way the next uberblock 
location is interpreted as valid. On ZFS, a dnode definitely could be larger 
than the sector size, and in theory larger parts of the file's data could be 
held in the meta data.

If btrfs does not do it this way, returning st_blocks == 0 for a file with 
DEV_BSIZE or more of data would be wrong. Your claim that reorganizing the 
filesystem could result in different stat() data to be returned applies only 
in case that the file content is moved from logically being file content to
logically being file meta data. So in theory, a stat() call could first return 
st_blocks == 1 and later (when the filesystem knows that the new/whole data of 
the file fits into the meta data) return st_blocks == 0. It seems however, that 
btrfs behaves just the other way round.

BTW: you mentioned that POSIX does not guarantee many things that people might 
believe to be required. This is especially the case for directories.

POSIX does not:

-   require the directory link count to be its hard link count plus
the number of sub-directories. This was an artefact of a design
mistake in the 1970s.

-   require a directory to be readable, since there is readdir()

-   require a directory to return a stat.st_size that depends on its
"content".

-   require a directory to return "." or ".." with readdir().

WOFS follows only the minimal POSIX requirements for directories. A directory 
is a special file with size 0 and a link count of 1, except when there is an 
inode related link (the equivalent of a hard link) to another directory. The 
entries "." and ".." are understood by the filesystem's path handling routines, 
but readdir() never returns these entries.

ZFS emulates the historical directory link count from the 1970s but returns
stat.st_size as the number of entries readable by readdir(). This usually 
makes the historic BSD implementation of scandir() fail, as the 
historic scandir implementation allocates memory based on the assumption that 
the minimal stat.st_size of a directory is "number of entries" * "minimal 
struct dirent size" for UFS.

Does gtar deal correctly with these constraints?

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] [PATCH] Remove nonportable check for files containing only zeroes

2018-01-23 Thread Joerg Schilling
Mark H Weaver  wrote:

> > How many bytes have been written past the hole?
>
> Did you read my entire message?  The answer to your question was just a
> few lines beyond the excerpt that you quoted above.  I wrote:
>
> >> Yes, on Btrfs I reliably see (st_blocks == 0) on a recently written,
> >> mostly sparse file with size > 8G, using linux-libre-4.14.14.  More
> >> specifically, the "storing sparse files > 8G" test in tar's test suite
> >> reliably fails on my system:
> >> 
> >>   140: storing sparse files > 8G   FAILED 
> >> (sparse03.at:29)
> >> 
> >> The test creates a sparse file consisting of 8 gibibytes of hole
> >> followed by 512 bytes of 'A's at the end.  [...]

Sorry, I did not see this.

Well then it would be of interest whether btrfs is able to keep 512 bytes of 
data in the meta data space.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-23 Thread Joerg Schilling
Andreas Dilger  wrote:

> Maybe you wrote a filesystem 30 years ago when everything was BSD FFS, but
> things have moved on from that time.  I'm one of the maintainers for ext4,

Things changed since then because people followed my filesystem design from 30 
years ago.


Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-22 Thread Joerg Schilling
Paul Eggert  wrote:

> The implementation that you suggest requires the file system to remember 
> how much reserved space that it initially allocated to the file, even if 
> that number changes as a result of file system reorganization. This can 
> place an undue burden on more-advanced implementations. Also, it isn't 
> what most users want: users want to know how much space the file is 
> consuming now, not how much space it was consuming last week.

The amount of space a file is consuming now is the current cached state, which 
is the background storage plus the current state of the cache overlay. A file 
that consumed a different amount of space last week is not what we are 
discussing here; we are talking about files that do not change while something 
like TAR is reading them.

So we are talking about filesystem consistency. If btrfs is able to store >= 
DEV_BSIZE bytes of data inside an existing meta data location, I would need to
rethink whether there is a need for a different fallback algorithm for 
filesystems that do not support SEEK_HOLE. Otherwise, there is a bug in btrfs 
that should be fixed.
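The fallback heuristic being debated boils down to a single comparison. Roughly (this is a sketch, not the literal gtar code):

```c
#include <sys/stat.h>

/* A file "looks sparse" when the blocks actually allocated cover less than
 * its logical size.  st_blocks counts units of 512 bytes (DEV_BSIZE)
 * regardless of the filesystem's own block size -- which is why a
 * filesystem returning st_blocks == 0 for a non-empty file breaks tar
 * implementations that rely on this check instead of SEEK_HOLE. */
static int
looks_sparse (const struct stat *st)
{
  return (long long) st->st_blocks * 512 < (long long) st->st_size;
}
```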

> > If you still don't understand this, I recommend that you try to write an
> > in-kernel filesystem implementation. I did this 30 years ago.
>
> There's no need for this kind of comment. Other contributors to this 
> thread are competent, and it does not help matters for you to flash your 
> credentials.

Well, I thought that you would realise that this was a hint to an incompetent 
other person who tried to use personal attacks instead of arguments.

If you care about a decent discussion, why didn't you reply to that post?

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] [PATCH] Remove nonportable check for files containing only zeroes

2018-01-22 Thread Joerg Schilling
Mark H Weaver  wrote:

> Yes, on Btrfs I reliably see (st_blocks == 0) on a recently written,
> mostly sparse file with size > 8G, using linux-libre-4.14.14.  More
> specifically, the "storing sparse files > 8G" test in tar's test suite
> reliably fails on my system:

How many bytes have been written past the hole?

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-22 Thread Joerg Schilling
Andreas Dilger  wrote:

> So, what you're saying is that filesystem resizing is forbidden by POSIX,
> background data compression and data deduplication is forbidden by POSIX,
> migration across storage tiers is forbidden by POSIX?  All modifications
> to the filesystem need to be synchronous because they cannot have any
> background effects?
>
> POSIX is useful as a guideline, but shouldn't be considered a straight
> jacket that prevents innovation in storage.  Doubly so if POSIX doesn't
> actually require some behaviour, but only implies it by omission as you
> are suggesting.
>
> Maybe, you should stop trolling in the GNU tar mailing list and flogging
> your own tar for the past ten years?

Are you confused?

or why do you attack me and agree with me at the same time?

Yes, filesystem resizing is a reason why statvfs() may return different data.
But as mentioned before, this is unrelated to the stat() data.

You need to understand the spirit of POSIX or you will misinterpret it.

BTW: There was a teleconference discussion about a similar problem with stat() 
some years ago, and there was full agreement that stat() returns the "visible" 
state of the cached filesystem, in the sense that there is no permission to 
modify stat() data just because the state of only the background storage 
changed.

Since you need to reserve space on the background storage before you can even 
write to the cached data for a file, you need to make stat() return the related 
state that includes the reserved space.

If you still don't understand this, I recommend that you try to write an 
in-kernel filesystem implementation. I did this 30 years ago.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] [PATCH] Remove nonportable check for files containing only zeroes

2018-01-19 Thread Joerg Schilling
Andreas Dilger  wrote:

> I'd be happy to have a proper SEEK_HOLE/SEEK_DATA implementation for
> Lustre, though it would be a bit tricky for sparse files striped over
> multiple OSTs.  Probably the best way to handle this would be to
> fetch the FIEMAP data for each stripe to the client, and then interleave
> the extents on stripe_size boundaries (in file offset order) to determine
> where the actual holes/data are.

Given the fact that any filesystem needs to be implemented in a way that allows 
reading data from files, I see no reason why there should be a problem on 
lustre.

If you read data from disk blocks, you are not inside a hole and if the data 
is synthesized nulls, you are inside a hole.

Now the filesystem only needs to be able to scan the block allocation tables.
Regardless of how inefficiently the filesystem implements knowing where to read 
data from, SEEK_HOLE will always be faster than reading data.



Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] [PATCH] Remove nonportable check for files containing only zeroes

2018-01-18 Thread Joerg Schilling
Andreas Dilger  wrote:

> That means SEEK_HOLE is NOT available in RHEL 6.x kernels, which are still
> in fairly widespread (though declining) use.  I'd prefer that the heuristic
> for sparse files without SEEK_HOLE not be removed completely, but I do think
> that it needs to be fixed for the small inline file and file in cache cases.

While you should always be prepared to find historic filesystems and do 
something reasonable with them...

SEEK_HOLE was first implemented 13 years ago and after that amount of time, 
implementors should be allowed to assume that any halfway up to date system now 
supports it.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-18 Thread Joerg Schilling
Andreas Dilger  wrote:

> > POSIX does not require you to call fsync() before you are able to get the
> > expected result from stat()
> > 
> > If POSIX did make such assumptions, it would document them. The fact that
> > there is no related text in POSIX is sufficient to prove what POSIX expects.
>
> I don't agree with your extrapolation at all.  You're saying that everything
> POSIX doesn't document must be forbidden, which is a big stretch.

You seem to misinterpret me.

POSIX requires things to be documented in case there is unexpected behavior.

Returning st_blocks == 0 for a file with at least 512 bytes of data is such 
unexpected behavior.

Returning a value for st_blocks, that changes with the phases of the moon while 
the content of that file is not changed is another unexpected behavior.

BTW: I remember that Sun started with a similar inconsistent approach (for 
statvfs() in this case) ~ 14 years ago, when efficiency for unlink() was 
increased by implementing a background unlink(). Sun failed to pass the POSIX 
conformance tests with the first approach and had to change the implementation 
to return more predictable results.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-18 Thread Joerg Schilling
Andreas Dilger  wrote:

> Given that calling SEEK_HOLE is also going to have some cost, my suggestion
> would be to ignore st_blocks completely for small files (size < 64KB) and
> just read the file in this case, since the overhead of reading will probably
> be about the same as checking if any blocks are allocated.  If no blocks are
> allocated, then the read will not do any disk IO, and if there are blocks
> allocated they would have needed to be read from disk anyway and SEEK_HOLE
> would be pure overhead.

If you propose to ignore st_blocks completely for small files that are on 
historic (unmaintained) filesystems without SEEK_HOLE support, this is what 
star has done for years ;-)



> To catch larger files that have been written to cache but haven't flushed
> data to disk yet, it might still make sense to use SEEK_HOLE on larger files
> with st_blocks == 0, especially if they have a recent mtime.

Using mtime does not look like a good idea in this context.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] [PATCH] Remove nonportable check for files containing only zeroes

2018-01-10 Thread Joerg Schilling
Dominique Martinet  wrote:

> Jumping in for lustre, for which there currently is a trivial SEEK_HOLE
> implementation that only checks file size boundaries, but I'd like to
> properly implement it soonish so I don't think tar should wait for
> lustre there.
> (not sure how Andreas feels about that though, will let him speak up)

This seems to be the wrong method.

The POSIX standard is aligned with the original implementation that comes from 
Sun/Solaris. An implementation that does not correctly reimplement this method
is thus incorrect.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-10 Thread Joerg Schilling
Paul Eggert <egg...@cs.ucla.edu> wrote:

> On 01/09/2018 01:38 AM, Joerg Schilling wrote:
> > If POSIX would allow such unexpected behavior, this would have been 
> > documented.
>
> I'm afraid we'll just have to agree to disagree here. Even if you expect 
> a particular behavior, it's not the behavior that I expect nor is it the 
> behavior that we actually observe. You can take up up with the POSIX 
> committee if you like; please reference this discussion so that they can 
> see the arguments on both sides.

You seem to misinterpret the facts. It seems that only btrfs behaves this way 
and this is still definitely an unexpected behavior.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-10 Thread Joerg Schilling
Tim Kientzle  wrote:

> What is the most efficient (preferably portable) way for an archiving program 
> (such as tar) to determine whether it should archive a particular file as 
> sparse or non-sparse?

IIRC, an lseek() call takes approx. 2 microseconds. I did some research in 2005 
when I implemented support for SEEK_HOLE. IIRC, SEEK_HOLE was implemented by 
Sun in spring 2005 after I discussed methods for a useful and performant 
interface with Jeff Bonwick from the Sun ZFS team. At that time, Sun told us 
that we should first use fpathconf(f, _PC_MIN_HOLE_SIZE) to find out whether a 
specific filesystem supports SEEK_HOLE, and I believed that another syscall 
would be bad for the performance. It turned out that this is not the case and I 
finally started to use fpathconf(f, _PC_MIN_HOLE_SIZE) in 2006.

The current method I use is to call:

lseek(f, (off_t)0, SEEK_HOLE);

If this returns EINVAL, the OS does not support SEEK_HOLE (I use private 
#defines of SEEK_HOLE == 4 and SEEK_DATA == 3 to check this).

If it returns ENOTSUP, the specific filesystem does not support SEEK_HOLE.

If the return value is >= st_size, the file is not sparse, as there is only the 
virtual hole past the end of the file.

> Historically, we've compared st_nblocks to st_size to quickly determine if a 
> file is sparse in order to avoid the SEEK_HOLE scan in most cases.  Some 
> programs even consider st_nblocks == 0 as an indication that a file is 
> entirely sparse.  Based on the claims I've read here, it sounds like 
> st_nblocks can no longer be trusted for these purposes.
>
> So is there some other way to quickly identify sparse files so we can avoid 
> the SEEK_HOLE scan for non-sparse files?

Star only uses this method in case that SEEK_HOLE is not supported.
In addition, I changed my algorithm regarding st_blocks == 0 and the assumption 
that the file only consists of a single hole in October 2013, after I discovered 
that NetApp stores files up to 64 bytes in the inode.

Otherwise the fallback algorithm for sparse files on a dumb OS is:

st_size > (st_blocks * DEV_BSIZE) + DEV_BSIZE

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] [PATCH] Re: Detection of sparse files is broken on btrfs

2018-01-09 Thread Joerg Schilling
Pavel Raiskup  wrote:

> On Tuesday, January 9, 2018 8:59:06 AM CET Paul Eggert wrote:
> > Pavel Raiskup wrote:
> > > So what about special casing that filesystem, where we can lseek() for
> > > holes anyway?
> > 
> > If we can lseek for holes, then why not just do that?
>
> Checking whether lseek() actually works costs some additional syscalls _per
> sparse_ file;  checking for ST_NBLOCKS() is without this penalty.

Well, star does this since a long time and the penalty is a few microseconds.




Re: [Bug-tar] [PATCH] Re: Detection of sparse files is broken on btrfs

2018-01-09 Thread Joerg Schilling
Paul Eggert  wrote:

> If we can lseek for holes, then why not just do that? We shouldn't need 
> special-case code for btrfs per se. Any filesystem where we can lseek for 
> holes 
> should take advantage of that optimization.

This is what star uses since 13 years ;-)

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-09 Thread Joerg Schilling
Paul Eggert  wrote:

> POSIX does not require that st_nblocks remain constant across any system 
> call. It doesn't even require that it remain constant if you merely call 
> stat twice on the same file, without doing anything else in between. So 
> I agree with you that it's irrelevant whether fsync or sync is called in 
> the interim. Where we disagree is that I don't think st_nblocks must 
> remain constant when a file is not modified. No such requirement is in 
> POSIX.

POSIX does not document that the value of st_nblocks may vary while the content 
and the size of the file remain constant.

If POSIX would allow such unexpected behavior, this would have been documented.



Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-08 Thread Joerg Schilling
Paul Eggert <egg...@cs.ucla.edu> wrote:

> On 01/08/2018 09:41 AM, Joerg Schilling wrote:
> > POSIX explains that st_blocks counts in units of DEV_BSIZE.
>
> That's not required by the standard. It's merely a comment in the 
>  rationale "Traditionally, some implementations defined the 
> multiplier for /st_blocks/ in // 

My impression is that you look at the wrong things.

POSIX does not require you to call fsync() before you are able to read written 
data from a file.

POSIX does not require you to call fsync() before you are able to get the 
expected result from stat()

If POSIX did make such assumptions, it would document them. The fact that 
there is no related text in POSIX is sufficient to prove what POSIX expects.

So the real problem is not what exact value you may expect but rather whether 
btrfs behaves inconsistently because you get different results after calling 
fsync().

BTW: From the original report, the value in st_blocks seems to be OK after 
fsync() was called.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-08 Thread Joerg Schilling
Paul Eggert <egg...@cs.ucla.edu> wrote:

> On 01/08/2018 08:54 AM, Joerg Schilling wrote:
> > The most important fact however is that allocating space happens before you
> > copy data into that space.
>
> Certainly users need the ability to make sure there's enough room before 
> starting to copy, and POSIX allows for that with posix_fallocate. 
> However, I don't see any POSIX requirement that st_blocks has anything 
> to do with that room. For example, POSIX doesn't specify the size of the 
> units that st_blocks uses, and it even allows the unit size to vary 
> depending on the file. One cannot really deduce much of anything from 
> st_blocks, if the goal is portability to any POSIX implementation.

POSIX explains that st_blocks counts in units of DEV_BSIZE.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-08 Thread Joerg Schilling
Paul Eggert <egg...@cs.ucla.edu> wrote:

> On 01/08/2018 08:06 AM, Joerg Schilling wrote:
> > blkcnt_t st_blocks  Number of blocks allocated for this object.
> >
> > I hope I do not need to explain the term "allocated".
>
> I'm afraid that you do need to explain "allocated". Suppose, for 
> example, two files are clones: they have different inode numbers and are 
> different files from the POSIX point of view, but they have the same 
> contents and only one copy exists at the lower level. How many blocks 
> are "allocated" for each file?

POSIX does not explain what happens with several filesystems.

For this reason, I cannot see any reason why deduplication between different 
filesystems that share a common dataset should be allowed to be visible at 
user level.

Also note that "du" may report more blocks than you expect in case it does not
have the ability to honor hard linked files.

The most important fact however is that allocating space happens before you 
copy data into that space. A file with more data than what can be held in the 
inode thus must have st_blocks > 0.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-08 Thread Joerg Schilling
Adam Borowski  wrote:

> A file that doesn't have a single block allocated for it may thus return
> st_blocks of 0, no matter if it's empty or not.

_before_ you may add data to a file, you need to allocate space for it.
This is what POSIX requires to return with a stat() call.

For this reason, it is impossible that st_blocks is 0 and the file contains 
at least 512 bytes of data.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-08 Thread Joerg Schilling
Mark H Weaver <m...@netris.org> wrote:

> >>   https://lists.gnu.org/archive/html/bug-tar/2016-07/msg0.html
> >>
> >> At the time, Joerg Schilling unilaterally refused to fix the bug,
> >> claiming that Btrfs was broken and violated POSIX, although when asked
> >> for a reference to back that up he never provided one.  Everyone else in
> >> the thread disagreed with him, but the bug never got fixed.
> >
> > Of course I provided that reference by pointing to the POSIX standard.
>
> Please quote the relevant excerpts regarding the constraints on
> st_blocks in relation to the file contents, along with links to the
> official online text containing those excerpts.

I did this 18 months ago already

http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_stat.h.html

blkcnt_t st_blocks  Number of blocks allocated for this object. 

I hope I do not need to explain the term "allocated".

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-08 Thread Joerg Schilling
Tim Kientzle  wrote:

> I'm not entirely sure I understand the above.
>
> It sounds like someone is claiming that:
>
> * Archiving programs should know about the *timing* of filesystem 
> implementations (60s here for btrfs, something else for <filesystem XYZ>?)
>
> * And specifically request the OS to fsync() files before trusting the 
> metadata

This is exactly the reason why btrfs (in case it behaves as claimed) seems to
be in conflict with POSIX.

POSIX requires that stat() returns cached meta data instead of probably out of 
date information from the background medium. In other words: It is not 
allowed to return different data before and after a sync() or fsync() call.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Detection of sparse files is broken on btrfs

2018-01-08 Thread Joerg Schilling
Mark H Weaver <m...@netris.org> wrote:

> I just got bitten by the same problem reported back in July 2016:
>
>   https://lists.gnu.org/archive/html/bug-tar/2016-07/msg0.html
>
> At the time, Joerg Schilling unilaterally refused to fix the bug,
> claiming that Btrfs was broken and violated POSIX, although when asked
> for a reference to back that up he never provided one.  Everyone else in
> the thread disagreed with him, but the bug never got fixed.

Of course I provided that reference by pointing to the POSIX standard.

In order to make sure that every constraint is correct, I may enhance my 
statement:

In theory, a filesystem could put data for a tiny file into some kind of "free 
space" in the meta-data-storage (sometimes called "inode") and thus legally 
report st_blocks == 0. But this would not be allowed to change as a result of 
just a "sync()" operation.

But note that a file that could be sparse needs to have a minimum size of 
DEV_BSIZE in order to be "sparse", while known implementations do not store more 
than 64 bytes in that location.


> Paul Eggert argued that there's no guarantee that st_blocks must be zero
> for a file with nonzero data.  As an example, he pointed out that if all
> of the file's data fits within the inode, it would be reasonable to
> report st_blocks == 0 for a file with nonzero data.

See above. BTW: There is a related comment in star/hole.c that explains that 
NetApp puts file data up to 64 bytes completely into the meta data storage, and 
the method used by star avoids calling a file with st_blocks == 0 sparse as long
as it follows the POSIX semantics.



> Others pointed out that in Linux's /proc filesystem, all files have
> st_blocks == 0.  That is also the case on my system running
> linux-libre-4.14.12.  Joerg claimed that his /proc filesystem reported
> nonzero st_blocks, but he was the only one in the thread who did so.

This is incorrect: I pointed out that the *original* /proc filesystem 
implementation always returns st_blocks != 0 if st_size != 0. If you encounter 
a /proc filesystem where st_blocks == 0, this must be a buggy unofficial clone
implementation.


> It was also pointed out that with the advent of SEEK_HOLE and SEEK_DATA,
> the st_blocks hack is no longer needed for efficiency on modern systems.
>
> I see from the GNU maintainers file that Paul Eggert is a maintainer for
> GNU tar, and Joerg Schilling is not, so I don't see why we should let
> Joerg continue to prevent us from fixing this bug.

Given that http://austingroupbugs.net/view.php?id=415#c862 defines SEEK_HOLE
and SEEK_DATA already and given that most OS already implement it, it would be 
the best way to just follow the accepted standard.

BTW: I am in the group of core POSIX maintainers.

> I propose that we revisit this bug and fix it.  We clearly cannot assume
> that st_blocks == 0 implies that the file contains only zeroes.  This
> bug is fairly serious for anyone using btrfs and possibly other
> filesystems, as it has the potential to lose user data.

I cannot speak for gnu tar, but star does not call a file "sparse" as long as 
this file follows POSIX semantics. This is implemented by requiring the size of 
the file (st_size) to be at least DEV_BSIZE larger than the size computed from 
st_blocks in order to be treated as "sparse".


Conclusion: If btrfs returns st_blocks == 0 for larger (non sparse) files, this 
is a POSIX non-compliance that needs to be fixed.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] tar-1.30 released [stable]

2017-12-21 Thread Joerg Schilling
Pavel Raiskup  wrote:

> Has ./configure detected the ACL support?
>
> ...
> checking for library containing acl_get_file... -lacl

This is an interface from a standard proposal that was withdrawn in 1997.
I believe that at the time this proposal was withdrawn, there was no existing
implementation based on this proposal.

There was an older proposal based on implementations that exist since 1993 and 
this interface is based on the acl() syscall and the aclfromtext() / 
acltotext() library interface in libsec.

 http://cdrecord.org/private/   http://sf.net/projects/schilytools/files/



Re: [Bug-tar] tar-1.30 released [stable]

2017-12-20 Thread Joerg Schilling
Pavel Raiskup <prais...@redhat.com> wrote:

> On Wednesday, December 20, 2017 1:41:38 PM CET Joerg Schilling wrote:
> > There still is a --acl option but no support for ACLs.
>
> There's no support for NTFS/NFSv4 ACLs in GNU tar, to un-confuse the
> statement.

But the withdrawn POSIX.4 ACL interface proposal does not seem to be supported 
either.

I am on a system that sports both - depending on the underlying filesystem.

BTW: With regard to the NTFS/NFSv4 ACLs, I recently enhanced star to use a more 
compact ACL format in this case.

Here is what the old definition was: 

30 atime=1383676106.725425278
30 ctime=1383676333.651257344
30 mtime=1383676106.725425278
324 
SCHILY.acl.ace=group:daemon:rwx---:---:allow:12,user:root:rwx---:---:allow:0,owner@:--x---:---:deny,owner@:rw-p---A-W-Co-:---:allow,group@:-wxp--:---:deny,group@:r-:---:allow,everyone@:-wxp---A-W-Co-:---:deny,everyone@:r-a-R-c--s:---:allow
23 SCHILY.dev=47775747
17 SCHILY.ino=11
18 SCHILY.nlink=1
27 SCHILY.filetype=regular

and here is the new compact format:

30 atime=1383676106.725425278
30 ctime=1383676333.651257344
30 mtime=1383676106.725425278
186 
SCHILY.acl.ace=group:daemon:rwx::allow:12,user:root:rwx::allow:0,owner@:x::deny,owner@:rwpAWCo::allow,group@:wxp::deny,group@:r::allow,everyone@:wxpAWCo::deny,everyone@:raRcs::allow
23 SCHILY.dev=47775747
17 SCHILY.ino=11
18 SCHILY.nlink=1
27 SCHILY.filetype=regular

that usually fits in a single 512-byte block.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] tar-1.30 released [stable]

2017-12-20 Thread Joerg Schilling
Sergey Poznyakoff  wrote:

> Hello,
>
> This is to announce the release of GNU tar 1.30. Please see below
> for a list of noteworthy changes.

There still is a --acl option but no support for ACLs.

Is this intended?

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] bug or feature?

2017-12-15 Thread Joerg Schilling
Paul Eggert  wrote:

> On 12/14/2017 12:03 PM, Bruce Dubbs wrote:
> > Is there any reason that tar should change the permissions or 
> > ownership of the . directory if it is present in a tarball?
>
> Yes, as that's what tar has done "forever" and quite possibly some 
> people depend on it. It might be a good idea to omit extraction of "." 
> unless some new option is specified.

While UNIX tar has done this since ~ 1977, star has done something different 
since before GNU tar even existed:

star extracts files only if they are newer in the archive than on disk.

This is what cpio decided to do, and this is safe in most cases.


If you like to extract "." unconditionally there is a special star option
"-xdot".

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] How to specify the option for gzip used by tar?

2017-12-07 Thread Joerg Schilling
Tristan Miller  wrote:

> Greetings.
>
> On Wed, 6 Dec 2017 15:44:02 -0600, Peng Yu
>  wrote:
> > I am not sure how to use -l.
>
> Not -l (lowercase L), but -I (uppercase i).  The argument to -I is the
> compression program you want to use, along with its arguments.  So you
> probably want something like -I="gzip -n".

Just note that this is not portable

from "man tar":

 -I include-file

 Opens include-file containing a list of files,  one  per
 line,  and treats it as if each file appeared separately
 on the  command  line.  Be  careful  of  trailing  white
 spaces.  Also beware of leading white spaces, since, for
 each line in the included file, the entire  line  (apart
 from  the  newline) is used to match against the initial
 string of files to include. In the case  where  excluded
 files (see X function modifier) are also specified, they
 take precedence over all included files. If  a  file  is
 specified  in both the exclude-file and the include-file
 (or on the command line), it is excluded.

This has existed for approx. 35 years.
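A fully portable way to pass options to the compressor, without relying on any tar-specific flag at all, is to let tar write to stdout and run the compressor explicitly. The directory and file names here are hypothetical, chosen only for the demonstration:

```shell
# Portable alternative to a nonstandard -I/--use-compress-program flag:
# tar writes the archive to stdout, and gzip is invoked explicitly, so
# any gzip option (here -n, which omits the name and timestamp from the
# gzip header) can be used with every tar implementation.
mkdir -p demo
echo hello > demo/file.txt
tar -cf - demo | gzip -n > demo.tar.gz
gzip -t demo.tar.gz && echo OK
```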

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] tar writes invalid mtime for times before the epoch

2017-11-21 Thread Joerg Schilling
Paul Eggert  wrote:

> As I vaguely recall, this extension was designed by both of us in 
> collaboration, and superseded an earlier base-256 format that GNU tar 
> still supports but does not document. I'm too lazy to consult the email 
> archives to check my memory; it's not a big deal either way.

I remember that GNU tar used a base-64 method that could store up to 60 bits 
when I designed the base-256 method. I contacted you as I did not like to 
implement something that cannot store at least 64 bits.

We had some discussions about the method I used to mark a base-256 block (which 
is using the top bit of the leftmost byte).

> > The background is that base-256 allows up to 95 bits + sign bit and this is
> > sufficient for all possible storage as long as you cannot manage to store 
> > part
> > of the data in a parallel universe.
> No parallel universe should be needed to exhaust the format's limits. In 
> 2001 Seth Lloyd estimated that the observable universe, if treated as a 
> computer, would contain about 10**120 bits of information if quantum 
> gravity were taken into account. This would require 396 bits to address, 
> assuming byte-addressible storage. Admittedly Lloyd's estimate is very 
> rough and could well need updating in the light of more-recent physical 
> discoveries. Still, 95 bits does not seem to be nearly enough. And even 
> if 396 bits is right just now, eventually it'll be too small as the 
> number of bits in the universe is growing.

Well, if you look at existing storage, typically less than 1% of the matter 
used in such a device is active storage matter, and you should allow others to 
create their own storage instead of using up the whole universe ;-)

I made a practical assumption that a storage unit should not include more than 
approx. 5x5x5 m of volume of active storage matter, and this results in approx. 
one megamol of storage matter. With this assumption you either don't need more 
than 95 bits or you need a parallel universe.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] incorrect reporting of harlink mismatch

2017-11-09 Thread Joerg Schilling
Cezary Sliwa <sl...@ifpan.edu.pl> wrote:

>
> On Wed, November 8, 2017 18:14, Joerg Schilling wrote:
> > Cezary Sliwa <sl...@ifpan.edu.pl> wrote:
> >
> >> ==
> >>
> >> I get
> >>
> >> a/y: Not linked to a/y
>^^^
>
> >>
> >> which is evidently incorrect.
> >
> > This is correct.
>
> I am not sure whether you mean that the printed message is correct or my
> claim that it is not.

OK, you are correct and in case you are OK with the output from star, we agree.

Note that the archive does not contain the symlink target but only the hardlink 
target.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] incorrect reporting of harlink mismatch

2017-11-08 Thread Joerg Schilling
Cezary Sliwa  wrote:

> ==
>
> I get
>
> a/y: Not linked to a/y
>
> which is evidently incorrect.

This is correct.

With star -diff -v you get:

star: Blocksize = 16 records.
diffopts=perm,type,nlink,uid,gid,uname,gname,size,data,rdev,hardlink,symlink,sympath,sparse,mtime,ctime,dir,acl,xattr,fflags
Release star 1.5.3 (i386-pc-solaris2.11)
Archtypeexustar
Dumpdate1510160623.989320928 (Wed Nov  8 18:03:43 2017)
Volno   1
Blocksize   20 records
a/: different mtime,ctime
  0 drwxr-xr-x   2 joerg/bs  Nov  8 17:59 2017 a/
  0 drwxr-xr-x   2 joerg/bs  Nov  8 18:11 2017 a/
a/y: different nlink,ctime
  0 lrwxrwxrwx   2 joerg/bs  Nov  8 17:59 2017 a/y -> x
  0 lrwxrwxrwx   1 joerg/bs  Nov  8 17:59 2017 a/y -> x
a/z: not linked to a/y
a/z: different nlink,hardlink,ctime
  0 lrwxrwxrwx   2 joerg/bs  Nov  8 17:59 2017 a/z link to a/y
  0 lrwxrwxrwx   1 joerg/bs  Nov  8 18:11 2017 a/z -> x
star: 1 blocks + 0 bytes (total of 8192 bytes = 8.00k).

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Multivolume archive. Problem on extract

2017-05-29 Thread Joerg Schilling
???  wrote:

> I've got the bug on extract data from multivolume archive.
>
> 1)create multivolume archive - no error after that.
> 2)extract this archive
> and got error  "tar: This volume is out of sequence (15360 - 9216 != 7680)"

GNU tar uses a method to mark multi volume archives that is expected to fail 
with a certain probability.

You could extract this using "star", as star ignores the meta data that gtar 
uses to verify a correct follow-up volume.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/'



Re: [Bug-tar] Use posix_fadvise to improve archive creation performance

2017-03-29 Thread Joerg Schilling
Paul Eggert  wrote:

> On 03/27/2017 07:02 AM, Carlo Alberto Ferraris wrote:
> > This is a PoC patch that improves archive creation performance at least in 
> > certain configurations
>
> What configuration performs poorly with sequential access? How much 
> improvement do you see with the patch, and why?

I doubt that such methods will help to speed up archiving. I made many tests 
with similar approaches in star since approx. 1997 and never saw any 
performance win on any modern OS.

There was one single OS where it helped to use unbuffered I/O: DG/UX.

I used O_DG_UNBUFFERED and a special self-written read function that helped 
to speed up star dramatically - but only on DG/UX.
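The kind of sequential-access hint the proposed patch relies on can be sketched in a few lines; this is an illustrative reading loop, not the actual patch, and on most modern kernels the built-in readahead already achieves the same effect, which matches the observation above:

```python
import os

def read_sequential(path, chunk=1 << 16):
    """Read a file while hinting sequential access to the kernel.

    A sketch of the posix_fadvise approach under discussion. The hint
    is advisory: the kernel may ignore it, and modern readahead
    heuristics usually make it redundant.
    """
    data = bytearray()
    fd = os.open(path, os.O_RDONLY)
    try:
        if hasattr(os, "posix_fadvise"):  # not available on every platform
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
        while True:
            buf = os.read(fd, chunk)
            if not buf:
                break
            data += buf
    finally:
        os.close(fd)
    return bytes(data)
```

Benchmarking such a loop with and without the `posix_fadvise` call is the quickest way to verify the "no measurable win" claim on a given system.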

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
joerg.schill...@fokus.fraunhofer.de (work) Blog: http://schily.blogspot.com/
 URL: http://cdrecord.org/private/ http://sf.net/projects/schilytools/files/



Re: [Bug-tar] what is the point of archiving /proc "files" if you can't extract them later?

2016-11-07 Thread Joerg Schilling
Dan Jacobson  wrote:

> Tar can archive these /proc "files" just fine,
> but it can't extract them. It gets fooled by its storing a zero file size.

What you describe does not happen if you are using a real procfs.

This is, however, a well-known implementation bug in the Linux kernel, 
whose /proc bears only a remote similarity to procfs.

On Linux, stat() indeed returns st_size == 0 for all files in /proc.
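The Linux behavior described here is easy to demonstrate; this small check (an illustration, not part of any tar implementation) shows a /proc file whose reported size is zero even though a read returns data:

```python
import os

def proc_reports_zero_size(path="/proc/self/status"):
    """Return True if stat() reports st_size == 0 for a /proc file
    that nevertheless has readable content -- the mismatch that fools
    size-based archiving on Linux."""
    st = os.stat(path)
    with open(path, "rb") as f:
        content = f.read()
    return st.st_size == 0 and len(content) > 0
```

On a Solaris-style procfs, the same stat() call would report the real size, which is the distinction made above.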

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] http://seclists.org/fulldisclosure/2016/Oct/96

2016-10-28 Thread Joerg Schilling
Somchai Smythe  wrote:

> FYI,
>
> Just in case nobody informed you, the notice at:
>
> http://seclists.org/fulldisclosure/2016/Oct/96

Star completely skips such files since Summer 2003 ;-)

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] /usr/sfw/bin/gtar: Cannot utime: Invalid argument

2016-10-25 Thread Joerg Schilling
 wrote:

> Can you Please be specific how to check those

Well, I thought this is obvious: in the tar archive.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] /usr/sfw/bin/gtar: Cannot utime: Invalid argument

2016-10-24 Thread Joerg Schilling
Dagobert Michelsen  wrote:

> Hi Seetha,
>
> Am 24.10.2016 um 14:15 schrieb  
> :
> > As part and backup and restore, while restoring we got below error we are 
> > not able to find what is causing this ERROR, please help me ASAP as this is 
> > a high important blocker for us
>
> This binary is shipped with Solaris. I suggest to open an SR at Oracle with 
> high priority
> if this a blocker for you.

> > New vxfs FS on home_verify
> > gtar home_verify
> > Using gtar to receive root@10.255.56.137:/dev/rmt/7n 
> >  to /mnt
> > 
> > /usr/sfw/bin/gtar: ./crintuce/Ericsson/OMSec/sessions/1440012103926.cdb: 
> > Cannot utime: Invalid argument
> > /usr/sfw/bin/gtar: ./crintuce/Ericsson/OMSec/sessions/1456522272888.cdb: 
> > Cannot utime: Invalid argument
> > /usr/sfw/bin/gtar: Exiting with failure status due to previous errors
> > ERROR 2: /usr/sfw/bin/gtar --exclude=lost+found --extract --file=- 
> > --multi-volume --rmt-command=/usr/sfw/libexec/grmt --blocking-factor=4096
> > fsck ossdg/home_verify

I would guess that this is a problem with an out-of-range microsecond value or 
(in case of NFS or an unknown filesystem) with a time stamp outside the range 
1970..2038.

I recommend checking the archive headers for the files in question.
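Checking the headers by hand can be sketched as follows; this is a simplified illustrative scanner (plain ustar headers only, no GNU long names, pax records, or base-256 fields), useful for spotting mtime values outside the expected range:

```python
def header_mtimes(archive_path):
    """Scan raw 512-byte tar headers and yield (name, mtime) pairs.

    Simplified sketch: assumes classic octal ustar fields and stops
    at the first all-zero block.
    """
    with open(archive_path, "rb") as f:
        while True:
            block = f.read(512)
            if len(block) < 512 or block == b"\0" * 512:
                break
            name = block[0:100].split(b"\0", 1)[0].decode("utf-8", "replace")
            size = int(block[124:136].strip(b"\0 ") or b"0", 8)
            mtime = int(block[136:148].strip(b"\0 ") or b"0", 8)
            yield name, mtime
            # skip the data blocks, rounded up to a full 512-byte block
            f.seek((size + 511) // 512 * 512, 1)
```

Any entry whose mtime falls outside 0..2**31-1 would be a candidate for the utime() failure described in the report.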

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] Monthly backup doesn't expand files, but which are in the archive

2016-08-30 Thread Joerg Schilling
Steffen Nurpmeso  wrote:

>  |Even though smake is much older than gmake, I planned to let it die \
>  |in 1997 but 
>
> No, please don't.  It is fast and if you provide an .y inference
> rule it rocks it.

I know that it is fast ;-)
It needs 8x less CPU time than GNU make and 10x less CPU time than SunPro make
if I let it run on an up-to-date schilytools tree.

Which rule are you missing?
Is this not sufficient:

#   Yacc language
.y:
	$(YACC) $(YFLAGS) $<
	$(CC) $(CFLAGS) $(LDFLAGS) $(LDLIBS) -o $@ y.tab.c
	$(RM) y.tab.c


#   Yacc language
.y.o:
	$(YACC) $(YFLAGS) $<
	$(CC) $(CFLAGS) $(CPPFLAGS) -o $@ -c y.tab.c
	$(RM) y.tab.c

#   Yacc language to C
.y.c:
	$(YACC) $(YFLAGS) $<
	mv y.tab.c $@

If you like to add builtin rules, just edit /opt/schily/lib/defaults.smk
and report your enhancements.


>  |willing to change smake.
>
> My problem was that i had to diversify my makefile because we now
> need -lrt on Solaris (for nanosleep(2)), also for
> a privilege-separated mini support-program, and that didn't work
> out (it would gain GSSAPI libraries too, then, for example).  The
> result used $(<) in a non-inference rule, which smake didn't like.
> (Maybe gmake(1) would have complained with .POSIX, but which
> i don't use because gmake v3.81 bails for it.)
> Now fixed.

$< is a dynamic macro that is only defined with inference rules.

SunPro make and GNU make expand it to something undocumented with explicit 
rules, but they use different algorithms. The result is identical for approx. 
80% of the cases by chance only. 

For this reason, smake warns you that you are using a non-portable makefile.
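The distinction can be illustrated with a minimal makefile; the rule and file names here are hypothetical, not taken from any real project:

```make
# $< is well defined by POSIX only inside inference (suffix) rules:
.c.o:
	$(CC) $(CFLAGS) -c -o $@ $<

# In an explicit rule, POSIX leaves $< undefined; spell the
# prerequisites out instead of relying on a gmake/SunPro extension:
prog: main.o util.o
	$(CC) $(LDFLAGS) -o prog main.o util.o $(LDLIBS)
```

A makefile written this way runs unchanged under smake, gmake, and SunPro make.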


>  |If you know of statements in STARvsGNUTAR that are no longer true (the \
>  |file was 
>  |last changed in 2007), please send a list of items that need to be \
>  |corrected.
>
> Ooh.  Puh.  I feel it, i feel it ... there is a burnout
> syndrome lingering on the horizon.  ._.
> Ciao.


If you feel better, please try to send a list.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] Monthly backup doesn't expand files, but which are in the archive

2016-08-30 Thread Joerg Schilling
Steffen Nurpmeso  wrote:

>  |A better solution may be to call:
>  |
>  | star -c -f some-file -find 
>  |
>  |This is even faster than using find | xargs.
>
> I started using star instead of what was on (then) MacOS
> X / FreeBSD / Linux because the first time i have used it on some
> tarball it spit out error messages on format incompatibilities for
> tarballs of mine, a problem area i hadn't been sensitized for.
> And better that is!  And sometimes it would have been better to
> try it first -- i have produced some release tarballs via git
> archive, before testing on NetBSD tar, for example.  No no.

What are you trying to say here?
Do you have problems with star because it warns about archives that claim to be 
POSIX compliant but really are not? Note that it does not warn with gtar 
archives, as gtar does not mark them as POSIX archives.

I know of bugs in various tar implementations, but I don't know of any bug in star.
If you believe you discovered a bug, you should report it.

Please be more specific, so I am able to understand your concern.

> Same is true for smake, by the way, it just failed in the other
> window: it is really time to replace the main work machine that
> died almost a year ago now.

Even though smake is much older than gmake, I planned to let it die in 1997, but 
then I discovered that gmake does not work on many platforms that are listed as 
"working". Gmake does not work at all with non-trivial makefiles on OS/2 and VMS, 
and it has massive problems on Cygwin because of its incorrect white space 
handling. I was forced to continue to support smake in order to support 
cdrtools on all target platforms.

Smake is much closer to POSIX than gmake, but it uses a common namespace for 
macros and rules and it does not support looking into timestamps in libraries. 
If you have a project that was written for "make" and not for "gmake", I am 
willing to change smake.

> Where was i?   Ah.  Well i could use list= of star, but that is
> non-portable (the STARvsGNUTAR has some false claims btw.).


If you are talking about CLI compatibility, you are right with "list=", 
but please note that this option was already added in 1984. At that time, 
there was no -I option in UNIX tar; that was added on UNIX in late 1989.

If you know of statements in STARvsGNUTAR that are no longer true (the file was 
last changed in 2007), please send a list of items that need to be corrected.


Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] Monthly backup doesn't expand files, but which are in the archive

2016-08-29 Thread Joerg Schilling
Steffen Nurpmeso  wrote:

> I've changed it back to how it was all the time, and only use "|
> xargs -0 tar -r -f $ar", adding the compression step later on.
> As far as i recall all this originated, long ago that is, from the
> problem that file lists stored in a file were not supported by all
> tar's around and for all possible operations, and this is a real
> pity, given that argument lists exceed so soon, and space is
> expensive.  Anyway, a mode to simply concatenate, and then
> a finalizer invocation would be great for my use case, otherwise.
> Maybe i'll switch to ar(1) instead, in the future.

A better solution may be to call:

star -c -f some-file -find 

This is even faster than using find | xargs.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] Monthly backup doesn't expand files, but which are in the archive

2016-08-29 Thread Joerg Schilling
Ralph Corderoy  wrote:

> Hi Steffen,
>
> > So it collects a list of files in a textfile, and then either
> >   $tarxcmd = "tar -c -f - | $COMPRESSOR > $ar";
> > or
> >   $tarxcmd = "tar -r -f $ar >/dev/null";
>
> With the second of these, $ar isn't a compressed file name, like
> foo.tar.gz?  Which was used for the problem tar file?
>
> > and then
> > unless (open XARGS, "| xargs -0 $tarxcmd 2>>$MFFN") {
>
> So xargs may run tar more than once, and if $#{$listref} has been
> growing then perhaps it's tipped over from one invocation to two
> recently?

If you like to create working incremental backups that are able to handle file 
renames, you need to backup any filesystem with exactly one command run, so 
xargs is a really bad idea.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] Cannot restore incremental backup after directories deleted and renamed

2016-07-18 Thread Joerg Schilling
"Dieterly, Deklan"  wrote:

> This script produces an error. "tar: Cannot rename 'backup/dir1' to 
> 'backup/dir2': Directory not empty"
>
> This is a use case that we would like to be able to handle. I've seen other 
> threads describe this problem too.
>
> Here are the links to the other threads.
>
> https://lists.gnu.org/archive/html/bug-tar/2008-07/msg5.html
> https://bugs.launchpad.net/freezer/+bug/1570304
> http://osdir.com/ml/bug-tar-gnu/2011-11/msg00016.html
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=648048

As you may see from the last entry, I already reported this bug in September 
2004.

> Here is the script that produces the error.
...
> tar: Cannot rename 'backup/dir1' to 'backup/dir2': Directory not empty
> tar: Exiting with failure status due to previous errors
>
>
> Will this problem be addressed and fixed, or is there some fundamental 
> underlying reason why the problem will not be fixed? Thanks.

Given that the problem has been known for nearly 12 years and there is no fix, I 
guess that the gtar maintainers also believe that there is no way to fix the 
problem without introducing a new, incompatible archive format.


Let me start with a modified version of what I wrote in 2011:

This is a problem that I already reported in 2004. The problem was discovered 
in September 2004 while running the first test for the new incremental 
dump/restore format in star. The tests had been made with ufsdump/ufsrestore, 
gtar and star. Gtar was the only program that completely failed in this test, 
so I am wondering why there are still backup management programs that use gtar 
as their backend.

I am not sure whether it is possible to solve the problem without introducing a 
new, incompatible dump format in GNU tar. AFAIK GNU tar tries to detect and 
understand all changes while creating the archive, using a partial database 
at create time. Star detects and understands changes at extract time, using a 
complete database which covers all files.

Star, with its incremental dump format that was inspired by ufsdump, is able to 
handle all known deltas on a filesystem. Star (as "stable version") is available 
at:

https://sourceforge.net/projects/s-tar/files/

Frequent development snapshots are inside the "schilytools" tarball at:

https://sourceforge.net/projects/schilytools/files/


Star ran an incremental dump + ** incremental restore ** once a day on 
berlios.de without a single problem from March 2005 on (for approx. 10 years, 
until a new ZFS-based fileserver with a snapshot-based backup was introduced). 
There were typically 2-10 GB of changes per day on thousands of files, and more
than 3500 successful dump/restore operations were made in incremental mode.

Between September 2004 (the initial release of the star incremental support) 
and March 2005, there was one single minor problem that had to be fixed.


The difference between star and gtar is that star uses a known to work 
algorithm and archives all needed meta data inside the archive. When you do an 
incremental restore, star creates a database of filenames together with old and 
new inode numbers that permit to track any rename operation.

Gtar has no real support for tracking renames. Gtar just fully archives any 
renamed object. So if you have a 1TB disk with 900GB of data in a single top 
level directory and rename that directory, the next incremental holds 900GB of
gtar archive data.

If you try to do an incremental restore on that data, you would need 2x 900GB 
of space on the target filesystem.

If you do the same using star, the incremental archive holds 10kB and the 
incremental restore does not need more free space on the disk than you had in 
the original. If files are removed, this happens at the end of an incremental
restore operation and you need sufficient space to hold the removed files and 
the new files, but this is what you need with gtar as well.
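The inode-database idea can be sketched compactly; this is an illustration of the general technique, not star's actual restore code, and it assumes a single filesystem (inode numbers are only unique per device):

```python
import os

def detect_renames(old_index, directory):
    """Match files by inode number to track renames between two scans.

    old_index maps inode -> path from the previous scan; rescanning
    the tree pairs surviving inodes with their new paths, so a renamed
    object is recognized without re-archiving its data.
    """
    renames = []
    for root, dirs, files in os.walk(directory):
        for name in dirs + files:
            path = os.path.join(root, name)
            ino = os.lstat(path).st_ino
            old_path = old_index.get(ino)
            if old_path is not None and old_path != path:
                renames.append((old_path, path))
    return renames
```

With such an index, renaming a 900GB directory costs a few database entries in the incremental, rather than 900GB of re-archived data.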

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] stat() on btrfs reports the st_blocks with delay (data loss in archivers)

2016-07-06 Thread Joerg Schilling
"Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:

> On 2016-07-06 11:22, Joerg Schilling wrote:
> > "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote:
> >
> >>> It should be obvious that a file that offers content also has allocated 
> >>> blocks.
> >> What you mean then is that POSIX _implies_ that this is the case, but
> >> does not say whether or not it is required.  There are all kinds of
> >> counterexamples to this too, procfs is a POSIX compliant filesystem
> >> (every POSIX certified system has it), yet does not display the behavior
> >> that you expect, every single file in /proc for example reports 0 for
> >> both st_blocks and st_size, and yet all of them very obviously have 
> >> content.
> >
> > You are mistaken.
> >
> > stat /proc/$$/as
> >   File: `/proc/6518/as'
> >   Size: 2793472 Blocks: 5456   IO Block: 512regular file
> > Device: 544h/88342528d  Inode: 7557Links: 1
> > Access: (0600/-rw---)  Uid: (   xx/   joerg)   Gid: (  xx/  bs)
> > Access: 2016-07-06 16:33:15.660224934 +0200
> > Modify: 2016-07-06 16:33:15.660224934 +0200
> > Change: 2016-07-06 16:33:15.660224934 +0200
> >
> > stat /proc/$$/auxv
> >   File: `/proc/6518/auxv'
> >   Size: 168 Blocks: 1  IO Block: 512regular file
> > Device: 544h/88342528d  Inode: 7568Links: 1
> > Access: (0400/-r)  Uid: (   xx/   joerg)   Gid: (  xx/  bs)
> > Access: 2016-07-06 16:33:15.660224934 +0200
> > Modify: 2016-07-06 16:33:15.660224934 +0200
> > Change: 2016-07-06 16:33:15.660224934 +0200
> >
> > Any correct implementation of /proc returns the expected numbers in st_size 
> > as
> > well as in st_blocks.
> Odd, because I get 0 for both values on all the files in /proc/self and 
> all the top level files on all kernels I tested prior to sending that 

I tested this with an official PROCFS-2 implementation that was written by 
the inventor of the proc filesystem (Roger Faulkner), who sadly passed 
away last weekend.

You may have done your tests on an unofficial procfs implementation.

> > Now you know why BTRFS is still an incomplete filesystem. In a few years 
> > when
> > it turns 10, this may change. People who implement filesystems of course 
> > need
> > to learn that they need to hide implementation details from the official 
> > user
> > space interfaces.
> So in other words you think we should be lying about how much is 
> actually allocated on disk and thus violating the standard directly (and 
> yes, ext4 and everyone else who does this with delayed allocation _is_ 
> strictly speaking violating the standard, because _nothing_ is allocated 
> yet)?

If it returns 0, it would be lying, or it would be wrong anyway as it did not 
check the available space.

Also note that I already mentioned that the mere availability of SEEK_HOLE 
does not help, as there is e.g. NFS...

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] stat() on btrfs reports the st_blocks with delay (data loss in archivers)

2016-07-06 Thread Joerg Schilling
Paul Eggert <egg...@cs.ucla.edu> wrote:

> On 07/06/2016 04:53 PM, Joerg Schilling wrote:
> > Antonio Diaz Diaz<anto...@gnu.org>  wrote:
> >
> >> >Joerg Schilling wrote:
> >>> > >POSIX requires st_blocks to be != 0 in case that the file contains 
> >>> > >data.
> >> >
> >> >Please, could you provide a reference? I can't find such requirement at
> >> >http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_stat.h.html
> > blkcnt_t st_blocks  Number of blocks allocated for this object.
>
> This doesn't require that st_blocks must be nonzero if the file contains 
> nonzero data, any more that it requires that st_blocks must be nonzero 
> if the file contains zero data. In either case, metadata outside the 
> scope of st_blocks might contain enough information for the file system 
> to represent all the file's data.

In other words, you concur that a delayed assignment of the "correct" value for 
st_blocks, while the content of the file does not change, is not permitted.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] stat() on btrfs reports the st_blocks with delay (data loss in archivers)

2016-07-06 Thread Joerg Schilling
Antonio Diaz Diaz <anto...@gnu.org> wrote:

> Joerg Schilling wrote:
> > POSIX requires st_blocks to be != 0 in case that the file contains data.
>
> Please, could you provide a reference? I can't find such requirement at 
> http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_stat.h.html

blkcnt_t st_blocks  Number of blocks allocated for this object.

It should be obvious that a file that offers content also has allocated blocks.

Blocks are "allocated" when the OS decides whether the new data will fit on the 
medium. The fact that some filesystems may have data in a cache but not yet on 
the medium does not matter here. This is how UNIX has worked since st_blocks 
was introduced nearly 40 years ago. 

A new filesystem cannot introduce new rules just because people believe it would 
save time.
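The classic heuristic at the center of this dispute fits in one line; this is a sketch of the general technique tar-like archivers use, not any particular implementation:

```python
import os

def looks_sparse(path):
    """Sparse-file heuristic: if fewer 512-byte blocks are allocated
    than the file size requires, assume the file has holes.

    This is exactly the check that misfires when a filesystem reports
    st_blocks == 0 for data that is still only in the cache.
    """
    st = os.stat(path)
    return st.st_blocks * 512 < st.st_size
```

A filesystem that delays block accounting makes this predicate return True for a freshly written, fully dense file, which is how archivers end up storing it as all holes.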



Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] stat() on btrfs reports the st_blocks with delay (data loss in archivers)

2016-07-06 Thread Joerg Schilling
"Austin S. Hemmelgarn"  wrote:

> > A broken filesystem is a broken filesystem.
> >
> > If you try to change gtar to work around a specific problem, it may fail in
> > other situations.
> The problem with this is that tar is assuming things that are not 
> guaranteed to be true.  There is absolutely nothing that says that 
> st_blocks has to be non-zero if there's data in the file.  In fact, the 

This is not true: POSIX requires st_blocks to be != 0 in case that the file 
contains data.

> behavior that BTRFS used to have of reporting st_blocks to be 0 for 
> files entirely inlined in the metadata is absolutely correct given the 
> description of the field by POSIX, because there _are_ no blocks 
> allocated to the file (because the metadata block is technically 
> equivalent to the inode, which isn't counted by st_blocks).  This is yet 
> another example of an old interface (in this case, sparse file 
> detection) being short-sighted (read in this case as non-existent).

The internal state of a file system is irrelevant. The only thing that counts 
is the user space view and if a file contains data (read succeeds in user 
space), it needs to report st_blocks != 0.

> The proper fix for this is that tar (and anything else that handles 
> sparse files differently) should be parsing the file regardless.  It has 
> to anyway for a normal sparse file to figure out where the sparse 
> regions are, and optimizing for a file that's completely sparse (and 
> therefore probably pre-allocated with fallocate) is not all that 
> reasonable considering that this is going to be a very rare case in 
> normal usage.

This does not help.

Even on a decent OS (e.g. Solaris since Summer 2005) and a decent tar 
implementation (star) that supports SEEK_HOLE since Summer 2005, this method 
will not work for all filesystems as there may be old filesystem 
implementations and as there may be NFS...

For this reason, star still checks st_blocks in case that SEEK_HOLE did not 
work.
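The SEEK_HOLE-first, st_blocks-fallback scheme described here can be sketched as follows; this is an illustrative probe in the spirit of that double-check, not star's actual code:

```python
import os

def has_holes(path):
    """Probe for holes with SEEK_HOLE, falling back to the st_blocks
    heuristic where the call is unsupported (old filesystems, NFS)."""
    st = os.stat(path)
    if hasattr(os, "SEEK_HOLE"):
        fd = os.open(path, os.O_RDONLY)
        try:
            # A hole before EOF means the file is sparse; a dense file
            # reports its first (implicit) hole at EOF.
            return os.lseek(fd, 0, os.SEEK_HOLE) < st.st_size
        except OSError:
            pass  # filesystem does not implement SEEK_HOLE
        finally:
            os.close(fd)
    return st.st_blocks * 512 < st.st_size
```

The fallback branch is why a filesystem with delayed st_blocks accounting can still cause data loss even when SEEK_HOLE exists in principle.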

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] stat() on btrfs reports the st_blocks with delay (data loss in archivers)

2016-07-05 Thread Joerg Schilling
Andreas Dilger  wrote:

> I think in addition to fixing btrfs (because it needs to work with existing
> tar/rsync/etc. tools) it makes sense to *also* fix the heuristics of tar
> to handle this situation more robustly.  One option is if st_blocks == 0 then
> tar should also check if st_mtime is less than 60s in the past, and if yes
> then it should call fsync() on the file to flush any unwritten data to disk,
> or assume the file is not sparse and read the whole file, so that it doesn't
> incorrectly assume that the file is sparse and skip archiving the file data.

A broken filesystem is a broken filesystem.

If you try to change gtar to work around a specific problem, it may fail in 
other situations.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] [PATCH 0/5] Several changes in multi-volume archive handling

2015-12-07 Thread Joerg Schilling
Sergey Poznyakoff  wrote:

> Hi Pavel,
>
> 1 - 4 applied (with slight modifications - 0a93c16c & 239441b5).
>
> As to 5, I have serious doubts.  Of course hinting about the ? key
> is a nice idea.  However, the "Prepare volume" prompt is in its
> present form for long enough time, so that some other software
> might rely on its wording.

Did you patch the current gtar behavior or did you introduce a new working 
system?

The gtar multi-volume method I know of cannot be made bug-free. This is why star 
does not support creating such output.

Star introduced a working system in 2004.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] format v7 unexpected header fields

2015-11-12 Thread Joerg Schilling
Mario Aichinger  wrote:

> Last week I did a research about the v7 format. I did this, because I
> wanted to push my programming skills and thought, why not to program a tar
> extractor. So I started reading about the most basic tar format which is
> obviously v7. The first thing I programed was the part which tries to find
> out which version of archive is passed to my program (v7, star, ustar, gnu,
> posix, pax, ...). My understanding was and is that v7 tar's have a 256 byte
> padding (null bytes) at the end of their headers, and an empty  magic
> header field. So these are the things I test against. After I implemented
> this in my program I created some test archives. Of course with tar and the
> option --format=v7. But my program failed to detect them as v7 tar's. So I
> opened them (the archives) in a text editor (kate) and found two fields in
> the header filled with zeros. This fields where located after the linkname
> header field.

gtar is not very clean with format switching. Last time I checked, it even 
created GNU long-name headers when other formats had been selected.

I recommend using "star H=v7tar" instead.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] feature request: determine compression type from stdin

2015-10-29 Thread Joerg Schilling
Scott Moser  wrote:

> I'm using gnu tar 1.28 as packaged on Ubuntu (xenial) and also tried 1.27
> on 15.10.  Please forgive me if this feature is already added upstream.
>
> Tar's determination of compression is very handy:

>
> My issue is that it only works if input is from a file.  The following do
> not work:
>$ wget http://example.com/some.tar.gz | tar -tf -
>$ tar -tf - < my.tar.gz
>tar: Archive is compressed. Use -z option
>tar: Error is not recoverable: exiting now

Did you try "star"? Star has implemented this feature for 15 years, and gtar's 
current auto-decompression features appeared after star offered the feature.
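The requested behavior boils down to sniffing magic bytes from a non-seekable stream; this is a generic sketch of the technique, not the detection code of any particular tar:

```python
MAGIC = {
    b"\x1f\x8b": "gzip",
    b"BZh": "bzip2",
    b"\xfd7zXZ\x00": "xz",
}

def sniff_compression(stream):
    """Identify a compressed stream by its magic bytes without seeking,
    so it also works on pipes: read a small prefix, pick the format,
    and hand the consumed bytes back together with the verdict."""
    head = stream.read(8)
    kind = "none"
    for magic, name in MAGIC.items():
        if head.startswith(magic):
            kind = name
            break
    return kind, head  # caller must prepend `head` before decompressing
```

Because stdin cannot be rewound, the prefix must be re-injected into the decompressor's input, which is the extra bookkeeping a file-based implementation avoids with a simple seek.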

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] Circular symlinks lead to segfault

2015-03-26 Thread Joerg Schilling
Paul Eggert egg...@cs.ucla.edu wrote:

> Yes, it's a problem if you give 'tar' an infinite tree, because tar 
> tries very hard to dump the whole thing, and having it check for loops 
> would slow it down.  The simplest workaround I can think of is to not 
> use -h unless you know the tree is finite.

BTW: star implements a method to detect loops in the filesystem that does not 
cost extra time.
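One inexpensive way to survive symlink loops is to key visited directories by (st_dev, st_ino); this is a generic sketch of that idea, not star's actual algorithm:

```python
import os

def walk_no_loops(top):
    """Follow symlinks but refuse to revisit a directory, keyed by
    (st_dev, st_ino), so a symlink loop terminates instead of
    recursing forever."""
    seen = set()
    paths = []

    def recurse(path):
        st = os.stat(path)  # follows symlinks
        key = (st.st_dev, st.st_ino)
        if key in seen:
            return  # already visited: a loop or a duplicate mount
        seen.add(key)
        paths.append(path)
        for entry in sorted(os.listdir(path)):
            child = os.path.join(path, entry)
            if os.path.isdir(child):
                recurse(child)
            else:
                paths.append(child)

    recurse(top)
    return paths
```

The set membership test is O(1) per directory, which is why loop detection along these lines need not slow a dump down measurably.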

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] misidentified tar.bz2 file

2015-02-19 Thread Joerg Schilling
sam samtyg...@yahoo.co.uk wrote:

> On 16/02/15 17:19, Paul Eggert wrote:
> > Is this a tar file that you can publish?  If not, can you provide a way
> > to reproduce the bug?
>
> Bad file:
> http://www.hep.manchester.ac.uk/u/samt/pub/tar/results_9694.tar.bz2
> Good file:
> http://www.hep.manchester.ac.uk/u/samt/pub/tar/results_9695.tar.bz2

Star unpacks both files using the compress detection code I introduced in 
February 1999, so it is obviously a problem in the gtar compress detection 
code. 

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] Appending file(s) to tar file may silently fail

2014-11-12 Thread Joerg Schilling
Hendrik Grewe hendrik.gr...@tu-dortmund.de wrote:

> The defective tar file (generated by pg_basebackup) may be found here:
>
> https://depot.tu-dortmund.de/get/8aq4g

The only structural defect in the tarfile is that it contains too many bits in 
the mode field.

See the output from the program tartest.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] Appending file(s) to tar file may silently fail

2014-11-12 Thread Joerg Schilling
Markus Steinborn gnugv_maintai...@yahoo.de wrote:

> Something is broken here. If I add a file (let's say mkdvdiso.sh) to
> the archive (only once), checking the result with tar -tvf
> defective.tar reveals that
>
> -rw--- postgres/postgres   8192 2014-11-12 15:57 global/pg_control
>
> is missing in the tar file (compared to the original tar file). star
> confirms this diagnostic. So GNU tar's append clearly removes a file,
> which is obviously another bug.

This append feature never worked reliably in gtar. Gtar, e.g., ignored the tar 
archive format of the first part of the archive and appended data in the 
currently configured format, regardless of what the previous archive format was.

BTW: given the fact that 

   8192 -rw---  postgres/postgres Nov 12 15:57 2014 global/pg_control

is a file with 8192 bytes of data, it seems to be a bigger problem when this 
file disappears. Note that I can understand that there might be a +-1 offset 
error, but then the structural integrity of the new archive would be lost and 
the following file could not be seen.

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] [bug] multi-volume archive not compatible with --sparse

2014-09-16 Thread Joerg Schilling
Marek Marczykowski-Górecki marma...@invisiblethingslab.com wrote:

 Hi,

 I've noticed that a sparse file header can't be split across multiple archive
 volumes. So if you try to make a multi-volume archive of a big sparse file
 (with a lot of holes), it can happen that a file header (list of the holes)
 will not fit on the same archive volume. Tar then continues such a header on
 the next volume, but *without any tar archive header*. If even second volume
 isn't enough, it looks like the header got truncated there and tar starts the
 file data (with a proper header this time) on the next volume.
 When you try to extract such a file from an archive, tar complains that the
 second volume doesn't look like tar archive. When you concatenate all the
 volumes and extract this archive, it will look better (apart from tar:
 Skipping to next header message), but the file will be broken.

 The worst thing here - tar pretends that the archive was successfully created;
 the user does not get any error message during creation of such an archive.

 Tested on tar 1.26 (Fedora 20), confirmed on 1.28 compiled from sources.

This is a well-known problem in gtar. Gtar has never handled multi-volume 
archives reliably; the way it tries to verify the correctness of a follow-up 
volume cannot work. This is why star does not emulate writing the gtar multi-
volume format. Star should nevertheless be able to extract the archive, since 
star ignores the gtar verification method.

See: http://cdrtools.sourceforge.net/private/man/star/
and http://cdrtools.sourceforge.net/private/man/star/star.4.html for a 
documentation of the archive file format used by star. Star uses a method that 
is able to correctly verify whether you used the right follow-up volume.

See also: https://sourceforge.net/projects/s-tar/?source=directory
and https://sourceforge.net/projects/schilytools/?source=directory for frequent 
source snapshots.
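The failure mode described here -- a volume that begins with raw continuation data instead of a tar header -- can be spotted with a simple plausibility check on the first 512 bytes of each volume. A sketch, assuming only the documented ustar/GNU magic at offset 257 and the octal checksum field at offset 148:

```python
import io
import tarfile

def starts_with_tar_header(block: bytes) -> bool:
    """True if the block plausibly begins a tar volume: full length,
    a known magic string at offset 257, and a parsable checksum field."""
    if len(block) < 512:
        return False
    if block[257:263] not in (b"ustar\x00", b"ustar "):  # POSIX / old GNU magic
        return False
    try:
        int(block[148:156].replace(b"\x00", b" ").strip() or b"0", 8)
    except ValueError:
        return False
    return True

# A real first volume passes; a block of raw continuation data does not.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.GNU_FORMAT) as tf:
    tf.addfile(tarfile.TarInfo("f"))

print(starts_with_tar_header(buf.getvalue()[:512]))
print(starts_with_tar_header(b"\x00" * 512))
```

This is only a plausibility test, not the volume verification star performs, but it is enough to flag a follow-up volume whose first block is split sparse-map data rather than a header.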

Jörg

-- 
 EMail:jo...@schily.net(home) Jörg Schilling D-13353 Berlin
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.org/private/ 
http://sourceforge.net/projects/schilytools/files/



Re: [Bug-tar] Regarding untaring tar files containing symlinks

2014-06-30 Thread Joerg Schilling
Amit Kapila amit.kapil...@gmail.com wrote:

  I already sent a hint on how to do this using star.

 Thanks for your suggestion.
 I have tried to search on net to download this utility and found
 below link:
  ftp://ftp.ffokus.gmd.de/pub/unix/cdrecord/alpha/win32/

 I am not able to get from above link, it says location doesn't
 exist.

 Could you let me know from where can I get this utility.

It is on Sourceforge now.

 One more question, after using copysymlinks, will it retain the
 symlinks in Extracted data(folder)?

It will create copies, as Win-DOS does not support symlinks.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] Regarding untaring tar files containing symlinks

2014-06-30 Thread Joerg Schilling
Amit Kapila amit.kapil...@gmail.com wrote:

 On Mon, Jun 30, 2014 at 6:42 PM, Joerg Schilling 
 joerg.schill...@fokus.fraunhofer.de wrote:
  Amit Kapila amit.kapil...@gmail.com wrote:
   One more question, after using copysymlinks, will it retain the
   symlinks in Extracted data(folder)?
 
  It will create copies as Win-DOS does not support symlnks.

 For my usecase, I need it to maintain symlinks even after it gets
 untarred.  I have noticed that WinRar is able to maintain symlinks
 after extraction.

I am not sure how this should work, given the fact that Win-DOS does not 
correctly support symlinks.

See:

http://msdn.microsoft.com/en-us/library/windows/desktop/aa363866%28v=vs.85%29.aspx

On the other hand, are you sure that winrar supports tar archives?

You may want to try listing this archive:


http://sourceforge.net/projects/s-tar/files/testscripts/pax-big-10g.tar.bz2/download

and test whether the listing looks correct, similar to this:

10737418240 -rw---  jes/glone Jun 15 23:18 2002 10g
  0 -rw-r--r--  jes/glone Jun 15 16:53 2002 file
star: 1048576 blocks + 3072 bytes (total of 10737421312 bytes = 10485763.00k).

I would guess that winrar is unable to list both filenames.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] Regarding untaring tar files containing symlinks

2014-06-28 Thread Joerg Schilling
Amit Kapila amit.kapil...@gmail.com wrote:

 I wanted to know whether there is any tool on Windows which can
 extract from a tar file when it contains symlinks.

I already sent a hint on how to do this using star.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] Regarding untaring tar files containing symlinks

2014-06-27 Thread Joerg Schilling
Amit Kapila amit.kapil...@gmail.com wrote:

 While working on one software, I face the requirement
 to untar a tar file on windows and it contains symlinks as
 well.

 While untaring it using tar -xvf abc.tar, it gives me below
 error for symlinks:

 tar.exe: xyz/1234: Cannot create symlink to `E:\\sym_loc': Not a directory

 The directory sym_loc already exists, is this a known
 limitation of tar on Windows?

Win-DOS does not support symlinks. Even newer versions that claim to support 
symlinks distinguish between links to files and links to directories.

If you are not able to live with the Cygwin symlink emulation, I recommend 
using star and telling it to make copies via the option -copysymlinks

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] Feature request (patch): ignore requested members missing in archive

2014-06-26 Thread Joerg Schilling
Vladimir A. Pavlov p...@bk.ru wrote:

  Well, there are many options in gtar that do not follow a scheme but just 
  serve 
  a single purpose. For this reason, they are hard to remember.
  
  The current proposal would be covered by a general method in star:
  
  errctl=
  
  See: http://cdrtools.sourceforge.net/private/man/star/star.1.html
  
  page 21 ff.

 After the first look star seems to have the features I need (though in my case
 I have to actively use gnu tar specific
 --exclude/--recursion/--anchored/--wildcards so a deeper research is needed
 to understand whether I can implement what I need using star) and appears
 to have been maintained for the last two years (but the absence of releases
 from 2009 till 2013 is not a good sign imho).

??? Are you talking about star's release strategy?
New star releases are made available on a regular basis - I believe more 
frequently than gtar's. There has never been a pause since 1982.

Between January 2009 and December 2012, there were 230 file edits in 111 edit 
groups. The related deltas were made available as 41 separate releases in tar 
archives. So there is a new public release with an average frequency of one 
every 5 weeks. I hope this helps.

Regarding your options

Star has supported -D, -V and pattern matching via pat= since long before gtar 
existed. Gtar chose not to be compatible, but this is not my fault.

For nine years now, star has in addition linked against libfind and thus 
supports the find(1) syntax natively via -find. This allows much more powerful 
selections than a simple pattern matcher, which of course is still present in 
star.


Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] Possible bug in auto detection of extraction program

2014-04-15 Thread Joerg Schilling
Paul Eggert egg...@cs.ucla.edu wrote:

 On 04/14/2014 06:41 AM, Sasa Vilic wrote:
  I guess that in our particular case it just might be that the 
  checksum is accidentally correct.

 Thanks for the bug report.  Perhaps tar could be modified to not only 
 look at the checksum, but also attempt to decode the first header (as a 
 sort of larger checksum).  That would have fixed your problem and would 
 fix the typical case of this sort of thing, though I suppose it still 
 wouldn't work in general.

Yesterday, I reconstructed the data from the last mail and it is obvious that 
the checksum is not correct. Star has absolutely no problem dealing with that
archive (except that it reports that the stream is not unpackable). Star 
detects both independently: 

- this is not a tar archive but a bzip2 compressed stream

- the checksum is not correct

so this problem in gtar is not caused by what you believe.

I am not sure whether you remember this, but since around 1994, gtar has had 
strange problems that frequently result in "Skipping to next header" messages.
At first, these problems were reported only with archives created by star and 
were ignored, but since around 2000 there have also been reports with archives 
created by gtar. About 10 years ago, a change reduced the probability, but it 
did not completely fix the problem.

As a side note: tar is also confused by this archive and reports:

tar tvf /tmp/x.tar 
tar: Blockgröße = 2
tar: Warnung: tar-Datei aufgrund von Quersumme mit Vorzeichen erzeugt
tar: Verzeichnis-Quersummenfehler
(in English: "block size = 2", "warning: tar file generated with signed 
checksum", "directory checksum error")
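The "signed checksum" warning points at a historic ambiguity: the header checksum is the sum of all 512 header bytes with the checksum field counted as eight ASCII spaces, summed as unsigned bytes per POSIX, while some very old tar implementations summed signed chars. A Python sketch computing both variants:

```python
import io
import tarfile

def tar_checksums(header: bytes):
    """Checksum over the 512-byte header with the checksum field
    (bytes 148..155) replaced by eight spaces: unsigned per POSIX,
    plus the historic signed-char variant."""
    data = header[:148] + b" " * 8 + header[156:]
    unsigned = sum(data)
    signed = sum(b - 256 if b >= 128 else b for b in data)
    return unsigned, signed

# Compare against the checksum Python's tarfile stores.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tf:
    tf.addfile(tarfile.TarInfo("file"))

header = buf.getvalue()[:512]
stored = int(header[148:156].replace(b"\x00", b" ").strip(), 8)
unsigned, signed = tar_checksums(header)
print(stored, unsigned, signed)
```

The two sums differ only when the header contains bytes above 0x7f (e.g. non-ASCII file names), which is exactly when a reader that accepts only one variant reports a checksum error.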

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] [patch v3] Bug / question in tar

2014-03-31 Thread Joerg Schilling
Paul Eggert egg...@cs.ucla.edu wrote:

 Joerg Schilling wrote:
  Gtar does not read the files in this case:

 Hmm, it read the files for the case that I tried.  Perhaps it reads 
 sometimes and not others.  Either way, it appears that I was wrong and 
 this is not an entirely theoretical discussion; also, from others' 
 comments it appears that some practical uses run faster due to gtar's 
 optimizations, which sounds like a good thing.

The only case I am aware of where it reads files is when it scans for sparse 
files.

What you call gtar's optimizations is what I would call unexpected side 
effects. Star shows that there is no need to have these side effects.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] [patch v3] Bug / question in tar

2014-03-30 Thread Joerg Schilling
Tim Kientzle t...@kientzle.com wrote:

 I'm curious.  If someone types the following command:

tar cf /dev/null some files

 What do you think they expect to happen?

Of course they expect all files to be read. POSIX does not allow different 
behavior just because the output is connected to a different sink.

Maybe this is why n-1 tar implementations implement a uniform behavior.
Star implements a special documented option for the case that a user is 
interested in quickly getting the size of the archive to be created.
If one user (amanda) expects side effects, I would see this as a bug.

tar cf /dev/null some files 

Is used by many people to do one of the following:

-   Do speed tests for the underlying filesystem

-   Give the OS the opportunity to cache data and meta data

-   Make sure that all data and meta data in a filesystem is read
in order to trigger disks to do auto-reallocation on blocks
that are going to become bad.

-   Make sure that all data and meta data in a filesystem is readable

Gtar does not allow this to be done easily and keep in mind that many OS 
implementations have a significant performance loss when using pipes.
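All four use cases rely on the same effect: every byte of every file is pulled through the filesystem. A hedged Python equivalent of what `tar cf /dev/null dir` is expected to do (read everything, write nothing):

```python
import os

def read_tree(root: str) -> int:
    """Read every regular file below root and discard the data --
    the effect users expect from `tar cf /dev/null root`."""
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                with open(os.path.join(dirpath, name), "rb") as f:
                    while chunk := f.read(1 << 16):
                        total += len(chunk)
            except OSError:
                pass  # skip unreadable files, as tar would report and continue
    return total
```

Per the measurements quoted later in this thread, timing such a walk against `gtar cf /dev/null` makes the missing reads obvious: the walk is I/O-bound, while gtar returns almost immediately.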

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] [patch v3] Bug / question in tar

2014-03-30 Thread Joerg Schilling
Paul Eggert egg...@cs.ucla.edu wrote:

 Tim Kientzle wrote:
  If you really believe that sending output to /dev/null should not do 
  anything, make it a fatal error so people won't rely on it.

 That would be silly, as it defeats the whole point of having a /dev/null 
 for output.

 At this point we're arguing only about theory, since GNU tar actually 
 does read the files in this case.  But in other cases, programs avoid 

It seems that you are mistaken. Gtar does not read the files in this case:

gtar cf /dev/null /usr
gtar: Entferne führende ?/? von Elementnamen
gtar: Entferne führende ?/? von Zielen harter Verknüpfungen
(in English: "Removing leading ?/? from member names", "Removing leading ?/? 
from hard link targets")
49.358r 2.940u 7.180s 20% 0M 0+0k 0st 0+0io 0pf+0w

star cf /dev/null /usr
star: 763471 blocks + 0 bytes (total of 7817943040 bytes = 7634710.00k).
9:38.746r 4.470u 48.610s 9% 0M 0+0k 0st 0+0io 0pf+0w

It is unlikely that gtar is nearly 12x more efficient than star.

But let us check star in size estimation mode that does not read files:

star c -nullout /usr  
star: 763471 blocks + 0 bytes (total of 7817943040 bytes = 7634710.00k).
40.308r 1.080u 6.070s 17% 0M 0+0k 0st 0+0io 0pf+0w

And as this is on Solaris, we have /dev/zero, so let us check gtar to /dev/zero:

gtar cf /dev/zero /usr
gtar: Entferne führende ?/? von Elementnamen
gtar: Entferne führende ?/? von Zielen harter Verknüpfungen
10:06.437r 8.300u 48.370s 9% 0M 0+0k 0st 0+0io 0pf+0w


 input as an optimization, and that's perfectly all right.  For example, 
 'diff FOO FOO' doesn't read FOO twice, and there's nothing wrong with 

While diff may get its result without knowing how this works, tar is known to 
read the files.


 It sounds like enough people are misusing GNU tar in the way you 
 describe that, if we improved its performance in this case, we'd need to 
 add a --be-stupid option so that tar would continue to read data that it 
 doesn't need to.  (Perhaps you could come up with a better name for the 
 option.  :-)

Tar reads the files in such a case; it is gtar that behaves differently from 
other tar implementations.

BTW: The '?' chars in the output from gtar lead me to assume a bug. I cannot 
believe this was intentional.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] [patch v3] Bug / question in tar

2014-03-26 Thread Joerg Schilling
Pavel Raiskup prais...@redhat.com wrote:

 Thanks a lot for processing this.  This is kind of philosophical thread
 and I recall one thing:  should we take the detection of /dev/null output
 as an OK exception?

I believe that this gtar behavior is a bug. It does not do what people expect, 
and on platforms that do not have /dev/zero, there is no way to tell gtar to do 
what people expect.

Star has a special option -onull for the case that the user does not like the 
files to be read.



Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] seek_hole proposal

2014-03-05 Thread Joerg Schilling
Pavel Raiskup prais...@redhat.com wrote:

 Hello all,

 I am trying to prepare patch which would reuse lseek's
 SEEK_HOLE/SEEK_DATA, the [v1] is attached.  Some info:

 Note that after discussion [1] I still think that existing ST_IS_SPARSE
 macro is better for file-sparseness detection than using SEEK_HOLE (not
 worth having additional syscalls open~seek~close).

Your code is not compatible with the SEEK_HOLE interface. A file is sparse if 
pathconf()/fpathconf(f, _PC_MIN_HOLE_SIZE) returns a positive number 
and lseek(f, (off_t)0, SEEK_HOLE) returns a number < stat.st_size.

see man pathconf:

 11.  If a filesystem supports the reporting of holes
      (see lseek(2)), pathconf() and fpathconf() return a
      positive number that represents the minimum hole
      size returned in bytes. The offsets of holes
      returned will be aligned to this same value. A
      special value of 1 is returned if the filesystem
      does not specify the minimum hole size but still
      reports holes.

In other cases, the file still may be sparse, but the filesystem does not 
support SEEK_HOLE. I doubt, e.g., that Linux correctly implements SEEK_HOLE
for NFS.
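On systems where the filesystem reports holes, the check described above maps directly onto lseek: the first hole of a sparse file starts before end-of-file, while a filesystem that cannot report holes returns one virtual hole at EOF. A sketch using Python's binding of the same interface (the Solaris-specific pathconf _PC_MIN_HOLE_SIZE probe is omitted here):

```python
import os

def is_sparse(fd: int) -> bool:
    """True if the first hole starts before end-of-file.  Filesystems
    that cannot report holes place a single virtual hole at EOF,
    so fully-allocated and unsupported files both yield False."""
    size = os.fstat(fd).st_size
    if size == 0:
        return False
    try:
        first_hole = os.lseek(fd, 0, os.SEEK_HOLE)
    except OSError:
        return False  # lseek rejects SEEK_HOLE entirely
    os.lseek(fd, 0, os.SEEK_SET)  # restore the file offset
    return first_hole < size
```

Note this only detects files the filesystem reports as sparse; a file can be physically sparse on a filesystem without SEEK_HOLE support, which is exactly the NFS caveat above.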

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] adding ACLs when there are none

2014-03-05 Thread Joerg Schilling
Pavel Raiskup prais...@redhat.com wrote:

 On Wednesday, March 05, 2014 05:06:06 Linda A. Walsh wrote:
  Pavel Raiskup wrote:
   Or could you give an example?  What *exactly* do you expect the --acls
   should behave by default?  Combine existing acls in parent directory
   (default acls) with the stored in archive?
  
   Thanks, Pavel
  
  -

  If the SetGid bit is set on a directory on linux, it is usually
  propagated to lower-level dirs to permit a particular type of
  access to be propagated to lower-level files and dirs.

 The _default_ seems to be matter of taste.  Looking at how the SetGid
 works in GNU tar, the bit is inherited from parent by default (no
 additional option passed).  But when you specify '-p' option, then the bit
 is not inherited as you want (the permissions stored in archive have a
 priority).  I would rather take --acls similarly to -p in this regard.

This does not seem to be correct.

The BSD sgroup bit on directories propagates only to directories, not to 
files. Default ACLs propagate to files as well.

Note that the behavior in star was defined in 2001 after talking to various 
people. gtar should behave similarly.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily



Re: [Bug-tar] paxutils: unable to work over ipv6

2014-01-29 Thread Joerg Schilling
Pavel Raiskup pa...@raiskup.cz wrote:

 Hi,

 $Subject is true because of the gethostbyname call from paxutils, which works
 for ipv4 only.  We should use getaddrinfo these days to check for an
 existing hostname/ip address.

not only this

The GNU remote tape interface does not support the recent version of the 
protocol and as a result may destroy tape content in case client and server 
are not on the same OS.

This problem has been addressed by the support in star for 19 years, and that 
support was extended in 2001.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


