Bug#963191: RFH: aufs

2020-06-29 Thread Timo Weingärtner
Hello,

On 20.06.20 at 13:26, Bastian Blank wrote:
> On Sat, Jun 20, 2020 at 12:14:17PM +0200, Jan Luca Naumann wrote:
> > At the moment aufs is nearly unmaintained since I do not have time due to
> > personal issues. Therefore, I would be happy if there is somebody to
> > co-maintain the package.
> Since the kernel has supported overlayfs for some time now, what blocks
> its removal?

There are Debian installations on filesystems that are incompatible with 
overlayfs, for example xfs without d_type.

I ran into this while trying to get rid of aufs.


Greetings
Timo
-- 
ITscope GmbH
Ludwig-Erhard-Allee 20
D-76131 Karlsruhe

Tel: +49 721 627376-0
Fax: +49 721 66499175

https://www.itscope.com

Handelsregister: AG Mannheim, HRB 232782 Sitz der Gesellschaft: Karlsruhe
Geschäftsführer: Alexander Münkel, Benjamin Mund, Stefan Reger



Bug#722950: ITP: ssh-agent-filter -- filtering proxy for ssh-agent

2013-09-14 Thread Timo Weingärtner
Package: wnpp
Severity: wishlist
Owner: Timo Weingärtner t...@tiwe.de

* Package name: ssh-agent-filter
  Version : 0.2
  Upstream Author : Timo Weingärtner t...@tiwe.de
* URL : https://github.com/tiwe-de/ssh-agent-filter
* License : GPL3+
  Programming Lang: C++, Shell
  Description : filtering proxy for ssh-agent

This package solves the all-or-nothing problem regarding ssh-agent
forwarding. It contains:
 * ssh-agent-filter, the filtering proxy itself
 * afssh, a wrapper around ssh-agent-filter and ssh

Packaging is prepared, an upload to mentors is waiting for the bug number.





Bug#662080: ITP: hadori -- Hardlinks identical files

2012-03-04 Thread Timo Weingärtner
Hello Julian Andres,

On 2012-03-04 at 12:31:39, you wrote:
> But in any case, avoiding yet another tool with the same security
> issues (CVE-2011-3632) and bugs (and more bugs) as we currently
> have would be a good idea.
>
> hadori bugs:
>   - Race, possible data loss: Calls unlink() before link(). If
>     link() fails the data might be lost (best solution appears
>     to be to link to a temporary file in the target directory
>     and then rename to target name, making the replacement
>     atomic)

I copied that from ln -f, which then has the same bug.
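
For what it's worth, the replacement scheme suggested in the quoted report
could look roughly like the following minimal sketch. This is not hadori's (or
ln's) actual code, and the fixed ".tmp" suffix is a placeholder; a real
implementation would pick a unique temporary name.

#include <cstdio>       // std::rename
#include <string>
#include <unistd.h>     // link, unlink

// hardlink "keep" over "target" without a window in which target's data is lost
bool replace_with_link(std::string const & keep, std::string const & target)
{
    std::string tmp = target + ".tmp";              // placeholder temporary name
    if (link(keep.c_str(), tmp.c_str()) == -1)
        return false;                               // target is untouched
    if (std::rename(tmp.c_str(), target.c_str()) == -1) {
        unlink(tmp.c_str());                        // clean up; target still untouched
        return false;
    }
    return true;                                    // rename(2) replaced target atomically
}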

>   - Error checking: Errors when opening files or reading
>     files are not checked (ifstream uses the failbit and
>     stuff).

If only one of the files fails, nothing bad happens. If both fail, bad things
might happen, that's right.

> Common security issue, same as CVE-2011-3632 for Fedora's hardlink:
>   [Unsafe operations on changing trees]
>   - If a regular file is replaced by a non-regular one before an
>     open() for reading, the program reads from a non-regular file
>   - A source file is replaced by one file with different owner
>     or permissions after the stat() and before the link()
>   - A component of the path is replaced by a symbolic link after
>     the initial stat()ing and readdir()ing. An attacker may use
>     that to write outside of the intended directory.
>
> (Fixed in Fedora's hardlink, and my hardlink by adding a section
>  to the manual page stating that it is not safe to run the
>  program on changing trees).

I think that kind of bug will stay until it is possible to open/link by inode
number. Perhaps the *at() functions can help with the file currently being examined.
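
A minimal sketch of the direction the *at() functions allow, assuming the
directory is kept open during traversal (invented helper, not hadori's code):

#include <dirent.h>     // DIR, dirfd
#include <fcntl.h>      // AT_SYMLINK_NOFOLLOW
#include <sys/stat.h>   // fstatat

// stat one directory entry relative to the already-open directory, so a path
// component swapped for a symlink after the traversal cannot redirect us
bool stat_entry(DIR * dir, char const * name, struct stat & st)
{
    return fstatat(dirfd(dir), name, &st, AT_SYMLINK_NOFOLLOW) == 0;
}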

Right now I have only used it for my backups, which are only accessible by me
(and root).

> Possibly hardlink-only bugs:
>   - Exaggeration of sizes. hardlink currently counts every
>     link replaced as -st_size, even if st_nlink > 1. I don't
>     know what hadori does there.

hadori does not have statistics. They should be easy to add, but I had no use 
for them.

> You can also drop your race check. The tool is unsafe on
> changing trees anyway, so you don't need to check whether
> someone else deleted the file, especially if you're then
> linking to it anyway.

I wanted it to exit when something unexpected happens.

> I knew that there were problems on large trees in 2009, but got nowhere with
> a fix in Python. We still have the two passes in hardlink and thus need to
> keep all the files currently, as I did not carry the link-first mode over
> from my temporary C++ rewrite, as memory usage was not much different in my
> test case. But as my test case was just running on /, the whole thing may
> not be representative. If there are lots of duplicates, link-first can
> definitely help.
>
> The one that works exactly as you want is most likely Fedora's hardlink.

I've searched for other implementations and all the others do two passes when 
one is obviously enough.

> Yes. It looks readable, but also has far fewer features than hardlink (which
> were added to hardlink because of user requests).

I still don't get what --maximize (and --minimize) are needed for. In my 
incremental full backup scenario I get best results with keep-first. When 
hardlinking only $last and $dest (see below) even --maximize can disconnect 
files from older backups.

> > It started with tree based map and multimap, now it uses the unordered_
> > (hash based) versions which made it twice as fast in a typical workload.
>
> That's strange. In my (not published) C++ version of hardlink, unordered
> (multi) maps were only slightly faster than ordered ones. I then rewrote
> the code in C to make it more readable to the common DD who does not
> want to work with C++, and more portable.
>
> And it does not seem correct if you spend so much time in the map, at
> least not without caching. And normally, you most likely do not have
> the tree(s) you're hardlinking on cached.

I have, because I usually run:
$ rsync -aH $source $dest --link-dest $last
$ hadori $last $dest


Greetings
Timo




Bug#662080: ITP: hadori -- Hardlinks identical files

2012-03-03 Thread Timo Weingärtner
Package: wnpp
Severity: wishlist
X-Debbugs-CC: debian-de...@lists.debian.org

   Package name: hadori
Version: 0.2
Upstream Author: Timo Weingärtner t...@tiwe.de
URL: https://github.com/tiwe-de/hadori
License: GPL3+
Description: Hardlinks identical files
 This might look like yet another hardlinking tool, but it is the only one
 which only memorizes one filename per inode. That results in less memory
 consumption and faster execution compared to its alternatives. Therefore
 (and because all the other names are already taken) it's called
 HArdlinking DOne RIght.
 .
 Advantages over other hardlinking tools:
  * predictability: arguments are scanned in order, each first version is kept
  * much lower CPU and memory consumption
  * hashing option: speedup on many equal-sized, mostly identical files

The initial comparison was with hardlink, which got OOM killed with a hundred 
backups of my home directory. Last night I compared it to duff and rdfind 
which would have happily linked files with different st_mtime and st_mode.
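
For reference, the check implied here could look roughly like the sketch
below; the exact criteria are assumed from this mail, not taken from hadori's
source.

#include <sys/stat.h>

// only files whose metadata also matches are candidates for linking
bool linkable(struct stat const & a, struct stat const & b)
{
    return a.st_dev   == b.st_dev       // hardlinks cannot cross filesystems
        && a.st_size  == b.st_size
        && a.st_mode  == b.st_mode
        && a.st_uid   == b.st_uid
        && a.st_gid   == b.st_gid
        && a.st_mtime == b.st_mtime;
}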

I need a sponsor. I'll upload it to mentors.d.n as soon as I get the bug 
number.


Greetings
Timo




Bug#662080: ITP: hadori -- Hardlinks identical files

2012-03-03 Thread Timo Weingärtner
Hello Julian Andres,

On 2012-03-04 at 01:07:42, you wrote:
> On Sun, Mar 04, 2012 at 12:31:16AM +0100, Timo Weingärtner wrote:
>
> > The initial comparison was with hardlink, which got OOM killed with a
> > hundred backups of my home directory. Last night I compared it to duff
> > and rdfind which would have happily linked files with different st_mtime
> > and st_mode.
>
> You might want to try hardlink 0.2~rc1. In any case, I don't think we need
> yet another such tool in the archive. If you want that algorithm, we can
> implement it in hardlink 0.2 using probably about 10 lines. I had that
> locally and it works, so if you want it, we can add it and avoid the
> need for one more hack in that space.

And why is lighttpd in the archive? Apache can do the same ...

> hardlink 0.2 is written in C, and uses a binary tree to map
> (dev_t, off_t) to a struct file which contains the stat information
> plus name for linking. It requires two allocations per file, one for
> the struct file with the filename, and one for the node in the tree
> (well, actually we only need the node for the first file with a
>  specific (dev_t, off_t) tuple). A node has 3 pointers.
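
Just to make the memory accounting concrete, the layout you describe could be
sketched roughly like this (names invented, not taken from hardlink's source):

#include <string>
#include <sys/stat.h>

struct file {
    struct stat st;       // full stat information, kept for later linking
    std::string name;     // path; the C version appends the name to the same
                          // allocation instead of using a separate string
};

struct node {             // binary tree indexed by (dev_t, off_t)
    node * left, * right; // tree links
    file * first;         // first file seen with this key ("3 pointers")
};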

The hardlink I used at that time was written in Python and definitely didn't
do it the way I want.

hadori is written in C++11, which IMHO makes it look a little more readable. It
started with a tree-based map and multimap; now it uses the unordered_ (hash
based) versions, which made it twice as fast in a typical workload.

The main logic is in hadori.C, handle_file, and uses:

std::unordered_map<ino_t, inode const> kept;
std::unordered_map<ino_t, ino_t> to_link;
std::unordered_multimap<off_t, ino_t> sizes;

class inode contains a struct stat, a file name and an Adler checksum, but I
plan to drop the last one because I think the hashing option is no great gain.
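
To make the role of those maps concrete, here is a very rough sketch of a
single-pass, keep-first handle_file. The helpers files_equal and do_link are
invented, to_link and the device number are left out, and the real hadori.C
differs; this is only meant to illustrate the idea.

#include <string>
#include <sys/stat.h>
#include <unordered_map>

struct inode { struct stat st; std::string name; };     // simplified, no checksum

std::unordered_map<ino_t, inode> kept;                  // first name seen per kept inode
std::unordered_multimap<off_t, ino_t> sizes;            // kept inodes grouped by size

bool files_equal(inode const &, inode const &);         // metadata + content check (assumed)
void do_link(inode const &, std::string const &);       // replace name by a hardlink (assumed)

void handle_file(std::string const & name, struct stat const & st)
{
    if (kept.count(st.st_ino))                          // another name of a kept inode
        return;
    inode current{st, name};
    auto range = sizes.equal_range(st.st_size);         // only equal-sized files can match
    for (auto it = range.first; it != range.second; ++it) {
        inode const & candidate = kept.at(it->second);
        if (files_equal(candidate, current)) {          // identical: keep-first, link to
            do_link(candidate, name);                   // the file seen earlier
            return;
        }
    }
    kept.emplace(st.st_ino, current);                   // new content: remember it
    sizes.emplace(st.st_size, st.st_ino);
}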


Greetings
Timo




Bug#534891: ITP: openssh-known-hosts -- known_hosts downloader for OpenSSH

2009-06-28 Thread Timo Weingärtner
On Sunday, 28 June 2009, David Paleino wrote:
> On Sun, 28 Jun 2009 01:47:39 +0200, Timo Weingärtner wrote:
> > Upstream Author: Timo Weingärtner t...@tiwe.de
> > URL: will go to mentors.debian.net as soon as I get the bug
> > number

> This should really be the upstream URL.
I am upstream and there was no public upstream URL at that time.
It can now be found at 
http://mentors.debian.net/debian/pool/main/o/openssh-known-hosts

> > Description: This package allows you to download public hostkeys from
> >  multiple sources and merge them together into one file
> > for use by OpenSSH. Plugins for some types of sources are included, new
> > plugins can easily be written.
>
> This should be an appropriate short description, i.e. 60 characters
> maximum. What you wrote could well be used for the long description.
Actually I used this as the long description; the short description is
"known_hosts downloader for OpenSSH", as in the Subject.


Greetings
Timo





Bug#534891: ITP: openssh-known-hosts -- known_hosts downloader for OpenSSH

2009-06-28 Thread Timo Weingärtner
On Sunday, 28 June 2009, Steve Langasek wrote:
> On Sun, Jun 28, 2009 at 01:47:39AM +0200, Timo Weingärtner wrote:
> > Package: wnpp
> > Severity: wishlist
> > X-Debbugs-CC: debian-de...@lists.debian.org
> >
> > Package name: openssh-known-hosts
> > Version: 0.2
> > Upstream Author: Timo Weingärtner t...@tiwe.de
> > URL: will go to mentors.debian.net as soon as I get the bug number
> > License: GPL2+
> > Description: This package allows you to download public hostkeys from
> >  multiple sources and merge them together into one file
> > for use by OpenSSH. Plugins for some types of sources are included, new
> > plugins can easily be written.
>
> How does this avoid *totally negating* the security value of doing SSH host
> key validation?

Oh, this is missing from the package description: curl can use HTTPS, and the
curl and rsync plugins can do GPG verification.


Greetings
Timo




Bug#534891: ITP: openssh-known-hosts -- known_hosts downloader for OpenSSH

2009-06-27 Thread Timo Weingärtner
Package: wnpp
Severity: wishlist
X-Debbugs-CC: debian-de...@lists.debian.org


   Package name: openssh-known-hosts
Version: 0.2
Upstream Author: Timo Weingärtner t...@tiwe.de
URL: will go to mentors.debian.net as soon as I get the bug number
License: GPL2+
Description: This package allows you to download public hostkeys from
 multiple sources and merge them together into one file for
 use by OpenSSH. Plugins for some types of sources are
 included, new plugins can easily be written.




Bug#397350: Debian packages for vdr-xineliboutput ready, need sponsor

2006-12-28 Thread Timo Weingärtner
On Thursday, 28 December 2006 at 03:21, Thomas Schmidt wrote:

> Thank you very much for your work, but I must admit that you should
> have informed us earlier, because there has already been a package for
> xineliboutput in our svn-repository [1] since August 2006; there was
> just the ITP missing.
>
> Unfortunately your ITP was not sent to debian-devel; had it been, it
> would have been possible to prevent you from doing a lot of double
> work.

Reportbug didn't like my name, so I had to send the mail manually.

> > Would someone please test my packages and upload them to Debian? It will
> > be the first upload since some file headers needed to be fixed upstream
> > first.
>
> Of course we can test your packages and upload them, but I think it would
> be better if you could take a look at the package we already have in
> our repository and if you would continue maintaining the package in
> our svn-repository. (I guess that Tobias Grimm [2], who did most of the
> work with the package until now, would be very happy if you could help
> him with maintaining it.)
>
> (I would be very happy anyway, if you could help us with maintaining
> vdr and vdr-related packages, especially the packages we already have
> in the archive [3], and the other packages in the svn-repository on
> alioth, which are not part of the official archive yet; any help is
> welcome.)

Is there documentation about how your packaging with svn works?

> Giving you commit-access to our repository would be no problem; the
> only thing we need is your username on alioth.debian.org.

My alioth login is timo-guest.


Timo

PS: I'm on the -devel and -changes lists now.
PPS: The -commits list doesn't exist, but is shown on alioth.




Bug#397350: Debian packages for vdr-xineliboutput ready, need sponsor

2006-12-27 Thread Timo Weingärtner
Hello,

I made Debian packages for vdr-xineliboutput.

My mentor Philipp Kern helped me with the previous versions with tips and
testing. Unfortunately he is currently unable to test the packages.

Would someone please test my packages and upload them to Debian? It will be 
the first upload since some file headers needed to be fixed upstream first.

Packages for i386 and amd64 can be found at [1].

The corresponding ITP bug is #397350 [2].


Greetings,
Timo

[1] http://www.stud.uni-karlsruhe.de/~uyavo/debian/
[2] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=397350




Bug#397350: ITP: vdr-xineliboutput -- Simple framebuffer and/or X11 front-end for VDR

2006-11-06 Thread Timo Weingärtner
Package: wnpp
Owner: Timo Weingärtner [EMAIL PROTECTED]
Severity: wishlist

* Package name: vdr-xineliboutput
  Version : 1.0.0~pre6
  Upstream Author : Petri Hintukainen [EMAIL PROTECTED]
* URL : http://www.hut.fi/u/phintuka/vdr/vdr-xineliboutput
* License : GPL
  Programming Lang: C
  Description : Simple framebuffer and/or X11 front-end for VDR

Simple framebuffer and/or X11 front-end for VDR.
(displays OSD and video in raw X/Xv/XvMC window or
Linux framebuffer/DirectFB.)

Support for local and remote frontends.

Built-in image and media player supports playback of most known media
files (avi/mp3/divx/jpeg/...) and network radio/video streams
(http, rtsp, ...) directly from VDR.




Bug#262121: ITP: libpam-require -- PAM module to allow and/or deny particular users and/or groups

2005-09-24 Thread Timo Weingärtner
On Wednesday, 14 September 2005 at 15:50, Timo Weingärtner wrote:
> On Wednesday, 14 September 2005 at 15:13, Steve Langasek wrote:
> > What does this PAM module do that can't already be done with
> > pam_listfile or pam_group?
>
> pam_group is good, exactly what I need. A few weeks ago I searched for such
> a module and only found this one.

pam_group as documented at [1] is what I need; the version in Debian [2] does
not do what I need.
pam_listfile [3] can do it, but needs an additional file to store just one
group name.


Timo Weingärtner


[1] http://www.daemon-systems.org/man/pam_group.8.html
[2] http://www.kernel.org/pub/linux/libs/pam/Linux-PAM-html/pam-6.html#ss6.8
[3] http://www.kernel.org/pub/linux/libs/pam/Linux-PAM-html/pam-6.html#ss6.13




Bug#262121: ITP: libpam-require -- PAM module to allow and/or deny particular users and/or groups

2005-09-14 Thread Timo Weingärtner
retitle 262121 ITP: libpam-require -- PAM module to allow and/or deny 
particular users and/or groups
owner 262121 !
thanks

I want to become the maintainer for this new package.

As I am not (yet) a Debian Developer, I will need a sponsor/mentor for this.
Somebody near the university of Karlsruhe (Germany) would be ideal, since I am
a student there.

The package can currently be found at
deb http://www.stud.uni-karlsruhe.de/~uyavo/debian/ ./
deb-src http://www.stud.uni-karlsruhe.de/~uyavo/debian/ ./

As soon as the BTS-Control-Bot processes this message, I will upload my 
package to sponsors.debian.net.

It would be nice if someone could sponsor this package (and probably others
in the future).


Timo Weingärtner




Bug#262121: ITP: libpam-require -- PAM module to allow and/or deny particular users and/or groups

2005-09-14 Thread Timo Weingärtner
On Wednesday, 14 September 2005 at 15:13, Steve Langasek wrote:
> What does this PAM module do that can't already be done with
> pam_listfile or pam_group?

pam_group is good, exactly what I need. A few weeks ago I searched for such a 
module and only found this one.

Shall we close this ITP?


Thanks for the hint,
Timo Weingärtner

