Re: dpkg feature implementation

2010-01-05 Thread Mark Brown
On Tue, Jan 05, 2010 at 06:59:40PM +0530, dE . wrote:

 The problem is you have to make these DVDs/CDs - or storage media in
 general. Windows people are not willing to do that... they just want
 click and install.

It really sounds like what you want here is some kind of apt frontend
that can process big package bundles, rather than something in dpkg itself.


-- 
To UNSUBSCRIBE, email to debian-dpkg-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Re: Thoughts about tdebs

2008-12-03 Thread Mark Brown
On Tue, Dec 02, 2008 at 05:51:30PM +0100, Thomas Viehmann wrote:
 Raphael Hertzog wrote:

  I guess you understand better the idea by now. This needs more thoughts
  but I wanted to share it so that you can think about it too.

 From what I understand of your idea, you seem to think about
 translations mainly as updates. While it is one of the goals to enable
 translation-only updates, it is not quite obvious to me what your
 proposal has to offer in terms of splitting translations out of the
 debs, which is an explicit goal. Also, the proposal specifically aims at
 limiting the amount of data that the archive and apt have to handle.

Surely translations can be modeled as updates to translationless .deb
files (i.e. you have a .deb with no translations and then patch that
package to add the translations)?

-- 
You grabbed my hand and we fell into it, like a daydream - or a fever.





Re: Generated changes and patch systems

2008-05-28 Thread Mark Brown
On Tue, May 27, 2008 at 06:06:28PM -0700, Russ Allbery wrote:
 Neil Williams [EMAIL PROTECTED] writes:

  With these gtk-doc files, it's not so much that the tmpl/*.sgml files
  are generated but that a tool essential to the build modifies them in a
  way that cannot be patched because the results of the patches are
  variable according to that third party tool.

 If the tool behaves and only behaves in that way (I haven't checked), that
 tool is broken and we should fix that tool.

I've run into a similar situation before with a vaguely borked upstream
build system - it was straightforward enough to work around the problem
by moving the files out of the way before the build and then restoring
them from the backup, if one was present, when clean was run.




Re: git bikeshedding (Re: triggers in dpkg, and dpkg maintenance)

2008-02-29 Thread Mark Brown
On Thu, Feb 28, 2008 at 08:51:41PM +0100, Raphael Hertzog wrote:
 On Thu, 28 Feb 2008, Ian Jackson wrote:

  Does this not also suffer from the problem that branches made from my
  triggers branch become unuseable or difficult to merge ?

 git merge --squash is more or less equivalent to applying the patch
 corresponding to the whole branch. So it will also break merging from
 other branches based on the merged branch.

Well, they'd need to be squashed down too (or similar) if they were
merged, but that just reduces to the same problem you've already got:
keeping the history of the triggers branch out of mainline.
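For what it's worth, the effect Raphael describes is easy to see in a
throwaway repository; the branch and file names below are made up, not
the actual dpkg ones:

```shell
# Demonstrate that "git merge --squash" collapses a branch into a single
# commit on mainline, discarding the branch's own history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
echo base > file.c
git add file.c
git commit -qm "initial"
git checkout -qb triggers
echo step1 >> file.c && git commit -qam "triggers: step 1"
echo step2 >> file.c && git commit -qam "triggers: step 2"
git checkout -q "$main"
# --squash stages the branch's combined diff without recording a merge,
# so the two "triggers" commits never enter mainline's history
git merge --squash triggers > /dev/null
git commit -qm "Add triggers support (squashed from triggers branch)"
git log --oneline   # two commits: the squash commit and the initial one
```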




Re: git bikeshedding (Re: triggers in dpkg, and dpkg maintenance)

2008-02-29 Thread Mark Brown
On Fri, Feb 29, 2008 at 05:11:17PM +0100, Raphael Hertzog wrote:

 But Guillem wants to review and understand the code. In this process,
 he will rearrange the changes in smaller logical chunks. 

Ah, the impression that has been created on the lists is more that the
patches were being NACKed and wouldn't be looked at until the logs had
been rewritten.  I guess that's a bit less of a bad situation.




Re: git bikeshedding (Re: triggers in dpkg, and dpkg maintenance)

2008-02-26 Thread Mark Brown
On Tue, Feb 26, 2008 at 07:09:46PM +, Ian Jackson wrote:

 I will then merge mainline into my branch, do any conflict resolution
 necessary, and give it a quick test to make sure nothing seems to have
 been broken in the meantime.  Then merging my branch back into
 mainline is, as you say, just a fast forward - so I will simply do
 that, and push the result.

I've no idea if anyone involved would consider it acceptable, but might
merging the triggers branch into mainline with --squash be a suitable
compromise?  This would give a single commit discarding the branch
history, which isn't ideal, but it would avoid having the history from
your branch in the main history.




Re: Triggers status?

2007-10-24 Thread Mark Brown
On Wed, Oct 24, 2007 at 01:27:22PM -0400, Joey Hess wrote:
 Raphael Hertzog wrote:
  I just gave a quick look to your branch:

  - please rebase it on the current master branch (that way you're sure that
there are no conflicts)

 I have yet to see a use of git rebase that I am comfortable with.
 History is very important to me, even the ugly bits. Being able to see
 every misstep that was committed actually helps understanding the end
 result.

Losing the missteps like that is one of the explicit goals that Linus
pushes for with rebasing.  The other thing about it (especially with -i)
is that it supports a workflow where patches are submitted using
git-format-patch and git-send-email and reviews are iterated - this
gives a patch per commit, allowing piecemeal review, but means that you
need to edit history in order to do the iterations.
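The patch-per-commit part of that workflow looks roughly like this
(repository contents and commit messages invented for illustration):

```shell
# Generate one numbered patch file per commit with git format-patch;
# these are what git send-email posts for per-patch review.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo base > README && git add README && git commit -qm "initial"
echo v1 > feature.c && git add feature.c && git commit -qm "Add feature"
echo v2 >> feature.c && git commit -qam "Fix corner case in feature"
# one patch file per commit since HEAD~2; editing history (rebase -i)
# and re-running this regenerates a clean series for the next iteration
git format-patch HEAD~2 -o patches/
ls patches/
```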

 Merging it with the current upstream master would have the same effect
 with a more ugly history, surely?

Yes.  As I understand it, what's going on with the kernel is that it's
aiming for a revision control history that looks a lot like things being
submitted and reviewed as patches, even where that's not technically
what happens at every stage the patch flows through.

  - you can also restructure the series of changes with git rebase -i 
  origin/master
    (provided that origin/master is an up-to-date copy of the master branch
    of the main dpkg repository)
    (merge several commits into a single one, in particular bug fixes with
    the initial implementation, so that while reviewing the diff you don't
    find bugs which are in fact fixed by a subsequent commit)

 This loses history, is highly complex, requires a significant knowledge
 of git (more than I am comfortable with after using it for a month), and
 AFAICS is an enormous time sink for no benefit. I'd rather be coding.

It can be useful if used with a workflow that it supports (I've used it
with ones where individual commits get a fairly large amount of review)
but I'm personally less sure about it when it's just the head of a
branch that's getting reviewed.
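The sort of restructuring Raphael suggests can also be driven
non-interactively, which makes it easy to show; everything below (file
contents, commit messages) is invented:

```shell
# Fold a bug-fix commit into the commit that introduced the bug, by
# rewriting the rebase todo list so the second "pick" becomes "fixup".
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo base > file && git add file && git commit -qm "initial"
echo impl > impl.c && git add impl.c && git commit -qm "Add trigger support"
echo fix >> impl.c && git commit -qam "Fix bug in trigger support"
# GIT_SEQUENCE_EDITOR edits the todo list in place of an interactive editor
GIT_SEQUENCE_EDITOR='sed -i -e 2s/^pick/fixup/' git rebase -i HEAD~2 >/dev/null 2>&1
git log --oneline    # two commits: the reviewer never sees the bug
```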



Re: [PATCH] proposed v3 source format using .git.tar.gz

2007-10-08 Thread Mark Brown
On Mon, Oct 08, 2007 at 12:59:52PM +0200, Frank Lichtenheld wrote:
 On Sun, Oct 07, 2007 at 10:59:18PM -0500, Manoj Srivastava wrote:

  How is this magic done? If I have several dozen feature
   branches, all feeding back and forth, and have made lots and lots of
   changes in my sources, how does git preserve all this information
   without a commensurate increase in size?  This makes the information
   theory geek in me very very skeptical.

 By already using compression in the repository and by aggressively
 storing data as delta against earlier versions (both for binary and
 textual data).

For reference, a current clone I have of Linus' linux-2.6 repository
with full history and working tree is 489M of which 194M is .git.
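The effect Frank describes is easy to measure on a small scale (the
exact sizes will vary with the git version):

```shell
# Show git storing many similar revisions cheaply: repack delta-compresses
# fifty near-identical copies of a file into one small pack.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
seq 1 5000 > data.txt
git add data.txt
git commit -qm "rev 0"
for i in $(seq 1 49); do
    echo "change $i" >> data.txt
    git commit -qam "rev $i"
done
du -sk .git/objects    # fifty individually-compressed loose copies
git repack -adq        # delta-compress everything into a single pack
du -sk .git/objects    # the deltas cost almost nothing per revision
git count-objects      # "0 objects, ..." - no loose objects remain
```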



Re: dpkg flex-based status file parser, for 35% speedup

2007-08-31 Thread Mark Brown
On Fri, Aug 31, 2007 at 02:54:51PM +0100, Ian Jackson wrote:
 Guillem Jover writes (Re: dpkg flex-based status file parser, for 35% 
 speedup):

  At the same time this raises the question of the static linking
  against zlib, libbz2 and libselinux, I've switched those to dynamic

 Why are we not using external programs for this ?  Does anyone know ?

I was told by Scott at one point that it was done that way for robustness.



Re: replicating package compression used by dpkg-deb

2006-09-28 Thread Mark Brown
Followup to the other lists too...

On Thu, Sep 28, 2006 at 12:15:21PM +0100, Mark Brown wrote:
 On Thu, Sep 28, 2006 at 03:45:16AM -0700, Ian Bruce wrote:
 
  It appears that dpkg-deb does not exec gzip, and it's not dynamically
  linked with anything except glibc. I suppose that it's statically linked
  against zlib1g or something like it. So the question is, how can the
  exact compression algorithm used by dpkg-deb be made available for
  another piece of software? Is it something that's well-specified, or is
  it liable to change at any moment?
 
 dpkg is statically linked against zlib (you really don't want to break
 the ability to unpack packages if you can avoid it).  I presume dpkg-deb
 is too though I haven't checked.  The precise output of zlib can vary
 between versions and depending on the options used when running it.
 
  Of course the whole problem would be radically simplified if the
  --rsyncable option were used when the packages were being built. Does
  anyone know what the rationale for not doing that is?
 
 It's not implemented by zlib.  There is an open bug report asking for
 it with a link to a patch, but I wasn't happy with the patch since it
 is enabled using an environment variable, and I wanted to rework it to
 have a proper API before forwarding it upstream.  I've not yet had
 sufficient enthusiasm to do that; current progress suggests that you
 shouldn't hold your breath on that one.
 




Re: replicating package compression used by dpkg-deb

2006-09-28 Thread Mark Brown
On Thu, Sep 28, 2006 at 06:50:53AM -0700, Ian Bruce wrote:

 It turns out that the zlib1g-dev package contains a program called
 minigzip in source form. This is what's needed; minigzip -9
 reproduces exactly the compression used by dpkg-deb, unlike regular
 gzip.

This may not produce identical results if the version of zlib used to
produce the package differs from that on your system.
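A quick way to convince yourself of that: even on one system the exact
output bytes depend on the settings in effect, and version differences
behave the same way.

```shell
# Same input, two compression levels: the compressed bytes differ even
# though both decompress to identical data.
set -e
dir=$(mktemp -d)
cd "$dir"
seq 1 1000 > input.txt
# -n omits the name/timestamp header fields, so only the compressor
# settings influence the output bytes
gzip -n -1 < input.txt > level1.gz
gzip -n -9 < input.txt > level9.gz
cmp -s level1.gz level9.gz && echo identical || echo different   # prints "different"
# both still round-trip to the same data, of course
gunzip -c level9.gz | cmp -s - input.txt && echo "round-trips OK"
```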




Re: Bug#151591: asterisk Re: Processed: pending

2002-07-05 Thread Mark Brown
On Fri, Jul 05, 2002 at 08:18:02PM +1000, Mark Purcell wrote:

 The asterisk bug report also states that asterisk isn't functional when
 run from the command line either, which wouldn't be dependent on dpkg -
 unless dpkg is installing with incorrect file permissions, but I can't
 seem to replicate that condition here.

Yes - it also happened when I invoked the asterisk binary directly.
I'll try to poke around and see if I can see exactly how it breaks.

