Hi,
On Tue, Jul 24, 2007 at 06:01:38PM -0400, Yaroslav Halchenko wrote:
> > I guess an evil solution to *** that doesn't cause problems with ###
> > would be to create a dummy source package that Build-Depends: on the
> > exact version of the package it builds, so that uploads include a
> > >...<
On Tue, Jul 24, 2007 at 06:01:38PM -0400, Yaroslav Halchenko wrote:
> Hi,
>
> I am sorry to reincarnate the thread
I think that it is a very good idea indeed. Here is a rough summary of
it: http://wiki.debian.org/DataPackages
Was there any interesting discussion on the subject during Debconf
Hi,
I am sorry to reincarnate the thread, but I just wanted a simple
clarification and also to give a few cents of my thoughts.
> * it's better to have stuff distributed by Debian than sourced
> elsewhere; we're a distribution, distributing is What We Do
> * it's better for users to
Wouter Verhelst wrote [Sun, Jun 10, 2007 at 06:59:49PM +0100]:
> > Honestly, no, this is not true anymore nowadays. With a 500GB SATA
> > hard drive, you're able to have a full Debian mirror (all archs). Such a
> > disk is around 100€ nowadays.
>
> ... but it will break down in three months wi
On Tue, 12 Jun 2007, Wouter Verhelst wrote:
I tried to give a meaningful answer to that in
<[EMAIL PROTECTED]>, but received no reply; I guess
it got drowned in the silly "all disks are reliable!" noise.
Perhaps you may want to read the three final paragraphs there and give
your opinion.
Well
On Tue, Jun 12, 2007 at 02:57:51PM +0200, Andreas Tille wrote:
> On Tue, 12 Jun 2007, Tim Cutts wrote:
>
> >That's not true, unfortunately. They also have different design
> >criteria for duty cycles, and more stringent MTBF testing
> >requirements. There's been a lot of assertion in this thread
On Tue, 12 Jun 2007, Tim Cutts wrote:
That's not true, unfortunately. They also have different design
criteria for duty cycles, and more stringent MTBF testing
requirements. There's been a lot of assertion in this thread,
without any real data, so this post provides links to some hard data
pro
On 11 Jun 2007, at 9:22 pm, Josselin Mouette wrote:
You seem to strongly believe the cheap desktop hard disk is different
from the server hard disk. This is entirely wrong. Apart from 10k and
15k rpm disks, these are all strictly the same. Only the
On Mon, Jun 11, 2007 at 10:11:25PM +0100, Steve McIntyre wrote:
> In article <[EMAIL PROTECTED]> you write:
> >On Monday, 11 June 2007 at 21:16 +0100, Wouter Verhelst wrote:
> >> The point wasn't that you can't set up a professional RAID array using
> >> cheap desktop hard disks;
In article <[EMAIL PROTECTED]> you write:
>On Monday, 11 June 2007 at 21:16 +0100, Wouter Verhelst wrote:
>> The point wasn't that you can't set up a professional RAID array using
>> cheap desktop hard disks; you can, if you really want to, though I
>> wouldn't recommend it. And yes,
On Mon, Jun 11, 2007 at 10:22:32PM +0200, Josselin Mouette wrote:
> On Monday, 11 June 2007 at 21:16 +0100, Wouter Verhelst wrote:
> > The point wasn't that you can't set up a professional RAID array using
> > cheap desktop hard disks; you can, if you really want to, though I
> > wouldn't recommend
On Monday, 11 June 2007 at 21:16 +0100, Wouter Verhelst wrote:
> The point wasn't that you can't set up a professional RAID array using
> cheap desktop hard disks; you can, if you really want to, though I
> wouldn't recommend it. And yes, you're completely free to ignore that
> particular advice, s
On Mon, Jun 11, 2007 at 01:24:34PM -0600, Warren Turkal wrote:
> On Monday 11 June 2007 13:09:40 Roberto C. Sánchez wrote:
> > That may be true when it comes to breakdowns. However, I challenge you
> > to show me a "cheap" desktop disk that is also SCSI or SAS *and*
> > hotpluggable.
>
> While no
On Monday 11 June 2007 13:09:40 Roberto C. Sánchez wrote:
> That may be true when it comes to breakdowns. However, I challenge you
> to show me a "cheap" desktop disk that is also SCSI or SAS *and*
> hotpluggable.
While not SCSI or SAS, there are SATA controllers that support hotplugging
drives.
On Mon, Jun 11, 2007 at 07:28:13PM +0200, Mike Hommey wrote:
> Actual data seems to show the cheap desktop disks are not worse than
> so-called server-class disks.
My "actual data" does not back up your claim. Moreover, desktop-class
hard disks never support hotplugging, which you really want for
On Mon, Jun 11, 2007 at 07:28:13PM +0200, Mike Hommey wrote:
> On Sun, Jun 10, 2007 at 06:59:49PM +0100, Wouter Verhelst <[EMAIL PROTECTED]>
> wrote:
> >
> > A typical 300GB server-class hotpluggable SATA or SAS disk is quite a
> > bit more expensive than a typical desktop-class 500GB hard disk,
On 06/11/07 12:28, Mike Hommey wrote:
On Sun, Jun 10, 2007 at 06:59:49PM +0100, Wouter Verhelst <[EMAIL PROTECTED]>
wrote:
On Tue, Jun 05, 2007 at 11:31:37AM +0200, Pierre Habouzit wrote:
On Tue, Jun 05, 2007 at 10:27:26AM +0200, Marco d'Itri wrote:
Diskspace *is* a problem for mirrors, as is
On Sun, Jun 10, 2007 at 06:59:49PM +0100, Wouter Verhelst <[EMAIL PROTECTED]>
wrote:
> On Tue, Jun 05, 2007 at 11:31:37AM +0200, Pierre Habouzit wrote:
> > On Tue, Jun 05, 2007 at 10:27:26AM +0200, Marco d'Itri wrote:
> > > Diskspace *is* a problem for mirrors, as is bandwidth in many countries.
>
On Tue, Jun 05, 2007 at 11:31:37AM +0200, Pierre Habouzit wrote:
> On Tue, Jun 05, 2007 at 10:27:26AM +0200, Marco d'Itri wrote:
> > Diskspace *is* a problem for mirrors, as is bandwidth in many countries.
> > Also, you should think about this issue not just in the context of the
> > single package
On 10 Jun 2007, at 6:38 pm, Steffen Moeller wrote:
On Sunday 10 June 2007 17:20:54 you wrote:
On 9 Jun 2007, at 11:27 am, Steffen Moeller wrote:
Once a (computational) biologist starts a new
project, (s)he wants the latest data no matter what and anything
older than
three months (or a week so
On Sunday 10 June 2007 17:20:54 you wrote:
> On 9 Jun 2007, at 11:27 am, Steffen Moeller wrote:
> > Once a (computational) biologist starts a new
> > project, (s)he wants the latest data no matter what and anything
> > older than
> > three months (or a week sometimes) is likely not to be acceptable
On Sat, 9 Jun 2007, Steffen Moeller wrote:
It would be lovely if we could agree on a set of databases to support in
Debian and to have a permanent location in the file system for them. For the
reasons that Tim has already outlined, I do not see how to distribute the larger
databases as Debian packages
On Wednesday 06 June 2007 13:00:19 Andreas Tille wrote:
> On Wed, 6 Jun 2007, Tim Cutts wrote:
> 0. Find a solution for large data sets in general
> 1. Find a solution for static biological data (I couldn't believe
>that all biological data are really changing that frequently).
>
On Tue, Jun 05, 2007 at 11:14:31PM +1000, Anthony Towns wrote:
>
> Some thoughts on constraints:
Hi all,
there has been much brainstorming since the beginning of this thread, so
I tried to roughly summarise the ideas on the wiki:
http://wiki.debian.org/DataPackages
Unfortunately, I will not
On 6 Jun 2007, at 12:00 pm, Andreas Tille wrote:
On Wed, 6 Jun 2007, Tim Cutts wrote:
... (some interesting points)
There were many valid points in your mail but even if the issue
was raised at the example of biological data it is a more general
issue for others as well. It might be that we
Frank Küster wrote:
Santiago Vila <[EMAIL PROTECTED]> wrote:
On Wed, 6 Jun 2007, Tim Cutts wrote:
(aside: I'd love it if we could have some sort of "user package"
system which could allow non-root users to install software packages
in areas they have access to, and yet have full de
Santiago Vila <[EMAIL PROTECTED]> wrote:
> On Wed, 6 Jun 2007, Tim Cutts wrote:
>
>> (aside: I'd love it if we could have some sort of "user package"
>> system which could allow non-root users to install software packages
>> in areas they have access to, and yet have full dependency checking
>> on
On Wed, 6 Jun 2007, Tim Cutts wrote:
... (some interesting points)
There were many valid points in your mail but even if the issue
was raised at the example of biological data it is a more general
issue for others as well. It might be that we could:
0. Find a solution for large data sets i
On Wed, 6 Jun 2007, Tim Cutts wrote:
> (aside: I'd love it if we could have some sort of "user package"
> system which could allow non-root users to install software packages
> in areas they have access to, and yet have full dependency checking
> on the main system packages)
dpkg --admindir=$HOME
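The one-liner above can be fleshed out a little. A minimal sketch of what a per-user dpkg database could look like, assuming a .deb whose contents relocate cleanly under $HOME (the paths and the package name are illustrative, not a tested recipe):

```shell
# Hypothetical per-user dpkg setup; all paths are examples.
mkdir -p ~/.local/dpkg/updates ~/.local/dpkg/info ~/.local/inst
touch ~/.local/dpkg/status            # dpkg needs a status file to exist

# Unpack a package under $HOME instead of /, tracking it in the
# private database rather than /var/lib/dpkg:
dpkg --force-not-root \
     --admindir="$HOME/.local/dpkg" \
     --instdir="$HOME/.local/inst" \
     -i some-package.deb

# Dependency state is then queryable per-user:
dpkg --admindir="$HOME/.local/dpkg" -l
```

The catch, and likely why no full "user package" system exists, is that maintainer scripts and dependencies on the system database are not satisfied this way; this only demonstrates the private-database mechanism the one-liner hints at.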
Charles Plessy <[EMAIL PROTECTED]> wrote:
> Obviously, this strongly increases the size that would be taken on the
> mirrors. Also, in the (mid-term) future, Debian can have many more
> mainstream tools, and I am quite sure that they do not all use the same
> format. So there is the risk of a pack
On 5 Jun 2007, at 9:47 pm, Roger Leigh wrote:
Anthony Towns <[EMAIL PROTECTED]> writes:
On Tue, Jun 05, 2007 at 06:28:53PM +0900, Charles Plessy wrote:
On Tue, Jun 05, 2007 at 10:09:07AM +0200, Michael Hanke wrote:
My question is now: Is it r
On Tue, Jun 05, 2007 at 01:32:34PM -0400, Joey Hess <[EMAIL PROTECTED]> wrote:
> Anthony Towns wrote:
> > Debug packages: (369MB) (not arch:all)
> > 53959746 boson-dbg
> > 55430908 icedove-dbg
> > 56274922 koffice-dbg
> > 59787420 iceape-dbg
> > 86404478 libgl1-mesa-dri-dbg
On Tue, Jun 05, 2007 at 09:47:37PM +0100, Roger Leigh wrote:
> Anthony Towns <[EMAIL PROTECTED]> writes:
> >
> > Are either of you going to debconf, or able to point out some example
> > large (free?) data sets that should be packaged like this as a test case
> > for playing with over debconf?
>
Anthony Towns <[EMAIL PROTECTED]> writes:
> On Tue, Jun 05, 2007 at 06:28:53PM +0900, Charles Plessy wrote:
>> On Tue, Jun 05, 2007 at 10:09:07AM +0200, Michael Hanke wrote:
>> > My question is now: Is it reasonable to provide this rather huge amount
>> > of data in a package in the archive?
>>
On Wed, 6 Jun 2007, Anthony Towns wrote:
Are either of you going to debconf, or able to point out some example
large (free?) data sets that should be packaged like this as a test case
for playing with over debconf?
For a first shot we could play with sauerbraten-data. I just
stumbled upon it
On Wed, Jun 06, 2007 at 01:58:33AM +1000, Anthony Towns wrote:
> On Tue, Jun 05, 2007 at 06:28:53PM +0900, Charles Plessy wrote:
> > On Tue, Jun 05, 2007 at 10:09:07AM +0200, Michael Hanke wrote:
> > > My question is now: Is it reasonable to provide this rather huge amount
> > > of data in a pac
Hi!
* Anthony Towns <[EMAIL PROTECTED]> [070605 17:42]:
> Moving game data elsewhere would require some way for games in main to
> depend on data elsewhere.
That's one of the topics the pkg-games team is planning to address during a
BoF at DebConf7 (besides some other stuff). Hints welcome ;)
Yours
Anthony Towns wrote:
> Debug packages: (369MB) (not arch:all)
> 53959746 boson-dbg
> 55430908 icedove-dbg
> 56274922 koffice-dbg
> 59787420 iceape-dbg
> 86404478 libgl1-mesa-dri-dbg
These seem to be built with separated debugging symbols. They could
probably still be reduc
On Tue, Jun 05, 2007 at 06:28:53PM +0900, Charles Plessy wrote:
> On Tue, Jun 05, 2007 at 10:09:07AM +0200, Michael Hanke wrote:
> > My question is now: Is it reasonable to provide this rather huge amount
> > of data in a package in the archive?
> many thanks for bringing this crucial question o
On Tue, Jun 05, 2007 at 03:58:08PM +0200, Frans Pop wrote:
> IMO it would be worth it if we could split out gigabytes of data from the
> main archive and thus significantly reduce the bandwidth needed for
> mirror syncs. Especially if that data is only used by an extremely small
> subset of user
Frans Pop wrote:
On Tuesday 05 June 2007 15:14, Anthony Towns wrote:
I'm not sure if avoiding duplicating the data (1G of data is bad, but
1G of the same data in a .orig.tar.gz _and_ a .deb is absurd) is enough
to just use the existing archive and mirror network, or if it'd still
be worth set
On 06/05/07 08:58, Frans Pop wrote:
On Tuesday 05 June 2007 15:14, Anthony Towns wrote:
I'm not sure if avoiding duplicating the data (1G of data is bad, but
1G of the same data in a .orig.tar.gz _and_ a .deb is absurd) is enough
to just use the existing archive and mirror network, or if it'd st
On Tuesday 05 June 2007 15:14, Anthony Towns wrote:
> I'm not sure if avoiding duplicating the data (1G of data is bad, but
> 1G of the same data in a .orig.tar.gz _and_ a .deb is absurd) is enough
> to just use the existing archive and mirror network, or if it'd still
> be worth setting up a separ
On Tue, 5 Jun 2007, Anthony Towns wrote:
Bug#38902 for hysterical interest, btw.
Ahh, my memory that this topic came up in 2000 was not that bad -
just missed it by 7 months.
I wonder whether there is a more verbose explanation for tagging
it wontfix
http://bugs.debian.org/cgi-bin/bugrep
On Tue, Jun 05, 2007 at 06:28:53PM +0900, Charles Plessy wrote:
> On Tue, Jun 05, 2007 at 10:09:07AM +0200, Michael Hanke wrote:
> > My question is now: Is it reasonable to provide this rather huge amount
> > of data in a package in the archive?
> > An alternative to a dedicated package would be
[EMAIL PROTECTED] (Marco d'Itri) wrote:
>> > Also, you should think about this issue not just in the context of the
>> > single package you are interested in but as a general policy.
>> I was hoping to give that impression...
> Then it should be obvious that it's a bad idea to add to the archive
>
On Tue, 5 Jun 2007, Marco d'Itri wrote:
Then it should be obvious
obvious = common sense
... but the "common sense" has to be defined in a technical document.
that it's a bad idea to add to the archive
multiple packages each containing hundreds of megabytes of data which are
only useful f
On Jun 05, Michael Hanke <[EMAIL PROTECTED]> wrote:
> I believe this is a valid problem. I think that is exactly the reason why
> the Debian archive also provides the sources of each package
> (orig.tar.gz) and does not simply point to the upstream sites while
> keeping only the diffs in the archi
On Tue, Jun 05, 2007 at 10:27:26AM +0200, Marco d'Itri wrote:
> On Jun 05, Michael Hanke <[EMAIL PROTECTED]> wrote:
> > - diskspace is rather cheap and bandwidth should be no problem as the
> >number of downloads will remain relatively low.
> Diskspace *is* a problem for mirrors, as is bandwid
On Tue, Jun 05, 2007 at 10:09:07AM +0200, Michael Hanke wrote:
>
> My question is now: Is it reasonable to provide this rather huge amount
> of data in a package in the archive?
>
> An alternative to a dedicated package would be to provide a
> download/install script for the data (like the mst
Hi,
On Tue, Jun 05, 2007 at 10:27:26AM +0200, Marco d'Itri wrote:
> On Jun 05, Michael Hanke <[EMAIL PROTECTED]> wrote:
> > - much easier to handle for users (thinking of offline machines)
> I could not care less, since the number of users affected is with very
> good approximation zero.
Agreed
On Tue, 2007-06-05 at 10:37 +0200, Andreas Tille wrote:
> We also have some funny 3D games with huge data packages. So
> where is the borderline for this? Does it make sense to install
> a data repository that is not mirrored?
I suggest that it makes sense to a) package the data as .debs, for
eas
On Tue, 5 Jun 2007, Marco d'Itri wrote:
Also, you should think about this issue not just in the context of the
single package you are interested in but as a general policy.
I think because Michael actually is thinking about a general
policy he just asked this question here. He was asking for
2007/6/5, Michael Hanke <[EMAIL PROTECTED]>:
Hi,
I'm packaging some neuroimaging tools that come with datasets that
are required for those tools to work properly. The size of these
datasets is up to 400 MB (some others at least well over 100 MB).
My question is now: Is it reasonable to provide
On Jun 05, Michael Hanke <[EMAIL PROTECTED]> wrote:
> My question is now: Is it reasonable to provide this rather huge amount
> of data in a package in the archive?
Not for a niche package, at least.
> - much easier to handle for users (thinking of offline machines)
I could not care less, since
Hi,
I'm packaging some neuroimaging tools that come with datasets that
are required for those tools to work properly. The size of these
datasets is up to 400 MB (some others at least well over 100 MB).
My question is now: Is it reasonable to provide this rather huge amount
of data in a package in