Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-18 Thread Ulrik Mikaelsson
2011/2/17 Bruno Medeiros :
>
> Yeah, that's true. Some projects, the Linux kernel being one of the best
> examples, are more distributed in nature than not, in actual organizational
> terms. But projects like that are (and will remain) in the minority, and
> probably a very, very small one.
>
Indeed. However, I think it will be interesting to see how things
develop, and whether this will still be the case in the future. The Linux
kernel and a few other projects were probably decentralized from the
start by necessity, filling very different purposes. However, new tools
tend to affect models, which might make this a bit more common in the
future. In any case, it's an interesting time to do software development.


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-18 Thread Bruno Medeiros

On 16/02/2011 17:54, Ulrik Mikaelsson wrote:

2011/2/16 Russel Winder:


Definitely the case.  There can only be one repository that represents
the official state of a given project.  That isn't really the issue in
the move from CVCS systems to DVCS systems.


Just note that not all projects have a specific "state" to represent.
Many projects are centered around the concept of a centralized
project, a "core" team, and all-around central organisation and
planning. Some projects, however (I guess the Linux kernel is a prime
example), have been quite decentralized in their very nature for a
long time.

In the case of KDE, for a centralized example, there is a definite
"project version", which is the version currently blessed by the
central project team. There is centralized project planning,
including meetings, setting out goals for the coming development.

In the case of Linux, it's FAR less obvious. Sure, most people see
master@torvalds/linux-2.6.git as THE Linux version. However, there are
many other trees worth tracking as well, such as the various
distribution trees, which might incorporate many drivers not in
mainline; especially for older stability-oriented kernels, the RHEL or
Debian tree is probably THE version to care about. You might also be
interested in special-environment kernels, such as non-x86 kernels, in
which case you're probably more interested in the central repo for
that architecture, which is rarely Linus's. Also, IIRC, hard and soft
realtime enthusiasts don't look at Linus's tree first either.

Above all, in the Linux kernel there is not much "centralised
planning". Linus doesn't call a big quarterly planning meeting to set
up specific milestones for the next kernel release; instead, at the
beginning of each cycle, he is spammed with things already developed
independently, scratching someone's itch. He then cherry-picks the
things that have got good reviews and are interesting for where he
wants to go with the kernel. That is not to say that there isn't a
lot of coordination and communication, but there isn't a clear
centralized authority steering development in the same way as in many
other projects.

The bottom line is: many projects, even ones using DVCS, are often
centrally organized. However, the Linux kernel is clear evidence that
it is not the only project model that works.


Yeah, that's true. Some projects, the Linux kernel being one of the best 
examples, are more distributed in nature than not, in actual 
organizational terms. But projects like that are (and will remain) in 
the minority, and probably a very, very small one.


--
Bruno Medeiros - Software Engineer


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-16 Thread Ulrik Mikaelsson
2011/2/16 Russel Winder :
>
> Definitely the case.  There can only be one repository that represents
> the official state of a given project.  That isn't really the issue in
> the move from CVCS systems to DVCS systems.
>
Just note that not all projects have a specific "state" to represent.
Many projects are centered around the concept of a centralized
project, a "core" team, and all-around central organisation and
planning. Some projects, however (I guess the Linux kernel is a prime
example), have been quite decentralized in their very nature for a
long time.

In the case of KDE, for a centralized example, there is a definite
"project version", which is the version currently blessed by the
central project team. There is centralized project planning,
including meetings, setting out goals for the coming development.

In the case of Linux, it's FAR less obvious. Sure, most people see
master@torvalds/linux-2.6.git as THE Linux version. However, there are
many other trees worth tracking as well, such as the various
distribution trees, which might incorporate many drivers not in
mainline; especially for older stability-oriented kernels, the RHEL or
Debian tree is probably THE version to care about. You might also be
interested in special-environment kernels, such as non-x86 kernels, in
which case you're probably more interested in the central repo for
that architecture, which is rarely Linus's. Also, IIRC, hard and soft
realtime enthusiasts don't look at Linus's tree first either.

Above all, in the Linux kernel there is not much "centralised
planning". Linus doesn't call a big quarterly planning meeting to set
up specific milestones for the next kernel release; instead, at the
beginning of each cycle, he is spammed with things already developed
independently, scratching someone's itch. He then cherry-picks the
things that have got good reviews and are interesting for where he
wants to go with the kernel. That is not to say that there isn't a
lot of coordination and communication, but there isn't a clear
centralized authority steering development in the same way as in many
other projects.

The bottom line is: many projects, even ones using DVCS, are often
centrally organized. However, the Linux kernel is clear evidence that
it is not the only project model that works.


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-16 Thread Russel Winder
On Wed, 2011-02-16 at 14:51 +, Bruno Medeiros wrote:
[ . . . ]
> That stuff about DVCS not having a central repository is another thing 
> that is being said a lot, but it is only true in a very shallow (and 
> non-useful) way. Yes, in DVCS there are no more "working copies" as in 
> Subversion; now everyone's working copy is a full-fledged 
> repository/clone that in technical terms is a peer of any other repository.
> However, from an organizational point of view in a project, there is 
> always going to be a "central" repository: the one that actually 
> represents the product/application/library, where the builds and 
> releases are made from. (Of course, there could be more than one central 
> repository if there are multiple kinds of releases, like 
> stable/experimental, or forks of the product, etc.)

Definitely the case.  There can only be one repository that represents
the official state of a given project.  That isn't really the issue in
the move from CVCS systems to DVCS systems.

> Maybe the DVCS world likes the term public/shared repository better, but 
> that doesn't make much difference.

In the Bazaar community, and I think increasingly in the Mercurial and Git
ones, people talk of the "mainline" or "master".

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder


signature.asc
Description: This is a digitally signed message part


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-16 Thread Bruno Medeiros

On 11/02/2011 13:14, Jean Crystof wrote:


Since you're an SVN advocate, please explain how well it works with 
2500 GB of asset files?


I'm not an SVN advocate.
I have started using DVCSs over Subversion, and generally I agree they 
are better, but what I'm saying is that they are not all roses... it is 
not a complete win-win; there are a few important cons, like this one.


--
Bruno Medeiros - Software Engineer


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-16 Thread Bruno Medeiros

On 11/02/2011 18:31, Michel Fortin wrote:


Ideally, if one wants to push but the ancestor history is
incomplete, the VCS would download from the central repository
whatever revision/changeset information was missing.


Actually, there's no "central" repository in Git.


That stuff about DVCS not having a central repository is another thing 
that is being said a lot, but it is only true in a very shallow (and 
non-useful) way. Yes, in DVCS there are no more "working copies" as in 
Subversion; now everyone's working copy is a full-fledged 
repository/clone that in technical terms is a peer of any other repository.
However, from an organizational point of view in a project, there is 
always going to be a "central" repository: the one that actually 
represents the product/application/library, where the builds and 
releases are made from. (Of course, there could be more than one central 
repository if there are multiple kinds of releases, like 
stable/experimental, or forks of the product, etc.)
Maybe the DVCS world likes the term public/shared repository better, but 
that doesn't make much difference.



--
Bruno Medeiros - Software Engineer


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-16 Thread Bruno Medeiros

On 11/02/2011 23:30, Walter Bright wrote:

Bruno Medeiros wrote:

but seriously, even if I am connected to the Internet I cannot code
with my laptop only, I need it connected to a monitor, as well as a
mouse (and preferably a keyboard as well).


I found I can't code on my laptop anymore; I am too used to and needful
of a large screen.



Yeah, that was my point as well. The laptop monitor is too small for 
coding (unless one has a huge laptop).


--
Bruno Medeiros - Software Engineer


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-12 Thread Ulrik Mikaelsson
2011/2/11 Bruno Medeiros :
> On 09/02/2011 23:02, Ulrik Mikaelsson wrote:
>> You don't happen to know about any projects of this kind in any other
>> VCS that can be practically tested, do you?
>>
>
> You mean a project like that, hosted in Subversion or CVS (so that you can
> convert it to Git/Mercurial and see how it is in terms of repo size)?
> I don't know any off the top of my head, except the one at my job, but
> naturally it is commercial and closed-source so I can't share it.
> I'm cloning the Mozilla Firefox repo right now, I'm curious how big it is. (
> https://developer.mozilla.org/en/Mozilla_Source_Code_%28Mercurial%29)
>
> But other than that, what exactly do you want to test? There is no specific
> thing to test: if you add a binary file (in a format that is already
> compressed, like zip, jar, jpg, etc.) of size X, you will increase the repo
> size by X bytes forever. There is no other way around it. (Unless on Git you
> rewrite the history of the repo, which is unlikely to ever be allowed on
> central repositories.)
>

I want to test how much overhead the git version _actually_ adds,
compared to the SVN version. Even though the jpgs are unlikely to be
much more compressible with regular compression, given delta-compression
and the fact that projects grow over time, it would still be interesting
to see how much overhead we're talking about, and what the
performance over the network is.
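
One way to measure it would be to mirror the SVN history into git with
git-svn and compare sizes; a rough sketch (the URL is a placeholder for
a real SVN project containing blobs):

  # mirror the full SVN history into a local git repository
  git svn clone -s http://svn.example.com/project project_git   # -s = trunk/branches/tags layout
  du -sh project_git/.git        # size of the whole converted history
  # compare against a plain SVN checkout of trunk
  svn checkout http://svn.example.com/project/trunk project_svn
  du -sh project_svn             # working copy, including .svn bookkeeping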


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-11 Thread Walter Bright

Bruno Medeiros wrote:
but seriously, even if I am 
connected to the Internet I cannot code with my laptop only, I need it 
connected to a monitor, as well as a mouse (and preferably a keyboard 
as well).


I found I can't code on my laptop anymore; I am too used to and needful of a 
large screen.


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-11 Thread Michel Fortin
On 2011-02-11 08:05:27 -0500, Bruno Medeiros 
 said:



On 09/02/2011 14:27, Michel Fortin wrote:

On 2011-02-09 07:49:31 -0500, Bruno Medeiros
 said:


I was about to say "Cool!", but then I checked the doc on that link
and it says:
"A shallow repository has a number of limitations (you cannot clone or
fetch from it, nor push from nor into it), but is adequate if you are
only interested in the recent history of a large project with a long
history, and would want to send in fixes as patches. "
So it's actually not good for what I meant, since it is barely usable
(you cannot push from it). :(


Actually, pushing from a shallow repository can work, but if your
history is not deep enough it will be a problem when git tries to determine
the common ancestor. Be sure to have enough depth so that your history
contains the common ancestor of all the branches you might want to
merge, and also make sure the remote repository won't rewrite history
beyond that point and you should be safe. At least, that's what I
understand from:
<http://git.661346.n2.nabble.com/pushing-from-a-shallow-repo-allowed-td2332252.html>



Interesting. 


But it still feels very much like second-class functionality, not 
something they really have in mind to support well, at least not yet.


Ideally, if one wants to push but the ancestor history is 
incomplete, the VCS would download from the central repository whatever 
revision/changeset information was missing.


Actually, there's no "central" repository in Git. But I agree with your 
idea in general: one of the remotes could be designated as being a 
source to look for when encountering a missing object, probably the one 
from which you shallowly cloned from. All we need is someone to 
implement that.



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-11 Thread Jean Crystof
Bruno Medeiros Wrote:

> On 09/02/2011 23:02, Ulrik Mikaelsson wrote:
> > 2011/2/9 Bruno Medeiros:
> >>
> >> It's unlikely you will see converted repositories with a lot of changing
> >> blob data. DVCS, at least in the way they work currently, simply kill
> >> this workflow/organization-pattern.
> >> I very much suspect this issue will become more important as time goes on -
> >> a lot of people are still new to DVCS and they still don't realize the full
> >> implications of that architecture with regards to repo size. Any file you
> >> commit will add to the repository size *FOREVER*. I'm pretty sure we haven't
> >> heard the last word on the VCS battle, and that in a few years' time people
> >> will *again* be talking about and switching to another VCS :( . Mark these
> >> words. (The only way this is not going to happen is if Git or Mercurial are
> >> able to address this issue in a satisfactory way, which I'm not sure is
> >> possible or easy.)
> >>
> >
> > You don't happen to know about any projects of this kind in any other
> > VCS that can be practically tested, do you?
> >
> 
> You mean a project like that, hosted in Subversion or CVS (so that you 
> can convert it to Git/Mercurial and see how it is in terms of repo size)?
> I don't know any off the top of my head, except the one at my job, but 
> naturally it is commercial and closed-source so I can't share it.
> I'm cloning the Mozilla Firefox repo right now, I'm curious how big it 
> is. ( https://developer.mozilla.org/en/Mozilla_Source_Code_%28Mercurial%29)
> 
> But other than that, what exactly do you want to test? There is no 
> specific thing to test: if you add a binary file (in a format that is 
> already compressed, like zip, jar, jpg, etc.) of size X, you will 
> increase the repo size by X bytes forever. There is no other way around 
> it. (Unless on Git you rewrite the history of the repo, which is unlikely 
> to ever be allowed on central repositories.)

One thing we've done at work with game asset files is to put them in a 
separate repository, and to conserve space we use a cleaned branch as a 
base for the working repository. The "graph" below shows how it works:

initial state -> alpha1 -> alpha2 -> beta1 -> internal rev X -> internal rev 
X+1 -> internal rev X+2 -> ... -> internal rev X+n -> beta2

Now we have a new beta2. What happens next is we take a snapshot copy of 
the state of beta2, go back to beta1, create a new branch, and "paste" the 
snapshot there. Then we move the old working branch with the internal 
revisions someplace safe and start using the new branch as a base. The 
work continues with this:

initial state -> alpha1 -> alpha2 -> beta1 -> beta2 -> internal rev X+n+1 -> ...

The repository size won't become a problem with text / source code.
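
In git terms, the same trick might look roughly like this (a sketch only;
the tags beta1/beta2 and the branch name "work" are made up):

  git branch -m work work-archive   # park the old branch with all the internal revs
  git checkout -b work beta1        # new working branch rooted at beta1
  git checkout beta2 -- .           # "paste" the beta2 snapshot over the tree
                                    # (note: does not delete files removed since beta1)
  git commit -m "beta2, squashed"   # beta2 becomes a single commit on the new base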

Since you're an SVN advocate, please explain how well it works with 
2500 GB of asset files?


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-11 Thread Bruno Medeiros

On 09/02/2011 14:27, Michel Fortin wrote:

On 2011-02-09 07:49:31 -0500, Bruno Medeiros
 said:


On 04/02/2011 20:11, Michel Fortin wrote:

On 2011-02-04 11:12:12 -0500, Bruno Medeiros
 said:


Can Git really have a usable but incomplete local clone?


Yes, it's called a shallow clone. See the --depth switch of git clone:
<http://www.kernel.org/pub/software/scm/git/docs/git-clone.html>


I was about to say "Cool!", but then I checked the doc on that link
and it says:
"A shallow repository has a number of limitations (you cannot clone or
fetch from it, nor push from nor into it), but is adequate if you are
only interested in the recent history of a large project with a long
history, and would want to send in fixes as patches. "
So it's actually not good for what I meant, since it is barely usable
(you cannot push from it). :(


Actually, pushing from a shallow repository can work, but if your
history is not deep enough it will be a problem when git tries to determine
the common ancestor. Be sure to have enough depth so that your history
contains the common ancestor of all the branches you might want to
merge, and also make sure the remote repository won't rewrite history
beyond that point and you should be safe. At least, that's what I
understand from:
<http://git.661346.n2.nabble.com/pushing-from-a-shallow-repo-allowed-td2332252.html>




Interesting. But it still feels very much like second-class 
functionality, not something they really have in mind to support well, 
at least not yet.


Ideally, if one wants to push but the ancestor history is incomplete, 
the VCS would download from the central repository whatever 
revision/changeset information was missing.


Before someone says "oh, but that defeats some of the purposes of a 
distributed VCS, like being able to work offline": I know, and I 
personally don't care that much; in fact, I find this "benefit" of DVCS 
has been overvalued way out of proportion. Does anyone do any serious 
coding while being offline for an extended period of time? Some people 
mentioned coding on the move, with laptops, but seriously, even if I am 
connected to the Internet I cannot code with my laptop only, I need it 
connected to a monitor, as well as a mouse (and preferably a keyboard 
as well).


--
Bruno Medeiros - Software Engineer


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-11 Thread Bruno Medeiros

On 09/02/2011 23:02, Ulrik Mikaelsson wrote:

2011/2/9 Bruno Medeiros:


It's unlikely you will see converted repositories with a lot of changing
blob data. DVCS, at least in the way they work currently, simply kill
this workflow/organization-pattern.
I very much suspect this issue will become more important as time goes on -
a lot of people are still new to DVCS and they still don't realize the full
implications of that architecture with regards to repo size. Any file you
commit will add to the repository size *FOREVER*. I'm pretty sure we haven't
heard the last word on the VCS battle, and that in a few years' time people
will *again* be talking about and switching to another VCS :( . Mark these
words. (The only way this is not going to happen is if Git or Mercurial are
able to address this issue in a satisfactory way, which I'm not sure is
possible or easy.)



You don't happen to know about any projects of this kind in any other
VCS that can be practically tested, do you?



You mean a project like that, hosted in Subversion or CVS (so that you 
can convert it to Git/Mercurial and see how it is in terms of repo size)?
I don't know any off the top of my head, except the one at my job, but 
naturally it is commercial and closed-source so I can't share it.
I'm cloning the Mozilla Firefox repo right now, I'm curious how big it 
is. ( https://developer.mozilla.org/en/Mozilla_Source_Code_%28Mercurial%29)


But other than that, what exactly do you want to test? There is no 
specific thing to test: if you add a binary file (in a format that is 
already compressed, like zip, jar, jpg, etc.) of size X, you will 
increase the repo size by X bytes forever. There is no other way around 
it. (Unless on Git you rewrite the history of the repo, which is unlikely 
to ever be allowed on central repositories.)
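
For reference, the history-rewriting escape hatch mentioned above looks
roughly like this in git (destructive, and the file path is hypothetical;
every downstream clone would have to re-clone afterwards):

  git filter-branch --index-filter \
      'git rm --cached --ignore-unmatch assets/huge-video.zip' -- --all
  # filter-branch keeps backups under refs/original/; delete them, then gc:
  rm -rf .git/refs/original
  git gc --prune=now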


--
Bruno Medeiros - Software Engineer


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-10 Thread nedbrek
Hello all,

"Michel Fortin"  wrote in message 
news:iiu8dm$10te$1...@digitalmars.com...
> On 2011-02-09 07:49:31 -0500, Bruno Medeiros 
>  said:
>> On 04/02/2011 20:11, Michel Fortin wrote:
>>> On 2011-02-04 11:12:12 -0500, Bruno Medeiros
>>>  said:
>>>
Can Git really have a usable but incomplete local clone?
>>>
>>> Yes, it's called a shallow clone. See the --depth switch of git clone:
>>> <http://www.kernel.org/pub/software/scm/git/docs/git-clone.html>
>>
>> I was about to say "Cool!", but then I checked the doc on that link and 
>> it says:
>> "A shallow repository has a number of limitations (you cannot clone or 
>> fetch from it, nor push from nor into it), but is adequate if you are 
>> only interested in the recent history of a large project with a long 
>> history, and would want to send in fixes as patches. "
>> So it's actually not good for what I meant, since it is barely usable 
>> (you cannot push from it). :(
>
> Actually, pushing from a shallow repository can work, but if your history 
> is not deep enough it will be a problem when git tries to determine the 
> common ancestor. Be sure to have enough depth so that your history 
> contains the common ancestor of all the branches you might want to merge, 
> and also make sure the remote repository won't rewrite history beyond that 
> point and you should be safe. At least, that's what I understand from:
> <http://git.661346.n2.nabble.com/pushing-from-a-shallow-repo-allowed-td2332252.html>

The other way to collaborate is to email someone a diff.  Git has a lot of 
support for extracting diffs from emails and applying the patches.
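
For reference, a minimal sketch of that mail-based workflow (assuming a
configured git-send-email; the file names are what format-patch generates):

  git format-patch -3        # write the last three commits as 0001-*.patch etc.
  git send-email 000*.patch  # mail them to the maintainer
  git am patches.mbox        # receiving side: apply patches straight from a mailbox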

HTH,
Ned




Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-09 Thread Ulrik Mikaelsson
2011/2/9 Bruno Medeiros :
>
> It's unlikely you will see converted repositories with a lot of changing
> blob data. DVCS, at least in the way they work currently, simply kill
> this workflow/organization-pattern.
> I very much suspect this issue will become more important as time goes on -
> a lot of people are still new to DVCS and they still don't realize the full
> implications of that architecture with regards to repo size. Any file you
> commit will add to the repository size *FOREVER*. I'm pretty sure we haven't
> heard the last word on the VCS battle, and that in a few years' time people
> will *again* be talking about and switching to another VCS :( . Mark these
> words. (The only way this is not going to happen is if Git or Mercurial are
> able to address this issue in a satisfactory way, which I'm not sure is
> possible or easy.)
>

You don't happen to know about any projects of this kind in any other
VCS that can be practically tested, do you?

Besides, AFAIU this discussion was originally regarding the D
language components, i.e. DMD, druntime and Phobos. Not a lot of
binaries here.


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-09 Thread Jérôme M. Berger
Bruno Medeiros wrote:
> Yes, Brad had posted some statistics of the size of the Git repositories
> for dmd, druntime, and phobos, and yes, they are pretty small.
> Projects which contain practically only source code, and little to no
> binary data, are unlikely to grow much, and repo size will likely never
> be a problem. But it might not be the case for other projects (also
> considering that binary data is usually already well compressed, like
> .zip, .jpg, .mp3, .ogg, etc., so VCS compression won't help much).
> 
> It's unlikely you will see converted repositories with a lot of changing
> blob data. DVCS, at least in the way they work currently, simply
> kill this workflow/organization-pattern.
> I very much suspect this issue will become more important as time goes
> on - a lot of people are still new to DVCS and they still don't realize
> the full implications of that architecture with regards to repo size.
> Any file you commit will add to the repository size *FOREVER*. I'm
> pretty sure we haven't heard the last word on the VCS battle, and that in
> a few years' time people will *again* be talking about and switching to
> another VCS :( . Mark these words. (The only way this is not going to
> happen is if Git or Mercurial are able to address this issue in a
> satisfactory way, which I'm not sure is possible or easy.)
> 
There are several Mercurial extensions that attempt to address this
issue. See for example: http://wiki.netbeans.org/HgExternalBinaries
or http://mercurial.selenic.com/wiki/BigfilesExtension

I do not know how well they perform in practice.
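
For reference, enabling such an extension is a one-line config entry; a
sketch of what the setup might look like in ~/.hgrc (the path is a
placeholder, see the extension's own docs for specifics):

  [extensions]
  bigfiles = ~/hgext/bigfiles.py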

Jerome
-- 
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr



signature.asc
Description: OpenPGP digital signature


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-09 Thread Michel Fortin
On 2011-02-09 07:49:31 -0500, Bruno Medeiros 
 said:



On 04/02/2011 20:11, Michel Fortin wrote:

On 2011-02-04 11:12:12 -0500, Bruno Medeiros
 said:


Can Git really have a usable but incomplete local clone?


Yes, it's called a shallow clone. See the --depth switch of git clone:
<http://www.kernel.org/pub/software/scm/git/docs/git-clone.html>



I was about to say "Cool!", but then I checked the doc on that link and 
it says:
"A shallow repository has a number of limitations (you cannot clone or 
fetch from it, nor push from nor into it), but is adequate if you are 
only interested in the recent history of a large project with a long 
history, and would want to send in fixes as patches. "
So it's actually not good for what I meant, since it is barely usable 
(you cannot push from it). :(


Actually, pushing from a shallow repository can work, but if your 
history is not deep enough it will be a problem when git tries to 
determine the common ancestor. Be sure to have enough depth so that 
your history contains the common ancestor of all the branches you might 
want to merge, and also make sure the remote repository won't rewrite 
history beyond that point and you should be safe. At least, that's what 
I understand from:
<http://git.661346.n2.nabble.com/pushing-from-a-shallow-repo-allowed-td2332252.html>
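
In practice that amounts to cloning with a generous depth; a hedged sketch
(the depth is an arbitrary guess, reusing the druntime URL from elsewhere
in this thread):

  git clone --depth 200 git://github.com/D-Programming-Language/druntime.git
  cd druntime
  # ...commit some work...
  git push origin master   # works as long as the merge base is within those 200 commits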



--


Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-09 Thread Bruno Medeiros

On 06/02/2011 14:17, Ulrik Mikaelsson wrote:

2011/2/4 Bruno Medeiros:


Well, like I said, my concern about size is not so much disk space, but the
time to make local copies of the repository, or cloning it from the internet
(and the associated transfer times), both of which are not negligible yet.
My project at work could easily have gone to 1 GB of repo size if in the last
year or so it had been stored in a DVCS! :S

I hope this gets addressed at some point. But I fear that the main
developers of both Git and Mercurial may be too "biased" by their experience
with projects which are typically somewhat small in size, in terms of bytes
(projects that consist almost entirely of source code).
For example, in UI applications it would be common to store binary data
(images, sounds, etc.) in the source control. The other case is what I
mentioned before: wanting to store dependencies together with the project
(in my case including the javadoc and source code of the dependencies - and
there are very good reasons to want to do that).


I think the storage/bandwidth requirements of DVCSs are very often
exaggerated, especially for text, but also somewhat for blobs.
  * For text content, the compression of archives reduces them to,
perhaps, 1/5 of their original size?
    - That means that unless you completely rewrite a file 5 times
during the course of a project, simple per-revision compression of the
file will turn out smaller than the single uncompressed base file
that Subversion transfers and stores.
    - The delta-compression applied ensures small changes do not
count as a "rewrite".
  * For blobs, the archive compression may not do as much, and they
certainly pose a larger challenge for storing history, but:
    - AFAIU, at least git delta-compresses even binaries, so even
changes in them might be slightly reduced (dunno about the others)
    - I think more and more graphics today are written in SVG?
    - I believe, for most projects, audio files are usually not changed
very often once they have entered a project? Usually existing samples
are simply copied in?
  * For both binaries and text, and for most projects, the latest
revision is usually the largest. (Projects usually grow over time,
they don't consistently shrink.) I.e. older revisions are, compared to
current, much much smaller, making the size of old history small
compared to the size of current history.

Finally, as a test, I tried checking out the last version of druntime
from SVN and comparing it to git (AFAICT, history was preserved in the
git migration); the results were about what I expected. Checking out
trunk from SVN, and the whole history from git:
   SVN: 7.06 seconds, 5.3 MB on disk
   Git: 2.88 seconds, 3.5 MB on disk
   Improvement Git/SVN: time reduced by 59%, space reduced by 34%.

I did not measure bandwidth, but my guess is it is somewhere between
the disk- and time- reductions. Also, if someone has an example of a
recently converted repository including some blobs it would make an
interesting experiment to repeat.

Regards
/ Ulrik

-

ulrik@ulrik ~/p/test>  time svn co
http://svn.dsource.org/projects/druntime/trunk druntime_svn
...
0.26user 0.21system 0:07.06elapsed 6%CPU (0avgtext+0avgdata 47808maxresident)k
544inputs+11736outputs (3major+3275minor)pagefaults 0swaps
ulrik@ulrik ~/p/test>  du -sh druntime_svn
5,3M    druntime_svn

ulrik@ulrik ~/p/test>  time git clone
git://github.com/D-Programming-Language/druntime.git druntime_git
...
0.26user 0.06system 0:02.88elapsed 11%CPU (0avgtext+0avgdata 14320maxresident)k
3704inputs+7168outputs (18major+1822minor)pagefaults 0swaps
ulrik@ulrik ~/p/test>  du -sh druntime_git/
3,5M    druntime_git/



Yes, Brad had posted some statistics of the size of the Git repositories 
for dmd, druntime, and phobos, and yes, they are pretty small.
Projects which contain practically only source code, and little to no 
binary data, are unlikely to grow much, and repo size will likely never 
be a problem. But it might not be the case for other projects (also 
considering that binary data is usually already well compressed, like 
.zip, .jpg, .mp3, .ogg, etc., so VCS compression won't help much).


It's unlikely you will see converted repositories with a lot of changing 
blob data. DVCS, at least in the way they work currently, simply 
kill this workflow/organization-pattern.
I very much suspect this issue will become more important as time goes 
on - a lot of people are still new to DVCS and they still don't realize 
the full implications of that architecture with regards to repo size. 
Any file you commit will add to the repository size *FOREVER*. I'm 
pretty sure we haven't heard the last word on the VCS battle, and that in 
a few years' time people will *again* be talking about and switching to 
another VCS :( . Mark these words. (The only way this is not going to 
happen is if Git or Mercurial are able to address this issue in a 
satisfactory way, which I'm not sure is possible or easy.)



--
Bruno Medeiros - Software Engineer


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-09 Thread Bruno Medeiros

On 04/02/2011 20:11, Michel Fortin wrote:

On 2011-02-04 11:12:12 -0500, Bruno Medeiros
 said:


Can Git really have a usable but incomplete local clone?


Yes, it's called a shallow clone. See the --depth switch of git clone:
<http://www.kernel.org/pub/software/scm/git/docs/git-clone.html>





I was about to say "Cool!", but then I checked the doc on that link and 
it says:
"A shallow repository has a number of limitations (you cannot clone or 
fetch from it, nor push from nor into it), but is adequate if you are 
only interested in the recent history of a large project with a long 
history, and would want to send in fixes as patches. "
So it's actually not good for what I meant, since it is barely usable 
(you cannot push from it). :(



--
Bruno Medeiros - Software Engineer


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-06 Thread Ulrik Mikaelsson
2011/2/4 Bruno Medeiros :
>
> Well, like I said, my concern about size is not so much disk space, but the
> time to make local copies of the repository, or cloning it from the internet
> (and the associated transfer times), both of which are not negligible yet.
> My project at work could easily have gone to 1 GB of repo size if in the last
> year or so it had been stored in a DVCS! :S
>
> I hope this gets addressed at some point. But I fear that the main
> developers of both Git and Mercurial may be too "biased" by their experience
> with projects which are typically somewhat small in size, in terms of bytes
> (projects that consist almost entirely of source code).
> For example, in UI applications it would be common to store binary data
> (images, sounds, etc.) in the source control. The other case is what I
> mentioned before: wanting to store dependencies together with the project
> (in my case including the javadoc and source code of the dependencies - and
> there are very good reasons to want to do that).

I think the storage/bandwidth requirements of DVCSs are very often
exaggerated, especially for text, but also somewhat for blobs.
 * For text content, the compression of archives reduces them to,
perhaps, 1/5 of their original size?
   - That means that unless you completely rewrite a file 5 times
during the course of a project, simple per-revision compression of the
file will turn out smaller than the single uncompressed base file
that Subversion transfers and stores.
   - The delta-compression applied ensures small changes do not
count as a "rewrite".
 * For blobs, the archive compression may not do as much, and they
certainly pose a larger challenge for storing history, but:
   - AFAIU, at least git delta-compresses even binaries, so even
changes in them might be slightly reduced (dunno about the others)
   - I think more and more graphics today are written in SVG?
   - I believe, for most projects, audio files are usually not changed
very often once they have entered a project? Usually existing samples
are simply copied in?
 * For both binaries and text, and for most projects, the latest
revision is usually the largest. (Projects usually grow over time,
they don't consistently shrink.) I.e. older revisions are, compared to
current, much much smaller, making the size of old history small
compared to the size of current history.

Finally, as a test, I tried checking out the last version of druntime
from SVN and comparing it to git (AFAICT, history was preserved in the
git migration); the results were about what I expected. Checking out
trunk from SVN, and the whole history from git:
  SVN: 7.06 seconds, 5.3 MB on disk
  Git: 2.88 seconds, 3.5 MB on disk
  Improvement Git/SVN: time reduced by 59%, space reduced by 34%.

I did not measure bandwidth, but my guess is it is somewhere between
the disk- and time- reductions. Also, if someone has an example of a
recently converted repository including some blobs it would make an
interesting experiment to repeat.

Regards
/ Ulrik

-

ulrik@ulrik ~/p/test> time svn co
http://svn.dsource.org/projects/druntime/trunk druntime_svn
...
0.26user 0.21system 0:07.06elapsed 6%CPU (0avgtext+0avgdata 47808maxresident)k
544inputs+11736outputs (3major+3275minor)pagefaults 0swaps
ulrik@ulrik ~/p/test> du -sh druntime_svn
5,3M    druntime_svn

ulrik@ulrik ~/p/test> time git clone
git://github.com/D-Programming-Language/druntime.git druntime_git
...
0.26user 0.06system 0:02.88elapsed 11%CPU (0avgtext+0avgdata 14320maxresident)k
3704inputs+7168outputs (18major+1822minor)pagefaults 0swaps
ulrik@ulrik ~/p/test> du -sh druntime_git/
3,5M    druntime_git/


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-04 Thread Michel Fortin
On 2011-02-04 11:12:12 -0500, Bruno Medeiros 
 said:



Can Git really have a usable but incomplete local clone?


Yes, it's called a shallow clone. See the --depth switch of git clone:
<http://www.kernel.org/pub/software/scm/git/docs/git-clone.html>
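
For illustration, a shallow clone keeping only recent history might look
like this (the depth of 50 is an arbitrary choice; the URL is the druntime
repo mentioned elsewhere in this thread):

  git clone --depth 50 git://github.com/D-Programming-Language/druntime.git
  cd druntime
  git log --oneline | wc -l   # only ~50 commits of history are present locally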



--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-04 Thread Bruno Medeiros

On 01/02/2011 23:07, Walter Bright wrote:

Bruno Medeiros wrote:

A more serious issue that I learned (or rather had forgotten about
and remembered now) is the whole "DVCSes keep the whole repository
history locally" aspect, which has important ramifications. If the
repository is big, although disk space may not be much of an issue,


I still find myself worrying about disk usage, despite being able to get
a 2T drive these days for under a hundred bucks. Old patterns of thought
die hard.


Well, like I said, my concern about size is not so much disk space, but 
the time to make local copies of the repository, or cloning it from the 
internet (and the associated transfer times), both of which are not 
negligible yet.
My project at work could easily have gone to 1 GB of repo size if in the 
last year or so it had been stored in a DVCS! :S


I hope this gets addressed at some point. But I fear that the main 
developers of both Git and Mercurial may be too "biased" by their 
experience with projects which are typically somewhat small in size, in 
terms of bytes (projects that consist almost entirely of source code).
For example, in UI applications it would be common to store binary data 
(images, sounds, etc.) in the source control. The other case is what I 
mentioned before: wanting to store dependencies together with the 
project (in my case including the javadoc and source code of the 
dependencies - and there are very good reasons to want to do that).


In this analysis:
http://code.google.com/p/support/wiki/DVCSAnalysis
they said that Git has some functionality to address this issue:
"Client Storage Management. Both Mercurial and Git allow users to 
selectively pull branches from other repositories. This provides an 
upfront mechanism for narrowing the amount of history stored locally. In 
addition, Git allows previously pulled branches to be discarded. Git 
also allows old revision data to be pruned from the local repository 
(while still keeping recent revision data on those branches). With 
Mercurial, if a branch is in the local repository, then all of its 
revisions (back to the very initial commit) must also be present, and 
there is no way to prune branches other than by creating a new 
repository and selectively pulling branches into it. There has been some 
work addressing this in Mercurial, but nothing official yet."
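
For reference, the "selectively pull" part of that quote corresponds to
something like the following sketch (the URL and branch names are made up):

  git init project && cd project
  git remote add origin git://example.com/project.git
  git fetch origin master            # fetch only the branch you care about
  git checkout -b work FETCH_HEAD
  # later, discard a previously pulled branch and prune its objects:
  git branch -D old-topic
  git gc --prune=now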


However, I couldn't find more info about this, and other articles and 
comments about Git seem to omit or contradict it... :S

Can Git really have a usable but incomplete local clone?

--
Bruno Medeiros - Software Engineer


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-02 Thread Jérôme M. Berger
Andrej Mitrovic wrote:
> Bleh. I tried to use Git to update some of the doc files, but getting
> the thing to work will be a miracle.
> 
> git can't find the public keys unless I use msysgit. Great. How
> exactly do I cd to D:\ ?
> 
> So I try git-gui. Seems to work fine, I clone the forked repo and make
> a few changes. I try to commit, it says I have to update first. So I
> do that. *Error: crash crash crash*. I try to close the thing, it just
> keeps crashing. CTRL+ALT+DEL time..
> 
> Okay, I try another GUI package, GitExtensions. I make new
> public/private keys and add it to github, I'm about to clone but then
> I get this "fatal: The remote end hung up unexpectedly".
> 
> I don't know what to say..

Why do you think I keep arguing against Git every chance I get?

Jerome
-- 
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr



signature.asc
Description: OpenPGP digital signature


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-02 Thread David Nadlinger

On 2/2/11 3:17 AM, Andrej Mitrovic wrote:

Bleh. I tried to use Git to update some of the doc files, but getting
the thing to work will be a miracle.

git can't find the public keys unless I use msysgit. Great. How
exactly do I cd to D:\ ?


If you are new to Git or SSH, the folks at GitHub have put up a tutorial 
explaining how to generate and set up a pair of SSH keys: 
http://help.github.com/msysgit-key-setup/. There is also a page 
describing solutions to some SSH setup problems: 
http://help.github.com/troubleshooting-ssh/.
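
The short version of that tutorial, run from the msysgit shell (the e-mail
address is a placeholder):

  ssh-keygen -t rsa -C "you@example.com"   # writes ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
  # paste ~/.ssh/id_rsa.pub into your GitHub account's SSH keys page, then test:
  ssh -T git@github.com                    # should greet you by username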


If you already have a private/public key and want to use it with Git, 
either copy them to Git's .ssh/ directory or edit the .ssh/config of the 
SSH instance used by Git accordingly. If you need to refer to 
»D:\somefile« inside the MSYS shell, use »/d/somefile«.


I don't quite get what you mean by »git can't find the public keys 
unless I use msysgit«. Obviously, you need to modify the configuration 
of the SSH program Git uses, but other than that, you don't need to use 
the MSYS shell for setting up stuff – you can just use Windows Explorer 
and your favorite text editor for that as well.


David


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Walter Bright

Andrej Mitrovic wrote:

I've noticed you have "Version Control with Git" listed in your list
of books. Did you just buy that recently, or were you secretly
planning to switch to Git at the instant someone mentioned it? :p


I listed it recently.


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Andrej Mitrovic
On 2/2/11, Andrej Mitrovic  wrote:
> On 2/2/11, Walter Bright  wrote:
>>
>
> ...listed in your list...
>

Crap.. I just made a 2-dimensional book list by accident. My bad.


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Walter Bright

Andrej Mitrovic wrote:

Is this why you've made your own version of make and microemacs for
Windows? I honestly can't blame you. :)


Microemacs floated around the intarnets for free back in the 80's, and I liked 
it because it was very small, fast, and customizable. Having an editor that fit 
in 50k was just the ticket for a floppy-based system. Most code editors of the 
day were many times larger, took forever to load, etc.


I wrote my own make because I needed one to sell and so couldn't use someone 
else's.


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Andrej Mitrovic
On 2/2/11, Walter Bright  wrote:
>

I've noticed you have "Version Control with Git" listed in your list
of books. Did you just buy that recently, or were you secretly
planning to switch to Git at the instant someone mentioned it? :p


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Brad Roberts
On 2/1/2011 7:55 PM, Andrej Mitrovic wrote:
> On 2/2/11, Walter Bright  wrote:
>> Andrej Mitrovic wrote:
>>> I don't know what to say..
>>
>> Git is a Linux program and will never work right on Windows. The problems
>> you're experiencing are classic ones I find whenever I attempt to use a
>> Linux program that has been "ported" to Windows.
>>
> 
> Yeah, I know what you mean. "Use my app on Windows too, it works! But
> you have to install this Linux simulator first, though".
> 
> Is this why you've made your own version of make and microemacs for
> Windows? I honestly can't blame you. :)

Of course, it forms a nice vicious circle.  Without users, there's little 
incentive to fix things, and chances are there are fewer 
users reporting bugs.

Sounds.. familiar. :)


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Andrej Mitrovic
On 2/2/11, Walter Bright  wrote:
> Andrej Mitrovic wrote:
>> I don't know what to say..
>
> Git is a Linux program and will never work right on Windows. The problems
> you're experiencing are classic ones I find whenever I attempt to use a
> Linux program that has been "ported" to Windows.
>

Yeah, I know what you mean. "Use my app on Windows too, it works! But
you have to install this Linux simulator first, though".

Is this why you've made your own version of make and microemacs for
Windows? I honestly can't blame you. :)


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Walter Bright

Andrej Mitrovic wrote:

I don't know what to say..


Git is a Linux program and will never work right on Windows. The problems you're 
experiencing are classic ones I find whenever I attempt to use a Linux program 
that has been "ported" to Windows.


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Brad Roberts
On 2/1/2011 6:17 PM, Andrej Mitrovic wrote:
> Bleh. I tried to use Git to update some of the doc files, but getting
> the thing to work will be a miracle.
> 
> git can't find the public keys unless I use msysgit. Great. How
> exactly do I cd to D:\ ?
> 
> So I try git-gui. Seems to work fine, I clone the forked repo and make
> a few changes. I try to commit, it says I have to update first. So I
> do that. *Error: crash crash crash*. I try to close the thing, it just
> keeps crashing. CTRL+ALT+DEL time..
> 
> Okay, I try another GUI package, GitExtensions. I make new
> public/private keys and add it to github, I'm about to clone but then
> I get this "fatal: The remote end hung up unexpectedly".
> 
> I don't know what to say..

I use cygwin for all my windows work (which I try to keep to a minimum).  Works 
just fine in that environment.


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Andrej Mitrovic
Bleh. I tried to use Git to update some of the doc files, but getting
the thing to work will be a miracle.

git can't find the public keys unless I use msysgit. Great. How
exactly do I cd to D:\ ?

So I try git-gui. Seems to work fine, I clone the forked repo and make
a few changes. I try to commit, it says I have to update first. So I
do that. *Error: crash crash crash*. I try to close the thing, it just
keeps crashing. CTRL+ALT+DEL time..

Okay, I try another GUI package, GitExtensions. I make new
public/private keys and add it to github, I'm about to clone but then
I get this "fatal: The remote end hung up unexpectedly".

I don't know what to say..


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Walter Bright

Brad Roberts wrote:

Ie, essentially negligible.


Yeah, and I caught myself worrying about the disk usage from having two clones 
of the git repository (one for D1, the other for D2).


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Brad Roberts
On Tue, 1 Feb 2011, Walter Bright wrote:

> Bruno Medeiros wrote:
> > A more serious issue that I learned (or rather had forgotten about and
> > remembered now) is the whole "DVCSes keep the whole repository history
> > locally" aspect, which has important ramifications. If the repository is big,
> > although disk space may not be much of an issue,
> 
> I still find myself worrying about disk usage, despite being able to get a 2T
> drive these days for under a hundred bucks. Old patterns of thought die hard.

For what it's worth, the sizes of the key git dirs on my box:

dmd.git == 4.4 - 5.9M (depends on whether the gc has run recently to re-pack 
new objects)

druntime.git == 1.4 - 3.0M

phobos.git == 5.1 - 6.7M

The checked out copy of each of those is considerably more than the packed 
full history.  The size, inclusive of full history and the checked out 
copy, after a make clean:

dmd        15M
druntime    4M
phobos     16M

Ie, essentially negligible.
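
For anyone wanting to reproduce those numbers, the measurement is trivial
(run inside a clone):

  git gc                     # re-pack loose objects; moves .git toward the low end of the range
  du -sh .git                # the full packed history
  du -sh --exclude=.git .    # the checked-out copy by itself (GNU du)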

Later,
Brad



Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Jonathan M Davis
On Tuesday, February 01, 2011 15:07:58 Walter Bright wrote:
> Bruno Medeiros wrote:
> > A more serious issue that I learned (or rather had forgotten about
> > and remembered now) is the whole "DVCSes keep the whole repository
> > history locally" aspect, which has important ramifications. If the
> > repository is big, although disk space may not be much of an issue,
> 
> I still find myself worrying about disk usage, despite being able to get a
> 2T drive these days for under a hundred bucks. Old patterns of thought die
> hard.

And some things will likely _always_ make disk usage a concern. Video would be 
a good example. If you have much video, even with good compression, it's going 
to take up a lot of space. Granted, there are _lots_ of use cases which just 
don't take up enough disk space to matter anymore, but you can _always_ find 
ways to use up disk space. Entertainingly, a fellow I know had a friend who 
joked that he could always hold all of his data in a shoebox. Originally, it 
was punch cards. Then it was 5 1/4" floppy disks. Then it was 3 1/2" floppy 
disks. Then it was CDs. Etc. Storage devices keep getting bigger and bigger, 
but we keep finding ways to fill them...

- Jonathan M Davis


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Walter Bright

Bruno Medeiros wrote:
A more serious issue that I learned (or rather had forgotten about 
and remembered now) is the whole "DVCSes keep the whole repository 
history locally" aspect, which has important ramifications. If the 
repository is big, although disk space may not be much of an issue,


I still find myself worrying about disk usage, despite being able to get a 2T 
drive these days for under a hundred bucks. Old patterns of thought die hard.


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread foobar
Bruno Medeiros Wrote:

> On 29/01/2011 10:02, "Jérôme M. Berger" wrote:
> > Michel Fortin wrote:
> >> On 2011-01-28 11:29:49 -0500, Bruno Medeiros
> >>   said:
> >>
> >>> I've also been mulling over whether to try out and switch away from
> >>> Subversion to a DVCS, but never went ahead cause I've also been
> >>> undecided about Git vs. Mercurial. So this whole discussion here in
> >>> the NG has been helpful, even though I rarely use branches, if at all.
> >>>
> >>> However, there is an important issue for me that has not been
> >>> mentioned ever, I wonder if other people also find it relevant. It
> >>> annoys me a lot in Subversion, and basically it's the aspect where if
> >>> you delete, rename, or copy a folder under version control in a SVN
> >>> working copy, without using the SVN commands, there is a high
> >>> likelihood your working copy will break! It's so annoying, especially
> >>> since sometimes no amount of svn revert, cleanup, unlock, override and
> >>> update, etc. will fix it. I just had one recently where I had to
> >>> delete and re-checkout the whole project because it was that broken.
> >>> Other situations also seem to cause this, even when using SVN tooling
> >>> (like partially updating from a commit that delete or moves
> >>> directories, or something like that) It's just so brittle.
> >>> I think it may be a consequence of the design aspect of SVN where each
> >>> subfolder of a working copy is a working copy as well (and each
> >>> subfolder of repository is a repository as well)
> >>>
> >>> Anyways, I hope Mercurial and Git are better at this, I'm definitely
> >>> going to try them out with regards to this.
> >>
> >> Git doesn't care how you move your files around. It tracks files by their
> >> content. If you rename a file and most of the content stays the same,
> >> git will see it as a rename. If most of the file has changed, it'll see
> >> it as a new file (with the old one deleted). There is 'git mv', but it's
> >> basically just a shortcut for moving the file, doing 'git rm' on the old
> >> path and 'git add' on the new path.
> >>
> >> I don't know about Mercurial.
> >>
> > Mercurial can record renamed or copied files after the fact (simply
> > pass the -A option to "hg cp" or "hg mv"). It also has the
> > "addremove" command which will automatically remove any missing
> > files and add any unknown non-ignored files. Addremove can detect
> > renamed files if they are similar enough to the old file (the
> > similarity level is configurable) but it will not detect copies.
> >
> > Jerome
> 
> Indeed, that's what I found out now that I tried Mercurial. So that's 
> really nice (especially the "addremove" command), it's actually 
> motivation enough for me to switch to Mercurial or Git, as it's a major 
> annoyance in SVN.
> 
> I've learned a few more things recently: there's a minor issue with Git 
> and Mercurial in that they both are not able to record empty 
> directories. A very minor annoyance (it's workaround-able), but still 
> conceptually lame, I mean, directories are resources too! It's curious 
> that the wiki pages for both Git and Mercurial on this issue are exactly 
> the same, word for word, most of them:
> http://mercurial.selenic.com/wiki/MarkEmptyDirs
> https://git.wiki.kernel.org/index.php/MarkEmptyDirs
> (I guess it's because they were written by the same guy)
> 
> A more serious issue that I learned (or rather had forgotten about 
> and remembered now) is the whole "DVCSes keep the whole repository 
> history locally" aspect, which has important ramifications. If the 
> repository is big, although disk space may not be much of an issue, it's 
> a bit annoying when copying the repository locally(*), or cloning it 
> from the internet and thus having to download large amounts of data.
> For example in the DDT Eclipse IDE I keep the project dependencies 
> (https://svn.codespot.com/a/eclipselabs.org/ddt/trunk/org.dsource.ddt-build/target/)
>  
> on source control, which is 141Mb total on a single revision, and they 
> might change every semester or so...
> I'm still not sure what to do about this. I may split this part of the 
> project into a separate Mercurial repository, although I do lose some 
> semantic information because of this: a direct association between each 
> revision in the source code projects, and the corresponding revision in 
> the dependencies project. Conceptually I would want this to be a single 
> repository.

Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread David Nadlinger

On 2/1/11 2:44 PM, Bruno Medeiros wrote:

[…] a direct association between each
revision in the source code projects, and the corresponding revision in
the dependencies project. […]


With Git, you could use submodules for that task – I don't know if 
something similar exists for Mercurial.
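
A minimal sketch of the submodule approach for the dependencies case
discussed above (the URL and path are made up):

  git submodule add git://example.com/ddt-dependencies.git deps
  git commit -m "Pin dependencies at a fixed revision"
  # every later commit of the main project records exactly one deps revision;
  # after cloning the main project, fetch the recorded revision with:
  git submodule update --init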


David


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-02-01 Thread Bruno Medeiros

On 29/01/2011 10:02, "Jérôme M. Berger" wrote:

Michel Fortin wrote:

On 2011-01-28 11:29:49 -0500, Bruno Medeiros
  said:


I've also been mulling over whether to try out and switch away from
Subversion to a DVCS, but never went ahead cause I've also been
undecided about Git vs. Mercurial. So this whole discussion here in
the NG has been helpful, even though I rarely use branches, if at all.

However, there is an important issue for me that has not been
mentioned ever, I wonder if other people also find it relevant. It
annoys me a lot in Subversion, and basically it's the aspect where if
you delete, rename, or copy a folder under version control in a SVN
working copy, without using the SVN commands, there is a high
likelihood your working copy will break! It's so annoying, especially
since sometimes no amount of svn revert, cleanup, unlock, override and
update, etc. will fix it. I just had one recently where I had to
delete and re-checkout the whole project because it was that broken.
Other situations also seem to cause this, even when using SVN tooling
(like partially updating from a commit that delete or moves
directories, or something like that) It's just so brittle.
I think it may be a consequence of the design aspect of SVN where each
subfolder of a working copy is a working copy as well (and each
subfolder of repository is a repository as well)

Anyways, I hope Mercurial and Git are better at this, I'm definitely
going to try them out with regards to this.


Git doesn't care how you move your files around. It tracks files by their
content. If you rename a file and most of the content stays the same,
git will see it as a rename. If most of the file has changed, it'll see
it as a new file (with the old one deleted). There is 'git mv', but it's
basically just a shortcut for moving the file, doing 'git rm' on the old
path and 'git add' on the new path.

I don't know about Mercurial.


Mercurial can record renamed or copied files after the fact (simply
pass the -A option to "hg cp" or "hg mv"). It also has the
"addremove" command which will automatically remove any missing
files and add any unknown non-ignored files. Addremove can detect
renamed files if they are similar enough to the old file (the
similarity level is configurable) but it will not detect copies.

Jerome


Indeed, that's what I found out now that I've tried Mercurial. So that's 
really nice (especially the "addremove" command); it's actually 
motivation enough for me to switch to Mercurial or Git, as this is a 
major annoyance in SVN.
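
For the record, the commands involved look roughly like this (file names 
hypothetical):

   mv old.d new.d                 # rename done outside the VCS
   hg mv -A old.d new.d           # Mercurial: record it after the fact
   hg addremove --similarity 90   # or detect adds/removes/renames in bulk

   git mv old.d new.d             # Git: shorthand for mv + git rm + git add
   git log --follow new.d         # Git detects the rename when viewing history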


I've learned a few more things recently: there's a minor issue with Git 
and Mercurial in that neither is able to record empty directories. A 
very minor annoyance (it's workaround-able), but still conceptually 
lame, I mean, directories are resources too! It's curious that the wiki 
pages for both Git and Mercurial on this issue are exactly the same, 
word for word for most of their content:

http://mercurial.selenic.com/wiki/MarkEmptyDirs
https://git.wiki.kernel.org/index.php/MarkEmptyDirs
(I guess it's because they were written by the same guy)
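
The usual workaround, for reference (directory and file name are just a 
convention):

   mkdir -p needed/empty/dir
   touch needed/empty/dir/.keep     # placeholder so the directory gets tracked
   git add needed/empty/dir/.keep   # or: hg add needed/empty/dir/.keep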

A more serious issue that I learned (or rather had forgotten about and 
remembered now) is the fact that DVCSes keep the entire repository 
history locally, which has important ramifications. If the repository is 
big, although disk space may not be much of an issue, it's a bit 
annoying when copying the repository locally(*), or cloning it from the 
internet and thus having to download large amounts of data.
For example in the DDT Eclipse IDE I keep the project dependencies 
(https://svn.codespot.com/a/eclipselabs.org/ddt/trunk/org.dsource.ddt-build/target/) 
on source control, which is 141 MB total for a single revision, and they 
might change every semester or so...
I'm still not sure what to do about this. I may split this part of the 
project into a separate Mercurial repository, although I do lose some 
semantic information because of this: a direct association between each 
revision in the source code projects, and the corresponding revision in 
the dependencies project. Conceptually I would want this to be a single 
repository.
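
For what it's worth, Mercurial's subrepositories can record exactly that 
association, much like Git's submodules; a minimal sketch (URL 
hypothetical):

   hg clone https://example.com/ddt-deps deps       # nested repository
   echo "deps = https://example.com/ddt-deps" > .hgsub
   hg add .hgsub
   hg commit -m "track deps"   # pins deps' current revision in .hgsubstate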


(*) Yeah, I know Mercurial and Git may use hardlinks to speed up the 
cloning process, even on Windows, but that solution is not suitable for 
me, as my workflow is usually to copy entire Eclipse workspaces when I 
want to "branch" on some task. Doesn't happen that often though.
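
For reference, the hardlinked local clone looks like this (paths 
illustrative):

   hg clone project project-copy            # hardlinks history files where possible
   git clone --local project project-copy   # same idea in git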


--
Bruno Medeiros - Software Engineer


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-01-29 Thread Jérôme M. Berger
Michel Fortin wrote:
> On 2011-01-28 11:29:49 -0500, Bruno Medeiros
>  said:
> 
>> I've also been mulling over whether to try out and switch away from
>> Subversion to a DVCS, but never went ahead cause I've also been
>> undecided about Git vs. Mercurial. So this whole discussion here in
>> the NG has been helpful, even though I rarely use branches, if at all.
>>
>> However, there is an important issue for me that has not been
>> mentioned ever, I wonder if other people also find it relevant. It
>> annoys me a lot in Subversion, and basically it's the aspect where if
>> you delete, rename, or copy a folder under version control in a SVN
>> working copy, without using the SVN commands, there is a high
>> likelihood your working copy will break! It's so annoying, especially
>> since sometimes no amount of svn revert, cleanup, unlock, override and
>> update, etc. will fix it. I just had one recently where I had to
>> delete and re-checkout the whole project because it was that broken.
>> Other situations also seem to cause this, even when using SVN tooling
>> (like partially updating from a commit that deletes or moves
>> directories, or something like that). It's just so brittle.
>> I think it may be a consequence of the design aspect of SVN where each
>> subfolder of a working copy is a working copy as well (and each
>> subfolder of a repository is a repository as well).
>>
>> Anyways, I hope Mercurial and Git are better at this, I'm definitely
>> going to try them out with regards to this.
> 
> Git doesn't care how you move your files around. It tracks files by their
> content. If you rename a file and most of the content stays the same,
> git will see it as a rename. If most of the file has changed, it'll see
> it as a new file (with the old one deleted). There is 'git mv', but it's
> basically just a shortcut for moving the file, doing 'git rm' on the old
> path and 'git add' on the new path.
> 
> I don't know about Mercurial.
> 
Mercurial can record renamed or copied files after the fact (simply
pass the -A option to "hg cp" or "hg mv"). It also has the
"addremove" command which will automatically remove any missing
files and add any unknown non-ignored files. Addremove can detect
renamed files if they are similar enough to the old file (the
similarity level is configurable) but it will not detect copies.

Jerome
-- 
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr



signature.asc
Description: OpenPGP digital signature


Re: DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-01-28 Thread Michel Fortin
On 2011-01-28 11:29:49 -0500, Bruno Medeiros 
 said:


I've also been mulling over whether to try out and switch away from 
Subversion to a DVCS, but never went ahead cause I've also been 
undecided about Git vs. Mercurial. So this whole discussion here in the 
NG has been helpful, even though I rarely use branches, if at all.


However, there is an important issue for me that has not been mentioned 
ever, I wonder if other people also find it relevant. It annoys me a 
lot in Subversion, and basically it's the aspect where if you delete, 
rename, or copy a folder under version control in a SVN working copy, 
without using the SVN commands, there is a high likelihood your working 
copy will break! It's so annoying, especially since sometimes no amount 
of svn revert, cleanup, unlock, override and update, etc. will fix it. 
I just had one recently where I had to delete and re-checkout the whole 
project because it was that broken.
Other situations also seem to cause this, even when using SVN tooling 
(like partially updating from a commit that deletes or moves 
directories, or something like that). It's just so brittle.
I think it may be a consequence of the design aspect of SVN where each 
subfolder of a working copy is a working copy as well (and each 
subfolder of a repository is a repository as well).


Anyways, I hope Mercurial and Git are better at this, I'm definitely 
going to try them out with regards to this.


Git doesn't care how you move your files around. It tracks files by 
their content. If you rename a file and most of the content stays the 
same, git will see it as a rename. If most of the file has changed, 
it'll see it as a new file (with the old one deleted). There is 'git 
mv', but it's basically just a shortcut for moving the file, doing 'git 
rm' on the old path and 'git add' on the new path.


I don't know about Mercurial.

--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: DVCS (was Re: Moving to D)

2011-01-28 Thread Eric Poggel

On 1/12/2011 6:41 PM, Walter Bright wrote:

All semiconductors have a lifetime that is measured by the area under
the curve of their temperature over time.


Oddly enough, milk has the same behavior.


DVCS vs. Subversion brittleness (was Re: Moving to D)

2011-01-28 Thread Bruno Medeiros

On 06/01/2011 19:19, "Jérôme M. Berger" wrote:

Andrei Alexandrescu wrote:

What are the advantages of Mercurial over git? (git does allow multiple
branches.)





I've also been mulling over whether to try out and switch away from 
Subversion to a DVCS, but never went ahead cause I've also been 
undecided about Git vs. Mercurial. So this whole discussion here in the 
NG has been helpful, even though I rarely use branches, if at all.


However, there is an important issue for me that has not been mentioned 
ever, I wonder if other people also find it relevant. It annoys me a lot 
in Subversion, and basically it's the aspect where if you delete, 
rename, or copy a folder under version control in a SVN working copy, 
without using the SVN commands, there is a high likelihood your working 
copy will break! It's so annoying, especially since sometimes no amount 
of svn revert, cleanup, unlock, override and update, etc. will fix it. I 
just had one recently where I had to delete and re-checkout the whole 
project because it was that broken.
Other situations also seem to cause this, even when using SVN tooling 
(like partially updating from a commit that deletes or moves directories, 
or something like that). It's just so brittle.
I think it may be a consequence of the design aspect of SVN where each 
subfolder of a working copy is a working copy as well (and each 
subfolder of a repository is a repository as well).


Anyways, I hope Mercurial and Git are better at this, I'm definitely 
going to try them out with regards to this.


--
Bruno Medeiros - Software Engineer


Re: DVCS (was Re: Moving to D)

2011-01-28 Thread Bruno Medeiros

On 16/01/2011 04:47, Nick Sabalausky wrote:

There's two reasons it's good for games:

1. Like you indicated, to get a better framerate. Framerate is more
important in most games than resolution.



This reason was valid at some point in time; it actually held me back 
from transitioning from CRTs to LCDs for a while. But nowadays screen 
resolutions have stabilized (stopped increasing, in terms of DPI), and 
graphics cards have improved in power enough that you can play nearly 
any game at the LCD's native resolution with max framerate, so no 
worries with this anymore (you may have to tone down the graphics 
settings a bit in some cases, but that is fine with me).



2. For games that aren't really designed for multiple resolutions,
particularly many 2D ones, and especially older games (which are often some
of the best, but they look like shit on an LCD).


Well, if your LCD supports it, you have the option of not expanding the 
image when the output resolution is not the native one. How good or bad 
that would be depends on the game, I guess.
I actually did this some years ago on certain (recent) games for some 
time: using only 1024x768 of the 1280x1024 native resolution, to get a 
better framerate.
It's not a problem for me for old games, since most of the ones I 
occasionally play run in a console emulator. DOS games unfortunately 
were very hard to play correctly in XP in the first place (especially 
with SoundBlaster sound), so they're not a concern for me.




PS: here's a nice thread for anyone looking to purchase a new LCD:
http://forums.anandtech.com/showthread.php?t=39226
It explains a lot of things about LCD technology, and ranks several LCDs 
according to intended usage (office work, hardcore gaming, etc.).


--
Bruno Medeiros - Software Engineer


Re: DVCS (was Re: Moving to D)

2011-01-28 Thread Bruno Medeiros

On 16/01/2011 19:38, Andrei Alexandrescu wrote:

On 1/15/11 10:47 PM, Nick Sabalausky wrote:

"Daniel Gibson"  wrote in message
news:igtq08$2m1c$1...@digitalmars.com...
There's two reasons it's good for games:

1. Like you indicated, to get a better framerate. Framerate is more
important in most games than resolution.

2. For games that aren't really designed for multiple resolutions,
particularly many 2D ones, and especially older games (which are often
some
of the best, but they look like shit on an LCD).


It's a legacy issue. Clearly everybody except you is using CRTs for
gaming and whatnot. Therefore graphics hardware producers and game
vendors are doing what it takes to adapt to a fixed resolution.


Actually, that's not entirely true, although not because of old games. 
Some players of hardcore twitch FPS games (like Quake), especially 
professional players, still use CRTs, due to the near-zero input lag 
that LCDs, although much improved in that regard, are still not able 
to match exactly.


But other than that, I really see no reason to stick with CRTs vs a good 
LCD, yeah.



--
Bruno Medeiros - Software Engineer


Re: DVCS

2011-01-22 Thread retard
Sat, 22 Jan 2011 14:47:48 -0800, Walter Bright wrote:

> retard wrote:
>> Does the new Ubuntu overall work better than the old one? Would be
>> amazing if the media players are still all broken.
> 
> I haven't tried the sound yet, but the video playback definitely is
> better.
> 
> Though the whole screen flashes now and then, like the video mode is
> being reset badly. This is new behavior.

Ubuntu probably uses Compiz if you have enabled desktop effects. This 
might not work with ati's (open source) drivers. Turning Compiz off makes 
it use a "safer" 2d engine. In Gnome the setting can be changed here 
http://www.howtoforge.com/enabling-compiz-fusion-on-an-ubuntu-10.10-desktop-nvidia-geforce-8200-p2

It's the "none" option in the second figure.


Re: DVCS

2011-01-22 Thread Walter Bright

retard wrote:
Does the new Ubuntu overall work better than the old one? Would be 
amazing if the media players are still all broken.


I haven't tried the sound yet, but the video playback definitely is better.

Though the whole screen flashes now and then, like the video mode is being reset 
badly. This is new behavior.


Re: easy to upgrade OS (was Re: DVCS)

2011-01-22 Thread Walter Bright

Andrei Alexandrescu wrote:
Google takes email privacy very seriously. Only last week they fired an 
employee for snooping through someone else's email.


http://techcrunch.com/2010/09/14/google-engineer-spying-fired/


That's good to know. On the other hand, Google keeps information forever. 
Ownership, management, policies, and practices change.


And to be frank, the fact that some of Google's employees are not authorized to 
look at emails means that others are. And those others are subject to the usual 
human weaknesses of bribery, blackmail, temptation, voyeurism, etc. Heck, the 
White House is famous for being a leaky organization, despite extensive security.


I rent storage on Amazon's servers, but the stuff I send there is encrypted 
before Amazon ever sees it. I don't have to depend at all on Amazon having a 
privacy policy or airtight security.
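
Something along these lines, say (tool names illustrative, not 
necessarily what I actually use):

   tar czf - ~/work | gpg --symmetric --cipher-algo AES256 -o work.tgz.gpg
   s3cmd put work.tgz.gpg s3://my-backups/   # Amazon only ever sees ciphertext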


Google could implement their Calendar, etc., stuff the same way. I'd even pay 
for it (like I pay Amazon).


Re: DVCS

2011-01-22 Thread Walter Bright

Daniel Gibson wrote:
And is the support for the graphics chip better, i.e. can you use full 
resolution?


Yes, it recognized my resolution automatically. That's a nice improvement.


Re: DVCS

2011-01-22 Thread Daniel Gibson

Am 22.01.2011 22:31, schrieb retard:

Sat, 22 Jan 2011 13:12:26 -0800, Walter Bright wrote:


Vladimir Panteleev wrote:

http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/


Thanks for finding that. But I think I'll stick for now with the ipod's
calendar. It's more useful anyway, as it moves with me.


Does the new Ubuntu overall work better than the old one? Would be
amazing if the media players are still all broken.


And is the support for the graphics chip better, i.e. can you use full 
resolution?


Re: easy to upgrade OS (was Re: DVCS)

2011-01-22 Thread Andrei Alexandrescu

On 1/22/11 3:03 PM, Walter Bright wrote:

retard wrote:

Ubuntu doesn't drop support for widely used software. I'd use Google's
Calendar instead.


I'm really not interested in Google owning my private data.


Google takes email privacy very seriously. Only last week they fired an 
employee for snooping through someone else's email.


http://techcrunch.com/2010/09/14/google-engineer-spying-fired/

Of course, that could be framed either as a success or a failure of 
Google's privacy enforcement.


Several companies are using gmail for their email infrastructure.


Andrei


Re: DVCS

2011-01-22 Thread retard
Sat, 22 Jan 2011 13:12:26 -0800, Walter Bright wrote:

> Vladimir Panteleev wrote:
>> http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/
> 
> Thanks for finding that. But I think I'll stick for now with the ipod's
> calendar. It's more useful anyway, as it moves with me.

Does the new Ubuntu overall work better than the old one? Would be 
amazing if the media players are still all broken.


Re: DVCS

2011-01-22 Thread Walter Bright

Vladimir Panteleev wrote:
http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/ 


Thanks for finding that. But I think I'll stick for now with the ipod's 
calendar. It's more useful anyway, as it moves with me.


Re: easy to upgrade OS (was Re: DVCS)

2011-01-22 Thread Walter Bright

retard wrote:
Ubuntu doesn't drop support for widely used software. I'd use Google's 
Calendar instead.


I'm really not interested in Google owning my private data.


Re: easy to upgrade OS (was Re: DVCS)

2011-01-22 Thread Daniel Gibson

Am 22.01.2011 17:36, schrieb Andrej Mitrovic:

On 1/22/11, Christopher Nicholson-Sauls  wrote:

  If it was possible to do the same with OS
X, I would.  (Anyone know a little trick for that, using VirtualBox?)



No, that is illegal!

But you might want to do a google search for *cough* iDeneb *cough*
and download vmware player. :p


A google search for virtualbox osx takwing may be interesting as well.


Re: easy to upgrade OS (was Re: DVCS)

2011-01-22 Thread Andrej Mitrovic
On 1/22/11, Christopher Nicholson-Sauls  wrote:
>  If it was possible to do the same with OS
> X, I would.  (Anyone know a little trick for that, using VirtualBox?)
>

No, that is illegal!

But you might want to do a google search for *cough* iDeneb *cough*
and download vmware player. :p


Re: easy to upgrade OS (was Re: DVCS)

2011-01-22 Thread Daniel Gibson

Am 22.01.2011 13:21, schrieb retard:

Sat, 22 Jan 2011 00:58:59 -0800, Walter Bright wrote:


Gour wrote:

I'm very seriously considering putting PC-BSD on my desktop and on
several others, in order to reduce the admin time required to maintain
all those machines.


OSX is the only OS (besides DOS) I've had that had painless upgrades.
Windows upgrades never ever work in place (at least not for me). You
have to wipe the disk, install from scratch, then reinstall all your
apps and reconfigure them.

You're hosed if you lose an install disk or the serial # for it.

Ubuntu isn't much better, but at least you don't have to worry about
install disks and serial numbers. I just keep a list of sudo apt-get
commands! That works pretty good until the Ubuntu gods just decide to
drop kick your apps (like sunbird) out of the repository.


Don't blame Ubuntu, http://en.wikipedia.org/wiki/Mozilla_Sunbird

"It was developed as a standalone version of the Lightning calendar and
scheduling extension for Mozilla Thunderbird. Development of Sunbird was
ended with release 1.0 beta 1 to focus on development of Mozilla
Lightning.[6][7]"

Ubuntu doesn't drop support for widely used software. I'd use Google's
Calendar instead.


Ubuntu doesn't include Lightning, either.

Walter: You could add the lightning plugin to your thunderbird from the 
mozilla page: http://www.mozilla.org/projects/calendar/lightning/index.html
Hopefully it automatically imports your sunbird data or is at least able 
to import it manually.




Re: easy to upgrade OS (was Re: DVCS)

2011-01-22 Thread Christopher Nicholson-Sauls
On 01/22/11 03:57, spir wrote:
> On 01/22/2011 09:58 AM, Walter Bright wrote:
>> Gour wrote:
>>> I'm very seriously considering putting PC-BSD on my desktop and on
>>> several others, in order to reduce the admin time required to maintain
>>> all those machines.
>>
>> OSX is the only OS (besides DOS) I've had that had painless upgrades.
>> Windows upgrades never ever work in place (at least not for me). You
>> have to wipe the disk, install from scratch, then reinstall all your
>> apps and reconfigure them.
> 
> Same in my experience. I recently had to re-install my ubuntu box from
> scratch (which is why I have the same amusing info as Walter: my machine
> claims to run ubuntu 11.04), because the 10.04 --> 10.10 upgrade
> miserably crashed (at the end of the procedure, indeed).
> 
> And no, this is not due to me naughtily tweaking the system; while my
> userland is highly personalised, I do not touch the rest (mainly because
> my brain cannot cope with the standard unix filesystem hierarchy).
> 
> (I use linux only for philosophical reasons, else would happily switch
> to mac.)
> 
> Denis
> _
> vita es estrany
> spir.wikidot.com
> 

Likewise I had occasional issues with Ubuntu/Kubuntu upgrades when I was
using it.  Moving to a "rolling release" style distribution (Gentoo)
changed everything for me.  I haven't had a single major issue since.
(I put "major" in there because there have been issues, but of the
"glance at the screen, notice the blocker, type out the one very short
command that will fix it, continue updating" variety.)

Heck, updating has proven so straight-forward that I check for updates
almost daily.  I originally went to Linux for "philosophical" reasons,
as well, but now that I've had a taste of a "real distro" I really don't
have any interest in toying around with anything else.

I do have a Windows install for development/testing purposes though...
running in a VM.  ;)  Amazingly enough, Windows seems to be perfectly
happy running as a guest O/S.  If it was possible to do the same with OS
X, I would.  (Anyone know a little trick for that, using VirtualBox?)

-- Chris N-S


Re: easy to upgrade OS (was Re: DVCS)

2011-01-22 Thread retard
Sat, 22 Jan 2011 00:58:59 -0800, Walter Bright wrote:

> Gour wrote:
>> I'm very seriously considering putting PC-BSD on my desktop and on
>> several others, in order to reduce the admin time required to maintain
>> all those machines.
> 
> OSX is the only OS (besides DOS) I've had that had painless upgrades.
> Windows upgrades never ever work in place (at least not for me). You
> have to wipe the disk, install from scratch, then reinstall all your
> apps and reconfigure them.
> 
> You're hosed if you lose an install disk or the serial # for it.
> 
> Ubuntu isn't much better, but at least you don't have to worry about
> install disks and serial numbers. I just keep a list of sudo apt-get
> commands! That works pretty good until the Ubuntu gods just decide to
> drop kick your apps (like sunbird) out of the repository.

Don't blame Ubuntu, http://en.wikipedia.org/wiki/Mozilla_Sunbird

"It was developed as a standalone version of the Lightning calendar and 
scheduling extension for Mozilla Thunderbird. Development of Sunbird was 
ended with release 1.0 beta 1 to focus on development of Mozilla 
Lightning.[6][7]"

Ubuntu doesn't drop support for widely used software. I'd use Google's 
Calendar instead.


Re: DVCS

2011-01-22 Thread spir

On 01/22/2011 10:34 AM, Vladimir Panteleev wrote:

On Sat, 22 Jan 2011 08:35:55 +0200, Walter Bright
 wrote:


The only real problem I've run into (so far) is the sunbird calendar
has been unceremoniously dumped from Ubuntu. The data file for it is
in some crappy binary format, so poof, there goes all my calendar data.


Hi Walter, have you seen this yet? It's an article on how to import your
calendar data in Lightning, the official Thunderbird calendar extension.
I hope it'll help you:

http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/


Yes, lightning seems to have been the successor mozilla project to 
sunbird (wikipedia would probably tell you more).


Denis
_
vita es estrany
spir.wikidot.com



Re: easy to upgrade OS (was Re: DVCS)

2011-01-22 Thread spir

On 01/22/2011 09:58 AM, Walter Bright wrote:

Gour wrote:

I'm very seriously considering putting PC-BSD on my desktop and on
several others, in order to reduce the admin time required to maintain
all those machines.


OSX is the only OS (besides DOS) I've had that had painless upgrades.
Windows upgrades never ever work in place (at least not for me). You
have to wipe the disk, install from scratch, then reinstall all your
apps and reconfigure them.


Same in my experience. I recently had to re-install my ubuntu box from 
scratch (which is why I have the same amusing info as Walter: my machine 
claims to run ubuntu 11.04), because the 10.04 --> 10.10 upgrade 
miserably crashed (at the end of the procedure, indeed).


And no, this is not due to me naughtily tweaking the system; while my 
userland is highly personalised, I do not touch the rest (mainly because 
my brain cannot cope with the standard unix filesystem hierarchy).


(I use linux only for philosophical reasons, else would happily switch 
to mac.)


Denis
_
vita es estrany
spir.wikidot.com



Re: DVCS

2011-01-22 Thread spir

On 01/22/2011 07:35 AM, Walter Bright wrote:

I finally did do it, but as a clean install. I found an old 160G drive,
wiped it, and installed 10.10 on it. (Amusingly, the "About Ubuntu" box
says it's version 11.04, and /etc/issue says it's 10.10.)


Same for me ;-)
_
vita es estrany
spir.wikidot.com



Re: DVCS

2011-01-22 Thread Vladimir Panteleev
On Sat, 22 Jan 2011 08:35:55 +0200, Walter Bright  
 wrote:


The only real problem I've run into (so far) is the sunbird calendar has  
been unceremoniously dumped from Ubuntu. The data file for it is in some  
crappy binary format, so poof, there goes all my calendar data.


Hi Walter, have you seen this yet? It's an article on how to import your  
calendar data in Lightning, the official Thunderbird calendar extension. I  
hope it'll help you:


http://brizoma.wordpress.com/2010/05/04/sunbird-and-lightning-removed-from-ubuntu-10-04-lucid-lynx/

--
Best regards,
 Vladimir  mailto:vladi...@thecybershadow.net


Re: easy to upgrade OS (was Re: DVCS)

2011-01-22 Thread Walter Bright

Gour wrote:

I'm very seriously considering putting PC-BSD on my desktop and on
several others, in order to reduce the admin time required to maintain
all those machines.


OSX is the only OS (besides DOS) I've had that had painless upgrades. Windows 
upgrades never ever work in place (at least not for me). You have to wipe the 
disk, install from scratch, then reinstall all your apps and reconfigure them.


You're hosed if you lose an install disk or the serial # for it.

Ubuntu isn't much better, but at least you don't have to worry about install 
disks and serial numbers. I just keep a list of sudo apt-get commands! That 
works pretty good until the Ubuntu gods just decide to drop kick your apps (like 
sunbird) out of the repository.
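
The list itself is nothing fancy, e.g. (package names illustrative):

   sudo apt-get install build-essential subversion mercurial
   sudo apt-get install thunderbird vim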


Re: DVCS

2011-01-22 Thread Walter Bright

Andrei Alexandrescu wrote:

On 1/22/11 12:35 AM, Walter Bright wrote:

Phobos1 on 10.10 is dying in its unit tests because Ubuntu changed how
gcc's strtof() works. Erratic floating point is typical of C runtime
library implementations (the transcendentals are often sloppily done),
which is why more and more Phobos uses its own implementations that Don
has put together.


I think we must change to our own routines anyway. One strategic 
advantage of native implementations of strtof (and the converse sprintf 
etc.) is that we can CTFE them, which opens the door to interesting 
applications.


We can also make our own conversion routines consistent, pure, thread safe and 
locale-independent.
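
A minimal sketch of what a CTFE-able routine could look like 
(illustrative only; a real conversion must also handle exponents, 
special values, and correct rounding):

   double parseDouble(string s) pure {
       size_t i = 0;
       bool neg = s.length > 0 && s[0] == '-';
       if (neg) i++;
       double r = 0;
       for (; i < s.length && s[i] != '.'; ++i)
           r = r * 10 + (s[i] - '0');        // integer part
       if (i < s.length) {                   // fractional part after '.'
           double scale = 0.1;
           for (++i; i < s.length; ++i) {
               r += (s[i] - '0') * scale;
               scale /= 10;
           }
       }
       return neg ? -r : r;
   }
   enum pi = parseDouble("3.14");   // forces compile-time evaluation (CTFE)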


easy to upgrade OS (was Re: DVCS)

2011-01-21 Thread Gour
On Fri, 21 Jan 2011 22:35:55 -0800
Walter Bright  wrote:

Hello Walter,

> I finally did do it, but as a clean install. I found an old 160G
> drive, wiped it, and installed 10.10 on it. (Amusingly, the "About
> Ubuntu" box says it's version 11.04, and /etc/issue says it's 10.10.)

in the last few days I did a little research about 'easy-to-admin OSes',
and the result of it is PC-BSD (http://www.pcbsd.org/), an Ubuntu-like
FreeBSD with a GUI installer. The possible advantage is that here the OS
means kernel+tools, which are strictly separated from the other 'add-on'
packages, which should guarantee smooth upgrades.

Moreover, PC-BSD deploys the so-called PBI installer, which installs
every 'add-on' package with a complete set of required libs, preventing
upgrade breakages.

Of course, some more HD space is wasted, but this will be resolved in
the June/July 9.0 release, where such add-on packages will use a kind
of pool of common libs, while the main OS is still kept intact.

I'm very seriously considering putting PC-BSD on my desktop and on
several others, in order to reduce the admin time required to maintain
all those machines.

Finally, the latest dmd2 is available in 'ports', and having you on
PC-BSD would make it even better. ;)


Sincerely,
Gour

-- 

Gour  | Hlapicina, Croatia  | GPG key: CDBF17CA



signature.asc
Description: PGP signature


Re: DVCS

2011-01-21 Thread Andrei Alexandrescu

On 1/22/11 12:35 AM, Walter Bright wrote:

Phobos1 on 10.10 is dying in its unit tests because Ubuntu changed how
gcc's strtof() works. Erratic floating point is typical of C runtime
library implementations (the transcendentals are often sloppily done),
which is why more and more Phobos uses its own implementations that Don
has put together.


I think we must change to our own routines anyway. One strategic 
advantage of native implementations of strtof (and the converse sprintf 
etc.) is that we can CTFE them, which opens the door to interesting 
applications.


I have something CTFEable starting from your dmc code, but never got 
around to handling all of the small details.



Andrei


Re: DVCS

2011-01-21 Thread Walter Bright

Gour wrote:

Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major
undertaking (I'm familiar with it since  '99 when I used SuSE and had
experience with deps hell.)


I finally did do it, but as a clean install. I found an old 160G drive, wiped 
it, and installed 10.10 on it. (Amusingly, the "About Ubuntu" box says it's 
version 11.04, and /etc/issue says it's 10.10.)


I attached the old drive through a usb port, and copied everything on it into a 
subdirectory of the new drive. Then, file and directory by file and directory, I 
moved the files into place on my new home directory.


The main difficulty was the . files, which litter the home directory and gawd 
knows what they do or are for. This is one reason why I tend to stick with all 
defaults.


The only real problem I've run into (so far) is the sunbird calendar has been 
unceremoniously dumped from Ubuntu. The data file for it is in some crappy 
binary format, so poof, there goes all my calendar data. Why do I bother with 
this crap. I think I'll stick with the ipod calendar.


Phobos1 on 10.10 is dying in its unit tests because Ubuntu changed how gcc's 
strtof() works. Erratic floating point is typical of C runtime library 
implementations (the transcendentals are often sloppily done), which is why more 
and more Phobos uses its own implementations that Don has put together.


Re: DVCS

2011-01-20 Thread arch 4 ever
Jeff Nowakowski Wrote:

> On 01/20/2011 07:33 AM, Gour wrote:
> > On Thu, 20 Jan 2011 06:39:08 -0500
> > Jeff Nowakowski  wrote:
> >
> >
> >> No, I haven't tried it. I'm not going to try every OS that comes down
> >> the pike.
> >
> > Then please, without any offense, do not give advice about something
> > which you did not try. I did use Ubuntu...
> 
> Please yourself. I quoted from the FAQ from the distribution's main 
> site. If that's wrong, then Arch has a big public relations problem. I 
> can make rational arguments without having used a system.

Listen you haven't used Arch so u don't know a shit. Stop bashing other distros 
and stick with your Noobuntu. You suck

> 
> >> That's a heavy investment of time, especially for somebody
> >> unfamiliar with Linux.
> >
> > Again, you're speaking without personal experience...
> 
>  From Jonathan M Davis in this thread:
> 
> "There is no question that Arch takes more to manage than a number of 
> other distros. [..] Arch really doesn't take all that much to maintain, 
> but it does have a higher setup cost than your average distro, and you 
> do have to do some level of manual configuration that I'd expect a more 
> typical distro like OpenSuSE or Ubuntu to take care of for you."

That was just bullshit. Gour said Arch is easier to administrate and it's true. 
Pacman creates new conf files in /etc. Use meld to fix them. Much easier than 
Nbuntu.
> 
> 
> > Moreover, in TDPL's foreword, Walter speaks about himself as "..of an
> > engineer..", so I'm sure he is capable to handle The Arch Way
> 
> You're talking about somebody who is running a nearly 3 year old version 
> of Ubuntu because he had one bad upgrade experience, and is probably 
> running software full of security holes. If he can't spend a day a year 
> to upgrade his OS, what makes you think he wants to spend time on a more 
> demanding distro?

Once he learns the Linux way or the Arch way, he starts to suffer from sleep 
deprivation because the administration is so fun. 

> 
> > There are no incompatibilities...if I upgrade the kernel, it means that
> > the package manager will figure out what components have to be updated...
> 
> And what happens when the kernel, as it often does, changes the way it 
> handles things like devices, and expects the administrator to do some 
> tweaking to handle the upgrade? What happens when you upgrade X and it 
> no longer supports your video chipset? What happens when you upgrade 
> something as basic as the DNS library, and it reacts badly with your router?

It just manages it. Try it.

> 
> Is Arch going to maintain your config files for you? Is it going to 
> handle jumping 2 or 3 versions for software that can only upgrade from 
> one version ago?

Yes.

> 
> These are real world examples. Arch is not some magic distribution that 
> will make upgrade problems go away.

The point is, it's better than Nooobuntu or Gentoo. It doesn't need more merits.

> 
> > Remember: there are no packages 'tagged' for any specific release!
> 
> Yeah, I know. I also run Debian Testing, which is a "rolling release". 
> I'm not some Ubuntu noob.

It's GNU/Debian Linux, not just Debian, you insensitive clod! Debian only 
contains ancient packages like kde 3 in their stable. It's for old bearded 
communists. 


Re: DVCS

2011-01-20 Thread retard
Thu, 20 Jan 2011 13:33:58 +0100, Gour wrote:

> On Thu, 20 Jan 2011 06:39:08 -0500
> Jeff Nowakowski  wrote:
> 
> 
>> No, I haven't tried it. I'm not going to try every OS that comes down
>> the pike.
> 
> Then please, without any offense, do not give advice about something
> which you did not try. I did use Ubuntu...
> 
>> So instead of giving you a bunch of sane defaults, you have to make a
>> bunch of choices up front.
> 
> Right. That's why there is no need for a separate distro based on the DE
> the user wants to have; iow, by a simple "pacman -Sy xfce4" you get the
> XFCE environment installed...same with GNOME & KDE.

It's the same in Ubuntu. You can install the minimal server build and 
install the DE of your choice in a similar way. The prebuilt images 
(Ubuntu, Kubuntu, Xubuntu, Lubuntu, ...) are for those who can't decide 
and don't want to fire up a terminal for writing down bash code. In 
Ubuntu you have even more choice. The huge metapackage or just the DE 
packages, with or without recommendations. A similar system just doesn't 
exist for Arch. For the lazy user Ubuntu is a dream come true - you never 
need to launch xterm if you don't want to. There's a GUI for almost 
everything.

> 
>> That's a heavy investment of time, especially for somebody unfamiliar
>> with Linux.
> 
> Again, you're speaking without personal experience...

You're apparently a Linux fan, but have you got any idea which BSD or 
Solaris distro to choose? The choice isn't as simple if you have zero 
experience with the system. 

> 
> Moreover, in TDPL's foreword, Walter speaks about himself as "..of an
> engineer..", so I'm sure he is capable to handle The Arch Way (see
> section Simplicity at https://wiki.archlinux.org/index.php/Arch_Linux)
> which says: "The Arch Way is a philosophy aimed at keeping it simple.

I think Walter's system isn't up to date because he is a lazy bitch. Has 
all the required competence but never bothers to update if it just works 
(tm). The same philosophy can be found in dmd/dmc. The code is sometimes 
hard to read and hard to maintain and buggy, but if it works, why fix it?

> The Arch Linux base system is quite simply the minimal, yet functional
> GNU/Linux environment; the Linux kernel, GNU toolchain, and a handful of
> optional, extra command line utilities like links and Vi. This clean and
> simple starting point provides the foundation for expanding the system
> into whatever the user requires." and from there install one of the
> major DEs (GNOME, KDE or XFCE) to name a few.

I'd give my vote for LFS. It's quite minimal.

> 
>> The upgrade problems are still there. *Every package* you upgrade has a
>> chance to be incompatible with the previous version. The longer you
>> wait, the more incompatibilities there will be.
> 
> There are no incompatibilities...if I upgrade the kernel, it means that
> the package manager will figure out what components have to be updated...
> 
> Remember: there are no packages 'tagged' for any specific release!

Even if the package manager works perfectly, the repositories have bugs 
in their dependencies and other metadata.

> 
>> Highlighting the problem of waiting too long to upgrade. You're
>> skipping an entire release. I'd like to see you take a snapshot of Arch
>> from 2008, use the system for 2 years without updating, and then
>> upgrade to the latest packages. Do you think Arch is going to magically
>> have no problems?
> 
> I did an upgrade on my father-in-law's machine which was more than 1yr old
> without any problem.
> 
> You think there must be some magic to handle it...ask some FreeBSD user
> how they do it. ;)

There's usually a safe upgrade period. If you wait too long, package 
conflicts will appear. It's simply too much work to keep rules for all 
possible package transitions. For example, a libc update breaks kde, 
which is now called kde4. The system needs to know how to first remove 
all kde4 packages and then update them. Chromium was previously a game, 
but now it's a browser; the game became chromium-bsu or something. I 
have a hard time believing the minimal Arch does all this.


Re: DVCS

2011-01-20 Thread Gour
On Thu, 20 Jan 2011 09:19:54 -0500
Jeff Nowakowski  wrote:

> Please yourself. I quoted from the FAQ from the distribution's main 
> site. If that's wrong, then Arch has a big public relations problem.

Arch simply does not offer false promises that the system will "Just
work". Still, I see the number of users has rapidly increased in the
last year or so...mostly Ubuntu 'refugees'.

> You're talking about somebody who is running a nearly 3 year old
> version of Ubuntu because he had one bad upgrade experience, and is
> probably running software full of security holes. If he can't spend a
> day a year to upgrade his OS, what makes you think he wants to spend
> time on a more demanding distro?

My point is that due to their rolling-release nature, distros like
Archlinux require less work in the case when one 'forgets' to update the
OS and has to do a 'major upgrade'. That was my experience with both
SuSE and Ubuntu.

> And what happens when the kernel, as it often does, changes the way
> it handles things like devices, and expects the administrator to do
> some tweaking to handle the upgrade? What happens when you upgrade X
> and it no longer supports your video chipset? What happens when you
> upgrade something as basic as the DNS library, and it reacts badly
> with your router?

In the above cases, there is no distro which can save you from some
admin work...and the problem is that people expect a system where,
often, the only admin work is a re-install. :-)

> These are real world examples. Arch is not some magic distribution
> that will make upgrade problems go away.

Sure. But an upgrade in a rolling-release distro is simpler than in an
Ubuntu-like one.

> Yeah, I know. I also run Debian Testing, which is a "rolling
> release". I'm not some Ubuntu noob.

Heh, I could imagine you like 'bleeding edge' considering you lived
with ~x86 and 'unstable' repos. ;)

Now we may close this thread...at least, I do not have anything more
to say. :-D


Sincerely,
Gour

-- 

Gour  | Hlapicina, Croatia  | GPG key: CDBF17CA



signature.asc
Description: PGP signature


Re: DVCS

2011-01-20 Thread Jeff Nowakowski

On 01/20/2011 07:33 AM, Gour wrote:

On Thu, 20 Jan 2011 06:39:08 -0500
Jeff Nowakowski  wrote:



No, I haven't tried it. I'm not going to try every OS that comes down
the pike.


Then please, without any offense, do not give advice about something
which you did not try. I did use Ubuntu...


Please yourself. I quoted from the FAQ from the distribution's main 
site. If that's wrong, then Arch has a big public relations problem. I 
can make rational arguments without having used a system.



That's a heavy investment of time, especially for somebody
unfamiliar with Linux.


Again, you're speaking without personal experience...


From Jonathan M Davis in this thread:

"There is no question that Arch takes more to manage than a number of 
other distros. [..] Arch really doesn't take all that much to maintain, 
but it does have a higher setup cost than your average distro, and you 
do have to do some level of manual configuration that I'd expect a more 
typical distro like OpenSuSE or Ubuntu to take care of for you."




Moreover, in TDPL's foreword, Walter speaks about himself as "..of an
engineer..", so I'm sure he is capable to handle The Arch Way


You're talking about somebody who is running a nearly 3 year old version 
of Ubuntu because he had one bad upgrade experience, and is probably 
running software full of security holes. If he can't spend a day a year 
to upgrade his OS, what makes you think he wants to spend time on a more 
demanding distro?



There are no incompatibilities...if I upgrade the kernel, it means that
the package manager will figure out what components have to be updated...


And what happens when the kernel, as it often does, changes the way it 
handles things like devices, and expects the administrator to do some 
tweaking to handle the upgrade? What happens when you upgrade X and it 
no longer supports your video chipset? What happens when you upgrade 
something as basic as the DNS library, and it reacts badly with your router?


Is Arch going to maintain your config files for you? Is it going to 
handle jumping 2 or 3 versions for software that can only upgrade from 
one version ago?


These are real world examples. Arch is not some magic distribution that 
will make upgrade problems go away.



Remember: there are no packages 'tagged' for any specific release!


Yeah, I know. I also run Debian Testing, which is a "rolling release". 
I'm not some Ubuntu noob.


Re: DVCS

2011-01-20 Thread Andrew Wiley
On Thu, Jan 20, 2011 at 5:39 AM, Jeff Nowakowski  wrote:

> On 01/20/2011 12:24 AM, Gour wrote:
>
>> Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major
>> undertaking (I'm familiar with it since  '99 when I used SuSE and had
>> experience with deps hell.)
>>
>
> Highlighting the problem of waiting too long to upgrade. You're skipping an
> entire release. I'd like to see you take a snapshot of Arch from 2008, use
> the system for 2 years without updating, and then upgrade to the latest
> packages. Do you think Arch is going to magically have no problems?
>

Ironically, I did this a few years back with an Arch box that was setup,
then banished to the TV room as a gaming system, then reconnected to the
internet about two years later (I didn't have wifi at the time, and I still
haven't put a wifi dongle on the box). It updated with no problems and is
still operating happily.
Now, I was expecting problems, but on the other hand, since *all* packages
are in the rolling release model and individual packages contain specific
version dependencies, problems are harder to find than you'd think.


Re: DVCS

2011-01-20 Thread Gour
On Thu, 20 Jan 2011 06:39:08 -0500
Jeff Nowakowski  wrote:


> No, I haven't tried it. I'm not going to try every OS that comes down 
> the pike. 

Then please, without any offense, do not give advice about something
which you did not try. I did use Ubuntu...

> So instead of giving you a bunch of sane defaults, you have to make a 
> bunch of choices up front. 

Right. That's why there is no need for a separate distro based on the DE
the user wants to have; iow, by a simple "pacman -Sy xfce4" you get the
XFCE environment installed...same with GNOME & KDE.

> That's a heavy investment of time, especially for somebody
> unfamiliar with Linux.

Again, you're speaking without personal experience...

Moreover, in TDPL's foreword, Walter speaks about himself as "..of an
engineer..", so I'm sure he is capable to handle The Arch Way (see
section Simplicity at https://wiki.archlinux.org/index.php/Arch_Linux)
which says: "The Arch Way is a philosophy aimed at keeping it
simple. The Arch Linux base system is quite simply the minimal, yet
functional GNU/Linux environment; the Linux kernel, GNU toolchain, and
a handful of optional, extra command line utilities like links and
Vi. This clean and simple starting point provides the foundation for
expanding the system into whatever the user requires." and from there
install one of the major DEs (GNOME, KDE or XFCE) to name a few.

> The upgrade problems are still there. *Every package* you upgrade has
> a chance to be incompatible with the previous version. The longer you 
> wait, the more incompatibilities there will be.

There are no incompatibilities...if I upgrade the kernel, it means that
the package manager will figure out what components have to be updated...

Remember: there are no packages 'tagged' for any specific release!

> Highlighting the problem of waiting too long to upgrade. You're
> skipping an entire release. I'd like to see you take a snapshot of
> Arch from 2008, use the system for 2 years without updating, and then
> upgrade to the latest packages. Do you think Arch is going to
> magically have no problems?

I did an upgrade on my father-in-law's machine which was more than 1yr
old without any problem.

You think there must be some magic to handle it...ask some FreeBSD
user how they do it. ;)


Sincerely,
Gour

-- 

Gour  | Hlapicina, Croatia  | GPG key: CDBF17CA



signature.asc
Description: PGP signature


Re: DVCS

2011-01-20 Thread Jonathan M Davis
On Thursday 20 January 2011 03:39:08 Jeff Nowakowski wrote:
> On 01/20/2011 12:24 AM, Gour wrote:
> > I have a feeling that you just copied the above from the FAQ and never
> > actually tried Archlinux.
> 
> No, I haven't tried it. I'm not going to try every OS that comes down
> the pike. If the FAQ says that you're going to have to be more of an
> expert with your system, then I believe it. If it's wrong, then maybe
> you can push them to update it.
> 
> > The "do-it-yourself" from the above means that in Arch user is not
> > forced to use specific DE, WM etc., can choose whether he prefers WiCD
> > over NM etc.
> 
> So instead of giving you a bunch of sane defaults, you have to make a
> bunch of choices up front. That's a heavy investment of time, especially
> for somebody unfamiliar with Linux.
> 
> > That's not true...In Arch there is simply no Arch-8.10 or Arch-10.10,
> > which means that whenever you update your system the package manager will
> > simply pull all the packages which are required for the desired kernel,
> > gcc version etc.
> 
> The upgrade problems are still there. *Every package* you upgrade has a
> chance to be incompatible with the previous version. The longer you
> wait, the more incompatibilities there will be.
> 
> > Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major
> > undertaking (I'm familiar with it since  '99 when I used SuSE and had
> > experience with deps hell.)
> 
> Highlighting the problem of waiting too long to upgrade. You're skipping
> an entire release. I'd like to see you take a snapshot of Arch from
> 2008, use the system for 2 years without updating, and then upgrade to
> the latest packages. Do you think Arch is going to magically have no
> problems?

There is no question that Arch takes more to manage than a number of other 
distros. However, it takes _far_ less than Gentoo. Things generally just 
work in Arch, whereas you often have to figure out how to fix problems when 
updating on Gentoo. I wouldn't suggest Arch to a beginner, but I'd be _far_ 
more likely to suggest it to someone than Gentoo.

Arch really doesn't take all that much to maintain, but it does have a higher 
setup cost than your average distro, and you do have to do some level of manual 
configuration that I'd expect a more typical distro like OpenSuSE or Ubuntu to 
take care of for you.

So, I'd say that your view of Arch is likely a bit skewed, because you 
haven't actually used it, but it still definitely isn't a distro where you 
just stick in the install disk, install it, and then go on your merry way 
either.

- Jonathan M Davis


Re: DVCS

2011-01-20 Thread Jeff Nowakowski

On 01/20/2011 12:24 AM, Gour wrote:


I have a feeling that you just copied the above from the FAQ and never
actually tried Archlinux.


No, I haven't tried it. I'm not going to try every OS that comes down 
the pike. If the FAQ says that you're going to have to be more of an 
expert with your system, then I believe it. If it's wrong, then maybe 
you can push them to update it.



The "do-it-yourself" from the above means that in Arch user is not
forced to use specific DE, WM etc., can choose whether he prefers WiCD
over NM etc.


So instead of giving you a bunch of sane defaults, you have to make a 
bunch of choices up front. That's a heavy investment of time, especially 
for somebody unfamiliar with Linux.



That's not true...In Arch there is simply no Arch-8.10 or Arch-10.10,
which means that whenever you update your system the package manager will
simply pull all the packages which are required for the desired kernel,
gcc version etc.


The upgrade problems are still there. *Every package* you upgrade has a 
chance to be incompatible with the previous version. The longer you 
wait, the more incompatibilities there will be.



Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major
undertaking (I'm familiar with it since  '99 when I used SuSE and had
experience with deps hell.)


Highlighting the problem of waiting too long to upgrade. You're skipping 
an entire release. I'd like to see you take a snapshot of Arch from 
2008, use the system for 2 years without updating, and then upgrade to 
the latest packages. Do you think Arch is going to magically have no 
problems?


Re: DVCS

2011-01-19 Thread Gour
On Wed, 19 Jan 2011 21:57:46 -0500
Gary Whatmore  wrote:

> This is something the Gentoo and Arch fanboys don't get. 

First of all, I spent >5yrs with Gentoo before jumping to Arch, and
those are really two different beasts.

With Arch I have had practically zero admin time since I did my 1st
install.


> They don't have any idea how little time a typical Ubuntu user
> spends maintaining the system and installing updates.

Moreover, I have spent enough time servicing Ubuntu for new Linux users
(refugees from Windows), and upgrading (*)Ubuntu from e.g. 8.10 to
10.10 was never easy and smooth, while with Arch there is no such
thing as 'no packages for my version'.

> Another option is to turn on all automatic updates. Everything
> happens in the background. It might ask for a sudo password once in a
> week.

What if an automatic update breaks something, which happens? With Arch,
and without automatic updates, I can always wait a few days to be sure
that new stuff (e.g. a kernel) does not bring some undesired regressions.

> I personally use CentOS for anything stable. I *Was* a huge Gentoo
> fanboy, but the compilation simply takes too much time, and something
> is constantly broken if you enable ~x86 packages. 

/me nods having experience with ~amd64

> I've also tried Arch. All the cool kids use it, BUT it doesn't automatically 
> handle
> any configuration files in /etc and even worse, 

You can see which new config files are there (*.pacnew), and a simple
merge with e.g. meld/ediff is something I'd always prefer to having my
conf files automatically overwritten. ;)
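
For example (config file name illustrative):

   find /etc -name '*.pacnew'               # list pending config updates
   meld /etc/rc.conf /etc/rc.conf.pacnew    # merge, then delete the .pacnew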

> if you enable the "unstable" community repositories, the packages
> won't stay there long in the repository - a few days! The
> replacement policy is nuts. One of the packages was already removed
> from the server before pacman (the package manager) started
> downloading it! Arch is a pure community based distro for hardcore
> enthusiastics. It's fundamentally incompatible with stability.

You got what you asked for. :-)

What you say does not make sense: you speak about Ubuntu's stability
and compare it with using 'unstable' packages in Arch, which means
you're comparing apples with oranges...

Unstable packages (now 'testing') are for devs & geeks, but normal
users can have a very decent system by using only core/extra/community
packages, without much hassle.

Sincerely,
Gour (satisfied with Arch, just offering friendly advice and not
caring much what OS people are using as long as it's Linux)

-- 

Gour  | Hlapicina, Croatia  | GPG key: CDBF17CA



signature.asc
Description: PGP signature


Re: DVCS

2011-01-19 Thread Gour
On Wed, 19 Jan 2011 20:28:43 -0500
Jeff Nowakowski  wrote:

> "Q) Why would I not want to use Arch?
> 
> A) [...] you do not have the ability/time/desire for a
> 'do-ityourself' GNU/Linux distribution"

I have a feeling that you just copied the above from the FAQ and never
actually tried Archlinux.

The "do-it-yourself" from the above means that in Arch user is not
forced to use specific DE, WM etc., can choose whether he prefers WiCD
over NM etc. On the Ubuntu side, there are, afaik, at least 3 distros
achieving the same thing (Ubuntu, KUbuntu, XUBuntu) with less
flexibility. :-D

> I also don't see how Archlinux protects you from an outdated system. 
> It's up to you to update your system. The longer you wait, the more 
> chance incompatibilities creep in.

That's not true...In Arch there is simply no Arch-8.10 or Arch-10.10,
which means that whenever you update your system the package manager will
simply pull all the packages which are required for the desired kernel,
gcc version etc.

I service my father-in-law's machine, and he is practically
computer-illiterate; often I do not update his system for months,
knowing well he does not require bleeding-edge stuff, so when it is time
for the update it is simple: pacman -Syu, with some more packages in the
queue than on my machine. ;)

Otoh, with Ubuntu, upgrade from 8.10 to 10.10 is always a major
undertaking (I'm familiar with it since  '99 when I used SuSE and had
experience with deps hell.)


Sincerely,
Gour

-- 

Gour  | Hlapicina, Croatia  | GPG key: CDBF17CA



signature.asc
Description: PGP signature


Re: DVCS

2011-01-19 Thread Gary Whatmore
Jeff Nowakowski Wrote:

> On 01/19/2011 04:18 PM, Gour wrote:
> >
> > That's why we wrote it would be better to use some rolling release
> > like Archlinux, where a distro cannot become so outdated that it's not
> > possible to upgrade easily.
> 
> https://wiki.archlinux.org/index.php/FAQ :
> 
> "Q) Why would I not want to use Arch?
> 
> A) [...] you do not have the ability/time/desire for a 'do-ityourself' 
> GNU/Linux distribution"

This is something the Gentoo and Arch fanboys don't get. They don't have any 
idea how little time a typical Ubuntu user spends maintaining the system and 
installing updates.

The best solution is to hire someone familiar with computers (e.g. a 
nephew, paid in chocolate). It's almost free, and they will want to spend 
hours configuring your system. This way you spend none of your own time 
maintaining it.

Another option is to turn on all automatic updates. Everything happens in the 
background. It might ask for a sudo password once a week.
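
For example, with the stock Ubuntu mechanism (a sketch):

   sudo apt-get install unattended-upgrades
   sudo dpkg-reconfigure -plow unattended-upgrades   # enable daily security updates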

In any case the Ubuntu user spends less than 10 minutes per month maintaining 
the system. It's possible but you need compatible hardware (Nvidia graphics and 
Wifi without a proprietary firmware, at least). You can't beat that.

> I also don't see how Archlinux protects you from an outdated system. 
> It's up to you to update your system. The longer you wait, the more 
> chance incompatibilities creep in.

I personally use CentOS for anything stable. I *Was* a huge Gentoo fanboy, but 
the compilation simply takes too much time, and something is constantly broken 
if you enable ~x86 packages. I've also tried Arch. All the cool kids use it, 
BUT it doesn't automatically handle any configuration files in /etc and even 
worse, if you enable the "unstable" community repositories, the packages won't 
stay there long in the repository - a few days! The replacement policy is nuts. 
One of the packages was already removed from the server before pacman (the 
package manager) started downloading it! Arch is a pure community-based distro 
for hardcore enthusiasts. It's fundamentally incompatible with stability.

> 
> However, the tradeoff is that if you update weekly or monthly, then you 
> will spend more time encountering problems between upgrades. There's no 
> silver bullet here.

Yes. Although I fail to see why upgrading Ubuntu is so hard. It only takes an
hour or two every 6 months or every 3 years. The daily security updates should
work automatically, just like in Windows.

> 
> Personally, I think you should just suck it up, make a backup of your 
> system (which you should be doing routinely anyways), and upgrade once a 
> year.

Dissing Walter has become a sad tradition here. I'm sure a long-time software
professional knows how to make backups; he had likely written his own backup
software and RAID drivers before you were even born.

The reason Waltzy feels so clumsy in the Linux world is probably the Windows XP 
attitude we long-time Windows users all suffer from. Many power users are still 
using Windows XP, and it has a long-term support plan. The support might last 
forever. You've updated Windows XP only three times. Probably 20 versions of 
Ubuntu have appeared since Windows XP was launched. Ubuntu is stuck with the 
"we MUST release SOMETHING at least every 3 years" mentality, just like Windows 
before XP: Win 3.11 -> 95 -> 98 -> XP (at roughly three-year intervals).



Re: DVCS

2011-01-19 Thread Jeff Nowakowski

On 01/19/2011 04:18 PM, Gour wrote:


That's why we wrote it would be better to use some rolling release
like Archlinux where distro cannot become so outdated that it's not
possible to upgrade easily.


https://wiki.archlinux.org/index.php/FAQ :

"Q) Why would I not want to use Arch?

A) [...] you do not have the ability/time/desire for a 'do-ityourself' 
GNU/Linux distribution"


I also don't see how Archlinux protects you from an outdated system. 
It's up to you to update your system. The longer you wait, the more 
chance incompatibilities creep in.


However, the tradeoff is that if you update weekly or monthly, then you 
will spend more time encountering problems between upgrades. There's no 
silver bullet here.


Personally, I think you should just suck it up, make a backup of your 
system (which you should be doing routinely anyways), and upgrade once a 
year.


The worst case scenario is that you re-install from scratch. It's 
probably better to do that once in a while anyways, as cruft tends to 
accumulate when upgrading in place.


Re: DVCS

2011-01-19 Thread Vladimir Panteleev

On Wed, 19 Jan 2011 23:18:13 +0200, Gour  wrote:


On Wed, 19 Jan 2011 19:15:54 + (UTC)
retard  wrote:


"..your Ubuntu version isn't supported anymore. They might have
already removed the package repositories for unsupported versions and
that might indeed lead to problems"


That's why we wrote it would be better to use some rolling release
like Archlinux where distro cannot become so outdated that it's not
possible to upgrade easily.


Walter needs something he can install and get on with compiler hacking.  
ArchLinux sounds quite far from that.


I'd just recommend upgrading to an Ubuntu LTS (which also minimizes the
need to familiarize yourself with a new distribution).


--
Best regards,
 Vladimir   mailto:vladi...@thecybershadow.net


Re: DVCS

2011-01-19 Thread Gour
On Wed, 19 Jan 2011 19:15:54 + (UTC)
retard  wrote:

> "..your Ubuntu version isn't supported anymore. They might have
> already removed the package repositories for unsupported versions and
> that might indeed lead to problems"

That's why we wrote it would be better to use some rolling release
like Archlinux where distro cannot become so outdated that it's not
possible to upgrade easily.


Sincerely,
Gour

-- 

Gour  | Hlapicina, Croatia  | GPG key: CDBF17CA





Re: DVCS (was Re: Moving to D)

2011-01-19 Thread retard
Wed, 19 Jan 2011 19:15:54 +, retard wrote:

> Wed, 19 Jan 2011 03:11:07 -0800, Walter Bright wrote:
> 
>> KennyTM~ wrote:
>>> You should use LF ending, not CRLF ending.
>> 
>> I never thought of that. Fixing that, it gets further, but still
>> innumerable errors:
>> 
>> 
>> [snip]
> 
> I already told you in message digitalmars.d:126586
> 
> "..your Ubuntu version isn't supported anymore. They might have already
> removed the package repositories for unsupported versions and that might
> indeed lead to problems"

So.. the situation is so bad that you can't install ANY packages anymore. 
Accidentally removing packages can make the system unbootable, and those 
applications are gone for good (unless you do a fresh reinstall). My bet 
is that if it isn't already impossible to upgrade to a new version, when 
they remove the repositories for the next Ubuntu version you're 
completely fucked.
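
(For what it's worth, an EOL'd release isn't quite that dead: Canonical moves
retired repositories to old-releases.ubuntu.com. Something like the following
should bring apt back to life - an untested sketch, and the mirror prefix
varies:

sudo sed -i -e 's/[a-z.]*archive\.ubuntu\.com/old-releases.ubuntu.com/g' \
            -e 's/security\.ubuntu\.com/old-releases.ubuntu.com/g' \
            /etc/apt/sources.list
sudo apt-get update

You still get no new security fixes, but at least packages install again.)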


Re: DVCS (was Re: Moving to D)

2011-01-19 Thread retard
Wed, 19 Jan 2011 03:11:07 -0800, Walter Bright wrote:

> KennyTM~ wrote:
>> You should use LF ending, not CRLF ending.
> 
> I never thought of that. Fixing that, it gets further, but still
> innumerable errors:
> 

> [snip]

I already told you in message digitalmars.d:126586

"..your Ubuntu version isn't supported anymore. They might have already 
removed the package repositories for unsupported versions and that might 
indeed lead to problems"

It's exactly like using Windows 3.11 now. Totally unsupported. I'm so sad 
the leader of the D language is so incompetent with open source 
technologies. If you really want to stick with outdated operating system 
versions, why don't you install all the "stable" and "important" services 
on some headless virtual server (on another machine) and run the latest 
Ubuntu on your main desktop? It's hard to believe making backups of your 
/home/walter is so hard. That ought to be everything you need to do with 
desktop Ubuntu.
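
(Making that backup is a one-liner, e.g. - a sketch, the destination path is
made up:

rsync -a --delete /home/walter/ /mnt/backup/walter/   # mirror the home dir

- and then you can reinstall the OS around it as often as you like.)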


Re: DVCS (was Re: Moving to D)

2011-01-19 Thread Walter Bright

Vladimir Panteleev wrote:
On Wed, 19 Jan 2011 13:11:07 +0200, Walter Bright 
 wrote:



KennyTM~ wrote:

You should use LF ending, not CRLF ending.


I never thought of that. Fixing that, it gets further, but still 
innumerable errors:


If apt-get update doesn't fix it, only a distribution upgrade will - looks 
like your Ubuntu version is so old, Canonical is no longer maintaining 
repositories for it. The only alternative is downloading and installing 
the components manually, and that will probably take half a day :P


Yeah, I figured that. Thanks for the try, anyway!


Re: DVCS (was Re: Moving to D)

2011-01-19 Thread Vladimir Panteleev
On Wed, 19 Jan 2011 13:11:07 +0200, Walter Bright  
 wrote:



KennyTM~ wrote:

You should use LF ending, not CRLF ending.


I never thought of that. Fixing that, it gets further, but still  
innumerable errors:


If apt-get update doesn't fix it, only a distribution upgrade will - looks  
like your Ubuntu version is so old, Canonical is no longer maintaining  
repositories for it. The only alternative is downloading and installing the  
components manually, and that will probably take half a day :P


--
Best regards,
 Vladimir   mailto:vladi...@thecybershadow.net


Re: DVCS (was Re: Moving to D)

2011-01-19 Thread Walter Bright

KennyTM~ wrote:

You should use LF ending, not CRLF ending.


I never thought of that. Fixing that, it gets further, but still innumerable 
errors:

walter@mercury:~$ ./buildmeld
[sudo] password for walter:
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  autoconf automake1.7 autotools-dev cdbs debhelper fdupes gettext gnome-pkg-tools html2text intltool
  intltool-debian libmail-sendmail-perl libsys-hostname-long-perl m4 po-debconf python-dev
  python2.5-dev
0 upgraded, 17 newly installed, 0 to remove and 0 not upgraded.
Need to get 7387kB of archives.
After this operation, 23.9MB of additional disk space will be used.
Do you want to continue [Y/n]? Y
WARNING: The following packages cannot be authenticated!
  m4 autoconf autotools-dev automake1.7 html2text gettext intltool-debian po-debconf debhelper fdupes
  intltool cdbs gnome-pkg-tools libsys-hostname-long-perl libmail-sendmail-perl python2.5-dev
  python-dev
Install these packages without verification [y/N]? y
Get:1 http://ca.archive.ubuntu.com intrepid/main m4 1.4.11-1 [263kB]
Err http://ca.archive.ubuntu.com intrepid/main autoconf 2.61-7ubuntu1
  404 Not Found [IP: 91.189.92.170 80]
Err http://ca.archive.ubuntu.com intrepid/main autotools-dev 20080123.1
  404 Not Found [IP: 91.189.92.170 80]
Get:2 http://ca.archive.ubuntu.com intrepid/main automake1.7 1.7.9-9 [391kB]
Get:3 http://ca.archive.ubuntu.com intrepid/main html2text 1.3.2a-5 [95.6kB]
Err http://ca.archive.ubuntu.com intrepid/main gettext 0.17-3ubuntu2
  404 Not Found [IP: 91.189.92.170 80]
Get:4 http://ca.archive.ubuntu.com intrepid/main intltool-debian 0.35.0+20060710.1 [31.6kB]
Get:5 http://ca.archive.ubuntu.com intrepid/main po-debconf 1.0.15ubuntu1 [237kB]
Err http://ca.archive.ubuntu.com intrepid/main debhelper 7.0.13ubuntu1
  404 Not Found [IP: 91.189.92.170 80]
Get:6 http://ca.archive.ubuntu.com intrepid/main fdupes 1.50-PR2-1 [19.1kB]
Err http://ca.archive.ubuntu.com intrepid/main intltool 0.40.5-0ubuntu1
  404 Not Found [IP: 91.189.92.170 80]
Err http://ca.archive.ubuntu.com intrepid/main cdbs 0.4.52ubuntu7
  404 Not Found [IP: 91.189.92.170 80]
Err http://ca.archive.ubuntu.com intrepid/main gnome-pkg-tools 0.13.6ubuntu1
  404 Not Found [IP: 91.189.92.170 80]
Get:7 http://ca.archive.ubuntu.com intrepid/main libsys-hostname-long-perl 1.4-2 [11.4kB]
Err http://ca.archive.ubuntu.com intrepid/main libmail-sendmail-perl 0.79-5
  404 Not Found [IP: 91.189.92.170 80]
Err http://ca.archive.ubuntu.com intrepid-updates/main python2.5-dev 2.5.2-11.1ubuntu1.1
  404 Not Found [IP: 91.189.92.170 80]
Err http://ca.archive.ubuntu.com intrepid/main python-dev 2.5.2-1ubuntu1
  404 Not Found [IP: 91.189.92.170 80]
Err http://security.ubuntu.com intrepid-security/main python2.5-dev 2.5.2-11.1ubuntu1.1
  404 Not Found [IP: 91.189.92.167 80]
Fetched 1050kB in 2s (403kB/s)
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/a/autoconf/autoconf_2.61-7ubuntu1_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/a/autotools-dev/autotools-dev_20080123.1_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/g/gettext/gettext_0.17-3ubuntu2_amd64.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/d/debhelper/debhelper_7.0.13ubuntu1_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/i/intltool/intltool_0.40.5-0ubuntu1_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/c/cdbs/cdbs_0.4.52ubuntu7_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/g/gnome-pkg-tools/gnome-pkg-tools_0.13.6ubuntu1_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/libm/libmail-sendmail-perl/libmail-sendmail-perl_0.79-5_all.deb  404 Not Found [IP: 91.189.92.170 80]
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/p/python2.5/python2.5-dev_2.5.2-11.1ubuntu1.1_amd64.deb  404 Not Found [IP: 91.189.92.167 80]
Failed to fetch http://ca.archive.ubuntu.com/ubuntu/pool/main/p/python-defaults/python-dev_2.5.2-1ubuntu1_all.deb  404 Not Found [IP: 91.189.92.170 80]
E: Unable to fetch some archives, try running apt-get update or apt-get --fix-missing.
E: Failed to process build dependencies
--2011-01-19 03:07:16--  http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
Resolving ftp.gnome.org... 130.239.18.163, 130.239.18.173
Connecting to ftp.gnome.org|130.239.18.163|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 330845 (323K) [application/x-bzip2]
Saving to: `meld-1.5.0.tar.bz2'

100%[=>] 330,845     179K/s   in 1.8s

2011-01-19 03:0

Re: DVCS (was Re: Moving to D)

2011-01-18 Thread KennyTM~

On Jan 19, 11 13:38, Walter Bright wrote:

Vladimir Panteleev wrote:

On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright
 wrote:


Yeah, I could spend an afternoon doing that.


sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;)

(Yes, I just tested it on a Ubuntu install, albeit 10.10. No, no
./configure needed. For anyone else who tries this and didn't already
have meld, you may need to apt-get install python-gtk2 manually.)



It doesn't work:

walter@mercury:~$ ./buildmeld
[sudo] password for walter:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to find a source package for meld
--2011-01-18 21:35:07--
http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2%0D
Resolving ftp.gnome.org... 130.239.18.163, 130.239.18.173
Connecting to ftp.gnome.org|130.239.18.163|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2011-01-18 21:35:08 ERROR 404: Not Found.

tar: meld-1.5.0.tar.bz2\r: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error exit delayed from previous errors
: No such file or directoryld-1.5.0
: command not found: make
'. Stop. No rule to make target `install



You should use LF ending, not CRLF ending.
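
(Any of the usual tricks will strip the carriage returns - for example, as a
sketch:

sed -i 's/\r$//' buildmeld    # delete trailing CRs in place
# or: tr -d '\r' < buildmeld > buildmeld.fixed

- after which the script should run as intended.)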


Re: DVCS (was Re: Moving to D)

2011-01-18 Thread Walter Bright

Vladimir Panteleev wrote:
On Sun, 09 Jan 2011 00:34:19 +0200, Walter Bright 
 wrote:



Yeah, I could spend an afternoon doing that.


sudo apt-get build-dep meld
wget http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2
tar jxf meld-1.5.0.tar.bz2
cd meld-1.5.0
make
sudo make install

You're welcome ;)

(Yes, I just tested it on a Ubuntu install, albeit 10.10. No, no 
./configure needed. For anyone else who tries this and didn't already 
have meld, you may need to apt-get install python-gtk2 manually.)




It doesn't work:

walter@mercury:~$ ./buildmeld
[sudo] password for walter:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to find a source package for meld
--2011-01-18 21:35:07-- 
http://ftp.gnome.org/pub/gnome/sources/meld/1.5/meld-1.5.0.tar.bz2%0D

Resolving ftp.gnome.org... 130.239.18.163, 130.239.18.173
Connecting to ftp.gnome.org|130.239.18.163|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2011-01-18 21:35:08 ERROR 404: Not Found.

tar: meld-1.5.0.tar.bz2\r: Cannot open: No such file or directory
tar: Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error exit delayed from previous errors
: No such file or directoryld-1.5.0
: command not found: make
'.  Stop. No rule to make target `install




Re: DVCS (was Re: Moving to D)

2011-01-18 Thread Johann MacDonagh

On 1/16/2011 5:07 PM, Walter Bright wrote:

We'll be moving dmd, phobos, druntime, and the docs to Github shortly.
The accounts are set up, it's just a matter of getting the svn
repositories moved and figuring out how it all works.

I know very little about git and github, but the discussions about it
here and elsewhere online have thoroughly convinced me (and the other
devs) that this is the right move for D.


I'm sure you've already seen this, but Pro Git is probably the best 
guide for git. http://progit.org/book/


Once you understand what a commit is, what a tree is, what a merge is, 
what a branch is, etc., it's actually really simple (Chapter 9 in Pro 
Git). Definitely a radical departure from svn, and a good one for D.
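
(A quick way to see those objects for yourself, as a sketch on a throwaway
repo:

git init demo && cd demo
echo hi > a.txt
git add a.txt && git commit -m "first"
git cat-file -p HEAD            # a commit: a tree pointer, parents, message
git cat-file -p 'HEAD^{tree}'   # a tree: the blobs/subtrees it names
git branch topic                # a branch: just a tiny file naming a commit
cat .git/refs/heads/topic

Everything else in git is built out of those few object types.)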


Re: DVCS (was Re: Moving to D)

2011-01-18 Thread Robert Clipsham

On 18/01/11 01:09, Brad Roberts wrote:

On Mon, 17 Jan 2011, Walter Bright wrote:


Robert Clipsham wrote:

Speaking of which, are you able to remove the "The Software was not designed
to operate after December 31, 1999" sentence at all, or does that require
you to mess around contacting Symantec? Not that anyone reads it, but it is
kind of off-putting, over a decade later, for anyone who does bother to
read it :P


Consider it like the DNA we all still carry around for fish gills!


In all seriousness, the backend license makes dmd look very strange.  It
threw the lawyers I consulted for a serious loop.  At a casual glance it
gives the impression of software that's massively out of date and out of
touch with the real world.

I know that updating it would likely be very painful, but is it just
painful or impossible?  Is it something that money could solve?

I'd chip in to a fund to replace the license with something less... odd.

Later,
Brad


Make that a nice open source license and I'm happy to throw some money 
at it too :>


--
Robert
http://octarineparrot.com/


Re: DVCS (was Re: Moving to D)

2011-01-17 Thread Brad Roberts
On Mon, 17 Jan 2011, Walter Bright wrote:

> Robert Clipsham wrote:
> > Speaking of which, are you able to remove the "The Software was not designed
> > to operate after December 31, 1999" sentence at all, or does that require
> > you to mess around contacting Symantec? Not that anyone reads it, but it is
> > kind of off-putting, over a decade later, for anyone who does bother to
> > read it :P
> 
> Consider it like the DNA we all still carry around for fish gills!

In all seriousness, the backend license makes dmd look very strange.  It 
threw the lawyers I consulted for a serious loop.  At a casual glance it 
gives the impression of software that's massively out of date and out of 
touch with the real world.

I know that updating it would likely be very painful, but is it just 
painful or impossible?  Is it something that money could solve?

I'd chip in to a fund to replace the license with something less... odd.

Later,
Brad



Re: HT SMART (was Re: DVCS (was Re: Moving to D))

2011-01-17 Thread Jérôme M. Berger
Nick Sabalausky wrote:
> ""J�r�me M. Berger""  wrote in message 
> news:iguask$1dur$1...@digitalmars.com...
>> Simple curiosity: what do you use for SMART monitoring on Windows?
>> I use smard (same as Linux) but where I am reasonably confident that
>> on Linux it will email me if it detects an error condition, I am not
>> as sure of being notified on Windows (where email is not an option
>> because it is at work and Lotus will not accept email from sources
>> other than those explicitly allowed by the IT admins).
>>
> 
> Hard Disk Sentinel. I'm not married to it or anything, but it seems to be 
> pretty good.
> 
> 
Thanks, I'll have a look.
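
(In case it helps anyone else, the Linux side of that is a single line in
smartd.conf - a sketch, with a made-up address:

# watch /dev/sda, mail errors to the address, send a test mail at startup
/dev/sda -a -m someone@example.com -M test

The -M test part confirms the mail path works before you actually need it.)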

Jerome
-- 
mailto:jeber...@free.fr
http://jeberger.free.fr
Jabber: jeber...@jabber.fr





Re: DVCS (was Re: Moving to D)

2011-01-17 Thread Robert Clipsham

On 17/01/11 20:29, Walter Bright wrote:

Robert Clipsham wrote:

Speaking of which, are you able to remove the "The Software was not
designed to operate after December 31, 1999" sentence at all, or does
that require you to mess around contacting Symantec? Not that anyone
reads it, but it is kind of off-putting, over a decade later, for anyone
who does bother to read it :P


Consider it like the DNA we all still carry around for fish gills!


I don't know about you, but I take full advantage of my gills!

--
Robert
http://octarineparrot.com/


Re: DVCS (was Re: Moving to D)

2011-01-17 Thread Walter Bright

Robert Clipsham wrote:
Speaking of which, are you able to remove the "The Software was not 
designed to operate after December 31, 1999" sentence at all, or does 
that require you to mess around contacting Symantec? Not that anyone 
reads it, but it is kind of off-putting, over a decade later, for anyone 
who does bother to read it :P


Consider it like the DNA we all still carry around for fish gills!


Re: DVCS (was Re: Moving to D)

2011-01-17 Thread Robert Clipsham

On 17/01/11 06:25, Walter Bright wrote:

Daniel Gibson wrote:

How will the licensing issue (forks of the dmd backend are only
allowed with your permission) be solved?


It shouldn't be a problem as long as those forks are for the purpose of
developing patches to the main branch, as is done now in svn. I view it
like people downloading the source from digitalmars.com.

Using the back end to develop a separate compiler, setting oneself up as
a distributor of dmd, incorporating it into some other product, etc. -
please ask for permission.

Basically, anyone using it has to agree not to sue Symantec or Digital
Mars, and conform to:

http://www.digitalmars.com/download/dmcpp.html


Speaking of which, are you able to remove the "The Software was not 
designed to operate after December 31, 1999" sentence at all, or does 
that require you to mess around contacting Symantec? Not that anyone 
reads it, but it is kind of off-putting, over a decade later, for anyone 
who does bother to read it :P


--
Robert
http://octarineparrot.com/


Re: DVCS (was Re: Moving to D)

2011-01-16 Thread Jonathan M Davis
On Sunday 16 January 2011 23:17:22 Nick Sabalausky wrote:
> "retard"  wrote in message
> news:ih0b1t$g2g$3...@digitalmars.com...
> 
> > For example used 17" TFTs cost less than $40.
> 
> Continuing to use my 21" CRT costs me nothing.
> 
> > Even the prices aren't very competitive. I only remember that all refresh
> > rates below 85 Hz caused me headaches and eye fatigue. You can't use the
> > max resolution @ 60 Hz for very long.
> 
> I run mine no lower than 85 Hz. It's about 100Hz at the moment.

I've heard that the eye fatigue at 60 Hz is because it matches the mains 
frequency driving the light bulbs in the room, so the flickering of the bulbs 
and of the screen line up. Keeping the refresh rate above 60 Hz avoids the 
problem, and 100 Hz is obviously well above that.

> And I never need to run it at the max rez for long. It's just nice to be
> able to bump it up now and then when I want to. Then it goes back down. And
> yet people feel the need to bitch about me liking that ability.

You can use whatever you want for all I care. It's your computer, your money, 
and your time. I just don't understand what the point of messing with your 
resolution is. I've always just set it at the highest level I can. I've 
currently got 1920 x 1200 on a 24" monitor, and it wouldn't hurt my feelings 
any to get a higher resolution. I probably won't, simply because I'm more 
interested in getting a second monitor than a higher resolution, and I don't 
want to fork out for two new monitors just to get a dual monitor setup (I want 
both monitors to be the same size) when I already have a perfectly good one. 
But I'd still like a higher resolution.

So, the fact that you have and want a CRT and actually want the ability to 
adjust the resolution baffles me, but I see no reason to try and correct you or 
complain about it.

- Jonathan M Davis


Re: DVCS (was Re: Moving to D)

2011-01-16 Thread Nick Sabalausky
"retard"  wrote in message 
news:ih0b1t$g2g$3...@digitalmars.com...
>
> For example used 17" TFTs cost less than $40.
>

Continuing to use my 21" CRT costs me nothing.


> Even the prices aren't very competitive. I only remember that all refresh
> rates below 85 Hz caused me headaches and eye fatigue. You can't use the
> max resolution @ 60 Hz for very long.
>

I run mine no lower than 85 Hz. It's about 100Hz at the moment.

And I never need to run it at the max rez for long. It's just nice to be 
able to bump it up now and then when I want to. Then it goes back down. And 
yet people feel the need to bitch about me liking that ability.


>> Why should *I* spend the money to replace something that already
>> works fine for me?
>
> You might get more things done by using a bigger screen. Maybe get some
> money to buy better equipment and stop complaining.
>

You've got to be kidding me... *other* people start giving *me* crap about 
what *I* choose to use, and you try to tell me *I'm* the one who needs to 
stop complaining? I normally try very hard to avoid direct personal comments 
and to attack only the argument, not the arguer, but seriously, what the hell 
is wrong with your head that you could even think of such an enormously 
idiotic thing to say?

Meh, I'm not going to bother with the rest...





Re: DVCS (was Re: Moving to D)

2011-01-16 Thread Walter Bright

Daniel Gibson wrote:
How will the licensing issue (forks of the dmd backend are only allowed 
with your permission) be solved?


It shouldn't be a problem as long as those forks are for the purpose of 
developing patches to the main branch, as is done now in svn. I view it like 
people downloading the source from digitalmars.com.


Using the back end to develop a separate compiler, setting oneself up as a 
distributor of dmd, incorporating it into some other product, etc. - please 
ask for permission.


Basically, anyone using it has to agree not to sue Symantec or Digital Mars, and 
conform to:


http://www.digitalmars.com/download/dmcpp.html

