Re: What hurts you the most in Perl?

2010-12-01 Thread Fergal Daly
2010/12/1 Jason Purdy :
> To add my five cents, the thing that hurts me the most is that Perl is not
> an accepted language when it comes to the different new platforms.
>
> Our work has adopted Drupal as a CMS and it's written in PHP. It would be
> awesome if it was written in Perl, but as someone else posted in this
> thread, we can pick up languages pretty easily (better than foreign
> languages, no? ;)) and be productive in a few weeks.
>
> I'm also attracted to the new Android and iPad platforms, but there's no
> Perl there, either.

Veering off-topic briefly.

Perl is available through the android scripting engine

http://code.google.com/p/android-scripting/

although only Java has first-class support with access to all the GUI
and other stuff. You can run command-line perl no problem so you can
script fetching things to your phone etc. You could also run a server
in Perl and interact with it through the browser (I know of at least
one python app that does this for android),
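The server-plus-browser approach needs nothing Android-specific on the Perl side. A minimal sketch using only core IO::Socket::INET (a real app would loop on accept() and write HTTP responses; that loop is omitted here so the sketch terminates):

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Bind an HTTP listener on a free local port; the phone's browser
# then acts as the GUI. Port 0 means "pick any free port".
my $server = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    LocalPort => 0,
    Listen    => 5,
    Proto     => 'tcp',
) or die "can't listen: $!";

my $url = 'http://127.0.0.1:' . $server->sockport . '/';
print "Point the browser at: $url\n";
```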

F

> There's no Perl when it comes to creating client-side web applications
> (using JavaScript).
>
> IMHO, Perl is getting relegated to server-side/backend applications and when
> more power is getting brought to the front, it's losing mindshare/focus.
>
> - Jason
>
> http://use.perl.org/~Purdy/journal/31280
>
> On 11/24/2010 07:01 AM, Gabor Szabo wrote:
>>
>> The other day I was at a client that uses Perl in part of their system and
>> we
>> talked a bit about the language and how we try to promote it at various
>> events.
>>
>> Their "Perl person" then told me he would not use Perl now for a large
>> application because:
>>
>> 1) Threads do not work well - they are better in Python and in Java.
>>
>> 2) Using signals and signal handlers regularly crashes perl.
>>
>> 3) He also mentioned that he thinks the OO system of Perl is a hack -
>>     that the objects are hash refs and there is no privacy.
>>
>> So I wonder what hurts *you* the most in Perl?
>>
>> Gabor
>>
>> --
>> Gabor Szabo                     http://szabgab.com/
>> Perl Ecosystem Group       http://perl-ecosystem.org/
>>
>


Re: Goodbye Perl6::Say

2010-04-16 Thread Fergal Daly
On 15 April 2010 21:32, Daniel Staal  wrote:
>
> On Thu, April 15, 2010 4:09 pm, Eric Wilhelm wrote:
>
>> Honestly, if you're setting up a blank machine next week with less than
>> 5.10, not finding Perl6::Say in the index is going to be the least of
>> your problems anyway.  But you should be able to purchase some
>> complaint tokens if you really need them.
>
> I believe RedHat still ships with 5.8.8...  (Not that I'd run RedHat by
> choice if Perl were a consideration at all.)  I don't know about others.
> In general the larger and more commercial the vendor the further behind
> I'd expect them to be.

Ubuntu long-term support is 8.04 and ships with 5.8.8 too.

So there is a cost associated with deleting it. Is there a cost
associated with leaving a single working version on CPAN?

F

> In general I'm for putting in a deprecation warning for some period before
> final removal of any public API.
>
> Daniel T. Staal
>
> ---
> This email copyright the author.  Unless otherwise noted, you
> are expressly allowed to retransmit, quote, or otherwise use
> the contents for non-commercial purposes.  This copyright will
> expire 5 years after the author's death, or in 30 years,
> whichever is longer, unless such a period is in excess of
> local copyright law.
> ---
>
>


Re: Writing tests

2009-12-13 Thread Fergal Daly
2009/12/13 Rene Schickbauer :
> Hi!
>
> I'm currently writing some tests for my Maplat framework.
>
> Except for really simple tests, having PostgreSQL server and memcached
> installed is quite essential (both can be started as temporary instances if
> required as long as the binaries are available).
>
> What is the reasonable response if one or both of them are not available
> during "make test"?
>
> *) FAIL the tests?

If you do this you'll just get spammed to bits.

> *) SKIP the tests?

Maybe but see below.

> *) DIAG("Warning") and skip the tests?

Skip comes with a reason. If you want to give more detail then diag's are fine.

> In my case, skipping the tests will probably exclude > 80% of the
> functionality, so what do i do? I probably can't just assume every
> cpantester has postgresql and memcached installed, can i?

It might be good to factor out all of the database-independent stuff into
its own module(s) if that makes sense, so that part gets widely tested.

Basically you want to avoid the tests being run on systems where they
are doomed to fail. You can do that either by

- refusing to install (a bad idea, e.g. pgsql may be installed after your module)
- reducing your dependency by making things work with one of the
lighter in-memory or testing-oriented SQL DBMSs (I think there is at
least 1 pure perl one) and then have that as a prereq for the tests
- reducing your dependency by using a mock database module that is set
up just to respond to the test queries
- skipping them on such systems - I've had big arguments along these
lines before; I think that declaring a "pass" having skipped important
tests due to unsatisfied deps is a bad idea. Users expect a pass to
mean a pass and will probably not even notice skips whizzing past
during an automated install. Ideally tests should only be skipped when
they are irrelevant - e.g. Windows-only functions on a Linux install.
Skipping them for code that _will_ be called but can't be tested right
now is worse than not testing that code at all - the user is left with
false confidence in the module.

- a final odd idea - if you can detect that you are running under a
CPAN tester (not sure if this is possible), you can dynamically add a
dependency on a sacrificial "postgres_installed" module - this module's
tests always fail if postgres is not available. You will get cpantesters
spam about it but you can just /dev/null that. For testers that have
postgresql it will pass and install and then your real module will run
its full test suite,
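The skip-with-a-reason approach can be sketched like this (DBD::Pg and the TEST_PG_DSN env var are hypothetical stand-ins for whatever Maplat actually needs, not its real names):

```perl
use strict;
use warnings;
use Test::More;

# Probe for the external dependency before deciding what to run.
my $have_pg = $ENV{TEST_PG_DSN} && eval { require DBD::Pg; 1 };

SKIP: {
    # skip() always carries a reason, which shows up in the TAP output.
    skip 'PostgreSQL not available (set TEST_PG_DSN to enable)', 1
        unless $have_pg;
    ok(1, 'database-backed test would run here');
}

pass('dependency-free tests still run');
done_testing();
```

On a machine without PostgreSQL this reports a pass with one explicit skip, which is exactly the false-confidence trade-off discussed above.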

F

> LG
> Rene
>
> --
> #!/usr/bin/perl # 99 bottles Wikipedia Edition
> $c=99;do{print "$c articles of wikipedia on the net,\n$c articles of wiki".
> "pedia,\ndiscuss one at length, delete it at will,\n".--$c." articles of ".
> "wikipedia on the net.\n\n";}while($c);print"So long & thx for the fish\n";
>


Re: Exporter::Safe?

2008-06-22 Thread Fergal Daly
2008/6/21 Ovid <[EMAIL PROTECTED]>:
>
> --- Hans Dieter Pearcey <[EMAIL PROTECTED]> wrote:
>
>> > never does anything to the importing package
>> >
>> > use Foo::Bar as => Bar;
>> >
>> > plops a constant function Bar into your package. The constant is an
>> > object on which you can do
>> >
>> > Bar->some_function(@args)
>> >
>> > and it is the equivalent of calling
>> >
>> > Foo::Bar::some_function(@args)
>>
>> In my TODO is an entry for implementing this for Sub::Exporter.
>>
>> You don't even need to use AUTOLOAD:
>>
>> * create Some::Long::Generated::Package
>> * import symbols from Foo::Bar into SLGP, wrapping each with
>>   sub { shift; $original_code->(@_) }
>> * export Bar() into the calling package
>>   sub Bar () { "Some::Long::Generated::Package" }
>
> This is sort of on the CPAN now.
>
>  use aliased 'Some::Long::Module::Name::For::Customer';
>  my $customer = Customer->new;

That does it for OO modules, which would cover a lot, but not for
function-based modules, which is where the evil of default exports
really kicks in. I wrote the code this evening to make

use pi X::Y::Z as A;
A->foo(1) # same as X::Y::Z::foo(1)

work as I was suggesting. It's a fair bit of symbol poking but nothing
terribly clever.

Unfortunately, I've just realised that it restricts you to functions,
you can't abbreviate X::Y::Z->method() with this.

I suppose people shouldn't be mixing the two but it's annoying. I
can't see any way around that limitation besides also using aliased
and giving it two distinct short names.

http://www.fergaldaly.com/computer/pi/

I won't put it on CPAN because I'm not serious about it, if someone
else wants to use the code, feel free,

F

> It's moderately popular and people are quite happy with it (at least
> from the feedback I've gotten).  Though I'm rather interested in this
> take on the idea.
>
> Cheers,
> Ovid
>
> --
> Buy the book - http://www.oreilly.com/catalog/perlhks/
> Personal blog- http://publius-ovidius.livejournal.com/
> Tech blog- http://use.perl.org/~Ovid/journal/
> Official Perl 6 Wiki - http://www.perlfoundation.org/perl6
> Official Parrot Wiki - http://www.perlfoundation.org/parrot
>


Re: Exporter::Safe?

2008-06-21 Thread Fergal Daly
2008/6/20 Hans Dieter Pearcey <[EMAIL PROTECTED]>:
> On Fri, Jun 20, 2008 at 04:19:41PM +0100, Fergal Daly wrote:
>> To be a little more constructive. Here's something that is
>> implementable and I think reasonable.
>>
>> use Foo::Bar;
>>
>> never does anything to the importing package
>>
>> use Foo::Bar as => Bar;
>>
>> plops a constant function Bar into your package. The constant is an
>> object on which you can do
>>
>> Bar->some_function(@args)
>>
>> and it is the equivalent of calling
>>
>> Foo::Bar::some_function(@args)
>
> In my TODO is an entry for implementing this for Sub::Exporter.
>
> You don't even need to use AUTOLOAD:
>
> * create Some::Long::Generated::Package

If you're going to use a generated package then you can use AUTOLOAD
just for the first call, and you can put AUTOLOAD into a base class.

> * import symbols from Foo::Bar into SLGP, wrapping each with
>  sub { shift; $original_code->(@_) }

I would

sub { shift; goto &$original_code }

so the whole thing is entirely transparent from a stack point of view.
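The whole scheme can be sketched end to end like this (all the package and sub names here are made up for illustration, not the real Sub::Exporter code):

```perl
use strict;
use warnings;

# A toy target module with one function.
package Foo::Bar;
sub add { my ($x, $y) = @_; return $x + $y }

# The generated proxy package: each wrapper drops the class argument
# and goto's the original, so the call stack looks as if Foo::Bar::add
# had been called directly.
package My::Proxy;
{
    no strict 'refs';
    for my $name ('add') {
        my $orig = \&{"Foo::Bar::$name"};
        *{"My::Proxy::$name"} = sub { shift; goto &$orig };
    }
}

# The constant exported into the caller's package.
package main;
sub Bar () { 'My::Proxy' }

print Bar->add(2, 3), "\n";   # dispatches to Foo::Bar::add(2, 3), prints 5
```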

> * export Bar() into the calling package
>  sub Bar () { "Some::Long::Generated::Package" }

Glad I'm not on my own wanting this :)

F

> hdp.
>


Re: Exporter::Safe?

2008-06-20 Thread Fergal Daly
2008/6/20 Ovid <[EMAIL PROTECTED]>:
> Buried deep within some code, someone used a module (Test::Most 0.03)
> which exports a 'set' function.  They weren't actually using that
> module.  It was just leftover cruft.  Unfortunately, the parent class
> of that module inherited from Class::Accessor.
>
> Test::Most exports 'set' and Class::Accessor calls a 'set' method.
> Oops.
>
> I'm trying to think of the best way to deal with this.  My first
> thought is to create a drop in replacement for Exporter which will not
> export a function if caller->can($function) *unless* the person
> explicitly lists it in the import list with a unary plus:

# 2008
use Foo; # exports nothing
use Bar; # exports set with Exporter::Safe

set() # Bar

# 2009 after upgrading some modules
use Foo; # new version in 2009 exports set
use Bar; # exports set with Exporter::Safe

set() # now Foo and triggers rm -rf / :)


Of course switching the order of imports gives the problems without
Exporter::Safe.

The upshot is that I believe there is no such thing as safe default
exports. Python gets this right with

import Foo
import Bar

Bar.set() # always works no matter what Foo suddenly starts doing.

It deals with long package names by doing

from Stupid.Long.Package import Name
Name.Foo

So, what would be interesting would be to find a way to bring the
short-names-in-my-own-namespace benefits of Python to Perl and
abandon default exports entirely,
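The collision scenario above can be demonstrated in a few lines; fully qualified calls (the Python-style discipline) are immune to it. Foo and Bar here are throwaway names for illustration:

```perl
use strict;
use warnings;

# Two modules that both happen to define set().
package Foo;
sub set { 'Foo::set' }

package Bar;
sub set { 'Bar::set' }

package main;

# With default exports, whichever module exported set() last would
# silently win. A fully qualified call cannot be hijacked:
my $which = Bar::set();
print "$which\n";   # always Bar's set, whatever Foo starts exporting
```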

F

>  use Test::Most plan => 3, '+set';
>
> Are there better strategies?
>
> Cheers,
> Ovid
>
> --
> Buy the book  - http://www.oreilly.com/catalog/perlhks/
> Personal blog - http://publius-ovidius.livejournal.com/
> Tech blog - http://use.perl.org/~Ovid/journal/
>


Re: Exporter::Safe?

2008-06-20 Thread Fergal Daly
To be a little more constructive. Here's something that is
implementable and I think reasonable.

use Foo::Bar;

never does anything to the importing package

use Foo::Bar as => Bar;

plops a constant function Bar into your package. The constant is an
object on which you can do

Bar->some_function(@args)

and it is the equivalent of calling

Foo::Bar::some_function(@args)

Yes it would be slower as it would have to go through AUTOLOAD and
method calls but whether that's a problem depends on whether you
value CPU cycles more than brain cycles.

Since I'm in maintenance-only mode for perl, these days I'm not
actually going to implement this. Most of my coding is in python now
and I miss plenty about Perl but not imports, exports and
really::long::symbol::names::that::have::to::replace::everywhere::if::you::drop::in::a::different::module::with::the::same::interface,

F

2008/6/20 Fergal Daly <[EMAIL PROTECTED]>:
> 2008/6/20 Ovid <[EMAIL PROTECTED]>:
>> Buried deep within some code, someone used a module (Test::Most 0.03)
>> which exports a 'set' function.  They weren't actually using that
>> module.  It was just leftover cruft.  Unfortunately, the parent class
>> of that module inherited from Class::Accessor.
>>
>> Test::Most exports 'set' and Class::Accessor calls a 'set' method.
>> Oops.
>>
>> I'm trying to think of the best way to deal with this.  My first
>> thought is to create a drop in replacement for Exporter which will not
>> export a function if caller->can($function) *unless* the person
>> explicitly lists it in the import list with a unary plus:
>
> # 2008
> use Foo; # exports nothing
> use Bar; # exports set with Exporter::Safe
>
> set() # Bar
>
> # 2009 after upgrading some modules
> use Foo; # new version in 2009 exports set
> use Bar; # exports set with Exporter::Safe
>
> set() # now Foo and triggers rm -rf / :)
>
>
> Of course switching the order of imports gives the problems without
> Exporter::Safe.
>
> The upshot is that I believe there is no such thing as safe default
> exports. Python gets this right with
>
> import Foo
> import Bar
>
> Bar.set() # always works no matter what Foo suddenly starts doing.
>
> It deals with long package names by doing
>
> from Stupid.Long.Package import Name
> Name.Foo
>
> So, what would be interesting would be to find a way to bring the
> short-names-in-my-own-namespace benefits of Python to Perl and
> abandon default exports entirely,
>
> F
>
>>  use Test::Most plan => 3, '+set';
>>
>> Are there better strategies?
>>
>> Cheers,
>> Ovid
>>
>> --
>> Buy the book  - http://www.oreilly.com/catalog/perlhks/
>> Personal blog - http://publius-ovidius.livejournal.com/
>> Tech blog - http://use.perl.org/~Ovid/journal/
>>
>


Re: Exporter::Safe?

2008-06-20 Thread Fergal Daly
Hmm. I seem to have misunderstood your problem. The stuff below
remains true but, to be relevant to your mail, should include stuff
about subclasses. The principle is the same: changing what you export
based on something other than what the importer is requesting will
cause mysterious breakage,

F

2008/6/20 Fergal Daly <[EMAIL PROTECTED]>:
> 2008/6/20 Ovid <[EMAIL PROTECTED]>:
>> Buried deep within some code, someone used a module (Test::Most 0.03)
>> which exports a 'set' function.  They weren't actually using that
>> module.  It was just leftover cruft.  Unfortunately, the parent class
>> of that module inherited from Class::Accessor.
>>
>> Test::Most exports 'set' and Class::Accessor calls a 'set' method.
>> Oops.
>>
>> I'm trying to think of the best way to deal with this.  My first
>> thought is to create a drop in replacement for Exporter which will not
>> export a function if caller->can($function) *unless* the person
>> explicitly lists it in the import list with a unary plus:
>
> # 2008
> use Foo; # exports nothing
> use Bar; # exports set with Exporter::Safe
>
> set() # Bar
>
> # 2009 after upgrading some modules
> use Foo; # new version in 2009 exports set
> use Bar; # exports set with Exporter::Safe
>
> set() # now Foo and triggers rm -rf / :)
>
>
> Of course switching the order of imports gives the problems without
> Exporter::Safe.
>
> The upshot is that I believe there is no such thing as safe default
> exports. Python gets this right with
>
> import Foo
> import Bar
>
> Bar.set() # always works no matter what Foo suddenly starts doing.
>
> It deals with long package names by doing
>
> from Stupid.Long.Package import Name
> Name.Foo
>
> So, what would be interesting would be to find a way to bring the
> short-names-in-my-own-namespace benefits of Python to Perl and
> abandon default exports entirely,
>
> F
>
>>  use Test::Most plan => 3, '+set';
>>
>> Are there better strategies?
>>
>> Cheers,
>> Ovid
>>
>> --
>> Buy the book  - http://www.oreilly.com/catalog/perlhks/
>> Personal blog - http://publius-ovidius.livejournal.com/
>> Tech blog - http://use.perl.org/~Ovid/journal/
>>
>


Re: Why is use_ok failing in this test script?

2008-05-17 Thread Fergal Daly
2008/5/17 David Fleck <[EMAIL PROTECTED]>:
> I hope someone can help out this novice test writer.  I have a module that
> runs several test scripts, and recently they have started to fail on some
> tester's machines.  The tests work fine for me, and I can't see anything
> in the Test::More documentation that tells me what's going on.
>
> An example test script starts like this:
>
>
>  # Before `make install' is performed this script should be runnable with
>  # `make test'. After `make install' it should work as `perl Gtest.t'
>
>  #
>
>  use Test::More; BEGIN { use_ok('Statistics::Gtest') };
>
>  #
>
>  my $twothreefile = "t/2x3int.txt";
> [... rest of file follows ...]
>
>
> and, increasingly, the test fails, according to the emails I get and the
> test results I see on CPAN:
>
>
>  /usr/bin/perl.exe "-MExtUtils::Command::MM" "-e" "test_harness(0, 
> 'blib/lib', 'blib/arch')" t/*.t
>  t/file_input..You tried to run a test without a plan at 
> t/file_input.t line 6.

As it says here, you ran a test before you set the plan.

use Test::More tests => 1; # or however many tests you have
BEGIN { use_ok('Statistics::Gtest') };

is what you should be doing.

The puzzling thing is how this ever worked for you. The only thing I
can think of is that somehow a plan was being set from within
Statistics::Gtest,

F

>  BEGIN failed--compilation aborted at t/file_input.t line 6.
>   Dubious, test returned 255 (wstat 65280, 0xff00)
>   No subtests run
>
>
> Line 6 is the 'use Test::More' line, which is copied pretty much straight
> from the POD.  But again, it works fine on my one local machine.  What's
> going on here? And how do I fix it?
>
> (Incidentally, I do declare a plan, a few lines further down in the test
> script:
>
>  plan tests => scalar (@file_objects) * 17;
>
> but I didn't think that was needed in the BEGIN block.)
>
> --
> David Fleck
> [EMAIL PROTECTED]
>
>


Re: XS wrapper around system - how to test the wrapper but not the system?

2008-01-28 Thread Fergal Daly
You could make the called function mockable

int (*ptr_getaddrinfo)(const char *node, const char *service,
                       const struct addrinfo *hints,
                       struct addrinfo **res) = getaddrinfo;

void mock_it(int (*new_ptr)(const char *, const char *,
                            const struct addrinfo *,
                            struct addrinfo **)) {
  ptr_getaddrinfo = new_ptr;
}

so that when testing you're not calling the system one. It's a fairly
standard mocking technique, it just gets a bit ugly in C because it's
not a dynamic language - you have to replace all your calls to
getaddrinfo with calls to ptr_getaddrinfo - maybe there's some jiggery
pokery you could do to avoid that, I'm not sure.

The other alternative is to create a small library with a mock
getaddrinfo function in it and, when compiling the tests, make sure it
gets linked in ahead of libc, but I fear that doing that in a
cross-platform way while maintaining your sanity may be tricky,

F


On 29/01/2008, Paul LeoNerd Evans <[EMAIL PROTECTED]> wrote:
> I'm finding it difficult to come up with a good testing strategy for an
> XS module that's just a thin wrapper around an OS call, without
> effectively also testing that function itself. Since its behaviour has
> minor variations from system to system, writing a test script that can
> cope is getting hard.
>
> The code is the 0.08 developer releases of Socket::GetAddrInfo; see
>
>   http://search.cpan.org/~pevans/Socket-GetAddrInfo-0.08_5/
>
> for latest.
>
> The code itself seems to be behaving on most platforms; most of the test
> failures come from such things as different OSes behaving differently if
> asked to resolve a host called "something.invalid", or quite whether any
> system knows the "ftp" service, or what happens if it wants to reverse
> resolve unnamed 1918 addresses (e.g. 192.168.2.2).
>
> The smoke testers page is showing a number of FAILs on most platforms not
> Linux (where I develop), probably because of assumptions the tests make
> that don't hold there any more. E.g. one problem I had was BSD4.4-based
> systems, whose struct sockaddr_in includes the sin_len field.
>
>   http://cpantesters.perl.org/show/Socket-GetAddrInfo.html
>
> Does anyone have any strategy suggestions for this?
>
> --
> Paul "LeoNerd" Evans
>
> [EMAIL PROTECTED]
> ICQ# 4135350   |  Registered Linux# 179460
> http://www.leonerd.org.uk/
>
>


Re: lambda - a shortcut for sub {...}

2007-10-13 Thread Fergal Daly
On 12/10/2007, Bill Ward <[EMAIL PROTECTED]> wrote:
> On 10/11/07, A. Pagaltzis <[EMAIL PROTECTED]> wrote:
> > * Eric Wilhelm <[EMAIL PROTECTED]> [2007-10-11 01:05]:
> > >   http://search.cpan.org/~ewilhelm/lambda-v0.0.1/lib/lambda.pm
> >
> > If I saw this in production code under my responsibility, I'd
> > submit it to DailyWTF. However, I have nothing against its use
> > in code I'll never see. Carry on.
> >
> > This opinion brought to you by Andy Lester's Perlbuzz rant.
>
> What worries me is someone's gonna submit an otherwise useful module
> to CPAN that uses this feature.

I doubt it. Anyone who can produce a genuinely useful module on CPAN
is unlikely to want to add a dependency for the sake of a few keystrokes.
There are people who won't even use "better" testing modules because
it would add a dependency,

F


Re: what's the right way to test a source filter?

2007-08-08 Thread Fergal Daly
I've never used source filters but if Perl allows you to extract the
post-filtered source code then I'd test that with a whole bunch of
snippets. If not then I'd test the compiled code against expected
compiled code by running both through B::Deparse (or something like
it; demerphq has a module for sub comparisons),
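The B::Deparse comparison can be sketched like this; for a real source filter you would deparse the code that came out of the filter against a hand-written equivalent (the subs below are trivial placeholders):

```perl
use strict;
use warnings;
use B::Deparse;

# Compare two subs by their deparsed source rather than by reference.
my $deparse = B::Deparse->new('-p');

my $filtered = sub { my $x = shift; return $x * 2 };
my $expected = sub { my $x = shift; return $x * 2 };

my $same = $deparse->coderef2text($filtered)
        eq $deparse->coderef2text($expected);
print $same ? "equivalent\n" : "differ\n";
```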

F

On 07/08/07, David Nicol <[EMAIL PROTECTED]> wrote:
> so I am closer than ever to releasing my way-cool source filter module,
> which is based on Filter::Simple.  Big question:  how do I write the test
> script?
>


Re: Test failures - I can't work out why

2007-04-29 Thread Fergal Daly

On 28/04/07, Eric Wilhelm <[EMAIL PROTECTED]> wrote:

# from Fergal Daly
# on Saturday 28 April 2007 06:28 am:

>You don't have it as a prereq in Makefile.PL. It's possible the
>machines running the test don't have it installed (people do weird
>things to their perl installs sometimes),

Like delete core modules?  I don't think it's a prereq issue.


It must be nice to live in a world where all bug reports come from
people with sane configurations :)

F



# from Paul LeoNerd Evans on Saturday 28 April 2007 05:29 am:

> /home/cpan/perl588/lib/5.8.8/i686-linux-thread-multi-64int-ld/auto/B/
>B.so: undefined symbol: Perl_Icheckav_save_ptr at
> /home/cpan/perl588/lib/5.8.8/XSLoader.pm line 70.

I think the problem is the "$ENV{PERL} || 'perl'" bit.  You want $^X.

>I can't see any common differences between the machines it fails on,
> and the machines it passes on

If you look again, you might find that they all have something like this
in common:

  Perl: $^X = /home/cpan/perl588/bin/perl

I'm guessing that the PERL5LIB in the testing rig combined with your
test script forcing use of the system perl is causing perl5.6 or
whatever to try to load the .so for 5.8.8.
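The $^X suggestion amounts to this: re-invoke the interpreter that is running the test, never a bare 'perl' from PATH. A minimal sketch:

```perl
use strict;
use warnings;

# $^X is the path of the perl binary executing this script; using it
# for subprocesses guarantees the child sees the same version and libs.
my $perl = $^X;
my $out  = `"$perl" -e "print 2 + 2"`;
die "subprocess failed: $?" if $?;
print "[$perl] says: $out\n";
```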

--Eric
--
The first rule about Debian is you don't talk about Debian
---
http://scratchcomputing.com
---



Re: Test failures - I can't work out why

2007-04-28 Thread Fergal Daly

You don't have it as a prereq in Makefile.PL. It's possible the
machines running the test don't have it installed (people do weird
things to their perl installs sometimes),

F

On 28/04/07, Paul LeoNerd Evans <[EMAIL PROTECTED]> wrote:

I've got a large number of failures (9 fail vs. 6 pass) on one module of
mine, which is dragging my stats down quite a bit, and I've no idea why:

  http://cpantesters.perl.org/show/B-LintSubs.html#B-LintSubs-0.03

They all seem to fail on some variant of:

  t/01happyCan't load
'/home/cpan/perl588/lib/5.8.8/i686-linux-thread-multi-64int-ld/auto/B/B.so' for 
module B: 
/home/cpan/perl588/lib/5.8.8/i686-linux-thread-multi-64int-ld/auto/B/B.so: 
undefined symbol: Perl_Icheckav_save_ptr at 
/home/cpan/perl588/lib/5.8.8/XSLoader.pm line 70.

That looks very much like a problem in B.so itself. But my module,
B::LintSubs is just a single pure-perl module of that name, I don't go
anywhere near B itself, so why does B fail here?

I can't see any common differences between the machines it fails on, and
the machines it passes on (7, including my desktop at home I tested it
on).

Does anyone have any ideas?

--
Paul "LeoNerd" Evans

[EMAIL PROTECTED]
ICQ# 4135350   |  Registered Linux# 179460
http://www.leonerd.org.uk/




Re: Another non-free license - PerlBuildSystem

2007-02-21 Thread Fergal Daly

On 20/02/07, Shlomi Fish <[EMAIL PROTECTED]> wrote:

Hi Ashley!

On Tuesday 20 February 2007, Ashley Pond V wrote:
> I didn't want to feed this so responded personally to a couple off
> list. Y'all couldn't resist sharing your politics and goofs though so…
> I apologize to the disinterested if this just feeds it.
>
> I find it difficult to believe, being a middling hacker compared to
> some of you guys, that I'm the only one on this list who has ever
> written code that ended up used by a military group; or the only one
> who regretted it.
>
> I expressed interest in such a license getting hammered out by some
> experts because I don't like being a party to mass murder. Between
> 200,000 and 750,000 (depending on whose figures you prefer) Iraqis have
> died at the hands of the US government since 1990. They can take my tax
> money to do it at the threat of prison but I would like to think it
> *might* be possible to stop them from taking my otherwise freely given
> work (the lack of Earth-moving nature of which is entirely irrelevant
> to any such debate) to do it. If such a license would be immaterial
> then so are all other petitions.
>
> The license I'd love to see would be a Non-Governmental (Personal and
> Private Industry Only). One can crack wise or politicize the idea but
> it is worth bringing up. Whether or not others would honor such a
> license does not mitigate one's attempt to live ethically.
>

As you may well be aware the Free Software Definition:

http://www.gnu.org/philosophy/free-sw.html

Specifically says that the software should have:

<<<
The freedom to run the program, for any purpose.
>>>

The Open Source Definition ( http://www.opensource.org/docs/definition.php ),
in articles 5 and 6, prohibits discrimination against persons or groups and
against fields of endeavour.

Thus, if you prohibit use of your code by militaries or other government
entities, it won't be free software or open source. Furthermore, your code
will be rendered incompatible with the GPL and similar licences that can only
be linked against a certain subset of such licences. See for example:

http://www.dwheeler.com/essays/gpl-compatible.html

Now, why was free software defined as such that is available to be used "for
any purpose"? I don't know for sure, but I have my own reasons for that.

Let's suppose you and a few people prohibit your software from being used by
armed forces. Now there are also many anarchists in the world, who dislike
governments, and some of them are going to restrict their software from being
used by governments. Then I would decide that, since I hate racism, my
software cannot be used for racist purposes. And a bunch of
antisemites are going to restrict their software from being used by Jews.

As a result, the "open-source" software world will become fractured by such
restricted software, and people who would like to make use of various pieces
of software for their own use will have to carefully look at all of their
licences for such incompatibilities with their purposes.

Furthermore, let's suppose I'm a consultant who sets up web-sites. I'd like to
write a Content Management System for facilitating my present and future
work. However, since I don't know who my future clients are going to be I
won't be able to use any of this software for fear my future client would be
a military group, a government, a racist person or organisation, a Jew or
someone whose first name starts with the letter "S". Eventually, I may have
to implement everything from scratch.


Isn't that the point? If you object to group A then you'll be quite
happy when people who want to work with group A have to implement
everything from scratch. This is exactly what happens if you base your
code on GPL code and then want to turn it into a closed product.

Of course it makes you less likely to receive code contributions from
others but that's obviously the price you're willing to pay for your
politics,

F


As someone wise has once commented "The road to hell is paved with good
intentions", and what I said just proved it.

I find a lot of value in keeping open source software usable by everybody for
every purpose. If you want to make your software unlike this, you have the
right to, but be aware that I and many other people won't get near it with a
ten foot pole, and it won't become part of most distributions, or be used by
most open-source projects. So you'll essentially make it unusable.

So you should choose whether you want to make your software popular, or you
want to protect its "abuse" but also prevent almost every legitimate use of
it.

Regards,

Shlomi Fish

-
Shlomi Fish  [EMAIL PROTECTED]
Homepage:http://www.shlomifish.org/

Chuck Norris wrote a complete Perl 6 implementation in a day but then
destroyed all evidence with his bare hands, so no one will know his secrets.



Re: Another non-free license - PerlBuildSystem

2007-02-21 Thread Fergal Daly

On 20/02/07, Arthur Corliss <[EMAIL PROTECTED]> wrote:

On Tue, 20 Feb 2007, Ashley Pond V wrote:

> I didn't want to feed this so responded personally to a couple off list.
> Y'all couldn't resist sharing your politics and goofs though so… I apologize
> to the disinterested if this just feeds it.
>
> I find it difficult to believe, being a middling hacker compared to some of
> you guys, that I'm the only one on this list who has ever written code that
> ended up used by a military group; or the only one who regretted it.

I've not only written code used by the military, but I also served in the
military.  Despite the idiots who like to portray us a baby killers I'm
proud of it.  And you're so surprised that I find you an offensive jackass
(that's right -- I looked at your site).

> I expressed interest in such a license getting hammered out by some experts
> because I don't like being a party to mass murder. Between 200,000 and
> 750,000 (depending on whose figures you prefer) Iraqis have died at the hands
> of the US government since 1990. They can take my tax money to do it at the
> threat of prison but I would like to think it *might* be possible to stop
> them from taking my otherwise freely given work (the lack of Earth-moving
> nature of which is entirely irrelevant to any such debate) to do it. If such
> a license would be immaterial then so are all other petitions.

You're an idiot who thinks we're to blame for everything that's wrong in
the world.  That's your right, of course, and it's my right to call you on
the bogus numbers.  Only a drooling, spoon-fed moron who's incapable of
research could come up with those kinds of errors.  Where's the proof of those
numbers?  At least sites like iraqbodycount.org actually give you access to
the database of incidents and reported body counts, and they're only up to
62k.  With the exception of Desert Storm this has been the safest war for
both sides we've ever conducted.


Read Iraq Body Count's FAQ:

"What we are attempting to provide is a credible compilation of
civilian deaths that have been reported by recognized sources. Our
maximum therefore refers to reported deaths - which can only be a
sample of true deaths unless one assumes that every civilian death has
been reported."

In fact their criterion is that a death must be reported in at least
two "credible" sources, and given that "credible" journalists cannot
travel in Iraq, this means the numbers are only loosely related to
reality. So IBC accurately counts something that just confuses the
issue.

The Lancet study, on the other hand, uses the same methodology as studies
in Darfur, the Congo, the Balkans and a variety of other conflict zones.
Strangely the numbers have been accepted without argument for all
those other places but the Iraq studies are hotly disputed by all
kinds of people who know nothing about statistics and/or how to count
deaths in a war zone. They are generally not disputed by
statisticians.

F


This is the wrong kind of forum for this kind of stupidity.  Just code, damn
it, and quit whining.

--Arthur Corliss
  Bolverk's Lair -- http://arthur.corlissfamily.org/
  Digital Mages -- http://www.digitalmages.com/
  "Live Free or Die, the Only Way to Live" -- NH State Motto


Re: Another non-free license - PerlBuildSystem

2007-02-19 Thread Fergal Daly

On 19/02/07, imacat <[EMAIL PROTECTED]> wrote:

   I support the GNU over BSD license, though this is not the subject
here.

On Sat, 17 Feb 2007 12:53:38 -0600
Ken Williams <[EMAIL PROTECTED]> wrote:
> On Feb 16, 2007, at 1:01 PM, Ashley Pond V wrote:
>* You, are part or, work for an entity that directely produces
> work or goods for any of the above.

I'm against this, too.  This term may block large parts of the
world from legal use.

If the army asked me to do a project, should I take it?  Of course
I'd take it.  If a young lad from a poor family, whose family can't pay
his tuition, came to me for advice, should I suggest he not join the
army?  No.  The armed forces pay quite well, and have better job
security than many other businesses.

Besides, the ones who truly create wars are the politicians, not
the armies.  Armies are merely people who follow their leaders, and
their ultimate leaders are the presidents and congresses.  So, will you
block all governmental use of your module?

Whether the army is good or bad may not be the subject here.  But
the modern economic system is complex.  This kind of treatment of the
army is not fair.


Yes, leave the army alone, they're only following orders. By the way,
it's not clear which army you're talking about. Is it just nice,
responsible armies who only kill bad people? (They know they're bad
people because the politicians said so.) How about the Indonesian army
or the Burmese army? I guess once you take morality out of your
economic decisions, things become much simpler.

Anyway, to keep this slightly relevant: I wouldn't choose this license.
It's incredibly vague and is missing all the usual stuff about
redistribution, modification etc; it just says "free software", which
has no fixed meaning. That said, I have no problem with someone who
tries to achieve some other (political) goal through their free
software license; Stallman is doing much the same thing.

Given that it's possible to upload a CPAN dist with no license at all,
if CPAN wants to start getting picky about licenses, there's a lot of
work to be done,

F


Re: Delete hate speech module

2007-02-08 Thread Fergal Daly

On 08/02/07, imacat <[EMAIL PROTECTED]> wrote:

On Thu, 8 Feb 2007 01:28:12 -0800
Eric Wilhelm <[EMAIL PROTECTED]> wrote:
> # from Andy Lester
> # on Wednesday 07 February 2007 10:25 pm:
> >> I'd just read of Time::Cube, a disjointed rant full of hate speech.
> >> This is the kind of content that is most deserving of deletion from
> >> CPAN. Would the responsible parties please go nuke this, please?
> Given that the license does not allow it to live on CPAN, I'd say we
> have to remove it.

Correction: Time::Cubic.

As I'm not a citizen of the U.S., I had no idea about this Time Cube
theory until now.  I paid a visit.  Well, even if it came with a
valid open source license, I don't agree it's proper to allow such
hateful words on CPAN.  That is really very bad.

I understand that for some psychos (who may or may not be Time Cube
followers) the best response is to ignore them rather than fight them.
But since hatred is involved in this Time::Cubic, psycho or not, it
will hurt the public image of CPAN, which many people have worked hard
for a long time to improve.  It would be very bad if Time Cube followers
gathered and planned to kill Jews or educators on CPAN under an Artistic
license.  CPAN may not always serve the public interest, but it must
not hurt the public, nor become a tool to hurt the public.

This is only my humble opinion.


While I do agree that this should be taken down since CPAN is
breaching the license, I would point out that it appears to be a joke.
There are several "LOL"s in the license and the code, and the whole
bantown thing seems to be a project to produce amusing but useless
code - an IRC bot that opens a channel, invites people at random and
kicks them out as soon as they join; a program to randomly trash the
registers of a running process. Also, any code containing

sub dongers {

has to be a joke. Sadly, www.timecube.com, on which this is based, is not as funny,

F



--
Best regards,
imacat ^_*' <[EMAIL PROTECTED]>
PGP Key: http://www.imacat.idv.tw/me/pgpkey.txt

News: http://www.wov.idv.tw/
Tavern IMACAT's: http://www.imacat.idv.tw/
TLUG List Manager: http://lists.linux.org.tw/cgi-bin/mailman/listinfo/tlug




Re: James Freeman's other modules (was: Re: CGI::Simple)

2007-01-12 Thread Fergal Daly

Changing the subject from Keenan to Freeman (James Keenan is not MIA),

F

On 12/01/07, Andy Armstrong <[EMAIL PROTECTED]> wrote:


On 12 Jan 2007, at 10:16, David Landgren wrote:
> Do we wait until someone else manifests a need to dust off one of
> them to hand over maintenance? Or do we forget about it until next
> time? If it's worth it, then I would volunteer.

Actually I was thinking of volunteering for the whole lot of them -
but then decided that they're probably not that valuable to anyone.

I was also wondering whether - given that backpan exists so people
can always find them if they really want them - there shouldn't be a
mechanism for removing modules that are unloved and unused.

--
Andy Armstrong, hexten.net




Re: Benefits of Test::Exception

2006-12-31 Thread Fergal Daly

On 31/12/06, Paul LeoNerd Evans <[EMAIL PROTECTED]> wrote:

On Sun, Dec 31, 2006 at 02:13:47AM +, Fergal Daly wrote:
> I think the code above should die complaining that dies_ok() is
> unknown. So you need to do even more.

No it doesn't... This is one of those things about perl - code that
looks like a function call is never checked to see if the function
exists until runtime:

  #!/usr/bin/perl
  use warnings;
  use strict;

  print "Here I have started running now\n";

  foobarsplot();

  ^-- won't complain until runtime.

That's what gave me the motivation to write B::LintSubs, by the way:

  http://search.cpan.org/~pevans/B-LintSubs-0.02/


I just forgot that SKIP actually doesn't execute the code (I was
thinking it just marked the test results as to be ignored).


> Don't you get the same problem with any non-standard test module?

Yes; but Test::More seems to be installed as part of whatever the
testing core is on various things that automatically test my CPAN
modules. I note whenever I upload something, lots of machines around the
world manage to automatically test it. I use Test::More everywhere and
they can cope.


I use whatever test modules I feel like (for example I always use
Test::NoWarnings) and the same machines test my modules without
problems. The automatic testing tools install whatever deps are
necessary (assuming they're listed as deps in Makefile.PL). Are you
seeing brokenness or are you just expecting it?

F


> If you already have some CPAN dependencies then adding another for
> testing is perfectly reasonable. It would be nice if the various CPAN
> tools could understand the difference between a runtime dependency and
> a test-time one though,

EU::MM can't, but I believe Module::Build can. That said, the consensus
on #perl/Freenode is that the latter isn't really ready yet, so just use
the former. Ho hum..

--
Paul "LeoNerd" Evans

[EMAIL PROTECTED]
ICQ# 4135350   |  Registered Linux# 179460
http://www.leonerd.org.uk/







Re: Benefits of Test::Exception

2006-12-31 Thread Fergal Daly

On 31/12/06, Paul LeoNerd Evans <[EMAIL PROTECTED]> wrote:

I recently stumbled upon Test::Exception, and wondered if it might make
my test scripts any better.. So far I'm struggling to see any benefit,
for quite a lot of cost.

Without using this module, my tests look like:

eval { code() };
ok( $@, 'An exception is raised' );

(and possibly either of)
like( $@, qr/some string match/, 'Exception type' );
(or)
ok( $@->isa( "Thing" ), 'Exception type' );
(to check the type)

Whereas, if I want to use the module, I have to first note that it isn't
standard install, so I should start the test with something like:

eval { require Test::Exception; import Test::Exception; };
my $can_test_exception = $@ ? 0 : 1;

Then each test that might use it should be:

SKIP: {
skip "No Test::Exception", 1 unless $can_test_exception;

dies_ok( sub { code() },
 'An exception is raised' );
}

So, a lot more code, to achieve the same end result... Plus, I'm now in
the situation where if Test::Exception isn't installed, the test won't
be run at all.


I think the code above should die complaining that dies_ok() is
unknown. So you need to do even more.
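A hedged sketch of that "even more" (assuming only Test::Exception's documented dies_ok interface): install a plain eval-based stub under the same name when the module is missing, so the test runs either way instead of being skipped.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::More tests => 1;

BEGIN {
    unless ( eval { require Test::Exception; Test::Exception->import; 1 } ) {
        # Fallback stub with the same name and calling style, so the
        # test below compiles and runs whether or not the module exists.
        no strict 'refs';
        *{'main::dies_ok'} = sub {
            my ( $code, $name ) = @_;
            eval { $code->() };
            ok( $@, $name );
        };
    }
}

dies_ok( sub { die "boom\n" }, 'An exception is raised' );
```

Either path reports the same single test, so the plan stays correct with or without Test::Exception installed.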


Have I missed something here? Does Test::Exception provide me with some
greater functionallity I haven't yet observed? Or should I just not
bother using it?


Don't you get the same problem with any non-standard test module?

If you already have some CPAN dependencies then adding another for
testing is perfectly reasonable. It would be nice if the various CPAN
tools could understand the difference between a runtime dependency and
a test-time one though,

F


Re: spamming cpan? was [Fwd: Perl-Freelancer needed]

2006-10-05 Thread Fergal Daly

Yeah, I was thinking of applying exactly because it said in all caps

PLEASE DO NOT APPLY IF YOU PERSONALLY DO NOT FULFILL THIS REQUIREMNT

F


On 05/10/06, Andy Armstrong <[EMAIL PROTECTED]> wrote:

On 5 Oct 2006, at 16:39, Jonathan Rockway wrote:
> Did anyone else get a message like this via their CPAN alias?  I think
> it's pretty odd that someone would mail me personally with a message
> like this.  Instead, it looks like someone just iterated over their
> local CPAN mirror and sent everyone an e-mail.  If this is the
> case, I'm
> going to report it to spamcop.  If that's not the case, I'm going to
> nicely suggest that they post to jobs.perl.org instead.

Yup, I got it too. The way it's phrased suggested to me that it had
been sent to multiple recipients.

--
Andy Armstrong, hexten.net




Re: Divide by 0? Was: Re: Introduction Letter

2005-03-01 Thread Fergal Daly
On Tue, Mar 01, 2005 at 12:50:35AM -0800, Austin Schutz wrote:
>   I don't know, but I do know that having the interpreter crap out
> is not helpful to most of us simpletons who find phrases like "core dumped"
> not especially user friendly.

If you haven't loaded some external module written in C then you should
never see "core dumped" coming from Perl. If you do, it's a bug and you
should report it.

I think most people (simpletons or not) should be able to understand
"Illegal division by zero at..." which is what Perl gives me when I divide
by zero.

If on the other hand this happens to you in C, you're simply experiencing
the downside of putting performance before error checking. Nothing is free,

F


Re: Divide by 0? Was: Re: Introduction Letter

2005-03-01 Thread Fergal Daly
On Mon, Feb 28, 2005 at 07:55:36PM -0600, Eric Wilhelm wrote:
> I like the one where you get the mathematically-correct (or at least 
> mathematically-useful) infinity.
> 
>   $perl -le 'use bigint; $x = 1/0; print $x+1'
>   inf
> 
>   $perl -le 'use bigint; $x = 1/0; print 1/$x'
>   0

and what should these print?

$perl -le 'use bigint; $x = 2/0; print $x*0'

$perl -le 'use bigint; $x = 1/0; print(($x+1)-$x)'

$perl -le 'use bigint; $x = 1/0; print(($x*2)/$x)'

(Note the extra parens: print ($x+1)-$x would parse as (print($x+1)) - $x.)


Allowing inf might make some sense in an interactive calculator but it's a
bad idea in a programming language.

For example there's no way to evaluate $x/$y when both are inf, or even $x *
0 when $x is inf. Perl would have to die with an Inf error. It's bad enough
trying to find the real source of a divide-by-zero error, but for an Inf
error you might have to find 2 occurrences of divide by zero, figure out
which one is wrong and then go fix a divide-by-zero error. It's also
possible that neither of them is wrong and that both infs are supposed to be
there, in which case you have a big refactoring job ahead.

There is actually a system of arithmetic which allows infinitely large and
small values (it provides an infinite number of infinitely large/small
numbers) and produces "sensible" results for any expression, no matter
whether the values involved are finite or infinitely large/small. In this
system (x * 2) / x is always 2 for any x except x = 0 - even in this system
division by zero is not allowed.

http://mathforum.org/dr.math/faq/analysis_hyperreals.html

has a good explanation.

F


Re: CPAN::Forum

2005-02-03 Thread Fergal Daly
There are two useful things that could come from having some PAUSE
interaction.

As an author of several modules, I'd like to be able to tick a box that says
"monitor all forums for my modules". Also, it would be nice if users could
see that the author is monitoring a module; it saves having to post a "hey
everybody, I'm monitoring this module" type of message for each one,
Fergal


On Fri, Feb 04, 2005 at 02:40:09AM +0200, Gabor Szabo wrote:
> On Wed, 2 Feb 2005, Nicholas Clark wrote:
> 
> >The same hack as rt.cpan.org uses - attempt a login on pause.cpan.org
> >using the ID and password provided. If PAUSE accepts it, then you know
> >it's the real thing.
> 
> That would mean my server, if cracked, could be used to collect PAUSE
> passwords. I am not sure I'd like to have that responsibility.
> 
> 
> I am thinking of allowing users to use a "screen-name" and if I manage
> to authenticate that you are a PAUSE user (using the suggested
> @cpan.org e-mail trick) then you will be able to use the
> PAUSE::yourname screen name.
> 
> Sounds like overcomplicating things.
> 
> But it is nearly 3 am.
> 
> Gabor
> 
> 


Re: Circular dependencies in PREREQ_PM

2004-08-30 Thread Fergal Daly
On Fri, Aug 27, 2004 at 09:52:16AM -0400, John Siracusa wrote:
> If module A uses module B, but module B also uses module A, what do I put in
> PREREQ_PM?  Will the CPAN shell be able to handle a circular dependency?

I'd say it's a sign that you could factor something out of one or both
modules; doing this would break the circle. On the other hand, if B really
does depend on all of A and A depends on all of B then they probably
shouldn't be separate modules. Unfortunately, while factoring out independent
chunks into their own modules makes things more "correct", it makes
distributing them more awkward,

F
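To make the factoring-out concrete, a hedged sketch of the Makefile.PL fragments involved, using the A and B from the question plus an invented dist name (A::Common) for the extracted shared code; WriteMakefile and PREREQ_PM are the standard ExtUtils::MakeMaker interface:

```perl
# Before: A lists B in PREREQ_PM and B lists A - a cycle the CPAN
# shell may not resolve. After moving the shared code into a third
# dist (here called A::Common, an invented name), the cycle is gone:

# In A's Makefile.PL:
WriteMakefile(
    NAME      => 'A',
    PREREQ_PM => { 'A::Common' => 0 },   # no longer depends on B
);

# In B's Makefile.PL:
WriteMakefile(
    NAME      => 'B',
    PREREQ_PM => { 'A::Common' => 0 },   # no longer depends on A
);
```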



Re: Let's eliminate the Module List

2004-08-20 Thread Fergal Daly
On Fri, Aug 20, 2004 at 09:50:22AM +0100, Jose Alves de Castro wrote:
> On Thu, 2004-08-19 at 18:54, Simon Cozens wrote:
> > [EMAIL PROTECTED] (Jose Alves de Castro) writes:
> > > I don't want to show the results of a search. I want to say "Here is the
> > > link to the module list. See how long it is? It contains practically
> > > everything you need, doesn't it?"
> > 
> > http://www.cpan.org/modules/02packages.details.txt.gz
> 
> It seems like I'm the only one, but I still prefer the other list... :-(
> It has the module descriptions and all... :-(

So why not auto-generate another list, giving keywords and descriptions of
_every_ module?

F



Re: Let's eliminate the Module List

2004-08-19 Thread Fergal Daly
On Thu, Aug 19, 2004 at 05:24:57PM +0100, Jose Alves de Castro wrote:
> On Thu, 2004-08-19 at 16:47, Christopher Hicks wrote:
> > On Thu, 19 Aug 2004, Hugh S. Myers wrote:
> > 
> > > It seems to me that ANY thing that contributes to the solution set of 
> > > 'How do I find the module I'm looking for?' needs to be kept until it 
> > > can be replaced with something of equal or greater value.
> > 
> > search.cpan.org seems to be of greater value than the modules list 
> > according to most of the people that have chimed in.
> 
> Try asking beginners what they think. I believe it is easier for them to
> look at a long list of modules than to search for a specific one,
> particularly because they often don't know what they should be looking
> for.

The problem is that the list is missing many modules, and in some cases it
is missing "the right module" for a particular job while listing other,
inferior modules. And since no one is adding to the list, this can only get
worse.

> Anyway, I like to have a long list of modules to show my Java friends
> and say "see?"

If we had keywords you could just search on a keyword and show them that
list instead,

F



Re: Future of the "Module List"

2004-07-20 Thread Fergal Daly
On Tue, Jul 20, 2004 at 09:30:47AM -0500, Mark Stosberg wrote:
> On Tue, Jul 20, 2004 at 10:10:02AM +0100, Fergal Daly wrote:
> > On Tue, Jul 20, 2004 at 06:15:49PM +1200, Sam Vilain wrote:
> > > I nominate the
> > > 
> > >  Review::*
> > > 
> > > Namespace for author-submitted module indexes and in-depth reviews, in 
> > > POD format.  I think this has a number of advantages.  Let's use the 
> > > infrastructure we already have, no?
> > 
> > Interesting, but what comes after Review::? If it's Review::Text::Balanced,
> > then how do we get multiple reviews of Text::Balanced?
> 
> Maybe the convention could be:
> 
> Review::Text::Balanced::CPANUSERNAME
> 
> I'll let someone else suggest what should happen if the same person
> decides to review the same module multiple times. (Perhaps there would be
> an early negative review, and then a later positive review after the
> module improved with feedback.)

I thought someone might say that.

The more I think about it, the more I think it's not a great idea to use
the real CPAN for things other than distributing code. Reuse the
infrastructure by all means, but the idea of mixing bundles, code, reviews
and whatever else comes up in the same hierarchy, with just naming
conventions to tell them apart, does not appeal to me. If we weren't
dependent on collapsing all the relevant information down into a ::
delimited list it would be much nicer (fantasy land, I know),

F
 


Re: Future of the "Module List"

2004-07-20 Thread Fergal Daly
On Tue, Jul 20, 2004 at 06:15:49PM +1200, Sam Vilain wrote:
> I nominate the
> 
>  Review::*
> 
> Namespace for author-submitted module indexes and in-depth reviews, in 
> POD format.  I think this has a number of advantages.  Let's use the 
> infrastructure we already have, no?

Interesting, but what comes after Review::? If it's Review::Text::Balanced,
then how do we get multiple reviews of Text::Balanced? Or are you talking
about something else entirely?

F


Re: META.yml keywords

2004-07-18 Thread Fergal Daly
On Sat, Jul 17, 2004 at 11:40:02AM -0500, Ken Williams wrote:
> Well, I actually don't think we need a place for keywords *anywhere*, 
> but if we have them somewhere, then yeah, I do think it's good to be 
> able to see them in the pod.  Something like they are here (random 
> academic paper in my field):
> 
>  http://www.cs.cmu.edu/~yiming/papers.yy/kdd02.pdf.gz

I think having them in the POD is nice, but that makes them a little harder
for the indexer to extract. Having them inserted into the POD at build
time might be a better option.

I think the need for keywords is there on CPAN just as it is for academic
papers. How sophisticated would a full-text-based search engine have to be
to understand that "this module requires an XML parser" should not be a hit
for "XML parser"? Or that "this module does not yet support HTTPS" is not a
hit for "HTTPS"?

F



Re: META.yml keywords

2004-07-18 Thread Fergal Daly
On Sat, Jul 17, 2004 at 03:40:52PM +0200, A. Pagaltzis wrote:
> Which was exactly the purpose: to be able to make sure that the
> list with official keywords really does only contain official
> keywords, so a release tool can complain about misspellings f.ex.
> If you simply allow both in a single list, then "netwrok" will go
> unnoticed and make your module invisible to searches with the
> correct keyword.
> 
> I don't think the existence of two lists should matter to the
> indexer -- official keywords in the freeform list should have the
> same value as official ones in the fixed keys list. That sort of
> defeats the above point, I guess, but a list for fixed keys only
> still helps those who want its benefits.
> 
> It might suffice to have the release tool check the list and tell
> the user which keywords are official and which aren't, but I
> don't know if that is helpful enough -- I personally would like
> to be able to tell it to choke on all mistakes *except* those I
> specifically declared as known non-official ones.

The only benefit I can see is spell-checking, and that would be better done
by an actual spell-checker. Isn't it important not to misspell any
keywords, regardless of their officialness?

F


Re: META.yml keywords

2004-07-17 Thread Fergal Daly
On Sat, Jul 17, 2004 at 01:32:36PM +0200, A. Pagaltzis wrote:
> * Randy W. Sims <[EMAIL PROTECTED]> [2004-07-17 12:45]:
> > There is, however, another advantage to the catagory approach:
> > Searching would likely be more consistent. It would help
> > authors to place their modules so that they can be found with
> > similar modules. It would also help ensure that users looking
> > for a particular type module will get back a result set that is
> > likely to contain all/most of the modules of that type.
> 
> Why does it have to be either/or?
> 
> There could be two keyword lists, one with fixed keywords, and
> the other freeform. Their names would have to be chosen carefully
> to suggest this as the intended use (rather than filling both
> with the same keywords) -- maybe ``keywords'' and
> ``additional_keywords'' or something.

I agree that if there is to be an "official" list of keywords then it
shouldn't be either/or. The officials haven't regenerated the module list
for 2 years; there's no reason to think that the keyword officials will stay
up to date.

That said, I don't think having 2 lists is useful. The author should supply
a single list of keywords. Those that are on the official list are on the
official list; those that aren't, aren't. The search engine/indexer will be
far better at figuring that out than the module author. Otherwise you are
just obliging authors to keep track of the official list and move
keywords around in their meta info as the official list changes.

It would be up to the search engine to perhaps give more weight to official
keywords. The search engine could also maintain "official" synonyms so that
"postgres" and "pg" are indexed together,

F



Re: Finding prior art Perl modules (was: new module: Time::Seconds::GroupedBy)

2004-07-14 Thread Fergal Daly
On Wed, Jul 14, 2004 at 10:34:08PM +0100, Tim Bunce wrote:
> On Wed, Jul 14, 2004 at 06:30:59PM +0100, Fergal Daly wrote:
> > XML::HTTP::Network::Daemon::TextProcessing::Business::Papersize::GIS
> > 
> > so that people can find it,
> 
> That's what the Description field is for.

There's a Description field? I accept responsibility for not knowing about
this, I've never made an effort to see what is available. However, if
search.cpan.org had allowed me to search by Description field I probably
would have included one in all of my modules,

F



Re: Finding prior art Perl modules (was: new module: Time::Seconds::GroupedBy)

2004-07-14 Thread Fergal Daly
On Wed, Jul 14, 2004 at 06:08:16PM +0100, Leon Brocard wrote:
> Simon Cozens sent the following bits through the ether:
> 
> > The searching in search.cpan.org is, unfortunately, pretty awful. At some
> > point I plan to sit down and try using Plucene as a search engine for
> > module data.
> 
> I thought that would be a good idea too, so I tried it. It works
> *fairly* well.
> 
>   http://search.cpan.org/dist/CPAN-IndexPod/

Does META.yml have a place for keywords? It would be nice if it did and if
search.cpan.org indexed it. That would mean that it would no longer be
necessary to name your module along the lines of

XML::HTTP::Network::Daemon::TextProcessing::Business::Papersize::GIS

so that people can find it,

F
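META.yml as specified at the time has no such field, so the following is purely a hypothetical sketch of what an indexable keywords entry might look like (the field name, dist name and values are all invented):

```yaml
# Hypothetical META.yml fragment - "keywords" is NOT in the spec;
# this only illustrates what such an entry could look like.
name: XML-Parser-Whatever
version: 0.01
keywords:
  - XML
  - parser
  - network
```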



Re: CPAN Rating

2004-06-16 Thread Fergal Daly
On Wed, Jun 16, 2004 at 06:39:22PM -0300, SilvioCVdeAlmeida wrote:
> Let's write it better:
> 1. FORBID any module without a meaningful readme with all its (possibly
> recursive) dependencies, its pod and any other relevant information
> inside.

Having the dependencies easily visible is a good idea, but rather than
banning modules that don't comply, it should be done automatically by the
CPAN indexer; all the info is there.

> 2. Branch a last-version-only CPAN_modules_by_category, without authors
> folders, a kind of a fast_food_CPAN_modules_by_category.

Could you explain this please, I don't know what you mean.

F


failures that aren't failures

2004-06-16 Thread Fergal Daly
Hi all,

One of my modules has a failure noted against it that was caused by the
tester's wonky Perl installation. How can this be removed?

F



Re: CPAN Rating

2004-06-16 Thread Fergal Daly
On Wed, Jun 16, 2004 at 12:05:02PM +0100, Nicholas Clark wrote:
> All volunteer organisations work in roughly the same way - if you want to
> get a job done, you have to *start* it yourself. Others may well join in
> and help once they see that it's a good idea, but things don't get started
> because someone would like it.
> 
> [This is an oversimplification. You may be able to persuade someone else
> that they also care about it enough to do it. But this is as if that person
> starts on his/her own as above. Likewise someone may be able to get others
> to start a new project for them, but generally they have earned this by
> visibly contributing their own blood sweat and tears to something else
> already.]
> 
> No-one is stopping you setting up a ratings system.

Maybe Nadim should simply start the ball rolling by picking an interesting
module and posting a few comments (+ or -) to the list and seeing the
reaction. Of course there may be a problem with the on/off-topicness of that
for the list. Perhaps Simon Cozens's code review list is a better place,
although in these cases the code review would be involuntary, which probably
wasn't what Simon intended.

The alternative is to start a new list but that might have a larger than
normal bootstrapping problem,

F


Re: running tests

2004-04-03 Thread Fergal Daly
On Sat, Apr 03, 2004 at 01:37:03AM +0200, Paul Johnson wrote:
> Coming soon to Devel::Cover (well, on my TODO list anyway):
> 
>  - Provide an optimal test ordering as far as coverage is concerned - ie
>tests which provide a large increase in coverage in a short time are
>preferred.  There should also be some override to say run these tests
>first anyway because they test basic functionality.

For me, the "perfect" order of display would be:

Coverage A is a subset of Coverage B implies that Test A must be displayed
before Test B. You could call Test A a subtest of Test B.

You then order all the tests by their coverage increase and attempt to
display them in that order (while satisfying the above rule).

This will ensure that low level precedes high level (because the low level
tests will be subsets of the high level ones).
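A minimal sketch of that display ordering, with invented test names and coverage sets: sorting by coverage size suffices, since a strict subset always covers fewer subs than its superset.

```perl
#!/usr/bin/perl
# Hedged sketch of subset-before-superset display ordering; the test
# names and the coverage data are invented for illustration.
use strict;
use warnings;

# Coverage per test script: the set of subs each one exercises.
my %cov = (
    'lowlevel.t'  => { map { $_ => 1 } qw(func1 func2) },
    'highlevel.t' => { map { $_ => 1 } qw(func1 func2 func3) },
    'other.t'     => { map { $_ => 1 } qw(func4) },
);

# A strict subset always has fewer covered subs than its superset, so
# ordering by size respects the subset-before-superset rule; ties are
# broken by name for a stable order.
my @order = sort {
    keys %{ $cov{$a} } <=> keys %{ $cov{$b} }  or  $a cmp $b
} keys %cov;

print "@order\n";   # other.t lowlevel.t highlevel.t
```

Note this only guarantees the subset relation is respected; tests with incomparable coverage (like other.t here) can land anywhere their size puts them.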

You need to consider subsets in terms of packages or modules rather than
functions, otherwise if lowlevel.t tests func1() and func2() but highlevel1.t
only calls func1() then there is no subset relationship. You also need to
keep your test scripts reasonably modular.

On the other hand, if you are trying to save time on your test suite then
the same information as above can be used to cut corners.

You run the tests in coverage-increase order until you have run out of tests
that will increase the coverage, then you stop. The only exception is if a
Test C fails; then you run its largest subtest (Test B), and if Test B fails
then you run Test B's largest subtest, etc., until one of them doesn't fail.
Then you have located the failure as well as you can with the given tests,
F


Re: running tests

2004-04-03 Thread Fergal Daly
On Fri, Apr 02, 2004 at 04:59:41PM -0600, Andy Lester wrote:
> > Even if you have a smoke bot, you presumably run the tests (depends on the
> > size of the suite I suppose) before a checkin and it's convenient to know
> > that the first failure message you see is the most relevant (ie at the
> > lowest level). Also when running tests interactively it's nice to be able to
> > save even 30 seconds by killing the suite if a low level test fails,
> 
> Sure, but even better is to run only the tests that need to be run,
> which is a key part of prove.  You can run "prove -Mblib t/mytest.t"
> instead of the entire "make test" suite.

If the suite's big enough to warrant a bot then that makes sense, but many of
my modules have test suites that complete in a fairly short time.

I tend to run the relevant test until it passes and then run the whole suite
before checkin. I can pipe the verbose output of the whole suite into less and
know that the first failure is probably the most important one.

F



Re: running tests

2004-04-02 Thread Fergal Daly
On Fri, Apr 02, 2004 at 02:51:11PM -0600, Andy Lester wrote:
> > coded correctly. So it's desirable to see the results of the lower level
> > tests first because running the higher level tests could be a waste of time.
> 
> But how often does that happen?  Why bother coding to optimize the
> failures?
> 
> Besides, if you have a smokebot to run the tests for you, then you don't
> care how long things take.

It's more the time spent looking at the test results rather than the time
spent running the tests. So actually it's the result presentation order that
matters. Basically you want to consider the failure reports starting from
the lowest level as these may make the higher level failures irrelevant.

The order the tests actually ran in should be irrelevant to the outcome but
if you're running from the command line the run order determines the
presentation order.

Even if you have a smoke bot, you presumably run the tests (depending on the
size of the suite, I suppose) before a checkin, and it's convenient to know
that the first failure message you see is the most relevant (i.e. at the
lowest level). Also, when running tests interactively it's nice to be able to
save even 30 seconds by killing the suite if a low-level test fails,

F


Re: running tests

2004-04-02 Thread Fergal Daly
On Fri, Apr 02, 2004 at 01:52:12PM -0600, Andy Lester wrote:
> > No.  But there are certain classes of functions of the module that don't
> > work until others have been run.  So others should have been tested
> 
> So some tests are setting up other ones, then?

I don't think Tims is writing tests that depend on each other but he has
written higher level functions that depend on lower level ones. The tests
could run in any order but if the lower level functions are broken then the
higher level tests are sure to fail even if the higher level functions are
coded correctly. So it's desirable to see the results of the lower level
tests first because running the higher level tests could be a waste of time.

Even if that's not what Tim meant, it seems like a useful feature, although
it could be taken as an indication that you need to split your distribution
into several new distributions,

F


Re: [Fwd: [perl #25268] h2xs does not create VERSION stubs]

2004-02-03 Thread Fergal Daly
I saw that on p5p. It seems to be an idea whose time has come!

John has taken a different approach. A is compatible with B if A >= B (for the 
standard version meaning of >=) and it hasn't been specifically declared 
incompatible.

An upside is that you can give a reason why the current version is not 
compatible with version A.

A downside is that I think negative declarations might be harder to maintain. 
As you make more and more changes you must continually rethink the list of 
versions with which you are incompatible and why.

Making positive declarations means that you can just let your compatibility 
information grow. You don't have to think about anything except the 
difference between your new version and its immediate predecessor,

F

On Tuesday 03 February 2004 05:08, david nicol wrote:
> So here's what I got back from perlbug
> 
> 
> -- 
> david nicol
>  shift back the cost. 
www.pay2send.com
> 
> Encapsulated message
> 
> 
> [perl #25268] h2xs does not create VERSION stubs
> Date: Saturday 02:24:27
> From: "John Peacock via RT" <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Reply to: [EMAIL PROTECTED]
> 
> See CPAN/authors/id/J/JP/JPEACOCK/version-Limit-0.01.tar.gz for a way to
> do this using the "version" module and Perl 5.8.0+
> 
> 
> 
> End of encapsulated message



Re: VERSION as (interface,revision) pair and CPAN++

2004-01-30 Thread Fergal Daly
On Wed, Jan 28, 2004 at 11:52:45AM +, Adrian Howard wrote:
> >This should be less error prone and easier to maintain.
> [snip]
> 
> Hmmm... I'm not so sure that that's always (or even mostly) true.

At the moment the information must be maintained separately by each module's
user (if they can be bothered). Perhaps the amount of work required from the
module author increases, but the overall amount of work (much of which was
redundant) decreases. The only authors who will have to do lots of work are
those who break their interfaces every couple of weeks.

> - I've certainly been caught by many accidentally introduced bugs / 
> subtle interface changes that the module author wasn't aware of (so no 
> interface change)

That would catch you with the old version system too. No version system can
stop the author from messing up. However with something like Version::Split,
authors could be notified whenever their new version claims to be compatible
with the old one but breaks other modules on CPAN. The reason that's not
practical now is that we have no way of saying "not compatible with previous
versions".

In this scenario the rest of CPAN becomes part of your test suite - the bits
of it that use your module anyway.

I can see an issue here though: there should be a way to exclude known bad
releases that claim to be compatible. A responsible author would remove such
a module from CPAN as soon as possible (they could upload it again as a new
release that doesn't claim to be compatible).

> - I've also carried on using modules that have had interface changes 
> without problems - because I've not been using that particular piece of 
> the API.

Yes, this has been brought up. In some cases you could blame the author for
not making the module fine grained enough but that won't solve the problem.
A more practical solution is to require version A or version B. Yes, this
goes back to forcing the user to specify information that should be provided
by the author. However it should hopefully be unnecessary in most cases.
When it is necessary, the information is limited to a (hopefully) short list
of acceptable versions. Finally, the information would only need to be
updated whenever a new version comes out that is declared to be incompatible
with older versions but is actually good enough for your use. This should be
quite rare.

I'm not saying Version::Split solves everything but if it can move 99% of
the version work to the module author and still let the user do the final 1%
of fine tuning then it's an improvement,

F


Re: VERSION as (interface,revision) pair and CPAN++

2004-01-30 Thread Fergal Daly
On Fri, Jan 30, 2004 at 03:02:53PM +0100, khemir nadim wrote:
> What I meant is that we shouldn't have two ways (and 2 places) of telling
> what we need for our modules to work.

I agree, there should be only one place where Some::Module's compatibility
information is declared. Whether that's in Some::Module's Build.PL or in
Foo/Bar.pm is not really important; what's important is that Some::Module's
developer has to figure out the details, not Some::Module's user.

> Other have pointed some problems with your scheme so I won't repeat them
> here. I understand what you want to achieve and I think it's good but please
> keep it in one place. Can't you coordinated your efforts with Module::Build
> so
> 
> # old example
> >   my $build = Module::Build->new
> > (
> >  module_name => 'Foo::Bar',
> >  license => 'perl',
> >  requires => {
> >   'perl'   => '5.6.1',
> >   'Some::Module'   => '1.23',
> >   'Other::Module'  => '>= 1.2, != 1.5, < 2.0',
> >  },
> > );
> 
> ...
>  requires => {
>   'perl'   => '5.6.1',
>   'Some::Module'   => 'COMPATIBLE_WITH 1.23', # or the like
>   'Other::Module'  => '>= 1.2, != 1.5, < 2.0',
>  },

There should be no need for COMPATIBLE_WITH.

'Some::Module'   => '1.23'

should work fine with any whacko versioning scheme that anyone ever came up
with if Module::Build is behaving correctly.

Behaving correctly means letting Some::Module->VERSION decide what is
compatible and what is not. This is the officially documented way to do it.

MakeMaker almost does this. From a quick look at the source, I think
Module::Build definitely doesn't. Instead, it attempts to find the version
by snooping around in the .pm file. So it basically ignores any custom
VERSION method. It should be fairly easy to fix that in both cases.
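A minimal sketch of such a custom VERSION method (the module name, version
number and compatibility list are hypothetical; the hook itself is the
standard one -- `use Some::Module 1.23` calls `Some::Module->VERSION(1.23)`,
and dying rejects the requested version):

```perl
package Some::Module;
use strict;
use warnings;

our $VERSION = '2.0';

# Hypothetical set of older interface versions this release still honours.
my %compatible = map { ($_ => 1) } qw(1.23 1.24 2.0);

# "use Some::Module 1.23" ends up calling Some::Module->VERSION('1.23').
# Returning normally accepts the requested version; dying rejects it.
sub VERSION {
    my ($class, $want) = @_;
    return $VERSION unless defined $want;
    die "Some::Module $VERSION does not provide interface $want\n"
        unless $compatible{$want};
    return $VERSION;
}

1;
```
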

> Instead for drawing in a new module that most _won't_ use, you make it in
> the main stream "new" installer.

The new module is designed to make it easy for people to have a custom
VERSION method that does something better than the current default. However,
depending on this new module is obviously a problem - extra dependencies are
no fun for anyone. The next version should help solve that,

F


Re: (fast reply please!) Idea for new module: A bridge between Perl and R-project

2004-01-29 Thread Fergal Daly
On Thursday 29 January 2004 19:50, Graciliano M. P. wrote:
> I'm working on a module that make a bridge between the R-project
> "intepreter" and Perl. Actually I need to have this done today, soo, I will
> ask for a fast reply. Thanks in advance.

It would help if we knew what the R-Project was,

F



Re: VERSION as (interface,revision) pair and CPAN++

2004-01-29 Thread Fergal Daly
Yes it's confusing, I'm having trouble following bits of it, I'm sure anyone
else who's actually bothering is too. Hopefully all the confusion will be
gone at the end and only clarity will remain, that or utter confusion - it
could end up either way really.

To see why the current situation is most definitely broken, take the example
of Parse::RecDescent again. 1.90 changed in a fundamental way. Using the
current system, what should the author have done? Calling it 2.0 would be no
good because

use Parse::RecDescent 1.84;

works fine with 2.0 and CPAN.pm would download 2.0 if you told it you need
at least 1.84.
 
The "correct" thing to do was to release Parse::RecDescent2 v1.0 which means
that CPAN should be cluttered up with copies of modules with numbers on the
end including perhaps Net::POP32 which might be the 2nd version of Net::POP3
or it might be the 32nd of Net::POP or it might be an implementation of some
future POP32 protocol.

In a serious production environment you should be doing exactly what you do,
but when you want to try out some cool looking module, you shouldn't have to
worry about the entire revision history of all its dependencies and all their
dependencies and so on; it should just work or fail at compile time,

F

On Wed, Jan 28, 2004 at 11:08:23PM -0500, Lincoln A. Baxter wrote:
> Phew... Only one comment:  KISS (Keep It Simple Stupid)
> 
> This is WAY too confusing!  No one will be able to figure it out, or
> want to.  What we have now is not really that broken, especially if one
> regression tests his applications when new versions of modules are
> installed.  
> 
> Actually, we build our offically supported perl tree which we deploy to
> all of our boxes, and all of our applications use.  And when we upgrade
> things, we build a whole new tree, which we regression test every
> application with before we roll it into production.
> 
> No fancy versioning emnumeration scheme can replace this testing, and
> what we have now works "well enough" (I think). Most module authors I
> think are pretty good about documenting what they change in the Changes
> file. 
> 
> Lincoln
> 
> 
> On Wed, 2004-01-28 at 00:28, David Manura wrote:
> > Fergal Daly wrote:
> > 
> > > On Saturday 24 January 2004 18:27, David Manura wrote:
> > > 
> > >>(1) All code that works with Version A will also work with subsequent Version B. 
> > >>(e.g. adding new functions)
> > >>
> > >>(2) There exists code that works with Version A but will not work with Version 
> > >>B. (e.g. changing existing function signatures)
> > >>
> > >>(3) There exists code that works with Version A, will not work with Version B, 
> > >>but will work with an even more future Version C.  (probably a rare case)
> > >>
> > >>To handle #1 and #2, we could require all interface version numbers be of the 
> > >>form x.y such for any two increasing interface numbers x.y and u.v, assertion #1 
> > >>is true iff x=u and v>=y.   Assertion #2 is true iff the opposite is true (i.e. 
> > >>x!=u or v > >>1.2.3.4).
> > > 
> > > 
> > > I think this might make more sense alright and I'll probably change V::S to work 
> > > like that.
> > > However I don't agree with having no use for longer version numbers.
> > > 
> > > For a start, people do use them and I don't want to cut out something
> > > people use.
> > > 
> > > Also, when you have 1.2 and you want to experiment with a new addition
> > > but you're not sure if you have it right you can release 1.2.1.1 which is
> > > implicitly compatible with 1.2 . If you then think of a better interface you can
> > > release 1.2.2.1 which would still be compatible with 1.2 but would have no
> > > implied relation to 1.2.1.1. You can keep releasing 1.2.x.y until you get to
> > > say 1.2.6.1 at which point you're happy. Then you can rerelease that as 1.3
> > > and declare it compatible with 1.2.6.1 .
> > > 
> > > This let you have development tracks without having to including lots of
> > > explicit compatibility relations. Branching and backtracking is an essential
> > > part of exploring so supporting it without any effort for the author is good.
> > > 
> > > So to rephrase, B implements the interface of A (say B => A where "=>"
> > > is like "implies" in maths) if
> > > 
> > > (
> > >   version_head(A) == version_head(B) and
> > >   version_tail(A) < version_tail(B)
> > > )
> > > or
> 

Re: VERSION as (interface,revision) pair and CPAN++

2004-01-29 Thread Fergal Daly
Hi Nadim,

The difference is that Module::Build forces the Foo::Bar's author to work
out what current versions of Some::Module and Other::Module are suitable and
to try to predict what future version will still be compatible. This is time
consuming and error prone (predicting the future isn't easy) and it has to
be done for every module that requires these other modules. In fact I think
most module authors do not test these things thoroughly - I know I don't,
it's just too much of a pain.

If Some::Module and Other::Module used Version::Split for their version
information then Foo::Bar's author could just say "well I developed it with
Some::Module 1.23 and Other::Module 1.2 so only accept a version that is
declared to be compatible with those".

That way all the work on building the compatibility information is only done
once and it's done by Some::Module and Other::Module's authors, which is
good because they're the people who should know most about their own
modules. Foo::Bar's author never has to change his requires just because
Other::Module 1.9 has been released and works in a different way.

You also get the interesting side effect that if Foo::Bar's tests all pass
when using Some::Module 1.23 and they fail with 1.24 (which has been
declared to be compatible) then both Foo::Bar's and Some::Module's authors
can be informed about it and try to work out who has the bug,

F


On Tue, Jan 27, 2004 at 08:54:47AM +0100, khemir nadim wrote:
> Hmm, isn't that what Module::Build is already offering to you?
>   use Module::Build;
>   my $build = Module::Build->new
> (
>  module_name => 'Foo::Bar',
>  license => 'perl',
>  requires => {
>   'perl'   => '5.6.1',
>   'Some::Module'   => '1.23',
>   'Other::Module'  => '>= 1.2, != 1.5, < 2.0',
>  },
> );
>   $build->create_build_script;My 2 cents, Nadim.
> 
> 



Re: VERSION as (interface,revision) pair and CPAN++

2004-01-28 Thread Fergal Daly
On Wednesday 28 January 2004 05:28, David Manura wrote:
> I'm not sure branching maps cleanly onto the interface versioning scheme as
> shown above.  Let's say you have 1.2.  You then branch to 1.2.1.1 => 1.2.
> Meanwhile, in your main trunk, you create 1.3 => 1.2.  OK, now back in the
> branch, say you want to introduce an incompatible change to 1.2.1.1.  There are
> actually two ways in which your change can be incompatible: with respect to
> 1.2.1.1 only or with respect to 1.2.  You provided an example of the first case,
> where we introduce 1.2.2.1 =/=> 1.2.1.1 yet 1.2.2.1 => 1.2.  However, what shall
> we do if we need to introduce a change incompatible with 1.2?  Number it 1.3?
> We can't do that because 1.3 has already been assigned in the main trunk.

The next version that is not automatically compatible with 1.x is 2.1 . This 
is true in your original scheme and in my version with branches.

If you have branches then the next version that is automatically compatible 
with 1.2 but not with 1.2.2.1 is 1.2.3.1 .

> Maybe the branch numbers should be of the form
> 
>x.y.b.u.v
> 
> where x.y is the main trunk revision, b is the branch number, and u.v is the
> branch revision.  For simplicity, we'll also eliminate the distinction between
> changes that are incompatible only with the current branch revision and changes
> that are incompatible with the main trunk revision.  The scheme for x.y will be
> exactly the same as, yet independent of, the scheme for u.v.  So, the following
> relations are implicit:
> 
>1.2.1.1.1 ===> 1.2
>1.2.1.1.2 ===> 1.2.1.1
>1.2.1.2.1 =/=> 1.2 (note!)

why is this =/=> when ...

>1.2.1.2.2 ===> 1.2.1.2.1
>1.2.2.1.1 ===> 1.2 (a second branch)

... this one is ===> ?

>1.2.2.1.2 ===> 1.2.2.1.1
>1.2.2.2.1 =/=> 1.2.2.1.2
>1.3   =/=> 1.2.b.u.v   for all b, u, v
>1.3   ===> 1.2
> 
> This seems workable, but it's getting more complicated.  The question is, will
> anyone use this?  Also, are numbers the best way to express this information?

It's certainly confusing me and I don't think it will be widely used.

> The branch identifier 1.2.1 might alternately be labeled something more 
> meaningful like "unstable."  So, the above scheme might be rewritten
> 
>unstable-1.1 ===> 1.2(must be declared explicitly)
>unstable-1.2 ===> unstable-1.1
>unstable-2.1 =/=> unstable-1.2
>unstable-2.2 ===> unstable-2.1
>mycopy-1.1   ===> 1.2(must be declared explicitly)
>mycopy-1.2   ===> mycopy-1.1
>mycopy-2.1   =/=> mycopy-1.2
>1.3  =/=> unstable-*.*   (unless otherwise declared explicitly)
>1.3  =/=> mycopy-*.* (unless otherwise declared explicitly)
>1.3  ===> 1.2
> 
> Now, say after merging unstable into 1.4 that you want to branch again, then you
> just declare this explicitly and continue:
> 
>unstable-3.1 ===> 1.4
> 
> Use of branch names rather than branch numbers will also reduce the possibility
> of conflicts when there is no central assignment of branch identifiers (e.g.
> when I create my own private version of a standard module and name the branch
> "davidm", unbeknown to the module author).

Yes, once you stop trying to do numerical things to version strings (namely 
expecting < and > to mean something) then you are no longer forced to use 
numbers, you can use something more expressive. However numbers are still 
very common and already have some useful meanings so I want to get the 
numbers out of the way first and then consider more general strings.

> I was thinking making only "imp1.imp2_bug1.bug2" part of the identifier for the
> distribution file to download, as is currently the case.  So, as usual, people
> can say "I need to download MyModule-1.2_3," and this will uniquely identify the
> correct file to download.  The interface number (or *multiple* interface
> numbers), however, will be embedded, possibly hidden, inside the module so that
> "use" will work correctly.  The interface numbers might exist as well in the
> POD to give the user a heads-up, but this is not strictly necessary (if there's
> a problem, the module user will find out upon compilation).  Although not
> required and maybe not always practical, the module author may even attempt to
> synchronize the implementation number with the interface number to make things
> simpler.  Therefore, 1.x implementations will implement 1.x versions of the
> interface, while 2.x implementations will implement 2.x versions of the
> interface.  This may be possible since the module author has full freedom in
> assigning implementation version numbers (except for the requirement that
> they be strictly increasing).

I can think of 2 disadvantages to not using the interface revision in the use 
statement and in the distribution name.

1 The user will get no compatibility information by just looking at the 
version. This 

Re: VERSION as (interface,revision) pair and CPAN++

2004-01-25 Thread Fergal Daly
On Saturday 24 January 2004 18:27, David Manura wrote:
> (1) All code that works with Version A will also work with subsequent Version B. 
> (e.g. adding new functions)
> 
> (2) There exists code that works with Version A but will not work with Version 
> B. (e.g. changing existing function signatures)
> 
> (3) There exists code that works with Version A, will not work with Version B, 
> but will work with an even more future Version C.  (probably a rare case)
> 
> To handle #1 and #2, we could require all interface version numbers be of the 
> form x.y such for any two increasing interface numbers x.y and u.v, assertion #1 
> is true iff x=u and v>=y.   Assertion #2 is true iff the opposite is true (i.e. 
> x!=u or v<y).

I think this might make more sense alright and I'll probably change V::S to work
like that. However I don't agree with having no use for longer version numbers.

For a start, people do use them and I don't want to cut out something
people use.

Also, when you have 1.2 and you want to experiment with a new addition
but you're not sure if you have it right you can release 1.2.1.1 which is
implicitly compatible with 1.2 . If you then think of a better interface you can
release 1.2.2.1 which would still be compatible with 1.2 but would have no
implied relation to 1.2.1.1. You can keep releasing 1.2.x.y until you get to
say 1.2.6.1 at which point you're happy. Then you can rerelease that as 1.3
and declare it compatible with 1.2.6.1 .

This lets you have development tracks without having to include lots of
explicit compatibility relations. Branching and backtracking is an essential
part of exploring so supporting it without any effort for the author is good.

So to rephrase, B implements the interface of A (say B => A where "=>"
is like "implies" in maths) if

(
  version_head(A) == version_head(B) and
  version_tail(A) < version_tail(B)
)
or
(
version(B) begins with version(A)
)

where version_head means all except the last number and version_tail means
the last number

So 1.2 => 1.1, 1.2.1 => 1.2, 1.2.2 => 1.2.1
2.1 not => 1.1 but you could declare it to be true.
1.2.2.1 => 1.2 but 1.2.2.1 not => 1.2.1.1

and => is a transitive relation, just like implies in maths, so they
can be chained together. 1.2.1 => 1.2 and 1.2 => 1.1 means 1.2.1 => 1.1.

So an extension causes an increase and a branch which can be abandoned
requires adding 2 more numbers. Actually this is exactly the same as CVS
and presumably for the same reason.
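The head/tail rule above can be sketched in a few lines (purely illustrative
of the relation as described here; this is not Version::Split's actual code,
and explicit declarations plus transitivity are left out):

```perl
use strict;
use warnings;

# B => A ("B implements the interface of A") if B and A share every
# number but the last and B's last number is at least A's, or if
# version(B) begins with version(A).
sub implies {
    my ($bv, $av) = @_;
    return 1 if $bv eq $av;
    my @b = split /\./, $bv;
    my @a = split /\./, $av;
    # version(B) begins with version(A), e.g. 1.2.2.1 => 1.2
    return 1 if @b > @a && join('.', @b[0 .. $#a]) eq $av;
    # same head, later tail, e.g. 1.2.2 => 1.2.1
    return 1 if @b == @a
        && join('.', @b[0 .. $#b - 1]) eq join('.', @a[0 .. $#a - 1])
        && $b[-1] >= $a[-1];
    return 0;
}

print implies('1.2',     '1.1')     ? "yes" : "no", "\n";  # yes
print implies('1.2.2.1', '1.2')     ? "yes" : "no", "\n";  # yes
print implies('1.2.2.1', '1.2.1.1') ? "yes" : "no", "\n";  # no
print implies('2.1',     '1.1')     ? "yes" : "no", "\n";  # no without a declaration
```
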

> To handle #3, which is more rare under this new proposal, the module probably 
> will need to provide a compatibility map as suggested:
> 
>use Version::Split qw(
>2.1 => 1.1
>);
>
> That is, code compatible with 1.1 is compatible with 2.1 but might not be 
> compatible with 2.0 such as if 2.0 removed a function present in 1.1 only for it 
> to appear in 2.1.  Furthermore, code compatible with 1.2 may or may not be 
> compatible with 2.1.  The above use statement would consider them to be 
> incompatible, but how would we express compatibility if they are actually 
> compatible?  Could we do this?
> 
>use Version::Split qw(
>2.1 => 1.2
>);
>
> Now, code compatible with 1.2 is known to be compatible with 2.1.  Code 
> compatible with 1.1 (or 1.0) is implicitly known to be compatible with 1.2, 
> which in turn is known to be compatible with 2.1.  Code known to be compatible 
> only with 1.3, however, remains considered incompatible with 2.1.  The above 
> does not suggest that code compatible with 2.1 is compatible with 1.2, rather 
> the reverse.

Yes. We declare 2.1 => 1.2 and we know 1.2 => 1.1 so we get 2.1 => 1.1 and 1.0
but we can prove nothing about 2.1 => 1.3; it could be true or false, and we're
assuming that if we can't prove it, we don't want it.

>  > Are you saying that having split our current version number into 2 parts, I
>  > should have actually split it into 3? One to indicate the interface, one to
>  > indicate the revision and one to indicate how much code changed?
> 
> I questioned combining the interface version and amount-of-code-change version 
> into one number.  However, could we combine the bug-fix-number and 
> amount-of-code-change number?  Are these really different?  A major internal 
> refactoring could be fixing bugs even if we never discover them.  It could be 
> adding new bugs as well, but bug fixes can also inadvertently introduce new 
> bugs.  I propose these two be combined, such as maybe x.y_n, where x.y is the 
> refactoring part and n is the bug fix, or maybe just x.y.z to eliminate the 
> distinction all-together.
> 
> Given a combined refactoring+bugfix number, does the number hold any 
> significance?  You would expect 1.2.15 to be more stable that 1.2.14 as it is 
> probably fixed a bug.  Alternately, it might have made a small change to an 
> algorithm--i.e. refactoring.  We don't know.  We would also expect 2.0.1 to be 
> better implemented/designed that 1.2.14, as the 2.x effort probably did some 
> major refactoring, possibly at the initial expense of stability.  However, how 
> does 2.1.79 compare with 1.2.14 in terms of stability?  It's difficult to say 
> from the numbers alone, and the two tasks of bug fixing and refactoring can 
> occur simultaneously.  We might say that x.y.z is more stable than u.v.w iff y > 
> v or (y = v and z > w).  However, it's not clear whether y and v really 
> represent code change or stability--we're mixing two things.

Mixing things was what caused the trouble in the first place so I'd rather not
mix things again. However, the internal version number is of no use for
anything automatic so I'm not sure that keeping it separate is useful either.

Having a relatively "pure" bugfix version means that < and > actually might
have some real meaning and so it's possible to prefer one release of an interface
over another. This was part of the reason that I thought it was better to mix the
internal with the interface version rather than the bugfix version. Also the

Re: VERSION as (interface,revision) pair and CPAN++

2004-01-23 Thread Fergal Daly
On Fri, Jan 23, 2004 at 01:36:48AM -0500, David Manura wrote:
> Fergal,
> 
> I like what Version::Split is attempting to do (triggering a compile time 
> error if a newer version of a module could result in logic errors) and how 
> it does it (overriding the VERSION method). Perl6 RFC78 seems to address a 
> different but related problem (selecting a module from multiple installed 
> versions) which requires a different solution (modifying "require" to scan 
> the library path differently).  In fact, both efforts might be combined.

RFC 78 puts the burden on the user of the module to express all possible
valid versions that are acceptable using <>, ranges etc.

The real problem is that comparing interface versions using < and > is just
wrong, they are purely labels and their meaning is not related to their
numerical values. < and > only make sense for revision versions and even
then I have my doubts (is 2.3 "better" than 1.4?)

A useful thing from RFC 78 (and only.pm etc) would be the ability to list
multiple acceptable versions either explicitly or by using ranges like

use My::Module 1.0-1.3 1.4.1-1.45 1.5;

1.0-1.3 obviously includes 1.0, 1.1, 1.2, 1.3 but whether it includes 1.0.x,
1.1.x, 1.2.x and 1.3.x depends on your point (2) below.

> A change that is neither an interface change nor a behavioral change is not 
> necessarily a bug fix.  The change could be a huge internal refactoring 
> (e.g. complete rewrite for better maintainability).  Assigning the original 
> code the version+revision "1.3.2 67" and the new code "1.3.2 68" does not 
> reflect the magnitude of the change.  If I am the user of "1.3.2 67" and I 
> see that "1.3.2 68" is now available, would I bother to upgrade? Probably 
> not--the difference appears inconsequential from the version+revision 
> number alone.

Absolutely, just call it 2.0 and declare it to be compatible.  I'll add that
as a case in the POD.

> (2)
> 
> >Consider what happens when you have extended your interface several times 
> and
> >your version is now 1.2.3.4 . This is getting a bit long so maybe it's 
> time to
> > make a clean break and call it 1.3 or even 2.
> 
> This may be a quirky part in the design.  It makes me question whether it 
> is useful for the relationship between 1.2.3 and 1.2.3.1 to be implicit at 
> all in the numbering scheme since *in the general case* it it not possible 
> for the module users to deduce the relationship from the numbers alone 
> (e.g. 1.2.3 v.s. 1.3).  If I see that 1.3 is now available, can I instantly 
> tell from the numbers alone whether it will break my code that uses 1.2.3?  
> No.  Furthermore, the intuitive (but now possibly incorrect) understanding 
> of 1.2.3 v.s. 1.2.3.1 is that the latter involves a very minor amount of 
> code change.

Well, it was a convenience thing; it's not a fundamental part of the idea.
I agree that in many cases you cannot see the compatibility, e.g. 1.3 and 1.2.3.
However, if I saw 1.2.3 of some module on CPAN, I would expect it to be
compatible with 1.2 . Maybe I'm alone on that.

Without this assumption, the whole x.y.z syntax contains no meaning at all.
You may as well just use 1, 2, 3, 4, ..., 67 and explicitly declare all the
compatibility relations.

> The previous two points point out that version numbers are now being used 
> for TWO things, neither well: (a) to (partly) describe the amount of code 
> changes and (b) to (partly) describe interface/behavioral compatibility.  I 
> believe that one or the other should be chosen rather than both.

Are you saying that having split our current version number into 2 parts, I
should have actually split it into 3? One to indicate the interface, one to
indicate the revision and one to indicate how much code changed?

These 3 things do need to be expressed somehow but I'm not sure that "how
much code changed" can be expressed as a number. Perhaps we need an
"internal version" number which tracks the revisions of how the code works
rather than what it does. This is useful for users of the module but not too
useful to automated tools - they just need to know if it's compatible, they
cannot possibly make any decisions based on how different the internals are
because they don't care about the internals.

One way might be to keep abstract interface numbers separate from your
concrete implementations. You could name your interfaces "1.2if", "1.3if"
etc you would never actually release anything with version set to "1.2if".
You would release My::Module-1.2_1 which declares itself compatible with
"1.2if" then some day in the future you rewrite and release My::Module-2.0_1
which is also compatible with "1.2if".

You would have to strongly encourage your users to only specify abstract
versions in their use statements.

> (3)
> 
> Version::Split is conservative (safe) in its detection of version 
> incompatibilities, with a relatively low number of false negatives 
> (theoretically zero though not in practice) and a relatively high number of 
> false po

Re: VERSION as (interface,revision) pair and CPAN++

2004-01-22 Thread Fergal Daly
monitoring all releases of all modules that Your::Module depends on and
checking whether each one is compatible, updating your dependency
information accordingly and making releases of Your::Module with nothing
new in them except this dependency information. This sucks big time.

If Your::Module depends on His::Module and His::Module is using
Version::Split then as long as He keeps the version info in His::Module
correct, You never have to worry about getting an incompatible version and
Your::Module will always accept newer revisions of His::Module which
contain bug fixes.

BUGS
Because version strings are a bit like funny floating point numbers and I
haven't had time to sort it out you must be careful not to leave any
trailing 0s at the end of your versions so don't do

  use Version::Split qw(
1.3.2 67 => 1.0
1.0 => 0.8
  )

This will be fixed soon.

CPAN, CPAN.pm etc know nothing about this, maybe one day they will.

I need to add some way to require a minimum revision level of a given
interface. It's easy but it's bed time now.

DEPENDENCIES
Test::More and Test::NoWarnings are required for testing; apart from that
it's independent.

HISTORY
Been thinking about this for a while, had a quick rant about it on the
module-authors mailing list, followed by a chat where the VERSION method
was pointed out.

http://www.mail-archive.com/[EMAIL PROTECTED]/msg01542.html

SEE ALSO
Module::Build, ExtUtils::MakeMaker, only, version.

AUTHOR
Written by Fergal Daly <[EMAIL PROTECTED]>.

COPYRIGHT
Copyright 2004 by Fergal Daly <[EMAIL PROTECTED]>.

This program is free software and comes with no warranty. It is distributed
under the LGPL license. You do not have to accept this license but nothing
else gives you the right to use this software.

See the file LGPL included in this distribution or
http://www.fsf.org/licenses/licenses.html.



Re: VERSION as (interface,revision) pair and CPAN++

2004-01-22 Thread Fergal Daly
Check out Version::Split

http://www.fergaldaly.com/computer/Version-Split/

which does what I'm talking about. It's a terrible name, any better ones?

It answers many of the questions you asked, the others are below.

On Wed, Jan 21, 2004 at 08:41:35PM -0600, david nicol wrote:
> Q: was this suggestion made as a perl6 RFC and if so what did Larry
>think of it?

Nope, it's just something that's been brewing in my head for a while.

I found this

http://dev.perl.org/perl6/rfc/78.html

which tries to address the problem with <, > etc but doesn't try to
distinguish between revision and interface which is key.

> So we're talking about altering "the default VERSION method" to
> recognize something other than a version string, that would trigger
> a different case.  Such as, the major number has to match and the
> minor number has to be greater, or a PROVIDES method which defaults to
> the @{"$Module::PROVIDES"} array must include a version number with
> the same major number as the one we want.

Never noticed the VERSION method before now so I didn't even think of it.
Thanks for pointing it out!

It seems to do the trick nicely in terms of preventing bad versions from being
used and allowing good ones. However, the other important places for version
checking are Build.PL, Makefile.PL and CPAN/CPANPLUS, and they tend to take a
greater-than-or-equal approach. They may need to be adapted to work with
custom VERSION() methods.

> Or maybe we're talking about enlightened new versions of modules
> that present old interfaces when provided an old version number. This
> is IMO a non-starter, since it would require a lot of work that
> will not seem necessary by the people who would have to do the work.

You could make your VERSION() method do all the work for you there if you
really want to.

> Or maybe we're talking about adding a bureaucratic layer to CPAN so
> it won't accept new versions of modules under the same name unless
> the new version passes the test suite from the old version, for modules
> on a restricted list that your module gets on once enough people rely on
> your module.
> 
> The last suggestion would enforce interface compatability, after a
> fashion.

Sounds interesting, but it might belong more to the KWALITEE project. I'm not
sure whether it should enforce compatibility or just point out when you break
it; CPAN.pm could likewise either ignore the breakage or shout loudly about it
when you download the module.

> It would need to be documented thoroughly so people don't go including
> test cases for things known to be broken in their modules, that verify
> that the modules are broken in the way that the modules are broken. (I
> have done this; DirDB 0.0.6 fails tests in DirDB 0.0.5 concerning
> croaking on unhandleable data types)

You would need to split up your test suites into an interface test suite and
a release test suite. The interface one would probably have to live
separately. All in the future...

One cool thing is that some modules already have lots of interface only
tests that already live outside the module, namely the tests in modules that
use it. So for instance if A requires B 1.2 and I upload a new B 1.2 and
suddenly A's tests start failing then CPAN(TS) should tell A's author and
myself about it and we can figure it out.

> Compliance might be part of a "standard quality check" before a module
> with a positive major version number is accepted; or CPAN might enforce
> quality ratings, which would be enforced to be nondecreasing, for a
> given module name.
> 
> So if a module has a PRODUCTION quality rating, that means that the
> interface is guaranteed to remain stable into the future, under the
> given name.
> 
> And that tests for things that are broken and you mean to fix in the
> future would have to be marked as such in the tests.pl.

Something like that would be cool,

F



Re: cpan name spaces

2004-01-21 Thread Fergal Daly
On Tue, Jan 20, 2004 at 10:07:43PM -0500, David Manura wrote:
> In consideration of what Fergal said, should every public method or 
> function in a module be individually versioned?  So, when I do
> 
>   use Text::Balanced qw(extract_multiple extract_codeblock), 1.95;
> 
> this could (under new semantics) assert only that those two functions have 
> the same interface and expected behavior as the corresponding functions in 
> module version 1.95.  If a future version of Text::Balanced (e.g. 1.96) 
> adds or changes the interface/behavior of other functions, my code will 
> still accept the new module.  Only when extract_multiple or 
> extract_codeblock themselves change interface/behavior would my code reject 
> a new module version.  There is no need for my code to provide an 
> acceptable version range; that is the module's responsibility to deduce.  
> (OO-like modules must be handled by a different mechanism.)

It may be worth it in some cases, but perhaps if the functions are so
unrelated that they can change independently, they should not be in the same
module. Making Text::Balanced::Multiple::extract() and
Text::Balanced::Codeblock::extract() would then allow you to version them with
the module. There's nothing to stop you from still making them available for
export from Text::Balanced.
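As a sketch of that re-export, the split-out modules are hypothetical, so
they're defined inline here rather than loaded from separate files:

```perl
use strict;
use warnings;

# Hypothetical split-out modules, defined inline for illustration.
package Text::Balanced::Multiple;
sub extract { return "multiple: @_" }

package Text::Balanced::Codeblock;
sub extract { return "codeblock: @_" }

# The old module keeps exporting the old names by aliasing them to the
# new per-module extract() functions.
package Text::Balanced;
use Exporter 'import';
our @EXPORT_OK = qw(extract_multiple extract_codeblock);

*extract_multiple  = \&Text::Balanced::Multiple::extract;
*extract_codeblock = \&Text::Balanced::Codeblock::extract;

1;
```

Old callers still say "use Text::Balanced qw(extract_multiple);" and get the
aliased function, while new code can depend on the sub-module directly.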

> Consider further that another author comes out with a module named 
> Text::Balanced::Python having the same interface as Text::Balanced 1.95 but 
> whose extract_quotelike extracts Pythonic quotes rather than Perl-like 
> quotes (i.e. differing behavior).  I haven't considered how useful it would 
> be to express this relationship in the versioning metadata, but that might 
> be a further direction.  This resembles (OO) interfaces, but I believe the 
> versioning considerations make it different.

That is exactly what "interfaces" in Java and similar languages do. So
interfaces, rather than versions, are what you want, though it might be useful
to be able to specify the version of the interface.

This is getting very far away from anything that might realistically
happen...

F



Re: cpan name spaces (was: Re: Re3: Re: How about class Foo {...} definition for Perl? )

2004-01-21 Thread Fergal Daly
On Tue, Jan 20, 2004 at 11:12:25PM -0600, david nicol wrote:
> Here's a controversial assertion:
> 
> Just because Damian Conway does something that doesn't make it right.

It certainly doesn't but he's not alone in doing it.

Just to come clean, I was never really bitten by the Parse::RecDescent
change. It hit me very early in the development of my module, so I just
switched to the 1.9x style without any hassle. But more than two years passed
between 1.80 and 1.90, so I could have been bitten, and I'd guess a lot of
people were.

> I reccommend changing the name of the module when the interface
> changes, for instance I published Net::SMTP::Server::Client2
> instead of hijacking Net::SMTP::Server::Client and producing
> incompatible higher-numbred versions. (internally I've got a Client3
> as well, but it's not going to get published)
> 
> In my opinion as soon as he broke compatability with something that
> people were actually using, he should have changed the name. 

That's what's necessary in the current scheme, but good names are in short
supply, so you end up with Client2, Client3, Client3_5 etc., which is not so
nice, especially for things like Net::POP3.

Again, this is the result of gluing two strings together without a delimiter.
It also makes it hard for, say, search.cpan.org to make you aware that there
is a Client3 when you're looking at the Client2 page.

A better (IMHO) alternative is to make the interface part of the version
number as important as the name. This is equivalent to including it in the
name except you don't lose information like you do when you just glue a
number on the end of the name. You also get to use '.'s in the version
number because you're not trying to make a valid Perl module name. Then CPAN
and other tools could understand the relationship between different versions
of modules.

Unfortunately, this is the bit I think will never happen. I don't think it
would be possible to convince people that it's worthwhile; possibly, at this
late stage, it isn't.

So in the absence of the "full" solution perhaps we should urge people
towards sticking interface version numbers in the names of the modules. I've
done it privately too but I'm not convinced that CPAN should be littered
with My::Module, My::Module2, My::Module3 etc,

F


Re: cpan name spaces (was: Re: Re3: Re: How about class Foo {...} definition for Perl? )

2004-01-21 Thread Fergal Daly
On Wed, Jan 21, 2004 at 03:53:34AM -0500, Terrence Brannon wrote:
> I am author maintainer of the Parse::RecDescent::FAQ - what happened 
> vis-a-vis version compatibility? I have been far away from the mechanics 
> of Parse::RecDescent for quite awhile.
> 
> And yes, please email me something that you want put in there.

From the Changes file:

1.90    Tue Mar 25 01:17:38 2003


- BACKWARDS INCOMPATIBLE CHANGE: The key of an %item entry for
  a repeated subrule now includes the repetition specifier.
  For example, in:

sentence: subject verb word(s)

  the various matched items will be stored in $item{'subject'},
  $item{'verb'}, and $item{'word(s)'} (i.e. *not* in $item{'word'},
  as it would have been in previous versions of the module).
  (thanks Anthony)

F


Re: cpan name spaces

2004-01-20 Thread Fergal Daly
On Tue, Jan 20, 2004 at 10:26:30AM -0500, Mark Stosberg wrote:
> On Tue, Jan 20, 2004 at 12:23:09PM +0000, Fergal Daly wrote:
> >
> > Not that this would ever be agreed upon, the old way is ingrained. Modules
> > will continue to do for example
> > 
> > PREREQ_PM => { Parse::RecDescent => 1.80 }
> > 
> > then fall over because 1.90 satisfies this requirement but is not actually
> > compatible,
> 
> This is addressed to some degree in the Module::Build system, which allows you
> to specify required module versions like this:
> 
>  Ken::Module => '>= 1.2, != 1.5, < 2.0',
> 
> So there /is/ currently way to specify exactly which versions you expect
> to be compatible. Unfortunately, unless the author of the required module 
> has made clear version numbers like you suggest...it make take some
> digging to figure out exactly which versions should be required.

The big problem is that when I release a module requiring Ken::Module >=
1.80, I don't know in advance that the not yet released 1.9x is going to be
incompatible with it. I won't find that out until I start getting bug
reports about it.

If I was really being diligent, I would track all the releases of
Ken::Module and all your other dependencies and keep updating the

Ken::Module => '>= 1.2, != 1.5, < 2.0',

entry in my Build.PL.

The problem is the conflict within the current interpretation of version
strings:

You should say >= to get future bug fixes but...

You shouldn't say >= because you'll get future interface and behaviour
breakage.

This is caused by trying to cram 2 version numbers into a single version
string with no separator. Our tools cannot tell where the interface version
ends and the bugfix revision version begins.

What is the purpose of the version number? Given 2 versions of the same
module, my automated tools (like CPAN etc) and I should be able to

1. see easily whether each module is compatible with my software

2. know which one is "better", where "better" usually means the later stable
revision, though it could also take development versions into account if I
wanted.

A simple convention would be to reserve the final component for the
revision/bugfix version, but that might be a bit limiting. It may be better
to separate them clearly, as in 2.3.4_5.5. Our tools could then pick up bug
fixes but ignore interface breaking releases, all without constant rewriting
of our (increasingly complex) PREREQ_PM lines.

If we want to get really fancy, in the meta information for Module::Ken
2.3.4_xx we could declare that this is in fact compatible with 2.3.2 (maybe
we decided that the interface change in the 2.3.3 series was a mistake and
have abandoned that branch). Now Module::Fergal, which thinks it wants the
latest 2.3.2_xx, knows it can happily use 2.3.4_xx, which means Fergal
doesn't have to update his Makefile.PL every time Ken releases a new version
and Ken doesn't have to keep backporting bug fixes to 2.3.2 to keep Fergal
happy.

And of course if we get even more fancy, this meta data can be very easily
used to build a tree of compatibility. So if 2.3.2 had declared itself to be
compatible with 2.1.0 then CPAN can easily figure out that 2.3.4_xx is also
an acceptable substitute for 2.1.0.

Anyway, forget about the fancy stuff. It sucks that unless I keep checking
my dependencies and updating my Makefile.PL, my modules are going to break some
day, just because something I depend on broke compatibility. When that does
happen, I'll have to go to search.cpan.org and root out the old compatible
version by hand. That's a problem that cannot be solved with a single
version number,

F


Re: cpan name spaces (was: Re: Re3: Re: How about class Foo {...} definition for Perl? )

2004-01-20 Thread Fergal Daly
On Tue, Jan 20, 2004 at 12:17:51PM +0100, A. Pagaltzis wrote:
> Perl does not provide for keeping around same-named modules that
> differ in some other way.

That's not true. There are many modules where, for example, version 1.xx has
one interface and 2.xx has a different one, and then 2.xx where xx < 37 is
slightly different from 2.xx for xx >= 37, and so on.

Unfortunately, version numbers in Perl/CPAN (and, for that matter, in most
other versioning systems) aren't used as usefully as they could be.

Each module has a behaviour/interface version and each of these has a
revision. What most people care about when they say "you need version x.y.z"
is the behaviour/interface version. They're saying you need the version that
does this, that and the other. They almost never care about the revision,
you should just use the latest revision of version x.y.z on the assumption
that it is the one that actually comes closest to implementing the
documented behaviour with the fewest bugs.

It's possible to represent this information in string form quite easily. For
example, $IfaceVersion_$Rev would do the job; a typical instance might look
like

2.3.4_57
2.3.4_58
2.3.5_1

which would mean that 2.3.4_58 "is better (less buggy) than but has the same
intention as" 2.3.4_57. 2.3.5_1 does something slightly different from
2.3.4_xx and is not suitable as an upgrade without careful consideration by
the programmer. Unfortunately, this is not how things work at the moment:
Perl tools currently think 2.3.5 is higher than 2.3.4 and may consider it OK
to use 2.3.5 where 2.3.4 was requested, with possibly nasty results.
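A rough sketch of how a tool might compare versions under such a convention.
The underscore format and the function names are mine, not anything CPAN
actually implements:

```perl
use strict;
use warnings;

# Split "2.3.4_57" into its interface part ("2.3.4") and revision (57).
sub parse_version {
    my ($v) = @_;
    my ($iface, $rev) = split /_/, $v, 2;
    $rev = 0 unless defined $rev;
    return ($iface, $rev);
}

# An installed version satisfies a request only if the interface parts
# match exactly and the installed revision is at least the requested one.
sub satisfies {
    my ($installed, $wanted) = @_;
    my ($i_iface, $i_rev) = parse_version($installed);
    my ($w_iface, $w_rev) = parse_version($wanted);
    return $i_iface eq $w_iface && $i_rev >= $w_rev;
}
```

So 2.3.4_58 satisfies a request for 2.3.4_57, but 2.3.5_1 does not, which is
exactly the behaviour a plain numeric comparison can't give you.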

What this all has to do with names is that in the scheme above, you can
easily stick an author into the version to make 2.3.5-JBLOGGS_33. No need to
extend the module namespace with author prefixes.

Not that this would ever be agreed upon, the old way is ingrained. Modules
will continue to do for example

PREREQ_PM => { Parse::RecDescent => 1.80 }

then fall over because 1.90 satisfies this requirement but is not actually
compatible,

F


Re: HTTP::Parser module

2003-12-14 Thread Fergal Daly
On Saturday 13 December 2003 20:39, David Robins wrote:
> parse() will return:
> 0 on completion of request (call request() to get the request, call data()
> to get any extra data)
> >0 meaning we want (at least - may want more later if we're using chunked 
> encoding) that many bytes
> -1 meaning we want an indeterminate amount of bytes
> -2 meaning we want (at least) a line of data
> parse() will also accept undef as a parameter

That looks good. Is it ok to give less than n when the parser asks for n? 
Also, is it ok to give less than a line when the parser asks for a line? If
not, then every client will have to write its own buffering code to build up
the necessary length; it would be much better for the parser to handle that,

F



Re: New module Algorithm::Interval2Prefix

2003-12-02 Thread Fergal Daly
On Tuesday 02 December 2003 20:41, Lars Thegler wrote:
> On Monday, December 01, 2003 11:24 AM, Fergal Daly wrote:
> Obviously, if the numbers are of variable length, then we have a different
> situation, that cannot easily be handled this way. Maybe I should check $lo
> and $hi to ensure they are the same length...

It would definitely be a good idea to check that, but my point was that the
documentation doesn't actually say that hi and lo must be the same length,
nor that the numbers you test against must be that length too.

> > 2900-3999 into a single regex string which will match if and only
> > if the number is in any of the intervals
> >
> > ^(?:29\d{2}|3\d{3})$
> 
> Intersting proposistion. Assuming that $lo and $hi are the same length, then
> doing
> 
>   $n = length($lo)-length($_);
>   $p .= $n ? "\d{$n}" : '';
> 
> to each prefix should do the trick. I'll be adding that to the module.

Exactly. Actually, if you're going to put in the \d{$n} stuff then you can  
make this work for arbitrary intervals without any constraints on the length 
of hi, lo and the number being tested (you must also include ^ and $ in the 
regex for this to work). So

i2p(900, 1999) would give back

^(?:9\d{2}|1\d{3})$

and if your algorithm was really clever

i2p(500,5000) would give

^(?:[5-9]\d{2}|[1-4]\d{3}|5000)$

but that may not be worth the effort. Also, the regexes involving \d{$n} will 
actually be slower for your telephone application because you already know 
that the number is the right length,
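To make the padding step concrete, here is a rough sketch for fixed-length
numbers; prefixes_to_regex is a hypothetical helper, not part of the module:

```perl
use strict;
use warnings;

# Given the full number length and a list of prefixes, pad each prefix
# with \d{$n} and join the alternatives into one anchored regex.
sub prefixes_to_regex {
    my ($len, @prefixes) = @_;
    my @alts = map {
        my $n = $len - length $_;
        $n ? $_ . "\\d{$n}" : $_;
    } @prefixes;
    my $alt = join '|', @alts;
    return qr/^(?:$alt)$/;
}

# The 2900-3999 example: prefixes "29" and "3" for 4-digit numbers.
my $re = prefixes_to_regex(4, '29', '3');   # ^(?:29\d{2}|3\d{3})$
```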

F



Re: New module Algorithm::Interval2Prefix

2003-12-01 Thread Fergal Daly
On Sun, Nov 30, 2003 at 05:17:10PM +0100, Lars Thegler wrote:
> Hi all,
> 
> I've written a small module, encapsulating an algorithm that can generate a
> set of 'prefixes' (patterns that match the beginning of a numeric string)
> from an 'interval' (range) of integers. This is a problem often occurring
> working with telephony switching equipment or IP address subnetting.
> 
> I've trawled CPAN to locate prior work with similar functionality, but to no
> avail.
> 
> The POD is attatched below, and the module distfile can be fetched from
> 
> http://lars.thegler.dk/perl/Algorithm-Interval2Prefix-0.01.tar.gz
> 
> Question: Am I reinventing something here?

I never saw it before.

> Question: Is the namespace appropriate?

Looks ok to me although I'd prefer "To" rather than "2".

> Comments on code, style etc are welcome.

> Taking an interval as input, this module will construct the smallest set
> of prefixes, such that all numbers in the interval will match exactly
> one of the prefixes, and no prefix will match a number not in the
> interval.

You need to say something about the length of the number because, as it
stands, 3000-3999 produces just 3, and there are a lot of numbers that start
with 3 but aren't in the interval. In the same vein, a mode that produces
a set of strings like

^3\d{3}$

which can be used directly with Perl's re-engine might be useful or even
something that turns

2900-3999 into a single regex string which will match if and only
if the number is in any of the intervals

^(?:29\d{2}|3\d{3})$

so instead of 

my @p = interval2prefix($lo, $hi);

my $found = 0;
foreach my $pref (@p)
{
    if ($num =~ /^$pref/)
    {
        $found = 1;
        last;
    }
}

if ($found)
{
    # do stuff
}

you could just do

my $r = interval2regex($lo, $hi);

if ($num =~ /$r/)
{
    # do stuff
}

F


Re: Author's namespace

2003-11-14 Thread Fergal Daly
But what about code that is shared by several CPAN modules but which I don't
consider worth getting up to standard for general use? It's not that the code
is "trash"; it's fine. I just can't see anyone else wanting to use it, even if
it were fully documented.

I suppose I'll just have to upload Class::OhGodNotAnotherMethodMaker,

F

On Thu, Nov 13, 2003 at 11:28:38PM -0500, Sherzod Ruzmetov wrote:
> If the code is not to be used by others, may be you shouldn't upload it to
> CPAN at all?!
> 
> If it's a piece of code used by a re-usable module of yours, then it should
> be put under 
> that module's namespace, instead of putting it under a non-related
> namespace.
> 
> --  
> sherzod
> 
> 
> : -Original Message-
> : From: Fergal Daly [mailto:[EMAIL PROTECTED] 
> : Sent: Thursday, November 13, 2003 17:20
> : To: [EMAIL PROTECTED]
> : Subject: Author's namespace
> : 
> : 
> : Is there, or should there be a namespace for each author? 
> : Somewhere I can put 
> : modules that I don't consider worth releasing but that I 
> : do use in some of my 
> : released modules? For instance I have a very simple 
> : method maker that I 
> : wouldn't expect anyone else to use and I don't want to 
> : bother writing docs 
> : for it. So instead of including a copy with each module, 
> : I'd like to upload 
> : it as something like
> : 
> : Authors::FDALY::MM
> : 
> : and then I can use it in my modules but I'm not really 
> : publishing it for 
> : others to use... yes, I know it gets used in the things I 
> : do publish so in 
> : that sense it is published but lots of modules have 
> : undocumented bits that 
> : are not really intended for reuse by anyone else so it's 
> : no worse than that 
> : and it's better than having Test::Deep::MM an identical 
> : file called 
> : Blah::Blah::MM for the next module I release.
> : 
> : Exactly how stupid is this idea?
> : 
> : F
> : 
> : 
> 


Re: Author's namespace

2003-11-13 Thread Fergal Daly
On Thursday 13 November 2003 22:34, A. Pagaltzis wrote:
> I'm not particularly excited about the idea, but it's better than
> duplication. I really like the Authors:: idea, although I'm not
> sure that name is good.
> 
> However, the ::MM bit really irks me. If anything, please make
> the name meaningful anyway - something like
> Authors::FDALY::Class::MethodMaker. Reserve a corner for your
> private stuff in the classroom in plain sight of everyone if you
> really want to, but please be nice to others and don't litter it.

The whole point is that you don't need to be nice to others. Ideally,
Authors::* wouldn't turn up in searches (unless you asked for it).

It would also be a handy place to put a module while some list is discussing a 
good name for it,

F



Author's namespace

2003-11-13 Thread Fergal Daly
Is there, or should there be a namespace for each author? Somewhere I can put 
modules that I don't consider worth releasing but that I do use in some of my 
released modules? For instance I have a very simple method maker that I 
wouldn't expect anyone else to use and I don't want to bother writing docs 
for it. So instead of including a copy with each module, I'd like to upload 
it as something like

Authors::FDALY::MM

and then I can use it in my modules but I'm not really publishing it for 
others to use... yes, I know it gets used in the things I do publish so in 
that sense it is published but lots of modules have undocumented bits that 
are not really intended for reuse by anyone else so it's no worse than that 
and it's better than having Test::Deep::MM and an identical file called
Blah::Blah::MM for the next module I release.

Exactly how stupid is this idea?

F



Re: Tie::Array::Sorted

2003-11-13 Thread Fergal Daly
On Wed, Nov 12, 2003 at 05:17:28PM +, Simon Cozens wrote:
> Randy W. Sims:
> > Sounds like a set/multiset/bag structure.
> 
> I thought it sounded more like a sorted array, but I'm prepared to be
> persuaded otherwise. (Primarily because I've already released the module
> to CPAN. ;)

I think the point is that T::A::S is closer to a set than to an array, or
more to the point, its interface is just an expansion of a set's interface,
whereas it is a restriction of an array's interface. Well, it doesn't have to
be a restriction, but as you point out in the docs, using the full array
interface doesn't make sense. For instance

push(@a, $n);
pop(@a);

doesn't leave @a unchanged, and there are a whole load of other un-arraylike
things.

So if Perl's standard collection types had been objects from the start and
had been implemented as a hierarchy with consistent interfaces then you
probably would have called it Set::Ordered because it looks like a Set with
extra stuff:

Set methods
insert($elem)
delete($elem)
count
get_iterator

Sorted Set methods
# same as Set and also
deleteindex($index)

Array methods
# same as Sorted Set and also

insertindex($index, $elem)
# as you point out in the docs, this makes no sense for a T::A::S

splice
pop
etc etc

And a Sorted Set could be passed into a function expecting a genuine Set
with nothing to worry about. However you cannot pass a T::A::S into a
function expecting a genuine array because it might do something like

$a->[0] = "defcon 5";
...
launch_missiles() unless $a->[0] eq "defcon 5";

All that said, it probably makes more sense to leave the name as it is but
how about implementing it as a nice object oriented ordered set and making
the Tie stuff a very thin wrapper around that? Actually you could do it with

*PUSH = \&insert;
*DELETE = \&deleteindex;

sub delete
{
# a good home for your binary search implementation
}

Thinking of it in terms of usage, this module is useful when you have some
already existing code that expects an array and wants to

- read some values from it
- push/unshift some stuff onto it
- fill it with completely new contents.

If the routine does anything besides that it will not work properly with a
T::A::S array. So I'd throw an exception in STORE rather than trying to do
"the right thing" because it's almost certainly not the right thing.

Any code that knows it's a sorted array can achieve the same effect with a
delete and a push and it could also use the OO interface which would be
faster than going through all the tie stuff,

F


Re: Class::FakeAttributes -- Opinions Wanted

2003-11-07 Thread Fergal Daly
On Fri, Nov 07, 2003 at 11:02:34AM +0100, A. Pagaltzis wrote:
> Even disregarding these differences, your code needs a lot of
> additions before it becomes useful in practice. The most glaring
> definiciency is that there's no provision for deleting an object
> instance's data from the hash on object destruction.

Absolutely, my code was presented in response to your complaint about having
to type refaddr all the time and that there was no easy way to do
::MethodMaker stuff with lexicals using inside out objects. That's all, I
wasn't planning on posting it on CPAN.

> It also lacks a provision for generating any kind of non-trivial
> accessors/mutators.

It lacks a whole lot more than that but there's no technical reason why it
couldn't do all the same stuff the others do.

That said, I've given up on automating non-trivial accessors/mutators. I
generally use my own very simple method maker which sets up an Attrs
package, stuffs a load of methods into it and pushes that package onto @ISA.
I never access the underlying data structure. So classes look like

package MyClass;

# this will create setColour, getColour, setSize and getSize in MyClass::Attrs
# and also do push(@ISA, "MyClass::Attrs");
use MM qw( Colour Size );

and if I want to do something fancy for a setter or getter I just override
like this

sub setSize
{
my $self = shift;

my $size = shift;

die unless ($size > 0 and $size < 10);

$self->SUPER::setSize($size);
}

It makes me happy.

> A real, general-case ::InsideOut will require a lot more work
> than that snippet.

A snippet was all it was ever meant to be,

F


Re: Class::FakeAttributes -- Opinions Wanted

2003-11-07 Thread Fergal Daly
On Thu, Nov 06, 2003 at 05:58:25PM +0100, A. Pagaltzis wrote:
> Read it too. My point is that the method would be accessible from
> a much broader scope (ie globally, really) than would the
> attribute hash in Yves' code (stricly local to the method).

Yes, _all_ methods are globally accessible.

As you said, my method is accessible globally and Yves' hash is strictly
local to the method. However Yves's method is also accessible globally and
my hash is also strictly local to the method. So I don't see the point of
this comparison.

> I'm not interested in where the attribute is stored. I'm talking
> about the attribute. Whether it is accessed by hash lookup or
> closure call makes no difference. In Yves' code, the attribute is
> only accessible to the method in its scope (by looking it up in
> the lexically scoped hash), while using your code, the attribute
> would be accessible globally (by calling the exported function).

Again, this is the same comparison as above. In Yves' code, as in mine, the
attribute is only accessible to the sub, and the sub is accessible globally.
Not exporting it makes no difference: anyone can call that inside_out method
from anywhere. Just because a sub is declared inside {} along with %attrib
doesn't mean that the sub is locally scoped; if it did, then nothing at all
would be able to call Yves' inside_out method, which would make it a bit
pointless.

I think you're misunderstanding the purpose of my code. Here it is again
except this time I've provided the import() method rather than just leaving
it to the imagination. It's a method maker that makes methods in the style
of Yves' example method, so you can make lots of them without having to
type refaddr all over the place.

F

# Inside out method maker

use strict;
package InsideOut;

use Scalar::Util qw(refaddr);

sub import
{
  my $self = shift;

  my $class = caller();
  foreach my $attr (@_)
  {
make_attr($class, $attr);
  }
}

{
  my %attribs;
  sub make_attr {
my $class = shift;
my $attr = shift;

my $full = "${class}::${attr}";

my $sub = sub {
  my $s=shift;
  if (@_) {
$attribs{$full}->{refaddr($s)}=shift;
return $s;
  } else {
return $attribs{$full}->{refaddr($s)};
  }
};
{
  no strict 'refs';
  *{$full} = $sub;
}
  }
}

1;


# Usage example

use Test::More tests => 4;   

my $red9 = Object->new;
my $blue1 = Object->new;

$red9->Colour("red")->Size(9);
$blue1->Colour("blue")->Size(1);

is($red9->Colour, "red");
is($red9->Size, 9);
is($blue1->Colour, "blue");
is($blue1->Size, 1);
 
package Object;

# these attributes will be stored somewhere else because there's nowhere to
# store them in the object itself.

use InsideOut qw( Colour Size );

sub new
{
my $pkg = shift;

my $a = "string";
return bless \$a, $pkg;
}


Re: module to access w3c validator

2003-10-30 Thread Fergal Daly
On Thursday 30 October 2003 21:51, Struan Donald wrote:
> > HTML::Validator::W3C
> 
> Which is going to get confused with HTML::Validator and also I think
> you need to make sure people know it's a web thing.

Sorry, should have been

HTML::Validate::W3C

that way you're in a clean namespace. I knew one was free and the other 
wasn't, got them mixed up.

You said it wasn't going to be a web thing if the person has it installed
locally, so it's not always webby. Or am I misunderstanding what you meant
when you said it could use a local install of the validator? Maybe you meant
you can point it to a local web server running the scripts? If so then how 
about

WebService::*::*::*

where *::*::* uses W3C, HTML and Validate in some order, the only requirement 
being that HTML and Validate are adjacent. 1-dimensional namespaces suck!

> Ah, but there will be. See the intial mail for details.

Since a lot of people have XML modules installed anyway, how about keeping it
all in one distribution and just disabling the detailed functionality for
those who don't have the required modules? You can mention, when Makefile.PL
runs, that they will get the other functions if they install X, Y and Z,
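A sketch of such a check in Makefile.PL; XML::Parser here is just a stand-in
for whatever modules the detailed mode would actually need:

```perl
use strict;
use warnings;

# In Makefile.PL: probe for optional XML support and tell the user what
# installing it would unlock. The build carries on either way.
my $have_xml = eval { require XML::Parser; 1 };

unless ($have_xml) {
    warn <<'EOT';
XML::Parser not found: detailed validation reports will be disabled.
Install XML::Parser to enable per-error line/column information.
EOT
}
```

The eval-around-require idiom returns true only if the module loads, so the
same pattern works for any optional prerequisite.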

F



Re: name for a module that implements dynamic inheritance?

2003-10-30 Thread Fergal Daly
On Thursday 30 October 2003 18:24, Dave Rolsky wrote:
> Well, sort of.  It messes with the symbol table of the dynamically
> constructed "child", which ends up with each parents methods.  I don't
> really want to do that.  I want to be able to have any of the intermediate
> classes call SUPER::foo() and have it do the right thing, which is my
> current stumbling block.

What is "the right thing"? Is it to call foo() in any other package besides 
the current one? If so this should be achievable with something like 

package BottomOfAll;

sub AUTOLOAD
{
my ($meth) = $AUTOLOAD =~ /::([^:]+)$/;
my $call_pkg = caller();

my $pkg = ref $_[0];

# go through all the parent classes in @ISA
my $super;
for (@{$pkg."::ISA"})
{
next if $_ eq $call_pkg; # don't want to end up back in the same method
last if $_ eq __PACKAGE__; # don't want to end up in the AUTOLOAD again
last if $super = $_->can($meth);
}

goto &$super if $super;

croak qq{Can't locate object method "SUPER::$meth"};
}

This still has the potential for loops if a::foo and b::foo both call 
->SUPER::foo.

Of course "the right thing" could mean something very different...

F



Re: module to access w3c validator

2003-10-28 Thread Fergal Daly
If you can get the source then why bother putting it on a server, wrapping it 
in SOAP and calling it remotely?

F

On Tuesday 28 October 2003 20:15, Sherzod Ruzmetov wrote:
> Here is what you should do.
> 
> You need to download the source code of the actual validator that W3C uses
> and 
> design a SOAP interface for the script. You can get this job done very
> easily with
> SOAP::Lite.
> 
> You can then either contact the W3C validator team and get it hosted on
> their server,
> or host it on your own box.
> 
> Then, you will need to write a very simply CPAN module using same
> SOAP::Lite, 
> may be with about 20 lines of code to talk to your SOAP server.
> 
> Final interface of your module may look something like:
> 
>   use W3C::Validator::Markup;
>   my $val = new W3C::Validator::Markup();
>   $val->validate($markup_as_string);
> 
>   if ( $val->is_valid() ) {
>   print "Good job!\n";
>   if ( $val->warnings ) {
>   print "There are some minor warnings though\n";
>   }
>   } else {
>   print "Nah, doesn't validate. Because...\n";
>   while ( my $errobj = $val->errors ) {
>   printf "Line %d, column: %d: %s\n\t",
> $errobj->line_number, $errobj->col_number, $errobj->line;
>   print "Description: %s\n", $errobj->description()
>   }
>   }
> 
>   $val->finish(); # <-- free up the buffer
> 
> 
> --  
> sherzod
> 
> 
> : -Original Message-
> : From: Struan Donald [mailto:[EMAIL PROTECTED] 
> : Sent: Tuesday, October 28, 2003 1:38 PM
> : To: [EMAIL PROTECTED]
> : Subject: module to access w3c validator
> : 
> : 
> : Hi,
> : 
> : I've been looking at getting at the W3C's HTML validation 
> : service and
> : as there's nothing there that does what I want I was looking at
> : knocking something up.
> : 
> : Having checked with the maintainer of W3C::LogValidator we came up
> : with WWW::Validator::W3CMarkup as a name.
> : 
> : Does this sound reasonable to everyone out there and is there
> : something out there that I've missed?
> : 
> : The other question is that I was also going to write a 
> : version that
> : wraps up the XML output you can get from the Validator 
> : but I'm really
> : not sure what to call it.
> : 
> : Essentially the difference between the two will be that the basic
> : version will just let you know if the webpage passed or 
> : failed. The
> : one that takes the XML will be able to return you a list 
> : of the errors
> : in the document. WWW::Validator::W3CMarkup::Detailed was 
> : on thought I
> : had but that seems a little clumsy.
> : 
> : The logic in splitting these into two modules is so that 
> : people don't
> : need to install a load of XML processing stuff unless 
> : they really need
> : it.
> : 
> : thanks
> : 
> : Struan
> : 
> 
> 



Re: sub-packages for object-oriented modules

2003-10-05 Thread Fergal Daly
On Sunday 05 October 2003 17:23, Eric Wilhelm wrote:
> > The following was supposedly scribed by
> > Fergal Daly
> > on Sunday 05 October 2003 06:54 am:
> 
> > That said, having a single package so full of stuff that you need to split
> > it into sub files is often an indicator that you're doing way too much in
> > one package anyway. It's possible you could benefit from mixin classes.
> > That is, classes which contain only methods, and these methods make very few
> > assumptions about their $self object.
> 
> I do have Get and Set methods which would allow functions like Move() to 
> operate without directly accessing the data structure, but they would still 
> have to know about the per-entity data structure (i.e. I could later change 
> where the entity is stored in the object, but the functions need to know 
> about some of the details of the entity.)  Is this enough separation?

It's not so much a question of "is it enough?", it's more "is it useful?". 
Mixin classes are useful in a situation when you have several different 
classes which share a set of methods but do not inherit from a common 
ancestor. For instance if you have Array and IndexedHash (a hash that can 
also be used like an Array) then if they have the same interface, you could 
write a mixin class Sortable and get both of them to inherit from it and 
voila you get 2 sort methods for the price of 1.
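A runnable sketch of that 2-for-1 (class names and the get_items/set_items interface are made up here): Sortable assumes nothing about the object except the accessor pair, so any class providing it can inherit the sort method.

```perl
use strict;
use warnings;

# Mixin: only methods, no constructor, no knowledge of the internals.
package Sortable;
sub sort_items {
    my $self = shift;
    $self->set_items(sort { $a cmp $b } $self->get_items);
    return $self;
}

package MyArray;
our @ISA = ('Sortable');
sub new       { my ($class, @items) = @_; bless { items => [@items] }, $class }
sub get_items { @{ $_[0]{items} } }
sub set_items { my $self = shift; $self->{items} = [@_]; }

package main;
my $arr = MyArray->new(qw(pear apple mango));
$arr->sort_items;
print join(",", $arr->get_items), "\n";   # apple,mango,pear
```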

I'm not sure if this is relevant to your situation (I suspect not as you seem 
to only have 1 class of objects that you work with).

F



Re: sub-packages for object-oriented modules

2003-10-05 Thread Fergal Daly
There aren't any technical issues in using one file for methods, one for 
constants, one for helper functions etc but it would be a bit of a surprise 
to anyone who is used to a strong correspondence between file names and 
package names.

That said, having a single package so full of stuff that you need to split it 
into sub files is often an indicator that you're doing way too much in one 
package anyway. It's possible you could benefit from mixin classes. That is, 
classes which contain only methods, and these methods make very few 
assumptions about their $self object. So you'd have a Drawing::Movable 
class which implements Movement methods. You can then do something like

package Drawing::Square;
@ISA = qw( Drawing::Shape Drawing::Movable Drawing::Copyable );

The key is dividing your methods into 2 classes, those which _must_ fiddle 
directly with the inside of the object and those which can do the job just 
using the public interfaces. The second group can often be split into mixin 
classes.
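Here's a minimal sketch of the Drawing::Movable idea (the accessor names get_x/get_y/set_pos are invented for illustration): the mixin touches the object only through the public interface, so it never needs to know how Drawing::Square stores its data.

```perl
use strict;
use warnings;

package Drawing::Movable;
# Mixin: uses only the public get/set interface, never the hash guts.
sub move {
    my ($self, $dx, $dy) = @_;
    $self->set_pos($self->get_x + $dx, $self->get_y + $dy);
    return $self;
}

package Drawing::Square;
our @ISA = ('Drawing::Movable');
sub new     { my ($class, %arg) = @_; bless { x => $arg{x}, y => $arg{y} }, $class }
sub get_x   { $_[0]{x} }
sub get_y   { $_[0]{y} }
sub set_pos { my ($self, $x, $y) = @_; @$self{qw(x y)} = ($x, $y); }

package main;
my $sq = Drawing::Square->new(x => 1, y => 2);
$sq->move(3, 4);
print $sq->get_x, ",", $sq->get_y, "\n";   # 4,6
```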

As for the Exporter difficulties if you export lots of stuff from one package 
and you also want to export the same stuff from another you can do

package Drawing;

sub import
{
Drawing::Constants->export_to_level(1, @_); # exports the constants to main
}

This allows you to export the symbols from Drawing::Constants when somebody does 

use Drawing;

Alternatively, if you want to export symbols from a variety of sub packages 
you could do

@EXPORT = (@P1::EXPORT, @P2::EXPORT, @P3::EXPORT);

rather than having to maintain a duplicate list of symbols.
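For completeness, here's the export_to_level trick as a runnable sketch (Drawing::Constants and the PI constant are made up for the example):

```perl
use strict;
use warnings;

package Drawing::Constants;
require Exporter;
our @ISA    = ('Exporter');
our @EXPORT = ('PI');
use constant PI => 3.14159;

package Drawing;
sub import {
    # Push Drawing::Constants' symbols one level up the call stack,
    # i.e. into whoever said "use Drawing;"
    Drawing::Constants->export_to_level(1, @_);
}

package main;
Drawing->import;      # this is what "use Drawing;" does at compile time
print PI(), "\n";     # 3.14159
```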

F

On Saturday 04 October 2003 16:59, Eric Wilhelm wrote:
> Hi,
> 
> I'm working on a module which will eventually be CAD::Drawing
> 
> Currently, it is named ::Drawing and the package
> is declared as simply "Drawing".
> 
> I've run into a wall with my original data structure and have seen a much more
> flexible and expandable way to do things, so I will be rebuilding the entire
> module and its children.
> 
> The sub-modules are currently stored under Drawing/*.pm and they all declare 
> their packages as "Drawing".  These modules contain methods and helper 
> functions/constants.  For example, Drawing/Manipulate.pm contains the 
> functions Move, Copy (and lots of other goodies.)  It would not work as a 
> standalone and is not intended to ever be use'd by main.  There is also a 
> file Drawing/Defined.pm which exports a pile of constants, but these only get
> exported to Drawing.pm and never show up in main (as intended.)
> 
> I've seen the use of @ISA in perltoot, and it looks like this would work, but
> it does not seem like inheritance is really what I'm doing (since the 
> sub-modules aren't really base classes (none of them have (or could have) 
> constructors.))  If it didn't amount to so many lines, all of the functions 
> could really be contained in one file (but this makes it hard to edit and 
> navigate.)
> 
> Is this overloading of my own namespace the right way to go, or should I be 
> using some more rigorous method?  I imagine that the entire distribution 
> could eventually be rather complicated, so I'd like to start down the path to
> robustness with this next revamp.
> 
> Thanks,
> Eric
> -- 
> "It is a mistake to allow any mechanical object to realize that you are in a 
> hurry."
> --Ralph's Observation
> 
> 



Re: What search.cpan.org & PAUSE produce (Fork from: what to do with dead camels?)

2003-08-14 Thread Fergal Daly
On Tuesday 05 August 2003 14:05, Iain 'Spoon' Truskett wrote:
> A format using the META.yml file has sprung up.

This works for new releases of modules but it depends on people using it so it 
does nothing for the current issues with search.cpan.org.

Looking at 02packages.details.txt, I see Test::More and HTTP::Response are 
indexed correctly as being parts of Test::Simple and libwww-perl. So, some 
indexer is doing the right thing, it's just a matter of search.cpan.org using 
the same algorithm when ranking the returned results. Who do we encourage to 
do that?

F



Re: what to do with dead camels ?

2003-08-04 Thread Fergal Daly
On Sunday 03 August 2003 17:45, Andy Lester wrote:
> There's a distro on CPAN now called lcwa that I would love to see
> disappear.  It's from 1997 and it's one of those distros that
> included all its necessary parts rather than rely on depencies.
> Unfortunately, those parts are 6 years out of date, but come up in
> searches on the modules.
>
> Do a search on search.cpan.org for "HTTP::Response", a pretty common
> module.  The first hit that comes up is the one from lcwa, and if
> you're not paying attention to the distro name (or you're a relative
> newbie who doesn't realize he needs to), you're going to be looking
> at 6-year-old docs for the module.

Try Test::More: its true home is Test::Simple but that's 5th on the list.

Can I suggest a change to the sorting algorithm for search.cpan.org when 
searching for a module or for docs

@sorted_distros = sort {
$a->oldest_version->release_date <=>
$b->oldest_version->release_date
} all_distros_containing("Module::Name");

Because chances are that if Distro::A includes a piece of Distro::B then 
Distro::B probably predates Distro::A. Of course that's not necessarily true; 
the reverse is quite possible, but it should be comparatively rare.
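The proposed ranking can be sketched concretely (the distro names are real, but the dates below are invented placeholders, not actual release dates):

```perl
use strict;
use warnings;

# Toy illustration: rank by each distro's *oldest* release, so the
# original home of a module outranks a distro that later bundled it.
my @distros = (
    { name => 'lcwa',        oldest_release => '1997-01-15' },  # made-up date
    { name => 'libwww-perl', oldest_release => '1995-06-10' },  # made-up date
);

my @sorted = sort {
    $a->{oldest_release} cmp $b->{oldest_release}   # ISO dates sort as strings
} @distros;

print join(",", map { $_->{name} } @sorted), "\n";   # libwww-perl,lcwa
```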

I think it doesn't fully solve the problem for Test::More but it might for 
some others

F



Renaming modules (was Re: [ANNOUNCE] Test::Warn::None 0.02)

2003-06-28 Thread Fergal Daly
On Saturday 28 June 2003 02:51, Michael G Schwern wrote:
> When I merged Test::Simple with Test::More I left a Test-More tarball lying
> around containing a Makefile.PL which simply died saying "download
> Test-Simple instead".

That's OK for a merge (or you could have an empty archive with a dependency on 
Test::Simple so CPAN.pm can be happy.)

I don't think dying is a good idea for a rename or a deprecation. It's 
probably a good thing to die when a developer gets your module but if a user 
gets it to satisfy a dependency then it shouldn't fail. Is there a way to 
know if Makefile.PL is being run by CPAN.pm? That way you could release 
My-Module-0.40_please-use-My-Better-Module-instead.tgz. This would have docs 
telling developers to use My::Better::Module instead and it would die for 
perl Makefile.PL but compile fine with CPAN.pm or

perl Makefile.PL --i_really_want_this
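The guard logic would look something like this sketch (the sub name and both flags are made up here; whether Makefile.PL can reliably detect CPAN.pm is exactly the open question above):

```perl
use strict;
use warnings;

# Sketch: die for a developer at the keyboard, build for automated
# installs or when explicitly forced.
sub should_build {
    my (%opt) = @_;
    return 1 if $opt{automated_install};   # e.g. being run under CPAN.pm
    return 1 if $opt{forced};              # --i_really_want_this on the command line
    return 0;                              # a human: refuse and point at the new module
}

print should_build(automated_install => 1) ? "build\n" : "refuse\n";   # build
print should_build()                       ? "build\n" : "refuse\n";   # refuse
```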

It would be good to be able to signal to the CPAN indexer that a module has 
been superseded by another.

F



Re: Test::Deep namespace

2003-06-20 Thread Fergal Daly
On Friday 20 June 2003 20:21, Ken Williams wrote:
> Second, I find it very confusing that all these different capabilities 
> are happening inside one cmp_deeply() function.  In Perl it's much more 
> common to use the function/operator to indicate how comparisons will be 
> done - for example, <=> vs. cmp, or == vs. eq.  I would much rather see 
> these things broken up into their own functions.

I had a hard time trying to document this module and I wasn't sure I did a 
good job; now I'm certain I didn't! I hope I can explain in this email. It's 
a bit long but I hope you will see at the end that your comments are based on 
a misunderstanding of what Test::Deep does. I'd really appreciate it if you 
could tell me whether it makes any sense to you. If it makes no sense at all, 
I don't want to alienate users just because my docs are unintelligible. As a 
bonus, since you're @mathforum.org I'll throw in some non-well-founded set 
theory near the end ;-)

First off, the Test::Deep functions set(), bool(), bag(), re() etc are not 
comparison functions, they are shortcuts to Test::Deep::Set->new, 
Test::Deep::Bool->new, Test::Deep::Bag->new, Test::Deep::Regex->new. The 
objects they return act as markers to tell Test::Deep that at this point of 
the comparison to stop doing a simple comparison and to hand over control to 
Test::Deep::Whatever.

There's nothing you can do with a regular expression that you can't do with 
substr and eq, but regular expressions allow you to express complex tests in a 
simple form. That is the goal of Test::Deep. Perl has regexes that operate on 
a linear array of characters; Test::Deep supplies "regular expressions" that 
operate on an arbitrarily complicated graph of data, and just as a regex often 
looks like the strings it will match, a Test::Deep structure should look like 
the structure it will match.

What's wrong with using Test::More::is_deeply()? Well, is_deeply is just the 
complex-structure equivalent of eq for strings. is_deeply checks that two 
data structures are identical. What do you do if part of the 
structure you're testing is unpredictable? Maybe it comes from an outside 
source that your test script can't control, maybe it's an array in an 
undetermined order or maybe it contains an object from another module - you 
don't want your test to look inside other modules' objects because you have 
no way of telling if it's right or wrong. In these cases is_deeply() will 
fail and so is no use. Test::Deep::cmp_deeply() has a varying definition of 
"equality" and so can perform tests that is_deeply can't.

Time for some examples.

Simple string case: Say you want to test a string that is returned from the 
function fn(). You know it should be "big john". So you do

Test::More::is(fn(), "big john", "string ok");

Messy string case: Things change, now fn() returns a string that contains "big 
john" and some other stuff, you can't be sure what the other stuff is, all 
you know is that the string should be a number, followed by "big john", 
possibly followed by some other stuff. No problem

Test::More::like(fn(), qr/^\d+big john.*/, "string ok");

Now imagine that you have a function that returns a hash

Simple structure case: you want to test that fn() returns

{
age => 34,
id => "big john",
cars => ['toyota', 'fiat', 'citroen'],
details => [...] # some horrible complicated object
}

Test::More::is_deeply(fn(), 
{
age => 34,
id => "big john",
cars => ['toyota', 'fiat', 'citroen'],
details => [...] # some horrible complicated object
}
);

Messy structure case: same as above but say now the id is no longer simply 
"big john", it's the same messy thing we talked about in the messy string 
case, and say you're no longer guaranteed that the cars will come back in any 
particular order because they're coming from an unordered SQL query.

Test::More::is_deeply is no good now as it needs exact equality. You could write

my $hash = fn();
is($hash->{age}, 34);
like($hash->{id}, qr/^\d+big john.*/);
is_deeply([sort @{$hash->{cars}}], ['citroen', 'fiat', 'toyota']);
is_deeply($hash->{details}, [...]);
is(scalar keys %$hash, 4);

but you'd be so wrong because you've also got to check that all your 
refs are defined before you go derefing them so here's the full ugliness you 
really need

if( is(Scalar::Util::reftype($hash), "HASH") )
{
    is($hash->{age}, 34);
    like($hash->{id}, qr/^\d+big john.*/);

    if( is(Scalar::Util::reftype($hash->{cars}), "ARRAY") )
    {
        is_deeply([sort @{$hash->{cars}}], ['citroen', 'fiat', 'toyota']);
    }
    else
    {
        fail("no array");
    }
    if( is(Scalar::Util::reftype($hash->{details}), "ARRAY") )
    {
        is_deeply($hash->{details}, [...]);
    }
    else
    {
        fail("no array");
    }
}
else
{
for (1..6) # cos 

Re: Test::Deep namespace

2003-06-19 Thread Fergal Daly
On Thursday 19 June 2003 15:48, Paul Johnson wrote:
> Sounds a little like Test::Differences.  I don't suppose there is any
> chance of integration or anything?

If Test::Deep was purely for checking if 2 structures are identical then 
Test::Differences would be fine but Test::Deep also allows you to check that 
the structure you give it matches a structural pattern. See the reply about 
Test::Data for more details,

F




Re: Test::Deep namespace

2003-06-19 Thread Fergal Daly
On Thursday 19 June 2003 15:24, Andy Lester wrote:
> It would be nice if the functions ended in _ok, so it's clear that 
> they are actually outputting and not just returning booleans.

There is only 1 function really, all the rest are shortcuts to the 
constructors of various plugins. I suppose I could call it cmp_deeply_ok. Not 
sure if I like that too much though.

> I think that Test::Data might be a better place for them, somehow. 
> I'm maintaining brian d foy's Test::Data:: hierarchy, so maybe we can 
> figure something out.

Test::Data takes a totally different approach. With Test::Data::Hash you'd do 
something like

hash_value_false_ok("key1", $hash);
hash_value_true_ok("key2", $hash);
hash_value_false_ok("key3", $hash);
hash_value_true_ok("key4", $hash);

with Test::Deep you'd do

cmp_deeply($hash,
{
key1 => bool(0),
key2 => bool(1),
key3 => bool(0),
key4 => bool(1),
}
);

You build a structure that looks like the result you're expecting, except that 
sometimes, instead of simple values, you have special comparators.

You can also do this

my $is_person = all(
isa("Person"),
methods(
getName => re(qr/^\w+\s+\w+$/),
getMaritalStatus => any("single", "married"),
),
);

my $is_company = all(
isa("Company"),
methods(
getName => re(qr/\w/),
getCEO => $is_person,
getDirectors => all($is_person),
),
);

cmp_deeply(\@companies, all($is_company));

You can also make your definitions available to other modules so that when 
they run their tests they can check that they are getting good values back 
from you. It'd be nice to put this in the test code for my fictitious log 
handler,

use IO::File::Test qw( $opened_fh );

my $log_handler = Log->new("$test_file");

cmp_deeply($log_handler,
methods(
getFileName => $test_file,
getEntriesCount => 0,
getFH => $opened_fh,
)
);

and that'll make sure that my file was opened correctly along with various 
other relevant tests,

F



Re: Binary File Modules

2003-06-19 Thread Fergal Daly
On Thursday 19 June 2003 16:18, Matt Seddon wrote:
> But File::.*::Info feels like the Right Thing :)
> 
> File::BinObj::Info?
> File::BinaryObject::Info?

At the moment your only module is the PE module and that deals with a binary 
format, but that's not to say that future modules won't deal with ASCII 
formats too. Well ok, you said you'd be focussing on binary formats but some 
ASCII formats are none too simple and could probably use an Info module.

Anyway, I don't really have a suggestion for the right name but including 
"binary" seems wrong to me,

F



Re: Test::Deep namespace

2003-06-19 Thread Fergal Daly
On Thursday 19 June 2003 15:15, Enrico Sorcinelli wrote:
> Why not to hack into Test::More in order to improve it and fix its bugs?
> Test::More is often used and I think that your patches will be welcome!

I did, my patches were accepted by Michael Schwern months ago but he hasn't 
released a new version, I think he's pretty busy at the moment.

Anyway, Test::Deep does huge amounts more than Test::More.

Simple usage is much like is_deeply():
cmp_deeply($hash, { a=> [1, 2, 3], b => \'hello'});

More advanced features:
cmp_deeply($set, set(1, 2, 3, 4));

will make sure that $set is an array ref which is setwise equal to (1, 2, 3, 
4) so any of the following would be ok: [1, 2, 3, 4] or [4, 2, 3, 1] or [4, 
4, 3, 2, 4, 2, 1, 3, 2, 1]

cmp_deeply($hash,
{
set1 => set(1, 2, 3, 4),
set2 => set(5, 6, 7, 8),
}
);

makes sure that $hash has 2 keys and that
$hash->{"set1"} is setwise equal to (1, 2, 3, 4) and
$hash->{"set2"} is setwise equal to (5..8)

cmp_deeply($set_of_sets, set(
    set(1..5),
    set(6..10),
));

gives an OK for [[1..5], [6..10]] or [[5, 4, 3, 2, 1], [10, 9, 8, 7, 6]] and it 
would also pass [[1..5, 2, 3, 2, 4, 5], [6..10, 7, 6, 8], [5, 4, 3, 2, 1]].

cmp_deeply($bag_of_objects,
bag(
methods(getName => "a"),
methods(getName => "b"),
methods(getName => "c"),
methods(getName => "d"),
re(qr/banana/),
)
)

would make sure that $bag_of_objects is an array ref with 5 items, 1 of which 
is a string containing the word "banana" and the other 4 are objects and 
these objects return "a", "b", "c", "d" when the getName method is called. 
The order in which these elements occur is ignored.

cmp_deeply($father,
    methods(
        getName => re(qr/^\w+\s+\w+$/),
        getChildren => all(isa("Animal::Human")),
        getPets => all(isa("Animal")),
    )
);

is the same as testing

$father->getName =~ /^\w+\s+\w+$/;

and

foreach my $child (@{$father->getChildren})
{
$child->isa("Animal::Human");
}

except it's using a declarative syntax which (hopefully) makes it easier to 
understand and maintain. It will also give sensible error reports and unlike 
the code above will not explode if $father->getChildren->[5] is not actually 
a blessed reference, instead it will just tell you about it in the test 
diagnostics.

So it's not something that can be hacked into Test::More,

F



Re: Test::Deep namespace

2003-06-19 Thread Fergal Daly
On Thursday 19 June 2003 06:55, Shlomi Fish wrote:
> From what I understand the Test namespace is intended for modules that are
> meant to test Perl code for bugs (Test, Test::More, Test::Simple,
> Test::Harness, etc.). I think your module belongs somewhere under Data.
> Like Data::Test::Deep or wherever.

It's a Test::Builder based testing module, designed to replace and enhance 
Test::More's is_deeply() and eq_set(). is_deeply() has several limitations, 
like not handling circular references and ignoring the blessedness of 
references; it also has a few bugs.

It outputs the usual test pass/fail stuff plus diagnostics explaining where it 
found a difference between the given structure and what it expected. For 
example

not ok 1
# Failed test (-e at line 1)
# Compared ${$data->[0]->{"key"}->load("filename")}
# got : 'some text'
# expect : 'something else'

That said, I probably will be breaking it into 2 modules, one which compares 
data and one which wraps that in a Test::Builder interface. Hopefully then 
you will be able to use it easily in assertions and other non-testing 
situations.

For the moment, I want to stake my claim on a Test:: namespace,

F




Test::Deep namespace

2003-06-18 Thread Fergal Daly
Hi,
I already have Test::Deep on CPAN and I want to officialise the namespace so 
I thought I should run it by this list before mailing [EMAIL PROTECTED]

http://search.cpan.org/author/FDALY/Test-Deep/

Test::Deep allows you to check that a complex data structure contains the 
right stuff. It will traverse the data structure, allows you to do a wide 
variety of comparisons at all levels, and it can handle circular data 
structures. You could think of it as regular expressions for complex data 
structures.

Anyway what I really want to know is does anyone object to Test::Deep as the 
name of a Test module that does deep comparisons of data structures?

F



looking for a name

2003-06-11 Thread Fergal Daly
I have a module for building a Perl expression as a tree which can then be 
dumped out as Perl, e.g.

my $tree = trav(hash("key"), array(10), method("getName", hash('other')));
print "perl: ".$tree->perl('$var')."\n";

perl: $var->{"key"}->[10]->getName($var->{"other"})
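The idea can be sketched in a few lines (these Expr::* class names are invented for illustration, not the real module's): each node knows how to render itself as Perl source, and a traversal node chains its steps together.

```perl
use strict;
use warnings;

package Expr::Hash;
sub new  { my ($class, $key) = @_; bless { key => $key }, $class }
sub perl { my ($self, $base) = @_; $base . '->{"' . $self->{key} . '"}' }

package Expr::Array;
sub new  { my ($class, $idx) = @_; bless { idx => $idx }, $class }
sub perl { my ($self, $base) = @_; $base . '->[' . $self->{idx} . ']' }

package Expr::Trav;
sub new  { my ($class, @steps) = @_; bless { steps => [@steps] }, $class }
sub perl {
    my ($self, $var) = @_;
    my $out = $var;
    # each step renders itself around the expression built so far
    $out = $_->perl($out) for @{ $self->{steps} };
    return $out;
}

package main;
my $tree = Expr::Trav->new(Expr::Hash->new('key'), Expr::Array->new(10));
print $tree->perl('$var'), "\n";   # $var->{"key"}->[10]
```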

It's currently part of (yet another) template system but I'd like to split it 
out. I was thinking something like Code::Builder::Expression or 
Code::Perl::Expression or Code::Builder::Perl::Expression.

Are there already modules for doing this out there? I couldn't see them on 
CPAN,

F



Re: UDPM name space finalization

2003-06-01 Thread Fergal Daly
Sorry to bring this up again, I should have chased it more the last time but 
what exactly is UNIXy about this module?

The reason given previously was that all the dialog programs run on UNIX. That 
seems fairly incidental; it's not like there can't be dialog programs for 
Windows, Mac, Amiga etc, and quite possibly there are. I presume if The Gimp 
can be compiled on Windows then surely gDialog could be, and KDialog could 
probably be ported easily as it's based on the Qt toolkit.

If I was searching for a dialog module on CPAN, "unix" would not be one of my 
search terms, and if someone ever does write a backend for a Windows dialog 
program then anyone who tries to find it could be confused by the UNIX and 
assume it won't work under Windows.

I just don't see any fundamental UNIX connection. Is there a reason why this 
module could never work on anything else?

UI::Dialog::* seems like a much more apt prefix and as someone pointed out in 
another thread, there's nothing wrong with starting a new toplevel namespace 
as long as it makes sense and you don't hog the whole thing,

F