Re: Why does File::Path not respond to 'cover'?
* James E Keenan jk...@verizon.net [2015-06-28 15:20]:
> In any case, on my previous laptop I located some correspondence with
> pjcj from two years ago in which I reported having found a hack for
> this case: Create a branch. In the branch rename 'Path.pm' to
> something like 'ABCPath.pm', then do a global search-and-replace in
> all files (except things like README, TODO and Changes) to impose the
> new name. At that point you can run 'perl Makefile.PL && make', and
> from that point Devel::Cover sees a brand new library and calculates
> coverage.

That’s very invasive to the code being instrumented, though. Hacking the exemption out of Devel::Cover seems like the far more robust alternative.

-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Revert use_ok() change to allow lexical effects?
* Michael G Schwern schw...@pobox.com [2012-04-11 18:35]:
> Nope, too much magic for too small a use case.

And faithfully duplicating `use` would be less so? :-) I don’t see how it is any more magic than `done_testing`. What exactly makes you uneasy? Maybe there is a way to address that if you can be more specific.

What I don’t like about duplicating `use` is that you need to diddle internals and keep in mind a long series of edge cases in the interface that matter to almost no one except those few people who need them. So it’s very difficult to be sure that the implementation is fully complete and correct, and hard to find out at short notice when it gets broken by changes to perl, if any. And all that when there is no reason not to just use the original instead. It’s ridiculous when you stop to think about it.

> Let me reiterate, I have no plans to *deprecate* `use_ok`. Even if I
> wanted to there are simply too many users to make deprecation worth
> while.

Yes, as I mentioned, I can see that.

> It works fine if what you want is a runtime require + import +
> assert, and sometimes you want that. The problem is it's been
> overused and has come to replace a simple `use` in test scripts. To
> that end the question is whether to *discourage* its use in the
> documentation: to scale it back from "use this when you load a
> module" to "use this for some special cases".

*Which* special cases? I would rather not recommend it in any case ever. My suggestion to ship AutoBailOut was so you would be able to suggest it as a replacement in the docs, as it covers the one case where `use_ok` is even of interest (though still not the right solution). But I guess you could do that even if it ships outside of Test::More.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Revert use_ok() change to allow lexical effects?
* Ovid publiustemp-perl...@yahoo.com [2012-04-11 19:10]:
> * Aristotle Pagaltzis pagalt...@gmx.de [2012-04-11 18:55]:
> > I don’t see how it is any more magic than `done_testing`.
> Because done_testing is applicable to every test module and solves
> the far more common issue of hating maintain a plan. I would argue
> that done_testing is much more necessary than an AutoBailout.

Well sure, which is why I suggested a separate module and not a patch to Test::More itself. (Also since it is doable separately with such a clean implementation.)

> That being said, I'd use AutoBailout a lot more often if it was
> available. Then again, as Schwern pointed out, that's why I wrote
> Test::Most :)

I guess I should go and put AutoBailout on CPAN. :-)

> > What I don’t like about duplicating `use` is that you need to
> > diddle internals and ...
> I think Schwern's not arguing against this. He's just trying to
> figure out the best way forward.

Yes, that was not an argument, just stating my position on it in line with my question about his unease.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Revert use_ok() change to allow lexical effects?
* Smylers smyl...@stripey.com [2012-04-11 18:20]:
> Aristotle Pagaltzis writes:
> > my $reason = 'Tests must succeeded';
> Grammar correction if anybody is going to publish this in a module:
> succeed, rather than succeeded.

Woops. Thanks. Ah d’uh! Now I feel stupid.

> Please don't! You're still ahead many of us ...

See above. ;-)

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Revert use_ok() change to allow lexical effects?
* Michael G Schwern schw...@pobox.com [2012-04-11 20:10]:
> On 2012.4.11 9:53 AM, Aristotle Pagaltzis wrote:
> > I don’t see how it is any more magic than `done_testing`.
> done_testing() has no global side effects, it's just a function.
> Unless I'm mistaken, Test::AutoBailOut is doing to need a global
> $SIG{__DIE__} handler or override CORE::require or add something to
> @INC or try to parse exception messages or something like that. Any
> of those have global side effects which can potentially interfere
> with or be overwritten by the code being tested.

… what for? Is Ovid’s solution of just using an END block insufficient? Why?

> > My suggestion to ship AutoBailOut was so you would be able to
> > suggest it as a replacement in the docs, as it covers the one case
> > where `use_ok` is even of interest (though still not the right
> > solution). But I guess you could do that even if it ships outside
> > of Test::More.
> Precisely. For example, it already suggests Test::Differences and
> Test::Deep.

OK. Then how about I stick AutoBailout on CPAN with a SYNOPSIS that covers `t/00-load.t`, and then you change the `use_ok` docs to a) discourage its use and b) point to AutoBailout for `t/00-load.t` uses. Sound good?

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
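For context, the `t/00-load.t` pattern under discussion is roughly the following (a hedged sketch, not the AutoBailOut code itself; List::Util stands in for the distribution’s own module):

```perl
use strict;
use warnings;
use Test::More tests => 1;

# A t/00-load.t style compile check: if the module does not even load,
# abort the whole test run rather than wade through cascading failures.
# List::Util stands in here for the module actually under test.
require_ok('List::Util')
    or BAIL_OUT('List::Util failed to load; no point running anything else');
```

The point of AutoBailOut (or an END block, per Ovid) would be to get the bail-out behaviour without repeating this boilerplate in every distribution.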
Re: Revert use_ok() change to allow lexical effects?
* Ovid publiustemp-perl...@yahoo.com [2012-04-11 21:55]:
> From: Michael G Schwern schw...@pobox.com
> > Personally I'm a fan of scroll up and read the first failure. It
> > always works!
> Try it on a Test::Class test suite running thousands of test in a
> single process, whizzing past on your terminal :)

That is not relevant to t/00-load.t though, nor applicable when BAIL_OUT is involved, and here we are considering both of these.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Revert use_ok() change to allow lexical effects?
Btw Ovid, * Ovid publiustemp-perl...@yahoo.com [2012-04-11 19:10]: done_testing is applicable to every test module and solves the far more common issue of hating maintain a plan. ^^ a little Freudian slip there? :-)
Re: Revert use_ok() change to allow lexical effects?
* Andy Lester a...@petdance.com [2012-04-11 22:30]:
> As a module author, I would not require a user to install
> AutoBailout.pm just to remove boilerplate in my t/00-load.t

If it’s only a test_requires the user won’t have to. (It would be pretty silly to keep discarding it though if it gets widespread use, esp seeing as it’s only a dozen lines of pure Perl.)

> But that's just me.

Yup. And it’s a free country. :-)

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Revert use_ok() change to allow lexical effects?
* Michael Peters mpet...@plusthree.com [2012-04-11 23:15]:
> On 04/11/2012 04:45 PM, Andy Lester wrote:
> > test_requires is Module::Build only, right? I don't use
> > Module::Build.
> No, I'm pretty sure it works with Module::Install and
> ExtUtils::MakeMaker now too. Although that might just be something
> that was worked on at the qa-hackathon.

If the issue is really the extra download (for whatever reason that isn’t obvious to me), Andy can also just copy it into t/lib. It’s one dozen-line pure-perl file that isn’t likely to ever change, so… Might as well have it on CPAN instead of pasting a recipe into the Test::More docs and then having everyone paste that into their test scripts in turn.

> > Even if I did, I don't think I'd require the user to go through a
> > download and temporary build of AutoBailout.pm just to remove
> > boilerplate.
> That's definitely a reasonable position to take.

Sure. Same can be said of many Test modules. Test::Exception, or make do with an `eval` and some logic and a few extra tests? Test::Deep, or stick with the worse output of `is_deeply` when it suffices? Etc. Every author can make each of these choices whichever way they like, and so can Andy. Likewise with AutoBailout. The stop energy he is throwing at it has no substantive reason so far, only “I don’t care for this”.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
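For illustration, the eval-plus-logic alternative to Test::Exception mentioned here looks something like this minimal sketch:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Doing without Test::Exception: a bare eval plus explicit checks on $@.
eval { die "boom\n" };
ok( $@,           'the code died' );
is( $@, "boom\n", '... with the expected message' );
```

With Test::Exception the same pair of checks collapses into a single `throws_ok` call; which trade-off to make is exactly the per-author choice being described.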
Re: Document the delegator or the delegated?
* Michael G Schwern schw...@pobox.com [2011-11-04 04:45]: When doing delegation I have a documentation dilemma: do the API docs go in the delegator or the delegate? This is the classic reference vs tutorial dilemma, isn’t it? The direct conclusion is that you want to have separate documentation for the two different kinds of reading, which on CPAN generally takes the form of ::Manual PODs. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Test::Deep 0.108 (and the Test::Most philosophy)
* Ovid publiustemp-perl...@yahoo.com [2010-10-17 16:25]:
> > Modules are poor place for evangelism about unrelated conventions
> > in general, but I feel this especially strongly about Test::
> > modules with break-the-CPAN level adoption such as Test::Deep.
> The arguments you made are compelling, so I need to ask your point of
> view about this:
>
>     #!/usr/bin/env perl
>     use Test::Most;
>     ok 1, '1 is true';
>
> `use Test::Most tests => 42` is loosely equivalent to:
>
>     use strict;
>     use warnings;
>     use Test::Exception 0.88;
>     use Test::Differences 0.500;
>     use Test::Deep 0.106;
>     use Test::Warn 0.11;
>     use Test::More tests => 42;
>
> Test::Most, like Test::Class::Most, not only imports the most common
> testing functions, but also imports strict and warnings for you. I
> didn't do this lightly. I did this because I see a lot of test suites
> forgetting one or the other and in the case of test suites, it's
> terribly important to not miss those because they stop so many errors
> (for example, many warnings are actually symptoms of underlying bugs
> and that's what a test suite is about, right?). So did I do the wrong
> thing here? I'd love to hear pro and con arguments.

That looks fine to me. The primary purpose of Test::Most is to cut down on typing. Enabling strictures and warnings for the user fits right into its mission. More importantly, `use strict; use warnings;` is hardly an experimental interface unproven by practice. :-) Whereas new approaches to namespaces very definitely are.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: sharing perl data structures across tests
* Ovid publiustemp-perl...@yahoo.com [2010-04-02 10:05]: It's an art to write a concise email which describes a problem. What I often find myself doing is writing a long email and then summarising at the bottom. When that's done, I often cut-n-paste the summary at the top of the email and only leave the rest if it is really necessary. “I made this so long only because I didn’t have the time to make it shorter.” —Blaise Pascal Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: camels
* Ricardo Signes perl...@rjbs.manxome.org [2010-01-03 14:35]:
> > It would be a shocking display of benevolence on the part of
> > O'Reilly to give up the camel. And... live dangerously?
> Do you mean: piss off the publisher of many useful Perl books,
> opening ourselves to lawsuits and ostracism? That's not a good plan.

I cannot help noticing how this parallels the rationalisations of people in an abusive relationship…

* Ricardo Signes perl...@rjbs.manxome.org [2010-01-03 14:35]:
> The problems with pearls include: (a) promoting mispeling Perl as
> Pearl and (b) a pearl reduces, in its simplest depiction, to a
> circle. It's not very visually distinctive. An onion can be pretty
> pared down before you lose sight of what it is.

It’s unrecognisable at favicon size. The camel is distinctive down to a handful of pixels. And if you add a shell to it, so can a pearl be. In fact a pearl in a shell is what iX magazine in Germany has used as the masthead for their on-and-off Perl column for at least a decade.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Testing with Test::Class
* Ovid publiustemp-perl...@yahoo.com [2009-10-27 16:15]: I'd much prefer to have Keynote.app slides generated, but Apple's XML format is impenetrable (and no longer has a schema with it. Thanks Apple!). “Steve Jobs doesn’t build platforms, except by accident.” —http://diveintomark.org/archives/2007/01/12/sharecroppers Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Test::Most, end blocks and TAP
* Josh Heumann perl...@joshheumann.com [2009-03-13 14:40]:
> I can change that to:
>
>     END { pass(); all_done( 2 ); }
>
> ...and everything's just fine. The problem really comes when the test
> being run in the END block is in another module (such as
> Test::NoWarnings).
>
>     END { had_no_warnings(); all_done( 2 ); }

-- *AUTOLOAD=*_;sub _{s/(.*)::(.*)/print$2,(,$\/, )[defined wantarray]/e;$1} Just-another-Perl-hack; #Aristotle Pagaltzis // http://plasmasturm.org/
Re: done_testing()
* David E. Wheeler da...@kineticode.com [2009-02-22 07:20]: I care less and less about backwards compatibility every day. This is a format spec, not code. Guess what happened to XML 1.1? To XHTML2? Or differently, to RSS? It may well be that we’ll need to break backcompat over this issue, and if so, OK, but “I don’t care about backcompat” is no way to go about designing a format that succeeds widely. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: done_testing()
* Michael G Schwern schw...@pobox.com [2009-02-19 21:15]: As TAP has no formal means to express that, and I'm not waiting for a TAP extension, any TAP reader will need extra logic to figure that out. So worrying about that seems moot. If it takes a lot longer and TB offers subplans in the meantime, people will want to be able to get the information back out of their TAP streams before formal subplans become part of the syntax. So worrying about how they will process the output of a stream containing implicit subplans seems entirely appropriate. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: done_testing()
* Michael G Schwern schw...@pobox.com [2009-02-20 23:35]:
> And we come back to the beginning: it's all going to be ad hoc anyway
> until TAP formalizes it. Fine for eyeballing. If someone wants to
> scrape the information out they can do it from the description (with
> the usual caveats about scraping).

What I am saying is that just because they are… *wrinkles nose* …scraping [yuck, I feel dirty] doesn’t mean we shouldn’t try to make it as easy as possible. Something about cow paths… so do we want those established by the scraping plebes or not?

> I plan on doing this:

So you see no way of doing it as a test numbering scheme? I would still prefer that… Although, that solution is acceptable as a distant second choice.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: done_testing()
* Michael G Schwern schw...@pobox.com [2009-02-18 21:55]: One of the issues with that approach is Test::Builder's history can't store test #2 twice. So history is lost. Shouldn’t this be fixed? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Perl 6 and Test.pm's skip() function
* Eric Wilhelm scratchcomput...@gmail.com [2009-01-22 18:55]: Pretend for a moment that the number of tests could automatically be counted by the interpreter (e.g. at the parse/compile stage.) There’s no need to pretend. Either you can tell us how to solve the halting problem and then it’s possible, or you can’t and then it’s not. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Perl 6 and Test.pm's skip() function
* David E. Wheeler da...@kineticode.com [2009-01-22 20:20]: There will be loops with tests in them, and the number of iterations of the loop will be independent of the code in the test script, making it impossible to actually count the number of tests with a computer until the tests have actually been run. Which is how no_plan works. The nice thing about data-driven tests is that in most cases the test program can programmatically derive the number of tests from the test data so it can set up a plan without the programmer having to count. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
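A sketch of the data-driven idea, with the plan derived from the test data rather than counted by hand (the cases here are illustrative):

```perl
use strict;
use warnings;
use Test::More;

# Data-driven testing: one table of cases, one loop. The plan is
# computed from the data, so adding a case never breaks the count.
my @cases = (
    # x, y, expected sum
    [ 1, 1, 2 ],
    [ 2, 2, 4 ],
    [ 2, 3, 5 ],
);

plan tests => scalar @cases;

for my $case (@cases) {
    my ( $x, $y, $want ) = @$case;
    is( $x + $y, $want, "$x + $y == $want" );
}
```

Loops whose iteration count comes from external input still defeat this, of course, which is where no_plan (or later done_testing) comes in.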
Re: Perl 6 and Test.pm's skip() function
* Eric Wilhelm scratchcomput...@gmail.com [2009-01-22 18:55]:
> I'm not sure anybody *wants* a plan.

I do.

> A way to ensure that every test ran or accurate progress reporting,
> yes.

I also want to be sure that no unexpected extra tests ran.

> It seems to me that some are just willing to suffer counting their
> tests to achieve that.

I am willing to suffer counting tests in order to have my tests counted, yes. Although most of the time I find ways not to have to do that, and still be able to declare a plan. Data-driven tests work wonders for this. The rest of the time I use a variety of tricks to reduce the pain, such as this one: http://perl-qa.hexten.net/wiki/index.php/TestFAQ#How_do_I_update_the_plan_as_I_go.3F

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
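One common shape of such a trick is to accumulate the tests as data first and only then commit to a plan, so the count is derived rather than maintained (an illustrative sketch, not necessarily the FAQ entry’s exact recipe):

```perl
use strict;
use warnings;
use Test::More;

# Collect the tests as coderefs while building them up; the plan then
# falls out of the list's length instead of being counted by hand.
my @tests = (
    sub { is( uc 'perl', 'PERL', 'uc upcases' ) },
    sub { is( 2 + 2, 4, 'arithmetic still works' ) },
    sub { like( 'plasmasturm', qr/plasma/, 'pattern matches' ) },
);

plan tests => scalar @tests;
$_->() for @tests;
```

Adding or removing a test then never requires touching the plan line.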
Re: The Test Feature I Want
* Ovid publiustemp-perl...@yahoo.com [2008-12-12 13:40]:
> > However, prove shouldn't have special-case knowledge of, say,
> > setting environment variables or anything like that.
> Agreed. We could potentially have Test::Harness set an environment
> variable specifying how many tests are run and do this:
>
>     use Test::Skipall
>         if      => sub { !$ENV{PROFILE_TESTS} },
>         message => 'PROFILE_TESTS environment variable set';
>
> And internally it would not skip_all if TEST_PROGRAMS_TO_RUN (or
> whatever it's named) is not equal to 1. It scratches a huge itch of
> mine, but I don't know if anyone else would benefit or if this is the
> right approach.

Too use-case specific and magical, IMO. The whole “set a magical environment variable to make the .t change its mind about whether it should run” thing is broken. It is now possible to pass switches to tests via `prove`, and also to separate tests out into directories. Why not use those facilities instead? I am thinking we should have something like Test::Skipall::Getopt to complement the directory-splitting approach where necessary. I was originally going to propose an interface here but it was laden with environment variable thinking; since realising that for the mistake it would be, I have not yet come up with a solid new idea.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
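A sketch of the switch-driven alternative, with the flag passed through prove as `prove t/profile.t :: --profile`. The `--profile` name is illustrative, and a SKIP block stands in for a whole-file skip_all so the snippet runs to completion either way:

```perl
use strict;
use warnings;
use Test::More tests => 1;

# Gate expensive profiling checks on a command-line switch instead of
# a magic environment variable. prove passes everything after `::`
# to the test script's @ARGV, e.g.:
#   prove t/profile.t :: --profile
my $profiling = grep { $_ eq '--profile' } @ARGV;

SKIP: {
    skip 'pass --profile to run profiling tests', 1 unless $profiling;
    pass('profiling run requested');    # real profiling checks go here
}
```

A hypothetical Test::Skipall::Getopt would wrap exactly this @ARGV inspection behind a declarative interface.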
Re: [PATCH] ExtUtils::MakeMaker and world writable files in dists
* Michael G Schwern [EMAIL PROTECTED] [2008-11-13 04:15]:
> I really, really, really don't want PAUSE modifying my stuff after
> it's uploaded. Oh god the mysterious bugs. And then there's the fact
> that the code I've put my name and signature on is not the same code
> as is being distributed!

Count me in this camp. I do think that PAUSE could fix this, but it *MUST* require author consent. User data is untouchable by principle. The fact that tarballs are “just an envelope” does not matter; they are no less part of the user’s data than the extracted contents.

My suggestion on IRC when dmq stumbled into this was that PAUSE should not index the tarball nor mangle it; but it could produce a repacked version and include a link to it in the “was not indexed” mail. That way the author can rubberstamp the repacked version by pasting the link right back into the URL field in the PAUSE upload form. Or else they can fix their toolchain on their own terms and prepare a fresh upload. Or take their ball and go home. Whatever.

The real fix is to patch the Windows code in EU::MM and M::B so that tarballs produced through them won’t contain world-writable files, so that ultimately the whole process would become entirely transparent to the Windows-based crowd. But in the meantime, having PAUSE provide assistance (not automagic!) for such people would be a helpful way of keeping the fuss down. (Needless to mention, once the toolchain is appropriately patched, the won’t-index mail should also include the hint that if on Windows, one might want to upgrade one’s toolchain to avoid having to deal with this hassle.)

> This security check has sent CPAN on the slippery slope of security.

Not hardly. Until now CPAN has been a common carrier. Pretty much anything was allowed, stuff was only rejected for extreme reasons and always on a case-by-case basis and always by human judgment. The filtering does not change this. It doesn’t cause the upload to be rejected. It merely causes it not to be indexed, and there are lots of reasons for which PAUSE will already refuse to index an upload that it accepts. Checking for world-writable files feels to me just like the other “is this a sane tarball” tests that are already being performed. It seems to me like a minor and hardly objectionable addition – were it not for Windows marching to a different drummer. Silently mangling tarballs, in contrast, would be entirely new territory.

* Jan Dubois [EMAIL PROTECTED] [2008-11-13 20:25]:
> CPAN (at least the indexing part of it) always poked inside the
> packages and verified ownership of namespaces. Do you really want
> *anybody* to be able to upload a new version of your modules and have
> them replace your versions in the index? If you don't, then you'll
> have to let go of this common carrier idea.

You are confusing two separate things, which is no surprise because Michael confused them too. CPAN is two things: a file distribution mirror network and an indexing service. The mirror network, so far, distributes the files you put on there in bit-for-bit identical form, and therefore is in fact a common carrier. The indexing service, OTOH, is not. But the author does not get to touch the index database anyway. All they can do is affect it in a roundabout way by uploading tarballs for distribution that the indexer will consider interesting enough to take a look at. Changing the indexer’s idea of what is interesting or not is not related to the mirror network’s bit-for-bit identity contract. I would not want to see the latter change.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: [PATCH] ExtUtils::MakeMaker and world writable files in dists
* Michael G Schwern [EMAIL PROTECTED] [2008-09-29 14:50]: MakeMaker can set a minimum umask if it wants to play security nanny On Windows? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: [PATCH] ExtUtils::MakeMaker and world writable files in dists
* Michael G Schwern [EMAIL PROTECTED] [2008-09-29 16:35]:
> Aristotle Pagaltzis wrote:
> > * Michael G Schwern [EMAIL PROTECTED] [2008-09-29 14:50]:
> > > MakeMaker can set a minimum umask if it wants to play security
> > > nanny
> > On Windows?
> Windows, as always, is a special case. If a work around is necessary
> for Windows that's fine.

Err, the *only* point of this patch is Windows. The idea was to relieve Windows users from having to hack their tar before they can use EU::MM to bake distros that the CPAN indexer will not reject. If you propose a “better solution” that doesn’t work on Windows then it might be “better” but it fails to be a “solution.”

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: [PATCH] ExtUtils::MakeMaker and world writable files in dists
* Cosimo Streppone [EMAIL PROTECTED] [2008-09-29 02:10]:
> but it seems that gnu tar doesn't like the following:
>
>     $ tar --mode=0755 cvf blah.tar somedir
>     $ tar c --mode=0755 vf blah.tar somedir
>
> and will only accept:
>
>     $ tar cvf blah.tar --mode=0755 somedir
>
> Could this work?

GNU tar will, however, accept this:

    tar cv --mode=0755 -f foo.tar bar/

BSD tar will also accept this placement of the `-f` flag, according to the man page. Other tars may not, though I don’t think that is very likely. However, the `--mode` switch is a GNU curiosity. No other tar that I checked does have it. Honestly, though, if you are using tar on Windows, I don’t know why you would want any other default. Patching EU::MM is the pragmatic approach, and we probably can’t avoid it, but I think it is the wrong place to fix this, still.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Who is vpit?
* Ovid [EMAIL PROTECTED] [2008-09-25 10:55]: I don't recall seeing this error before and I'm pretty sure that it's not a fault on my end (that really is a class method at that line number): It’s not. I got the exact same bogus FAIL. http://www.nntp.perl.org/group/perl.cpan.testers/2008/09/msg2290906.html Someone tell him that his Perl installation is broken. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: [RFC] Dealing with World-writable Files in the Archive of CPAN Distributions
* Eric Wilhelm [EMAIL PROTECTED] [2008-09-23 00:30]: Would someone please explain to me how this issue is not already made a mostly non-issue by having a proper umask and running CPAN as non-root? Note that while running CPAN as non-root is a good idea because it reduces the surface area of any exploits, it doesn’t make them a non-issue. I would prefer my homedir not to vanish, thank you very much. (Note that I’m not saying that this issue is a bona-fide exploit. I’m just saying that running CPAN as non-root is not a way to close any hole.) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: running cpan as a nobody
* Eric Wilhelm [EMAIL PROTECTED] [2008-09-23 06:35]: Don't run them as yourself either then! I don’t like my module library disappearing *either*. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: running cpan as a nobody
* Eric Wilhelm [EMAIL PROTECTED] [2008-09-23 07:45]: And anyway, having to reinstall something which is widely mirrored on the internet sure beats having to recover your own files (which, presumably are not.) Yes, sure. But it might still mean a machine is off the air for unplanned maintenance all of a sudden. What I’m saying is that no matter how much you reduce the surface area for exploits, it’s not a solution; closing the hole in question is the solution. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: PerlUnit
* Eric Wilhelm [EMAIL PROTECTED] [2008-09-16 13:25]:
> You need a +15 modifier of nothingbettertodoism to vote.

Thankfully, it is very, *very* easy to rack up those points: getting one of your posts upvoted gives you 10 points. So it’s just a minimal barrier so you can’t register a horde of smurfs and start voting away.

* Nicholas Clark [EMAIL PROTECTED] [2008-09-16 14:15]:
> So everyone gainfully employed because they know what they are doing
> is automatically disqualified?

Luckily, no.

* David Golden [EMAIL PROTECTED] [2008-09-16 14:30]:
> I don't think they limit votes the way PM does

Everyone gets 30 upvotes and 5 separate downvotes per day. So they don’t limit votes the way PM does, but they do limit votes.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: PerlUnit
* Shawn Boyette ☠ [EMAIL PROTECTED] [2008-09-16 13:30]: I wondered this myself. My guess is (1) they're dissociated from what I think of as the Perl community (which is why i tried to stress it in my post on stackoverflow) and (2) google for perl unit testing. Yes. There are lots and *lots* of people who use Perl but have never encountered the Perl core community, or never use CPAN, or don’t even know of CPAN, or don’t even know anything about Perl that their “How to write CGIs” book didn’t tell them. Many of them are part of isolated pockets of programmers who use Perl on the side. Even online there are many isolated forums that have comparatively little traffic, usually dominated by a few users who mostly just muddle their way through Perl. The size of this “dark community” (in the sense of “DarkPAN”) is enormous. And I suspected that StackOverflow would be a place where some of these people, who would never have an inclination to seek out any Perl-specific places, might go to. Ovid’s observation seems to be bearing that out. Hence my call to action for forming a presence there. We want and need to get the word out. -- *AUTOLOAD=*_;sub _{s/(.*)::(.*)/print$2,(,$\/, )[defined wantarray]/e;$1} Just-another-Perl-hack; #Aristotle Pagaltzis // http://plasmasturm.org/
Re: CPAN Testers - Author Notification System
* Eric Wilhelm [EMAIL PROTECTED] [2008-09-11 22:40]: CENTRAS would probably google better than CENTRAL It seems to be a moderately popular business name and also appears to be a word in several languages. It won’t be as bad as googling “CENTRAL” but it won’t be exactly ideal either. However, if the name ends in “… for Authors on PAUSE” then we get “CENTRAP” instead, which already has Google hits, but merely a few hundred. We’d probably take over the top spots very easily. Plus, I kinda like it. The “trap” allusion could be interpreted several ways… :-) * David Cantrell [EMAIL PROTECTED] [2008-09-12 00:50]: I spent far more time than I should have trying to work 'Liturgy' into it. I thought of suggesting that, as a joke. Then I thought, nah. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Sub::Uplevel vs Test::More
* Paul Johnson [EMAIL PROTECTED] [2008-09-10 13:50]: Oh, but they *could* have them. And I think that is a perfect solution. CPANTS should check whether modules have a shebang line, and if so whether it contains -w. If it does then the author has asserted that the module runs cleanly with warnings enabled and should receive the kwalitee point. Problem is, the shebang line doesn’t actually *do* anything. The correct solution (which also doesn’t require changes to CPANTS) is called warnings::compat. Assuming you actually care that much… Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Sub::Uplevel vs Test::More
* Nicholas Clark [EMAIL PROTECTED] [2008-09-10 17:30]: Why should I add a dependency to correct code to placate the CPANTS game? If you don’t care enough to add a dependency, why care at all? :-) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: use Test::More no_plan => $plan;
* Eric Wilhelm [EMAIL PROTECTED] [2008-09-09 07:45]:
> But, uh... what are you looking for exactly?

I was hoping to make a nearly-comprehensive list of modules which make this mistake, like I did back when the “`use_ok` prior to setting a plan silently succeeds” bug was fixed; assuming the list wasn’t too huge, we could then prod all the authors by mail to ask them to fix their crud, thus mostly averting the disruption even before it happens.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: use Test::More no_plan => $plan;
* Ovid [EMAIL PROTECTED] [2008-09-09 08:35]:
> This reminds me of all of the regexes people write to match proper
> HTML: sometimes something simple is all you need :)

As I wrote in response to Eric, I was actually trying to get as near a comprehensive list of things that need fixing as possible. The nice thing about having a central package repository with such a strong gravity as CPAN does is that it enables tandem upgrades of dependent code when APIs change incompatibly. Kinda like what getting drivers into the official Linux kernel offers if you are a hardware vendor: if the kernel changes internally, the people who just broke your driver will also fix it for you. Centralisation has many big downsides in the general case, but can be attractive because it also has some highly compelling benefits. Since we do have a case of centralisation here, it seems a waste not to exploit that.

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: use Test::More no_plan => $plan;
* Michael G Schwern [EMAIL PROTECTED] [2008-09-09 08:15]: I was surprised to get a few hundred results Note that CodeSearch indexes tarballs, so there are likely to be a lot of dupes. But even so, a cautious estimate would still put that at at least several dozen unique hits, so it’s not quite an “I broke CPAN”-level problem, but it’s still significant. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: use Test::More no_plan => $plan;
* Aristotle Pagaltzis [EMAIL PROTECTED] [2008-09-09 09:05]: “I broke CPAN” Btw, Michael, do you have a t-shirt that says that? Because if not, we really need to make you one. :-) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: use Test::More no_plan => $plan;
* Ovid [EMAIL PROTECTED] [2008-09-09 12:00]:
> --- On Tue, 9/9/08, Michael G Schwern [EMAIL PROTECTED] wrote:
> > Who's Bob Nimby?
> No one. 'Bob' is a generic name. Nimby refers to NIMBY -- Not In My
> BackYard -- the selfish habit of people who shut down needed works
> because they're personally inconvenienced.

OK, this just took a turn to the weird and now I’m lost. What did you mean by your previous reply?

Regards,
-- Aristotle Pagaltzis // http://plasmasturm.org/
Re: use Test::More no_plan => $plan;
* Andreas J. Koenig [EMAIL PROTECTED] [2008-09-09 11:25]: It's definitely the 'I broke CPAN' level. My smoker has 260 fails more than usual. Due to this particular issue? Anyway, the biggest “I broke CPAN” event I remember involved failures cascading to some 15× as many distributions – literally more than half of the CPAN. This one isn’t nearly as bad, even if it’s more than bad enough. Good thing it’s a dev release, eh? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: use Test::More no_plan => $plan;
* Ovid [EMAIL PROTECTED] [2008-09-08 12:55]: In the developer release of Test::Simple, Test::Builder has been altered to die if you have any arguments after 'no_plan'. This means that some previously passing tests will fail. In fact, there are two test programs in Moose 0.57 which have this and thus fail to pass: use Test::More no_plan => 1; I've recommended that we warn instead of croak as I don't know how widespread this problem is I tried to use Google CodeSearch, but for some reason all my regexes that I feed it match all the cases I want to exclude. I tried variations on use\s+Test::More.*no_plan\s*[')/]\s*[^;] but that matches pretty much every `use Test::More` line with `no_plan` on it ever written, regardless of what follows. If anyone can see something that I can’t, please tell me. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
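For illustration, here is a sketch of a pattern that does distinguish the broken calling convention, keying on a fat comma (or comma) *after* `no_plan` rather than on what precedes it. It is shown in Python's `re` with made-up sample lines; whether Code Search's regex dialect supports the same constructs (`\b`, non-greedy `*?`) is a separate question.

```python
import re

# Match only `use Test::More` lines where no_plan is followed by a fat
# comma (or a plain comma) and a further argument -- the broken case --
# while skipping the ordinary quoted/qw'd uses. Sample lines are made up.
pattern = re.compile(r"use\s+Test::More\b[^;]*?\bno_plan\b\s*(?:=>|,)\s*\S")

broken = [
    "use Test::More no_plan => 1;",
    "use Test::More (no_plan => $plan);",
]
fine = [
    "use Test::More 'no_plan';",
    "use Test::More qw(no_plan);",
    "use Test::More tests => 5;",
]

assert all(pattern.search(line) for line in broken)
assert not any(pattern.search(line) for line in fine)
```

The trick is simply that `'no_plan'` and `qw(no_plan)` are followed by a quote or closing parenthesis, never by `=>` or `,`, so requiring the separator after the keyword excludes them.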
Re: What do you want? (was Re: The relation between CPAN Testers and quality)
* Andy Lester [EMAIL PROTECTED] [2008-09-05 06:45]: I want nothing in my inbox that I have not explicitly requested. I want to choose how I get reports, if at all, and at what frequency. I want aggregation of reports, so that when I send out a module with a missing dependency in the Makefile.PL, I don't get a dozen failures in a day. (Related, but not a want of mine, it could aggregate by platform so I could see if I had patterns of failure in my code). I want to be able to sign up for some of this, some of that, on some of those platforms. These are currently difficult as a matter of architecture, which is “every tester sends mail directly to every author.” The design overhaul will centralise the gathering and issuing of reports, so all of these things will become possible in the medium term. You are not the only one to ask for them, FWIW. I want to select what kwalitee benchmarks I choose my code to be verified under, so that I can proudly say My modules meet these criteria across these platforms. I want a couple dozen checkboxes of things that could be checked where I say All my modules had better match test X, Y and Z, and these specific modules had also better pass A, B and C, too. I want easily selected Kwalitee settings which group together options. Slacker level means you pass these 10 tests, and Lifeguard level means you are Slacker + these other 15 tests, and Stringent level means something else, all the way up to Super Duper Batshit Crazy Anal Perfection level. It seems that you are confusing the CPAN Testers with the CPAN Testing Service (CPANTS), Domm’s pet project. The only relation between the two is their confusingly similar naming. CPANTS tries to lint-check your distribution and code without running any of it; the CPAN Testers download your releases, install any prereqs and then run your test suite. The goals and designs of the two projects as well as their participants are entirely different.
Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)
* Andy Lester [EMAIL PROTECTED] [2008-09-04 17:45]: Who's to say what my job as an author is? No one, but at the same time, you as an author of libre software have no moral right to dictate what your users want from your code, and if your job according to your understanding does not extend to satisfying the users’ expectations according to *their* understanding, then the users have a legitimate case for knowing that your distributions are made of FAIL as far as they are concerned. Whether you choose to care is then your prerogative, obviously. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Reporting Bugs Where they Belong (was Re: The relation between CPAN Testers and quality)
* chromatic [EMAIL PROTECTED] [2008-09-04 23:15]: UNIVERSAL::isa and UNIVERSAL::can are examples of applying the design principle of Report Bugs Where They Are, Not Where They Appear. How do you propose doing that in the general case? I am certainly interested in what technology you have invented so that computer programs can automatically debug themselves and detect the real source of any problems. Earlier versions had one tremendous flaw in that they reported all *potential* failures, rather than actual actionable failures explicitly worked around. This was a huge mistake to which I clung stubbornly for far too long, and I've corrected it in recent versions. However good my intentions in maintaining that feature, the effects worked against my goals. Just in the last couple of days, David Golden reported making at least two (did I count correctly?) substantial changes to how CPAN::Reporter grades tests, in order to prevent particular classes of bogus FAILs. Isn’t that a demonstration of exactly the same care? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)
* Andy Lester [EMAIL PROTECTED] [2008-09-05 18:15]: Here are test reports reporting on failures for these things that we care about you caring about. Again, this is CPANTS, not CPAN Testers. Getting failure reports for a module not running on Perl 5.005 is a test about something I don't care about. I don't give Shit One if my code runs on 5.005, and yet, I've had failures for them. Sure. You have had failures because your code doesn’t run on 5.5 means it doesn’t run on 5.5 means it doesn’t run on 5.5. If I am using 5.5 for whatever reason and am considering trying your code for whatever reason [chromatic: let’s not get into that right here], then to me it is interesting to know that your code is going to fail. I don’t give Shit One about how you feel about me running 5.5, I just want to know if the code works or not. I would be interested to know that you don’t care about supporting my configuration, but as you don’t even care enough to declare your non-support explicitly, I have to find out otherwise. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: The relation between CPAN Testers and quality (or why CPAN Testers sucks if you don't need it)
* chromatic [EMAIL PROTECTED] [2008-09-05 19:50]: If I could see somehow that my distribution implicitly runs on Perl 5.001 (or explicitly runs only on 5.11.0), or that it has no Makefile.PL or Build.PL, or any of the other dozens of packaging quirks that can cause problems, I could fix them before uploading and before triggering a wave of testing. That’s CPANTS, the useful part of it anyway… except that the feedback cycle is even *much* slower there than with the Testers. CPANTS is really miscast as a service; the useful bits should really be extracted and made part of the `distcheck` targets of the various Perl distro toolchains. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Plans for CPAN Testers notification when author CC's go away
* David Golden [EMAIL PROTECTED] [2008-09-05 21:10]: * After a period of time to allow people to opt-in, the default policy for authors without a stated preference will be changed to no mail. From that point on, CPAN Testers will be a purely opt-in service. Hopefully, the design of the notification system will be such that people who want to innovate new ways of filtering notifications to particular distributions or platforms of interest can contribute to the code base to do so. I would hope that this still leaves room for Eric Wilhelm’s proposed “welcome basket” mail getting sent at least once to new authors, so they have a chance to learn about the things that the CPAN Testers can do for them, should they so desire. In fact, if that is in the cards, then maybe the default policy change should simply switch authors without a stated preference to “has never received a welcome message” so that they get a chance to hear about the new developments that are transpiring and the new options they have, rather than having CPAN Testers quietly go radio silent for them. Not everyone reads perl-qa. Also, I would stipulate that if an author has not specifically stated their preference, maybe mail them another welcome basket in a year or two. No other mail, of course – they still have to opt in if they want FAIL/PASS mail. But reiterating the welcome basket every once in a very long time (in case it slipped through a crack in their TODO list, got purged during an inbox bankruptcy start-over, or whatever) seems like a gentle enough reminder that it would not annoy. The goal, again, is to reach people at the outskirts of the community who do not read 30 Perl mailing lists and all of the use.perl journals (yes, that’s me). Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: git tarballs / tarfile comments
* Graham Barr [EMAIL PROTECTED] [2008-09-03 18:35]: But the FAIL is recorded in the wrong place. Anyone with a CPAN toolchain older than the most recent bleeding edge version or with a couple-months-old tar binary (ie. everyone except a number of people indistinguishable from zero) will still encounter the problem. Your distribution is not backward compatible with only-barely-outdated software. That, IMO, is a legitimate flaw in your distribution: current users with non-ancient system configurations will actually encounter this problem. I would certainly suggest that you re-release with a comment-less tarball. So the FAIL did ding the right place. The inadequacy of Testers in this case is that there weren’t *enough* FAILs to go around to account for all the problems that your incident uncovered. Without manual investigation, this might have gone undetected for quite a while longer. But the FAIL did fail to point out the true source of the problem. And IMO this hints at a real problem that was mentioned in this thread, but was not really indicted: namely CPAN.pm’s logic that if there is no Makefile.PL, it is a sane idea to make one up out of whole cloth. I can’t believe anyone would think this could ever *ever* work, though Andreas does not strike me as the type to put harebrained heuristical magic in code. So I have to wonder what the justification for this behaviour might be. Is it really necessary or even helpful? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: imaginary Makefile.PL
* Eric Wilhelm [EMAIL PROTECTED] [2008-09-03 20:00]: I've been told that it is intended for tarballs made in the times when there was no such thing as Makefile.PL yet. The cure seems worse than the disease at this point in time then. Maybe the heuristic in CPAN.pm could be just a little more robust? I’m not sure what that would entail, of course; what *did* distributions of the time look like? There have to be more cues in there than *just* the absence of a Makefile.PL, I hope? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Why make up a Makefile.PL (was: Re: git tarballs / tarfile comments)
* David Golden [EMAIL PROTECTED] [2008-09-03 21:20]: Examples: * BinTree * Counter Can’t even find those. * Apache::AuthenIMAP Last update: 2002. Just to be on the safe side, however, earlier today I committed a patch to the CPAN trunk to bypass CPAN::Reporter entirely if a Makefile.PL has been generated by CPAN.pm. Sounds good. Presumably those distros are all so old that their authors are unlikely to have an interest in test reports generated almost a decade after release. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: cpantesters - why exit(0)?
* Ovid [EMAIL PROTECTED] [2008-09-02 22:40]: For all of the bogus reports I get, I'd rather get the bogus ones along with the good ones than nothing at all. I'd much prefer that I find out immediately if there's a disaster rather than have someone email me and say this broke our software! ++ Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: I Want to Believe in the CPAN (was Re: cpantesters - why exit(0)?)
* Andy Lester [EMAIL PROTECTED] [2008-09-02 22:20]: Can the cpan-testers please get a dedicated list that is not perl-qa? So there is Perl-QA, TAPx-Dev (where I’ve been dragging my feet to subscribe), the IETF TAP list, the Module::Build and CPANPLUS lists, and now cpan-testers-discuss. I am sure I am missing a few more. Just how much further can we splinter the discussion? If I need to subscribe to 15 lists to get full coverage of all that is going on in a single area of the Perl world, is it any wonder that no one outside of Perl ever hears of anything that is going on in Perl? Is it any wonder at all that it is so hard for “normal” people, whatever that means to you, to keep up with what modules represent community state of the art? To the extent that we need to talk about rethinking CPAN? (Yes, the last one has more reasons. This is not an argument that CPAN is without flaws.) * Andy Lester [EMAIL PROTECTED] [2008-09-02 23:40]: Also, as far as I know, CPAN Testers has never asked the readers of the CPAN Testers or search.cpan.org What would you like us to do for you? The features in CPAN Testers are 100% self-created dogma. Seriously? First you say you want them to play in their own sandbox, then you say they’ve never asked anyone? This recalls to mind a trademark Linus rant I read a while ago. In that mail he was actively discouraging people from starting a special-interest mailing list for some kernel subsystem they were working on which didn’t look like he would be including it in his kernel tree anytime soon. His argument was that while creating little echo chambers may feel more comfortable to everyone (on all sides), it doesn’t help the quality of the end result. Staying in a place frequented by people with possibly contrary opinions or with extradisciplinary viewpoints is ultimately a better strategy. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: cpantesters - why exit(0)?
* David Golden [EMAIL PROTECTED] [2008-09-02 19:55]: Instead of the annoyance of authors writing warn $foo and exit 0, now they'll need to use configure_requires in META.yml to demand an up-to-date version of Module::Build. Sigh. Conceded. I keep forgetting that when talking about the toolchain, new features default to being worse than useless, and keep having to resign myself to reality. If you really want this to be abstracted safely, you'll need to do it like Devel::CheckLib. Write Devel::HaltPL that exports halt and a script to help authors bundle that in the inc/ directory. And I'm not convinced that's less annoying or clunky than warn and exit. I agree that that would be neither. Part of the appeal was folding a list of the arcana into the POD that module authors read anyway. Giving people yet another unrelated module to know about (which they’ll only hear of if they follow an arbitrarily chosen 3 out of 15 mailing lists) is not an improvement of the situation. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: I Want to Believe in the CPAN (was Re: cpantesters - why exit(0)?)
* Andy Lester [EMAIL PROTECTED] [2008-09-03 02:55]: On Sep 2, 2008, at 7:44 PM, Aristotle Pagaltzis wrote: Seriously? First you say you want them to play in their own sandbox, then you say they’ve never asked anyone? Yes, both of those are true. Yes, taken separately and literally, they are true. They are not even contradictory. But to tell them not to bother anyone while criticising that they didn’t examine anyone’s needs is to imply that you do not expect them to ever try to help anyone. Certainly when I've said I find certain aspects of it unuseful I've been told I was wrong. You are the judge of what’s useful to you or not, of course, but I can’t pass any further comment on who should or shouldn’t have said what in those cases, without knowing the exchanges you refer to. But no one having to hear they’re wrong anymore does not seem like it would actually improve anything. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: cpantesters - why exit(0)?
* Eric Wilhelm [EMAIL PROTECTED] [2008-09-01 20:40]: with a long history That is the problem, isn’t it? There are only two kinds of systems: working systems that are complex and grew out of simple ones, and non-working systems. Then again, we know that complexity does not follow simplicity, but precedes it. So maybe there is hope. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: JSON TAP Diagnostics?
* Eric Wilhelm [EMAIL PROTECTED] [2008-08-21 18:50]: Does it have to be just one? Now and forever? It doesn’t have to be *just* one, but it needs to be *at least* one, and specifically at least one that *everyone* supports, so that you can count on having a way to make an emitter and consumer understand each other. The question that arises is that since everyone has to support that particular format, how much value do we gain from letting people use other ones? Personally I am not sufficiently convinced that allowing for more than one format will turn out to be harmful that I would argue against it, but I am also quite unconvinced of the value, and I do know that this flexibility will incur costs. So I am inclined to say that it should probably be just one format, now and forever. (If worst comes to worst we still have the very-last-resort option of revving TAP.) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: TAP Diagnostics
* Andy Armstrong [EMAIL PROTECTED] [2008-08-22 00:35]: Indeed - but there's a sweet spot somewhere between specifying too little and specifying too much. It's entirely proper that there should be a debate about where that sweet spot is - and parsimony should be a guiding principle - but the value of having inline, per-test metadata has already been demonstrated. Fully and strongly agreed, ++ on all counts. IMO we don't need a complete serialisation protocol - just somewhere we can make notes and have them pass unmolested through the harness. If someone needs arbitrary serialisation they can use a protocol that makes sense to them and either inline their serialised data as a Base64 encoded string or have the diagnostic refer to an external file. The important thing is to open up a conduit through which user defined data can flow from a test script. […] I favour JSON because it's the simplest solution that fits those criteria. Exactly. We want to make it easy to annotate tests with any sort of testing-relevant metadata that people might have, but the purpose of the protocol is supposed to be conveying test results, not one-way messaging of arbitrary data. Eg. someone who uses TAP in testing a web app will probably have use for an HTTP response status key and also for a hash of response headers. The killer app example for diagnostics that I always come back to is statistical analysis over an archive of your TAP streams over time. It makes sense to preserve as much test-relevant metadata as possible to be able to examine trends. For this purpose, the data does have to be machine readable, and it does have to be structured at least a little – *just* a flat key-value bag won’t cut it. However, no intricate structure is necessary. This is a point where the APIs can help. If the API encourages people to indiscriminately dump complex data structures into the TAP stream, that’s what people will do.
If the API instead makes it very easy to use diagnostics in the way we intend for them to be used, and doesn’t go out of its way to accommodate the potato bag usage, fewer people will be tempted. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
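As a concrete sketch of the web-app case: the "structured at least a little" diagnostic might look like the following, where every key name is hypothetical rather than part of any agreed TAP vocabulary.

```python
import json

# A per-test diagnostic for a web-app test: one level of nesting, not an
# arbitrary object dump. All key names here are made up for illustration.
diag = {
    "http_status": 404,
    "http_headers": {
        "Content-Type": "text/html",
        "X-Runtime": "0.041",
    },
}

# Emitter side: attach the structure to a test line as a JSON payload.
payload = json.dumps(diag, sort_keys=True)

# Archive/analysis side: a consumer trending over years of stored TAP
# streams can query individual keys without interpreting opaque blobs.
parsed = json.loads(payload)
assert parsed["http_status"] == 404
assert parsed["http_headers"]["Content-Type"] == "text/html"
```

The nested headers hash is exactly the step beyond a flat key-value bag that the post argues for, while staying far short of a general serialisation protocol.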
Re: Should META.yml go in the repository?
* David Golden [EMAIL PROTECTED] [2008-08-20 22:40]: My release tagging program makes sure the code is checked in locally and synchronized to the remote repository, checks that the MANIFEST is up to date, checks that the release tag matches the version found in the META.yml, checks that the tag matches the POD in each 'provides' in META.yml, prompts me as to whether all the prereqs are up to date (haven't automated that part yet), and so on. Mind throwing it into some publicly visible spot somewhere so people can gawk at it? (I’m deliberately not saying “release it” since that would imply polishing it to a degree you probably have no desire for.) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Should MANIFEST go in the repository?
* Jeffrey Thalhammer [EMAIL PROTECTED] [2008-08-20 18:45]: My colleagues prefer to have the MANIFEST file in the repository so they can run distcheck to see what's been added or removed before committing some work. Err. Shouldn’t that be your VCS’s job? Every single one I’ve used has a `status` command that tells me what I’ve touched, and in my experience it is much more reliable than `MANIFEST`. So is this discussion in your workplace actually a tooling issue? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Should MANIFEST go in the repository?
* Michael G Schwern [EMAIL PROTECTED] [2008-08-20 23:05]: This seems to be a subset of the should the repository contain all the release files? argument. Should the repo contain just the files which build a release, or should they contain all the files that go into a release? One side would argue that everything should be in version control so you can track down bugs. The other side argues that keeping generated files up to date is annoying. The pragmatic answer to this question depends largely on your release process and how much stuff you generate. Idle musing: if you have a sufficiently able VCS, couldn’t you have it both ways? Keep a main branch that contains just the essentials, and for a release merge it over to the release branch where the generated files are then refreshed and also checked back in. No? I can’t currently think of any reason for which this would fail, although I might be missing something obvious. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
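A minimal sketch of that two-branch arrangement, assuming git as the "sufficiently able VCS"; the branch names are arbitrary and the release procedure is stubbed out as regenerating a MANIFEST:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.org
git config user.name Demo

# trunk holds only the essentials
echo 'package Foo;' > Foo.pm
git add Foo.pm
git commit -q -m 'essentials only'
git branch -m trunk

# the release branch additionally carries the generated files
git checkout -q -b release
ls > MANIFEST                 # stand-in for the real release procedure
git add MANIFEST
git commit -q -m 'regenerate MANIFEST for release'

# day-to-day work continues on a trunk free of generated files...
git checkout -q trunk
test ! -e MANIFEST
echo 'package Bar;' > Bar.pm
git add Bar.pm
git commit -q -m 'more work on trunk'

# ...and the next release merges trunk over and refreshes the
# generated files again before checking them in
git checkout -q release
git merge -q trunk -m 'merge trunk for release'
ls > MANIFEST
git commit -q -am 'refresh MANIFEST'
test -e MANIFEST
```

This is the "all of three commands" overhead discussed in the follow-up: switch branch, merge, check in, with the regeneration happening in between.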
Re: Should MANIFEST go in the repository?
* Michael G Schwern [EMAIL PROTECTED] [2008-08-21 01:50]: is it worth the administrative effort The administrative effort of all of three commands? (Switch branch then merge, plus check in – with the usual release procedure being performed in between, prior to check-in.) just to keep trunk clean of one generated file? As I indicated by “idle musing,” I was thinking in general terms about the keep-everything vs ignore-generated debate. If it’s only `MANIFEST`, the overhead of that particular approach is wasted, no doubt. It also presumes that the generated files aren't of any use to day to day development. No it doesn’t, each branch can keep a different list in `.fooignore`. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: JSON TAP Diagnostics?
* Ovid [EMAIL PROTECTED] [2008-08-18 12:55]: First of all, read that thoroughly. That should take you a few days. I know, right? Whenever that comes up I always point out that the YAML spec is much more complex than the XML spec and the XML Namespaces spec put together. (Despite the XML and Namespaces specs being a good deal more rigorous, btw.) JSON is fairly well implemented and new implementations are trivial. This is not true for YAML. Trying to define a minimum standard of YAML for extended TAP is a quagmire. With JSON, we can punt and just point to a fairly well-established JSON spec. Are we still considering human readability a goal for TAP? That would explain why YAML and not some other format. (However, YAML too is human-readable only if you stick to the core 5% of its syntax. I’m not forgetting that.) And for those who would argue for YAML::Tiny as our spec, it already has limitations that hit us at the BBC. In what way, and why would that be relevant to TAP? Would JSON not have those same limitations? (It’s kinda funny that I’m finding myself the YAML’s advocate now, considering how much I dislike it in general…) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: JSON TAP Diagnostics?
* Michael Peters [EMAIL PROTECTED] [2008-08-18 15:30]: YAML does support things that JSON does not (types, embedded documents, etc) but I've been in doubt that we'd ever need those things for TAP anyway. That would be useful if any of the YAML producers were capable of serialising tricky data structures correctly. If you care about those kinds of things, you probably want to use a language-specific serialiser and put its output in the TAP diagnostic as a string. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
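A sketch of that suggestion, which also matches the Base64 idea upthread: serialise the tricky structure with a language-specific serialiser and carry it through the JSON diagnostic as an opaque string. The key names are hypothetical.

```python
import base64
import json
import pickle

# A structure JSON cannot express directly (a tuple as a hash key).
tricky = {("host", 80): "ok"}

# Serialise it with a language-specific serialiser, then embed the
# result as an opaque Base64 string under made-up diagnostic keys.
diag = {
    "serialiser": "python-pickle",  # hypothetical key names throughout
    "opaque": base64.b64encode(pickle.dumps(tricky)).decode("ascii"),
}
line = json.dumps(diag)  # what would ride along in the TAP stream

# A consumer that recognises the serialiser can round-trip the data...
decoded = pickle.loads(base64.b64decode(json.loads(line)["opaque"]))
assert decoded == tricky
# ...while every other consumer just passes the string through unmolested.
```

The TAP layer itself never needs to understand the payload; it only has to preserve a string, which even the simplest JSON implementation can do.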
Re: IETF
* Ovid [EMAIL PROTECTED] [2008-08-18 15:30]: Schwern, I can't tell from reading the references you provide whether or what you're saying is correct, but I *think* so. I think your initial mail was misleading and Schwern promptly misunderstood you. What Salve brought up is not that the *working group* needs to attend IETF meetings thrice yearly, but *the chairman* of the WG does. We’re talking about one person, the chair, not about the entire working group. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: JSON TAP Diagnostics?
* Andy Armstrong [EMAIL PROTECTED] [2008-08-18 17:35]: I prefer JSON aesthetically apart from any technical considerations. I don't actually find YAML all that readable. To programmers' eyes JSON looks more like code - presumably because it is :) YAML requires less quoting and backslashing. As a user of a language that lets me pick my own delimiters for strings, regexes, etc, I appreciate this. I do find YAML painful to write and much prefer JSON in that discipline. In fact I prefer it in nearly every discipline. The only situation where I like YAML better is when the contents of a data structure have to be printed in skimmable form and the data structure shape itself is not particularly important, only its contents. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: File/Line # (Was: IETF)
* David E. Wheeler [EMAIL PROTECTED] [2008-08-18 19:25]: I don't believe it's possible to get that info in JS, is it? Just seen: http://eriwen.com/javascript/js-stack-trace/ Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: scary and strange thing in FindBin
* Graham Barr [EMAIL PROTECTED] [2008-08-11 03:05]: On Aug 9, 2008, at 8:38 PM, Aristotle Pagaltzis wrote: * Todd Rinaldo [EMAIL PROTECTED] [2008-08-10 03:35]: What alternatives do you recommend? See Tom Heady’s reply on this thread. Which also has issues, albeit maybe fewer. Yes, it will fail in some very rare cases. However, it does so loudly and predictably. In contrast, FindBin tries to avoid failing by employing heuristics, and as we all know that is a fancy word for saying “it doesn’t work.” Unsurprisingly, when it fails, it fails silently with bizarre results – as it did for Gabor. Both failure modes are very rare, which means FindBin almost never needs to fall back on heuristics, which is why people tend to think I’m just nitpicking when I point out that it’s broken as designed and should not be used. But I dislike software that can fail silently with bizarre results as a matter of principle. And when the heuristic is there to catch an almost irrelevant failure mode, its extremely disproportionate surprise potential just isn’t worth it, IMO. IIRC when FindBin was written there were some systems where $0 was not a full path when the script was invoked via PATH, which was why FindBin was implemented to do the PATH search. But that was over a decade ago and I cannot remember which OS it was on, so who knows if it is still in use today. I guess it might make some sense under those circumstances. But in that case it should have done the PATH search only on affected platforms, not everywhere. (Ideally it would specifically probe, if possible in any way, for whether the heuristic was needed.) Anyway, shoulda coulda woulda… that horse has long left the barn. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: scary and strange thing in FindBin
* Gabor Szabo [EMAIL PROTECTED] [2008-08-09 12:15]: If I have the following code in a file called Makefile.PL use FindBin; print "$FindBin::Bin\n"; perl Makefile.PL prints /home/gabor/work/pugs no matter where the file is located. And whenever I tell people that FindBin is broken and they should not use it, they look at me like I told them I’m seeing ghosts. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: scary and strange thing in FindBin
* Todd Rinaldo [EMAIL PROTECTED] [2008-08-10 03:35]: What alternatives do you recommend? See Tom Heady’s reply on this thread. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Making CPAN ratings easy (was Re: CPAN Ratings and the problem of choice)
* Eric Wilhelm [EMAIL PROTECTED] [2008-07-04 10:25]: Assuming that we could obtain download counts (which we can't) And even if we could obtain them from the entire mirror network, they would still be worthless. Case in point: I keep my own minicpan mirror, so the download hits that my upstream mirror logs from me are worthless. Then there’s also the issue of OS vendors providing packages of Perl modules (Debian, RedHat). Even if we could get those download stats, they would still be meaningless, of course (they don’t actually say anything about how much a module gets used). But that is hypothetical; reality is that you don’t get even as far as that. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: The Problem with Non-Functional Metrics
* chromatic [EMAIL PROTECTED] [2008-06-30 22:05]: Where CPANTS works now is identifying actual, functional problems with distributions: missing licensing information, not extractable, POD errors, invalid META.yml, et cetera. Where CPANTS doesn't work is attempting to bolt on several other highly-ambiguous metrics in order to turn the ranking into a differentiator between similar distributions. Which is what I said. The difference is that I don’t see how the game is particularly harmful as long as it’s carefully applied to functional problem metrics only. In that case I see it as a useful social hack. The hall of fame as an indicator of “these authors consistently take the necessary care to make sure you won’t have problems with their software for no good reason” seems fine to me, at least. The hall of shame is rather more debatable. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: The uselessness of arbitrary Metric gaming
* Eric Wilhelm [EMAIL PROTECTED] [2008-06-30 23:15]: # from chromatic # on Monday 30 June 2008 13:01: Where CPANTS works now is identifying actual, functional problems with distributions: missing licensing information, not extractable, POD errors, invalid META.yml, et cetera. There is some use in that, but what is the line before c? I think there is at least one very clear line, which is to be drawn at “will this cause problems for users who don’t even care about the module itself and are installing it merely as a prereq for something else?” An incorrectly packaged distro clearly will. It's a slippery cliff between what works and what some people happen to think. As in: why isn't uses Module::Build, contains no Makefile.PL, and requires 5.8.8 a point? See above. What about does not include cargo-culted Test::PodCoverage or Test::Pod? These things cause problems for actual users. This should be a metric per the above criterion. And I’m not joking. (The problem is that one of the aspects of kwalitee was supposed to be that the module is properly documented – regardless of how good the documentation is, it should follow proper form, at least. Unfortunately, since a design goal for CPANTS is not to run any code from the distro, testing POD coverage and even POD itself is not actually doable because you don’t know what the API will look like at runtime (AUTOLOAD, autogenerated functions or methods, etc etc etc). So instead of measuring whether the POD has good coverage, CPANTS measures if it has tests to check the POD coverage… I honestly like and respect Thomas, so I don’t want to insult him at all, but the absurdity of this line of thought is mindboggling. How he arrived at the conclusion that this would provide any sort of useful data point is unfathomable to me.)
Has no CamelCase?, No methods expect ({arbitrary => extra => 'braces'})?, isa() is a method?, overrides can()?, No OPTIONS|AS|NUMERIC|CONSTANTS?, Does not use Class::Accessor?, Mutators are set_foo()?, Does not inherit Exporter?, META.yml includes keywords?, META.yml links to version control?, Uses Moose!? All of those are either not measurable, directly or at all, or have significant valid exceptions. If we stick to metrics that can be measured directly without reliance on any kind of proxy, I think the slope won’t be nearly so slippery as you make it out to be. Just the fact that we disagree about a metric would be a good indicator that it isn’t a functional CPANTS metric. Do you disagree that a broken tarball is bad kwalitee? No? Thought so. But *even if* the line was difficult to find, it seems to me that just because we were unable to agree exactly where it is doesn’t imply that we’d be unable to agree when a metric is way over it. I can’t imagine anyone ever proposing a Has No CamelCase metric in all seriousness and without getting struck down swiftly. Regardless of whether the metrics are ambiguous or debatable, they can only be used for comparisons between modules if you open a new browser window/tab and manually look up each one. Disagree. Whether the module is packaged correctly has nothing to do with its purpose and is a piece of information that stands entirely on its own. All of it is potentially useful data to someone, but if it is going to be anything besides a game, it probably needs more ways to be queried. Agree on that. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: The uselessness of arbitrary Metric gaming
* Eric Wilhelm [EMAIL PROTECTED] [2008-07-01 00:15]: # from Aristotle Pagaltzis # on Monday 30 June 2008 14:42: But *even if* the line was difficult to find, it seems to me that just because we were unable to agree exactly where it is doesn’t imply that we’d be unable to agree when a metric is way over it. I can’t imagine anyone ever proposing a Has No CamelCase metric in all seriousness and without getting struck down swiftly. If all of the debatable metrics were provided as information, rather than held up as a golden target, there would be no need to strike down such a thing. Maybe, but my point was about whether metrics can be inarguable, and how to decide which metrics are. You seemed to be saying that such a question offered a slippery slope, and that therefore it was not possible to draw a line between debatable metrics and non-debatable. I asserted that doing so is quite simple. When I said that a metric would be struck down I meant it in the sense of proposals to include debatable metrics in the group of functional ones. That would, hopefully, indeed lead to a swift strike-down. But yes, I agree with your point insofar as that once you have functional (and thus universally applicable and universally helpful) metrics separated out from the inaccurate and in any case debatable and merely informational ones, then sure, you can provide those as neutral data points in a non-judgmental adjunct list without ill effects. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: The uselessness of arbitrary Metric gaming
* David Cantrell [EMAIL PROTECTED] [2008-07-01 00:05]: Surely you can at least check that all POD is well-formed without running any code from the distribution in question? Not entirely, although in retrospect that doesn’t matter. The issue is that if you have multi-line strings or heredocs or other such multi-line special constructs, perl may parse the file differently from a naïve POD parser. However, that doesn’t really matter, as all POD parsers used in practice are naïve, so you *don’t* want to parse the file exactly as perl would if you’re going to validate the POD. So please disregard that part; it is only infeasible to check the POD exactly as perl would see it (much less its style), whereas it is entirely feasible to check that the POD will be parsed correctly by practical POD processors. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
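[Ed.: the feasible half of that check can be sketched with Pod::Checker; the file path here is a placeholder, not from the thread.]

```perl
use strict;
use warnings;
use Pod::Checker;

# Check that the POD in a file will be parsed cleanly by practical
# POD processors -- well-formedness only, not coverage or style.
# podchecker() returns the number of syntax errors found, or -1 if
# the file contains no POD at all. "lib/Foo.pm" is a placeholder.
my $errors = podchecker( "lib/Foo.pm", \*STDERR, -warnings => 1 );

if    ( $errors == -1 ) { print "no POD found\n" }
elsif ( $errors == 0 )  { print "POD ok\n" }
else                    { print "POD has $errors error(s)\n" }
```

Note that this runs no code from the distribution, which is exactly the CPANTS design constraint under discussion.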
Re: About tidying up Kwalitee metrics
* Ovid [EMAIL PROTECTED] [2008-06-29 10:55]: --- On Sat, 28/6/08, Aristotle Pagaltzis [EMAIL PROTECTED] wrote: I think the game is actually an excellent idea. The problem is with the metrics. Here are some metrics that are inarguably good: • has_buildtool • extracts_nicely • metayml_conforms_to_known_spec One problem with this is when you get dinged for an unknown key. This means you can't extend your meta YAML file. It's a hash disguised as YAML. There shouldn't be a problem with adding to it, only subtracting from it. On a side note, I still don't understand why I sometimes get dinged for CPANTS errors. Yes, but that doesn’t detract from my point. If those metrics are faulty, they should and *can* be fixed – and either way they measure good form directly, as good metrics should. The problems with them don’t fall in the same category as looking for arbitrarily chosen proxies for unmeasurable aspects of good form (or even style). Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Proposed (optional) kwalitee metric; use re 'taint'
* chromatic [EMAIL PROTECTED] [2008-06-24 08:30]: On Monday 23 June 2008 22:57:26 Paul Fenwick wrote: * Does anyone think this is a bad idea? Absolutely. Is there someplace this should be going besides from CPANTS? It's definitely a common mistake that module authors can easily fix. Perl::Critic? Took the words out of my, well, fingers. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: pgTAP
* Michael Carman [EMAIL PROTECTED] [2008-06-14 03:25]: Who's running that list, anyway? Andy Armstrong. C.f. http://www.nntp.perl.org/group/perl.qa/;[EMAIL PROTECTED] http://www.nntp.perl.org/group/perl.qa/;[EMAIL PROTECTED] http://www.nntp.perl.org/group/perl.qa/;[EMAIL PROTECTED] I tried to sign up a week or two ago but never got any confirmation (or messages). I just resubmitted my request. I had trouble myself at first: http://www.nntp.perl.org/group/perl.qa/;[EMAIL PROTECTED] http://www.nntp.perl.org/group/perl.qa/;[EMAIL PROTECTED] However, many people have since registered without any problems, as David Wheeler just did, so I have no idea if your troubles are related to mine. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: pgTAP
Hi David, I think you wanted to send this to [EMAIL PROTECTED] Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: pgTAP
* David E. Wheeler [EMAIL PROTECTED] [2008-06-11 04:05]: I guess. I'm not on that list. I was just thinking, do QA in SQL! Well, that’s the point of that list. :-) We’re trying to make TAP less of a Perl echo chamber thing, and from that desire the site and list arose. Your pgTAP experiments would be a perfect example of the sort of efforts we hope to see more of. As you wrote, just a little tinkering and boom, you’re most of the way to harnessing all the existing testing infrastructure (pun intended); that’s the power of TAP. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: TODO Tests
* chromatic [EMAIL PROTECTED] [2008-05-18 07:20]: People already have to modify the TODO test to add whatever kind of positive assertion you postulate; why is writing a separate test a barrier? Because it’s hidden behind an internal interface that would have to be exposed? Or any other reason why code we have to deal with in the real world ends up harder to test than it should be in an ideal world. Modifying one TODO test in such cases would be far cheaper than doing all the legwork necessary to write a proper test case. The result is not as good, of course. If expedience demands forgoing the extra work, however, then having a cheap, suboptimal option to resort to is better than having to forgo testing entirely. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: TODO Tests
* Michael G Schwern [EMAIL PROTECTED] [2008-05-18 05:30]: Aristotle Pagaltzis wrote: As a technique, paying attention to how broken code changes, why does it matter that broken code breaks differently? What does this information tell you that might fix code? It means there is a known internal dependency on some other part of the code that is not being tested directly, either itself or the interaction therewith. You want to be alerted when something changes the result of this interaction. This is very reasonable. I believe this is the point where we diverge. The code's busted, why get fussy over how busted it is? You're probably going to rewrite that bit anyway. You’re not following. 1. There is non-broken code which isn’t being tested directly. 2. There is a test that ensures its correctness, but only indirectly, as part of testing something else. 3. That something else is currently broken, so the test is annotated TODO. End result: if the untested non-broken code breaks, you don’t notice. Arguably, you should write a test that covers the non-broken code directly so you don’t need to care about the TODO’d test. But that may be comparatively hard for a number of conceivable reasons. Ensuring that the TODO continues to break in the expected fashion may well be cheaper than any of the “proper” options and provides similar safety. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
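[Ed.: the arrangement described in this post can be sketched as follows; frobnicate() and its return values are made up for illustration.]

```perl
use strict;
use warnings;
use Test::More;

sub frobnicate { return 3 }    # stand-in for the currently broken code

my $got = frobnicate(2);

TODO: {
    local $TODO = "frobnicate() does not yet return the right answer";
    # The eventually-correct expectation, annotated TODO for now.
    is( $got, 4, "frobnicate() returns the correct result" );
}

# The cheap safety net: assert the *known* wrong result, so that a
# change anywhere in the untested code it depends on surfaces as a
# real failure instead of going unnoticed inside the TODO.
is( $got, 3, "frobnicate() still breaks in the expected fashion" );

done_testing();
```

When the underlying interaction changes, the second assertion fails loudly even though the TODO test keeps quietly failing as expected.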
Re: TODO Tests
* Michael G Schwern [EMAIL PROTECTED] [2008-05-14 08:50]: As I understand it, you want to know when broken code breaks differently. Indeed. I can sort of see the point as a regression test... sort of. But if you're at the point of fussiness that broken code has to break in a specific way... is it broken any more? Depends. I agree with others on this list, if you expect it to return 3, put in a test that says it should return 3. As a technique, paying attention to how broken code changes, why does it matter that broken code breaks differently? What does this information tell you that might fix code? It means there is a known internal dependency on some other part of the code that is not being tested directly, either itself or the interaction therewith. You want to be alerted when something changes the result of this interaction. This is very reasonable. Is the opportunity cost of paying attention to and maintaining specific tests for broken code worthwhile? It might be harder to test the depended-upon part of the code, or it might require reaching into the blackbox. One might argue, as chromatic apparently did in this thread, that this means one should just write the proper test that directly asserts the things one is interested in. Personally I see that sort of conviction as counterproductive. It’s always better to lower the bar, making it easy to get a little extra safety with a little extra effort, even if there is a Right Way To Do It that gives much more safety at the cost of much more effort. Cf. `no_plan`, `all_done` and the like. Some testing is always better than none. It seems like a dodgey regression to me of the “something changed but we don't know if it's significant” variety that generates so many false alarms and so much busy work keeping it up to date. A classic example is a test that does a simple string compare with a big woodge of HTML.
Any inconsequential change to the formatting is going to trip the test leading to many false negatives and eventually ignoring the test. Again, that depends. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: TODO Tests
* Ovid [EMAIL PROTECTED] [2008-05-12 11:35]: Alternatively, persistent TAP could potentially track TODO results and handle the $WAS for you, but this is quite a ways off and has the problem that we cannot always identify which tests are which. Plus, you still need a way to specify which results conform to expectations and which don’t. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Munging output when running warning tests
* Jonathan Rockway [EMAIL PROTECTED] [2008-05-03 21:15]: You said you're trying to emulate $@, but $@ can be changed out from under you rather easily, so instead of: eval { foo() }; if($@){ error } The defensive programmer will write: my $result = eval { foo() }; if(!defined $result){ error } # use $@ for more details, if necessary. An even more defensive programmer will write: my $result; if ( eval { $result = foo(); 1 } ) { # ... } That’s not necessary in every case, but if foo() can legitimately return undef, you need it. Unfortunately the expression in `eval` is often more complex, so it can be hard to make readable. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
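[Ed.: laid out as runnable code, the three levels of defensiveness from this exchange look like this; foo() is a hypothetical function that may legitimately return undef.]

```perl
use strict;
use warnings;

sub foo { return undef }    # hypothetical: undef is a legitimate return value

# Naive: $@ can be clobbered (by a DESTROY method, a signal handler,
# a nested eval) between the eval and the check.
eval { foo() };
warn "error (maybe): $@" if $@;

# Defensive: check the return value instead of $@ -- but this
# misreports foo()'s legitimate undef return as an error.
my $result = eval { foo() };
warn "error (or just a legitimate undef)" if !defined $result;

# More defensive: the block yields 1 only if foo() did not die, so a
# legitimate undef return is no longer mistaken for a failure.
my $ok = eval { $result = foo(); 1 };
warn "real error: $@" unless $ok;
```

The third form is the one that distinguishes "foo() returned undef" from "foo() died", at the cost of a slightly less readable eval expression.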
Re: Munging output when running warning tests
* nadim khemir [EMAIL PROTECTED] [2008-05-01 18:25]: On Tuesday 29 April 2008 19.52.08 Aristotle Pagaltzis wrote: If the warning is normal, how about disabling it? If a warning does not signify a problem, there is absolutely no point in having perl emit one anyway. It's a perl warning (in an eval) not something I output. Yes. Perl warnings can be disabled. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Munging output when running warning tests
* nadim khemir [EMAIL PROTECTED] [2008-04-27 14:00]: Since I didn't remember that the warnings were normal, I had to dig a bit to find that out. If the warning is normal, how about disabling it? If a warning does not signify a problem, there is absolutely no point in having perl emit one anyway. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
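[Ed.: for reference, a lexically scoped `no warnings` is the standard way to do this; the 'uninitialized' category below is only an example.]

```perl
use strict;
use warnings;

my $count;
{
    # Disable just this one warning category, just for this block;
    # all other warnings remain in effect outside and inside it.
    no warnings 'uninitialized';
    $count = $count + 1;    # would otherwise warn "Use of uninitialized value"
}
print "count: $count\n";    # count: 1
```

Scoping the pragma to the smallest possible block keeps the warning useful everywhere else.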
Re: User Supplied YAML Diagnostic Keys: Descriptive Version
* chromatic [EMAIL PROTECTED] [2008-04-17 18:45]: IETF's no standards without at least two implementations, and one of them public rule That’s the W3C, actually. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
OT: muttrc (was: User Supplied YAML Diagnostic Keys: Descriptive Version)
* Nicholas Clark [EMAIL PROTECTED] [2008-04-14 08:00]: $ grep -c 'ignore X' ~/.muttrc 100 That's the ones I've collected that I don't care about. And some of those are common prefixes. You know that you can use wildcards to ignore everything (or just big swathes of stuff) by default and then selectively unignore some of non-gunk, right? Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: User Supplied YAML Diagnostic Keys: Descriptive Version
* David E. Wheeler [EMAIL PROTECTED] [2008-04-13 21:00]: On Apr 13, 2008, at 11:37, Michael G Schwern wrote: A) Just reserve ASCII [a-z]. This is very easy to check for but I'm worried it's carving out too small a space. Why would it be too small? I mean, that's a *lot* of words you can use. I don't have any particular reason. Just a feeling that 7-bit ASCII should be good enough for anyone is not such a safe position for anything wanting to look forward. Well, it's not for *everyone*. Just for the folks who are adding keys to TAP itself, which is a pretty small set of people, all things told. But anyway, I agree that B seems fine and it can always be reduced to A later, provided, of course, that we only use ASCII or Latin-1 characters anyway, which seems quite likely to me. I agree with David on all counts. [a-z] seems perfectly sufficient to me, but saying “anything for which POSIX `islower` returns true” is acceptably precise. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: TAP YAML Diagnostics
* Eric Wilhelm [EMAIL PROTECTED] [2008-04-06 18:45]: (perhaps allow non-first numbers too?) /^[a-z][a-z0-9_]*$/ ? ++ Preceding the key with an underscore or some kind of squiggle should be enough to make it safe. Having ToWrite AllPrivateKeys InStudlyCaps OR_EVEN_SHOUT_THEM_IN_FULL_CAPITALISATION would get terribly old very quickly. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: User Supplied YAML Diagnostic Keys
* Ovid [EMAIL PROTECTED] [2008-04-07 10:35]: However, it's been suggested that we do this: not ok 2 - some test --- results: have: ... X-want: ... Secure: y ^^ Is that supposed to be `X-secure`? ... ok 3 - another test By requiring user keys to begin with 'X-', it's visually distinct, immediately clear to the user, it follows conventions used in mail and HTML headers, and if it's wrong, it's easy to change. A full `X-` would be a bit heavy, visually. How about requiring punctuation as the first character? not ok 2 - some test --- results: have: ... _want: ... _secure: y ... ok 3 - another test Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: User Supplied YAML Diagnostic Keys
* Smylers [EMAIL PROTECTED] [2008-04-07 16:00]: Would that mean that _secure, -secure, ~secure, +secure, !secure, and so on are all distinct private keys? I'm not sure that's advantageous; picking a single way of denoting privacy would seem less confusing. * Ovid [EMAIL PROTECTED] [2008-04-07 15:30]: --- Aristotle Pagaltzis [EMAIL PROTECTED] wrote: A full `X-` would be a bit heavy, visually. How about requiring punctuation as the first character? Such as a colon or dash and run the risk of breaking YAML parsers? OK, pick a single character then. I’m not particularly invested in the idea of it being any punctuation at all. Requiring underscore specifically, say, or something else if you like something else better, would work for me just fine. Plus, by just making it 'X-', it's easy to sort, user supplied keys are at the bottom and it's simple to extract those keys. However, at the end of the day, it's Schwern you'll have to convince. He's quite adamant about upper-case characters. Yeah, because his writing system affords that sort of typographic distinction and does so without trouble. But it’s not the only one, and you don’t even have to go very far to run into all sorts of entertaining surprises: what happens when the uppercase form of i is not I but İ, and the lowercase of I not i but ı? (Turkish locale.) I suggest picking a specific set of characters, specifying that keys consisting entirely of this set of characters are reserved. I’d also suggest denoting private-use keys with initial punctuation (be that a particular character, if you prefer, rather than any from a set of them as I proposed), declaring keys violating both rules to be a parse error. Hey, what can I say? I’m a Perl programmer. I like sigils. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
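[Ed.: under the scheme proposed in this post, a key classifier comes down to two regexes; the function name and the underscore-only private sigil are illustrative, not anything the thread settled on.]

```perl
use strict;
use warnings;

# Reserved keys: ASCII lowercase letter first, then lowercase letters,
# digits, or underscores. Private keys: an underscore sigil first.
# Anything else would be a parse error under the proposal.
sub classify_key {
    my ($key) = @_;
    return 'reserved' if $key =~ /\A[a-z][a-z0-9_]*\z/;
    return 'private'  if $key =~ /\A_[a-zA-Z0-9_]+\z/;
    return 'invalid';
}

print classify_key($_), ": $_\n" for qw( have _want Secure );
# reserved: have
# private: _want
# invalid: Secure
```

Being pure ASCII, the rule sidesteps the Turkish-locale case-mapping surprises entirely.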
Re: Preparing a new Test::* module
* Randy J. Ray [EMAIL PROTECTED] [2008-04-07 02:00]: Right now, I plan on: * XML validity against a DTD * XML validity against an XML schema * XML validity against a RelaxNG schema * YAML validity * JSON validity * Content testing via some combo of Test::Deep and/or built-in capabilities from Test::JSON et al This is based mostly on Ovid's post, but also on the realization that for these, most of the desired kinds of tests are the same: Does it parse? Is it valid against the spec? Is the content what I was expecting (allowing for acceptable variations in textual format)? I’m not sure it makes so much sense to put those all together. YAML and JSON map directly to Perl data structures, so a “schema” test for both of them would be identical and would amount to something like Data::FormValidator, Params::Validate and friends. XML, on the other hand, has a much more complex data model that does not at all map directly to a Perl data structure. XML is at its best when it comes to documents, not data structures. So I’m not sure it makes a whole lot of sense to mix it all up. I also don’t think it makes a lot of sense to treat JSON and YAML as particularly distinct, or that it makes a lot of sense to pay any attention to their on-disk form as opposed to the resultant in-memory data structure. The only question that’s identical across the board is “does this parse at all,” which in the case of XML means “is this well- formed” but **not** “is this valid according to that schema or DTD or grammar.” Beyond that, the questions diverge: in the case of YAML and JSON, you want Test::Deep/Data::FormValidator/Params::Validate/something along those lines to operate directly on the data structure. In the case of XML, you want to validate against some schema. The shortcuts and conveniences you’ll want in a Test:: module differ between those cases. Right now, I'm leaning towards Test::Markup.
That might wind up the YAML guys a bit, though (which is actually a quite-acceptable bonus to me), possibly the JSON camp as well. It’s definitely the wrong name. By that token, the output of Data::Dumper is markup. I, uh, don’t think that’s right. So I am also considering Test::Serialization, since all of these amount to forms of serialization. Even (some would say *especially*) XML. Indeed, XML is a serialisation format… but for a very different kind of thing than what YAML and JSON both are serialisation formats: DOM here, Perl data structure there. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
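[Ed.: the "operate directly on the data structure" point for JSON/YAML can be sketched like this; the payload and expectations are invented for illustration.]

```perl
use strict;
use warnings;
use Test::More;
use Test::Deep;
use JSON::PP;

# For JSON (and YAML alike), the on-disk form hardly matters: decode,
# then assert against the resulting Perl data structure directly.
my $json = '{"name":"example","tags":["a","b"],"count":3}';
my $data = decode_json($json);

cmp_deeply(
    $data,
    { name => "example", tags => [ "a", "b" ], count => 3 },
    "decoded structure matches expectation",
);

done_testing();
```

An XML document, by contrast, would be checked against a DTD or schema before its content is examined, which is why the two families of tests want different conveniences.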
Re: An alternate view on deferred plans
* Michael G Schwern [EMAIL PROTECTED] [2008-03-31 08:00]: Aristotle Pagaltzis wrote: What it protects you from is dying half-way through the tests without the harness noticing. Of course, that’s by far the most common failure mode. I don't want to drag out the plan vs no_plan argument, but I do want to clear up this common misconception. Death is noted by both Test::More and Test::Harness and has been for a long time. Recent versions of Test::More close off a bug that caused death or non-zero exit codes to be lost in certain cases. If you continue to experience that, report it. It is a bug. The only way you can abort the test halfway through using no_plan and get a success is with an exit(0). That scenario is extremely rare, but I've considered adding in an exit() override to detect it. Except that the test program might be running at the other end of an HTTP connection. Or at the other end of a serial port. Or the harness might be parsing an archived TAP stream. Or a TAP archive generated offline in batch mode. Or… Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: An alternate view on deferred plans
* Buddy Burden [EMAIL PROTECTED] [2008-03-29 20:55]: I just think that having code where every time you change things in one place, you have to make sure you change something somewhere else isn't a good thing. In any other programming scenario, I think most everyone would agree with me. I agree with you even when it comes to plans. The fact that they require you to make lock-step changes in different places of the code is unfortunate. But when it comes to testing, doing this in terms of tests is not only okay, it's considered best practice. No, just intrinsically inevitable, as far as I can tell anyway. (However, for many cases there are patterns you can adopt to minimise the problem – of course, that still imposes a tax on every unit within your test file.) A plan is a declaration, much like explicit static typing, that affords you certain protections at a certain cost. Experience is that by and large, the bottom line for plans is positive: for a small but continuous cost they will catch some very hard to debug false negatives. They are basically an insurance, in this sense. And as with insurances, it is clearly possible that the expense is disproportionate with the safety you get in return. But even when the protection you get is worthwhile, the necessity of distributed changes doesn’t suddenly become a good thing. It’s just a means that’s justified by the ends. Overall, I’m reminded of something MJD said in his Program Repair Shop and Red Flags talk at YAPC::Asia 2007: Also I’m trying to get away from right and wrong. This is not about morals; it’s engineering. And so, well, things can be suboptimal – maybe they aren’t designed as well as they could be –, but they’re not wrong, probably. (The recording is available online, btw, at http://video.google.com/videoplay?docid=-4037440245833870135. If you haven’t seen it, I can only recommend you do. None of it is surprising in the least for a seasoned good programmer (or so I’d hope), but it’s very lucidly explained.) 
Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: An alternate view on deferred plans
Hi Buddy, * Buddy Burden [EMAIL PROTECTED] [2008-03-31 00:05]: Well, do you agree with this assessment: Having a plan stated as an exact number of tests to be run is a solution to 2 problems. The first is that a test harness must be able to tell when not enough tests have been run. The second is that a test harness must be able to tell when too many tests have been run. Using a success flag at the end of a test script as opposed to a numeric plan solves, equally well, one of those two problems. Since it does not solve the trickier of the two, it is not suitable as a general-purpose solution for all testing. However, since it does solve the more common of the two (the far more common, one might argue), its advantages make it useful, especially for simple test scripts. I was not commenting on the utility of an “all done” flag – merely on your astonishment that people would consider it best practice to do something that required making lock-step changes in multiple places of the code. Having an “all done” flag is definitely useful and I’m not arguing against it. Note that it doesn’t quite protect you from running too few tests either. You may botch some conditional in your test program and end up skipping tests silently, in which case you will still reach the `all_done()` line, and it’ll look as if all was fine. What it protects you from is dying half-way through the tests without the harness noticing. Of course, that’s by far the most common failure mode. For my money, the loss of sense of security you get from the possibility that you might run across one of those uncommon cases and not notice it is about balanced out by the greater likelihood of creating more tests and convincing more programmers to use tests because tests are easier to create and maintain. 
In fact, I'm starting to sort of view it as a graduated thing: deferred plan all the time, 'cause it's easy, and when you become more seasoned, you naturally gravitate towards numeric plans for their increased sense of security. And/or, it may make sense to use deferred plans during development, but switch to numeric plans before releasing anything to CPAN. Yes in all points. For myself, I think chromatic's warning is strongly making me lean towards the idea that numeric plans are far more useful than I initially gave them credit for, and I personally will probably switch over to numeric plans--at least before I release anything to CPAN, even if I keep using deferred plans for initial development. But as my primary goal right now is to get a team of 3-7 programmers who've been writing tests only haphazardly at best for the past 5-10 years to start writing tests like crazy, I think having deferred plans to KISS is really the way to go. Absolutely – getting them to write tests without plans is much better than writing no tests at all. I’m not dogmatic; even a little improvement is still improvement. Once they write a test and it catches ahead of time a serious bug that would have been a pain to find otherwise, they’ll come around all by themselves, anyway. :-) (Cf. http://c2.com/cgi/wiki?TestInfected). Overall, I'm reminded of something MJD said in his Program Repair Shop and Red Flags talk at YAPC::Asia 2007: Also I'm trying to get away from right and wrong. This is not about morals; it's engineering. And so, well, things can be suboptimal – maybe they aren't designed as well as they could be –, but they're not wrong, probably. True, I probably overemphasized my frustration on that point. Obviously there _can_ be good reasons to violate the principle, and I figured there probably were in this case. I just wasn't seeing them, then. But now I do. :-) Heh, no problem. 
Note that, as I said, I mean this WRT testing and plans as well: just because you have no plan doesn’t mean your testing is “wrong.” You just have room to improve. :-) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
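[Ed.: the trade-off discussed across this thread reads roughly like this in code; the two assertions are placeholders.]

```perl
use strict;
use warnings;
use Test::More tests => 2;   # numeric plan: the harness knows exactly how
                             # many results to expect, so dying after test 1
                             # is caught even if the exit status gets lost
                             # (archived TAP, TAP over a socket, etc.)

ok( 1 + 1 == 2,      "first thing" );
ok( "xyzzy" =~ /y/,  "second thing" );

# The deferred-plan alternative omits the count above and instead ends
# the script with done_testing() -- cheaper to maintain, but a botched
# conditional that silently skips tests still goes unnoticed.
```

As the thread notes, neither form catches a conditional that skips tests *and* adjusts expectations; the numeric plan simply catches the more common early-death case in more transports.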
Re: MySQL + TAP == MyTAP
Hi Michael, * Michael G Schwern [EMAIL PROTECTED] [2008-03-29 11:35]: Stumbled across this while finding an alternative to libtap for testing C (it has some sort of issue linking with this hairy project I'm working on). Apparently MySQL wrote their own TAP library for C. From http://dev.mysql.com/doc/mysqltest/en/unit-test.html The unit-testing facility is based on the Test Anything Protocol (TAP) which is mainly used when developing Perl and PHP modules. To write unit tests for C/C++ code, MySQL has developed a library for generating TAP output from C/C++ files. Each unit test is written as a separate source file that is compiled to produce an executable. For the unit test to be recognized as a unit test, the executable file has to be of the format mytest-t. For example, you can create a source file named mytest-t.c that compiles to produce an executable mytest-t. The executable will be found and run when you execute make test or make test-unit in the distribution top-level directory. Here's the docs. http://www.kindahl.net/mytap/doc/index.html I think you wanted to send this to the TAP list. :-) Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
Re: Wide character support for Test::More
* David E. Wheeler [EMAIL PROTECTED] [2008-02-25 20:05]: I'd probably make :utf8 the default, and apply it to STDOUT and STDERR if running on Perl 5.6 or later. That way, any time something is emitted via diag() that is in Perl's internal encoding, it will work (provided, of course, that the user's terminal supports UTF-8, which is not your problem). It is *precisely* his problem if he assumes that every terminal in the world is UTF-8 – particularly if this causes tests to emit octets that the terminal interprets as control sequences, with the end result being a screwy, unusable terminal. Providing a `layer` option in addition to duplicating the dup’d filehandles’ layers by default would be a nice touch; blithely making UTF-8 the default would not. Regards, -- Aristotle Pagaltzis // http://plasmasturm.org/
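[Ed.: the opt-in alternative argued for here can be sketched with Test::Builder's real output handles; the choice of the UTF-8 layer is the test author's, not a default.]

```perl
use strict;
use warnings;
use Test::More;

# Opt in to an output encoding explicitly rather than having Test::More
# assume every terminal is UTF-8; an author on a Latin-1 terminal would
# pick ":encoding(ISO-8859-1)" here instead.
my $builder = Test::More->builder;
binmode $builder->output,         ":encoding(UTF-8)";
binmode $builder->failure_output, ":encoding(UTF-8)";
binmode $builder->todo_output,    ":encoding(UTF-8)";

ok( 1, "wide characters in test names now round-trip: \x{263A}" );
done_testing();
```

A `layer` import option on Test::More, as suggested above, would amount to doing exactly this on the user's behalf.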