Re: [DOCS] Documentation tools
Robert Spier [EMAIL PROTECTED] wrote:

> Pod::Simple is relatively easy to subclass. And Sean is pretty receptive to changes.

[ more referenced source inside ]

- icu
- lib/Test/*
- lib/Pod/*

are all standard thingies. I don't think we're going to reinvent wheels, nor copy existing ones, so I'd vote for just removing all of that from CVS.

All non-trivial packages have some preliminaries. Some prominent notes in README and INSTALL can provide the necessary steps for getting that source. If we are going towards real user releases, we can provide complete packages including everything; for now it's just simpler not to ship (and maintain[1]) it all.

*If* we need additions or changes to such preliminary source code, please work out a simple scheme that works: "$ man patch" comes to mind. I'd really like to have that clarified by the next release at best.

leo

[1] IIRC there was already one icu update upstream after it finally got into CVS - but the whole code is still unused.
Re: [DOCS] Documentation tools
On 6 Feb 2004, at 22:32, Leopold Toetsch wrote:

> - icu
> - lib/Test/*
> - lib/Pod/*
> are all standard thingys. I'm not thinking that we are gonna reinventing wheels nor that we are gonna copying existing wheels, so I'd vote for just removing all that from CVS.

yep

> All non-trivial packages have some preliminaries. Some prominent notes in README and INSTALL can provide the necessary steps, how to get that source.

I'd like to see Configure.pl say what's needed, and do what it can to help if requested.

Meanwhile, I've been adding some perl code of my own which should give a more parroty feel to the docs.

http://homepage.mac.com/michael_scott/Parrot/docs/html/index.html

There are some links to actual files in the distribution (READMEs etc.) which will be broken because it's not up there, but they work ok locally. As you can see, the structure is lifted from the wiki; this is because it saved me thinking while I got it working. The Item, Group and Section modules in Parrot::Docs will make it fairly easy to set up alternative subsystem-based views of the content instead.

Eagle eyes will note that I put the parrotcode.org css and small parrot png in docs/resources so that they work without a network. Hope I haven't transgressed again.

Oh, btw, while googling for parrot and leap I found this (indirectly):

http://news.bbc.co.uk/2/hi/science/nature/3430481.stm

Mike
Re: PIR version of Data::Dumper
Tim Bunce [EMAIL PROTECTED] wrote:

> On Thu, Feb 05, 2004 at 02:35:38PM +0100, Leopold Toetsch wrote:
>> Just the opposite, it's guaranteed to be not the same even on one platform, albeit a srand()-like call is still missing to get really random key order.
> So it would be really nice to have a Data::Dumper be able to sort the keys, like the Perl one now can.
> Tim.

That's not specific to Data::Dumper. We are just lacking a sort routine.

leo
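Perl 5's Data::Dumper grew a Sortkeys option for exactly this. As a language-neutral illustration of why sorting keys makes dumps deterministic regardless of internal hash order (a sketch in Python, not Parrot or Perl code):

```python
import json

def dump_sorted(d):
    """Serialize a dict with keys sorted, so the output is deterministic
    regardless of insertion order or internal hash layout."""
    return json.dumps(d, sort_keys=True, indent=2)

# Two dicts with the same contents but different insertion order
# produce byte-identical dumps once keys are sorted.
a = {"b": 2, "a": 1, "c": 3}
b = {"c": 3, "a": 1, "b": 2}
print(dump_sorted(a) == dump_sorted(b))  # True
```

Without sort_keys the dump would depend on whatever order the table hands back, which is the non-reproducibility the thread is about.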
Re: Alignment Issues with *ManagedStruct?
Gordon Henriksen [EMAIL PROTECTED] wrote:

> Maybe you ought to capitulate to the hierarchical nature of it all and simply push on another struct layout specifier to handle the nesting.

Exactly, that's the plan:

  .DATATYPE_STRUCT
  .DATATYPE_STRUCT_PTR

are already in CVS.

leo
Re: [DOCS] Documentation tools
Suppose I could make a few changes to Pod-Simple, then our problem would be solved. But, being serious, say I'd decided to use Template-Toolkit: it would never have occurred to me to shove all of that in CVS. It always surprised me that ICU was there, rather than just what was needed to get it to work. So, it seems just to be a question of adding a prerequisites phase to the config. I would propose that we leave Pod-Simple in CVS until I have time to implement that, then we can delete it (promise).

Mike

On 6 Feb 2004, at 01:39, Robert Spier wrote:

>> I can possibly help it, so it's ok by me to delete lib/Pod, if that's the consensus.
> I'm not sure what the consensus is. But we should probably come to one.
> -R
Re: Alignment Issues with *ManagedStruct?
On 02/05/04 Uri Guttman wrote:

> with this struct (from leo with some minor changes):
>
>   struct xt {
>       char x;
>       struct yt {
>           char i, k;
>           int j;
>       } Y;
>       char z;
>   } X;
>
> and this trivial script (i pass the filename on the command line): [...] i get this lovely output:
>
>   struct yt
>       char i : offset 0
>       char k : offset 1
>       int j : offset 2
>   struct xt
>       char x : offset 0
>       struct yt Y : offset 1
>       char z : offset 7
>
> [...] BTW, this was on a sparc/solaris box.

The offsets look incorrect. On basically every modern 32 or 64 bit OS (with sizeof(int) == 4) it should look like:

  struct yt (size=8, alignment=4)
      char i : offset 0
      char k : offset 1
      int j : offset 4
  struct xt (size=16, alignment=4)
      char x : offset 0
      struct yt Y : offset 4
      char z : offset 12

lupus

--
[EMAIL PROTECTED] debian/rules
[EMAIL PROTECTED] Monkeys do it better
Re: Alignment Issues with *ManagedStruct?
Paolo Molaro [EMAIL PROTECTED] wrote:

> The offsets look incorrect. On basically every modern 32 or 64 bit OS (with sizeof(int) == 4) it should look like:

Yeah. But in the meantime Parrot should calculate correct offsets :)

leo
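Lupus's expected numbers can be checked mechanically. A sketch using Python's ctypes, which lays out structs with the platform C ABI's alignment rules (the struct names mirror the ones in the thread; the numeric assertions hold on typical ABIs where int is 4-byte aligned):

```python
import ctypes

# Mirror of the struct from the thread:
# struct xt { char x; struct yt { char i, k; int j; } Y; char z; };
class YT(ctypes.Structure):
    _fields_ = [("i", ctypes.c_char),
                ("k", ctypes.c_char),
                ("j", ctypes.c_int)]

class XT(ctypes.Structure):
    _fields_ = [("x", ctypes.c_char),
                ("Y", YT),
                ("z", ctypes.c_char)]

print(YT.j.offset)        # 4, not 2: j is padded out to int alignment
print(ctypes.sizeof(YT))  # 8: trailing pad so arrays of yt stay aligned
print(XT.Y.offset)        # 4: the nested struct carries its own alignment
print(XT.z.offset)        # 12
print(ctypes.sizeof(XT))  # 16: padded up to the struct's alignment
```

The packed-looking offsets in the original output (j at 2, z at 7) are what you would get with no padding at all, which is what "Parrot should calculate correct offsets" is about.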
anim_parrot_logo.imc: .include question + general imcc questions
Hello,

While looking at Chromatic's anim_parrot_logo.imc (in examples/sdl), I was wondering why the includes weren't in the same place. Indeed, the source reads:

  .sub _main
      _init()
      _MAIN()
      end
  .end
  .include "library/sdl_types.imc"
  .pcc_sub _init prototyped
      .include "library/sdl.pasm"
      .pcc_begin_return
      .pcc_end_return
  .end

Why does:

  .sub _main
      _init()
      _MAIN()
      end
  .end
  .include "library/sdl_types.imc"
  .include "library/sdl.pasm"

fail with:

  error:imcc:parse error, unexpected PARROT_OP, expecting $end
      in file 'library/sdl.pasm' line 1
      included from '../tmp/foo.imc' sub '_new_SDL_Image' line 10

And two more imcc questions:
- why use .pcc_sub instead of .sub? What is the difference? Which is best/should be used?
- isn't there a kind of return imcc op instead of .pcc_begin_return / .pcc_end_return? I found the .return imcc op in imcc/README but it fails with error:imcc:parse error, unexpected '\n'. Is it to be used only when returning something (i.e., returning nothing isn't allowed?)

Thank you for the answers,
Jérôme
--
[EMAIL PROTECTED]
Re: Alignment Issues with *ManagedStruct?
Uri Guttman [EMAIL PROTECTED] wrote:

> boy, was this easy with this module. all we need to do is mess around with the output to get whatever leo needs.

s/leo/Joe R. Parrot Hacker/ - I can craft initializers by hand ;)

1) some script e.g. gen_struct (struct2pasm ...) located in tools/dev or build_tools.

2) options:

   --gen-pasm, -p           (default *if* src file =~ /\.pasm$/)
   --pmc=Px                 (for --gen-pasm only, default P15)
   --gen-pir                (default)
   --named-initializer=0,1  (1)
   --named-accessor=0,1     (0)
   --out-file, -o=file      (default: change input in place unless stdin)

3) Operation

   3a) Take a C structure, spit out an initializer
   3b) Take a commented C structure, add or change the initializer

3b) is for source files that may contain multiple C structures, like in my original posting WRT this util, or like below with this struct (from leo with some minor changes):

   ## gen_struct(--options, --pmc=P20, --gen-pasm)
   # struct xt {
   #     char x;
   #     struct yt {
   #         char i,k;
   #         int j;
   #     } Y;
   #     char z;
   # } X;
   ## end_gen
   ## begin autogenerated bla bla

4) The output

4a) initializer name / P-register = $init_pmc

   new P20, .OrderedHash          # .PerlArray if --no-named-initializer

or

   .local pmc struct_xt_init      # --gen-pir
   struct_xt_init = new OrderedHash

and

   .include "datatypes.pasm"      # for DATATYPE_ consts below

4b) data type of item

   push $init_pmc, .DATATYPE_$type        # --no-named-init...

or

   set $init_pmc[$var], .DATATYPE_$type   # --named-init...

If an item is a nested struct or a pointer to a nested struct, please consult docs/pmc/struct.pod

4c) item count = 0, or N for an array of items

   push $init_pmc, $item_count

4d) offset - always 0 for now

   push $init_pmc, 0

4e) nice to have: preserve C comments (put these after '#')

See also t/pmc/nci.t for examples of initializers and src/nci_test.c for the referenced C structs.

I hope the above outline is somewhat clear, else please just ask.

leo
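The core of leo's outline (steps 4a-4d) is mechanical enough to sketch. A hypothetical, deliberately minimal version in Python that handles *flat* structs only — nested structs, pointers, arrays, and the real DATATYPE_* numbering are all out of scope here, and the emitted text is PASM-ish illustration, not verified imcc syntax:

```python
import re

# Hypothetical mapping; real names would come from datatypes.pasm.
TYPES = {"char": ".DATATYPE_CHAR", "int": ".DATATYPE_INT",
         "float": ".DATATYPE_FLOAT", "double": ".DATATYPE_DOUBLE"}

def struct_to_pasm(src, reg="P15"):
    """Emit initializer lines for each member of a flat C struct:
    a named type entry, then item count and offset per the outline."""
    lines = [f"new {reg}, .OrderedHash"]
    # Match declarations like "char i,k;" or "int j;" inside the struct.
    for ctype, names in re.findall(r"\b(char|int|float|double)\s+([\w,\s]+);", src):
        for name in (n.strip() for n in names.split(",")):
            lines.append(f"set {reg}['{name}'], {TYPES[ctype]}")
            lines.append(f"push {reg}, 0    # item count")
            lines.append(f"push {reg}, 0    # offset, always 0 for now")
    return "\n".join(lines)

print(struct_to_pasm("struct yt { char i,k; int j; };", reg="P20"))
```

A real gen_struct would also need the commented-source mode (3b), offset computation, and the nested-struct cases that docs/pmc/struct.pod covers.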
Re: anim_parrot_logo.imc: .include question + general imcc questions
Jerome Quelin [EMAIL PROTECTED] wrote:

> Hello, While looking at Chromatic's anim_parrot_logo.imc (in examples/sdl), I was wondering why the includes weren't at the same place. Indeed, the source reads:

One include is inlined init code, while the other has subroutines.

> And two more imcc questions:
> - why using .pcc_sub instead of .sub? What is the difference? Which is best/should be used?

They are equivalent in PIR code; plain .sub is the preferred syntax. PASM code still needs

  .pcc_sub _label:

to denote a global sub entry. Might be ".entry _label:" at some point.

> - isn't there a kind of return imcc op instead of .pcc_begin_return / .pcc_end_return.

  .macro .ret_void
      .pcc_begin_return
      .pcc_end_return
  .endm

should do it.

> ... I found the .return imcc op in imcc/README but it fails with error:imcc:parse error, unexpected '\n'. Is it to be used only when returning something (ie, returning nothing isn't allowed?)

".return item" is used inside the above return pairs to return something from the sub.

leo
Re: [PATCH] Unified PMC/PObj accessors phase 2
Leopold Toetsch wrote:

>> Gordon Henriksen wrote:
>> The patch is at the URL below, and I've split it into 4 for you. The classes-include-lib patch must be applied before any of the other 3. I've resolved the 3-4 conflicts that occurred since the patch was first
>
> I've now applied pmc-accessors2-classes-include-lib. *But*
> 2) The *misc.patch doesn't compile in jit/i386
> 3) *src-a*.patch reverts Mike's docu patch

Ack! Bad cvs update! No cookie! Not sure why those didn't merge.

http://www.ma.iclub.com/pub/parrot/ now lists a .tgz with separate patches for each file. You can apply the patches in any order, or not at all; there are no interdependencies. Except!: include_parrot_pobj.h will remove the compatibility interfaces, so you may wish to sit on that for a month or so.

--
Gordon Henriksen
[EMAIL PROTECTED]
RE: [PATCH] Unified PMC/PObj accessors phase 2
>> [* - Somewhat inadvisedly, I think. UnionVal is 8 bytes on a 32-bit architecture, but bloats to 16 bytes on a 64-bit architecture.
>
> That's likely so because of alignment. But real numbers would be better of course.

Err? No, I'd think it's because the union contains two 16-byte structs (64-bit ptr + 64-bit ptr = 128-bit struct = 16 bytes). Shouldn't be any padding in UnionVal unless there's a 32-bit architecture out there that wants to align 32-bit values to 64-bit boundaries...

--
Gordon Henriksen
IT Manager
ICLUBcentral Inc.
[EMAIL PROTECTED]
Two things to think about
Just some opinion pieces: http://www.ondotnet.com/pub/wlg/3941 and my reply http://blog.simon-cozens.org/bryar.cgi/id_6649 -- You can't have everything... where would you put it? -- Steven Wright
Re: [PATCH] Unified PMC/PObj accessors phase 2
Gordon Henriksen [EMAIL PROTECTED] wrote:

>> That's likely so because of alignment. But real numbers would be better of course.
>
> Err? No, I'd think it's because the union contains two 16-byte structs (64-bit ptr + 64-bit ptr = 128-bit struct = 16 bytes).

The minimum size is {bufstart*, buflen}. The 2 pointers just fill that memory.

leo
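Gordon's point — the union's size is just its largest member, with no mysterious padding — is easy to demonstrate. A sketch (a stand-in for the UnionVal shape under discussion, not Parrot's actual declaration), again using ctypes so the platform ABI does the layout:

```python
import ctypes

# Largest member: a two-pointer-sized struct (bufstart + a buflen word).
class TwoPtrs(ctypes.Structure):
    _fields_ = [("bufstart", ctypes.c_void_p),
                ("buflen", ctypes.c_size_t)]

class UnionVal(ctypes.Union):
    _fields_ = [("ptrs", TwoPtrs),
                ("num", ctypes.c_double)]

# A union is as big as its largest member: 8 bytes with 32-bit
# pointers, 16 bytes with 64-bit pointers.
print(ctypes.sizeof(UnionVal) == 2 * ctypes.sizeof(ctypes.c_void_p))  # True
```

So the 8-to-16-byte growth on 64-bit is exactly the two pointer-sized fields doubling, not alignment padding.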
Re: [DOCS] Documentation tools
> Suppose I could make a few changes to Pod-Simple, then our problem would be solved.

Pod::Simple is relatively easy to subclass. And Sean is pretty receptive to changes.

> never have occurred to me to shove all of that in CVS. It always surprised me that ICU was there, rather than just what was needed to get it to work.

I don't think ICU should be in there at all... but that's just my vote :)

> So, it seems just to be a question of adding a prerequisites phase to the config. I would propose that we leave Pod-Simple in CVS until I have time to implement that, then we can delete it (promise).

I wasn't going to take any rash actions like deleting it on the CVS server side ;) It's there, there's no rush to get it out, but I think in general we want to keep the parrot source from becoming immensely huge.

-R
Re: JavaScript/Perl Question
Thanks for the code Philippe. I'm going to try it. I think it'd be cool if the proxy automatically ran the validator on the page being proxied, and the javascript that was added to the page by the proxy would write the results to a logger window or alert. Is this an easy thing to do with HTTP::Proxy? The other thing I would look at is a 'bookmarklet' (browser bookmark with a javascript: url), which seems to be able to get at the source for a page.

-Kevin

Philippe 'BooK' Bruhat wrote:

> On Thursday 29 January 2004 at 07:22, Ovid wrote:
>> --- Tony Bowden [EMAIL PROTECTED] wrote:
>>> On Tue, Jan 27, 2004 at 10:37:48AM -0500, Potozniak, Andrew wrote:
>>>> To make a long story short I can not get access to the source of the bottom frame through JavaScript because of an access denied error.
>>> This is a security feature in most browsers -
>> Andrew, Hate to say it, but Tony's right. I've run into this before and the problem is not insurmountable, but it means that you have to have your app running on a server.
>
> Or that you need a proxy that'll modify the page on the fly (by adding the javascript you need). My pet module HTTP::Proxy (available on CPAN) can help you do this. :-) I suppose you mostly need a filter that'll add the necessary code to load the javascript somewhere near the opening body tag of each and every text/html response.
The code of such a proxy is as simple as:

  use HTTP::Proxy;
  use HTML::Parser;
  use HTTP::Proxy::BodyFilter::htmlparser;

  # define the filter (the most difficult part)
  # filters not using HTML::Parser are much simpler
  my $parser = HTML::Parser->new( api_version => 3 );
  $parser->handler(
      start => sub {
          my ( $self, $tag, $text ) = @_;
          $self->{output} .= $text;
          $self->{output} .= 'YOUR JAVASCRIPT HERE' if $tag eq 'body';
      },
      "self,tagname,text"
  );
  $parser->handler(
      default => sub {
          my ( $self, $text ) = @_;
          $self->{output} .= $text;
      },
      "self,text"
  );

  # this is a read-write filter (rw => 1)
  # that is the reason why we had to copy everything into $self->{output}
  my $filter = HTTP::Proxy::BodyFilter::htmlparser->new( $parser, rw => 1 );

  # create and launch the proxy
  my $proxy = HTTP::Proxy->new();
  $proxy->logmask( 1 );    # terse logs
  $proxy->push_filter(
      response => $filter,
      mime     => 'text/html',
      host     => 'www.example.com'
  );
  $proxy->start();

And now you have all the javascript you need added to the HTML pages you want. There is also a 'path' parameter to the push_filter() method, if you want to add the javascript only to parts of the web site.

Note: I'm not very proud of the way I plugged HTML::Parser objects into HTTP::Proxy. But HTML::Parser uses callbacks, just as HTTP::Proxy (LWP::UA, actually) does. If anybody has better ideas, I'm all ears.
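The body-tag trick above is language-independent. A sketch of the same idea with Python's stdlib html.parser (an analogy to the HTML::Parser filter, not HTTP::Proxy code; the script URL is made up, and comments/entities are ignored for brevity):

```python
from html.parser import HTMLParser

SNIPPET = '<script src="/validator-hook.js"></script>'  # hypothetical URL

class Injector(HTMLParser):
    """Copy HTML through verbatim, appending a script tag right after
    the opening <body> -- the same trick as the start handler above."""
    def __init__(self):
        super().__init__()
        self.output = []

    def handle_starttag(self, tag, attrs):
        self.output.append(self.get_starttag_text())  # raw start-tag text
        if tag == "body":
            self.output.append(SNIPPET)

    def handle_data(self, data):
        self.output.append(data)

    def handle_endtag(self, tag):
        self.output.append(f"</{tag}>")

inj = Injector()
inj.feed("<html><body><p>hi</p></body></html>")
print("".join(inj.output))
```

As with the Perl version, everything is buffered into an output list because the rewrite has to happen before the response is handed back.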
changes to T::H to enable continuous testing
I'd like to propose an addition to the Test::Harness parsing rules to support dependency analysis. That, in turn, allows monitoring for file changes and selective, immediate re-execution of test files. Is this the right forum for that discussion?

Building on the mini_harness.plx example from Test::Harness::Straps, I added checks for declarations like the following:

  DEPENDS_ON file                    # implicit test file dependency
  test_file DEPENDS_ON module_file
  test_file DEPENDS_ON data_file

I'd expect a new Test::Harness::depends_on(@) function is the best way to generate the declarations. For now, however, I just add the following line to the end of my test files:

  map { print qq(DEPENDS_ON $INC{$_}\n) } keys(%INC);

In a script with a perpetual loop, I check the modification times of the test files and the modules each uses. Then, whenever something changes, I rerun the affected tests. Also inside that loop is the generation of a simple dashboard. With this code, less than 200 lines all told, I have continuous, automatic testing going on as I write new tests and new code. I've found it a very powerful feedback system.

As a newcomer to this list, I'm not sure what needs to happen to expand Test::Harness. I can provide reference code, but I expect a discussion needs to happen first.

Scott

P.S. Btw, I also will be requesting that the stderr output from tests be captured as well. That will allow a more sophisticated dashboard, an html document for example, to have a cross link from a failed test to its diagnostics.
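The watch-loop half of this scheme is small enough to sketch. A hedged illustration in Python of the mtime-snapshot idea (the function names and the DEPENDS_ON bookkeeping are made up for this sketch, not part of any Test::Harness API):

```python
import os
import time

def snapshot(paths):
    """Map each path to its last-modified time (0 if the file is gone)."""
    return {p: (os.path.getmtime(p) if os.path.exists(p) else 0)
            for p in paths}

def changed(paths, last):
    """Return (files whose mtime differs from the previous snapshot,
    fresh snapshot to carry into the next iteration)."""
    now = snapshot(paths)
    return [p for p in paths if now[p] != last.get(p)], now

# The perpetual loop would look roughly like this (not run here):
#   last = snapshot(watched)
#   while True:
#       dirty, last = changed(watched, last)
#       for test in tests_depending_on(dirty):   # built from DEPENDS_ON data
#           rerun(test)
#       time.sleep(1)
```

The DEPENDS_ON declarations supply the reverse mapping (file to dependent tests); the loop only decides *when* to consult it.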
recommendation for website HTML validation tool?
Hello,

I'm looking for a perl testing tool that checks a whole directory of HTML files to see if they are valid HTML. The ideal tool would also be able to check dynamically generated web pages. There seem to be a few options that I've found so far, but none of them are ideal:

- HTML::Lint - nice Perl interface, but doesn't seem to support XHTML, which is what I need.

- WebService::Validator::HTML::W3C - I like this module because it interfaces with the W3C validator, the de-facto standard. I even set up an instance of their validator on my own web server, for high-volume use. Still, the module currently fails for me because it only accepts URIs to validate. So it seems challenging to generate some dynamic content /and then/ have it validated. It generally seems like a kludgy solution to have to make a call to a web service to validate a page. A great solution would be a refactoring of the W3C's 'check' script into Perl modules.

- 'tidy'. I even tried writing a wrapper to call this binary. It seems to be more focused on fixing HTML than validating, and didn't give useful output.

How are other people integrating HTML validation into their work flow? I want a solution that's easy so it actually gets used. :)

Thanks!

Mark
--
. . . . . . . . . . . . . . . . . . . . . . . . . . .
Mark Stosberg            Principal Developer
[EMAIL PROTECTED]     Summersault, LLC
765-939-9301 ext 202     database driven websites
. . . . . http://www.summersault.com/ . . . . . . . .
Re: recommendation for website HTML validation tool?
I've catalogued a number of freeware link checkers in my research for Open Testware Reviews. The raw list is below; actually it's all the tools I know about that do any kind of static analysis on web pages. My subscribers haven't asked me to write about this topic yet, so I haven't tried to further characterize the tools in this category.

I've been frustrated with online services like Link Valet, because they won't check a whole page if it contains a lot of links. You probably want to use a tool that you can install locally. Or if you need to fill out forms to get to the dynamic content, you may have to use a full-blown http functional test tool (not listed here) and record the values to put into the forms.

AccVerify SE for FrontPage  http://www.hisoftware.com/msacc/
Big Brother  http://pauillac.inria.fr/~fpottier/brother.html.en
Bobby  http://www.cast.org/bobby/
Checky Plug  http://checky.mozdev.org/
CSE HTML Validator Lite  http://www.htmlvalidator.com/lite/
CSSCheck  http://www.htmlhelp.com/tools/csscheck/
demoroniser  http://www.fourmilab.ch/webtools/demoroniser/
Doctor HTML  http://www2.imagiware.com/RxHTML/
ht://Check  http://htcheck.sourceforge.net/
InSite  http://insite.sourceforge.net/
Jenu  http://jenu.sourceforge.net/
JSpider  http://j-spider.sourceforge.net/
JTidy  http://lempinen.net/sami/jtidy/
Link Page Generator  http://sourceforge.net/projects/linkpagegen/
Link Valet  http://www.htmlhelp.com/tools/valet/
LinkChecker  http://linkchecker.sourceforge.net/
LinkVerify  http://link-verify.sourceforge.net/index.en.html
Meta Medic  http://northernwebs.com/set/setsimjr.html
RiadaLinx  http://www.riada.com/
Scout  http://www.joedog.org/scout/index.shtml
Tidy  http://tidy.sourceforge.net/
Tidy Online  http://valet.htmlhelp.com/tidy/
W3C CSS Validation Service  http://jigsaw.w3.org/css-validator/
W3C HTML Validation Service  http://validator.w3.org/
W3C RDF Validation Service  http://www.w3.org/RDF/Validator/
WDG HTML Validator  http://www.htmlhelp.com/tools/validator/
Web Page Backward Compatibility Viewer  http://www.delorie.com/web/wpbcv.html
Web Page Purifier  http://www.delorie.com/web/purify.html
Web Static Analyzer Tool (WebSAT)  http://zing.ncsl.nist.gov/WebTools/WebSAT/overview.html
Weblint  http://web.sfc.keio.ac.jp/~mimasa/jweblint/
Weblint Gateway  http://ejk.cso.uiuc.edu/cgi-bin/weblint
WiDGets  http://www.htmlhelp.com/tools/widgets/
Xenu's Link Sleuth (TM)  http://home.snafu.de/tilman/xenulink.html

--
Danny R. Faught
Tejas Software Consulting
http://tejasconsulting.com/
Re: changes to T::H to enable continuous testing
On Fri, Feb 06, 2004 at 11:03:42AM -0600, Scott Bolte wrote:

> Building on the mini_harness.plx example from Test::Harness::Straps, I added checks for declarations like the following:
>   DEPENDS_ON file                    # implicit test file dependency
>   test_file DEPENDS_ON module_file
>   test_file DEPENDS_ON data_file
> I'd expect a new Test::Harness::depends_on(@) function is

It wouldn't be Test::Harness, it would be a separate Test::Depends or something.

> the best way to generate the declarations. For now, however, I just add the following line to the end of my test files:
>   map { print qq(DEPENDS_ON $INC{$_}\n) } keys(%INC);
> In a script with a perpetual loop, I check the modification times of the test files and the modules each uses. Then, whenever something changes, I rerun the affected tests. Also inside that loop is the generation of a simple dashboard. With this code, less than 200 lines all told, I have continuous, automatic testing going on as I write new tests and new code. I've found it a very powerful feedback system. As a newcomer to this list, I'm not sure what needs to happen to expand Test::Harness. I can provide reference code, but I expect a discussion needs to happen first.

Sounds like you've already got it. Doesn't have to go into T::H. Modularize what you've got and put it on CPAN.

> P.S. Btw, I also will be requesting that the stderr output from tests be captured as well.

Love to, but can't do it and still have T::H be cross-platform compatible. :( What you can do is have your tests print your diagnostics as lines beginning with a # to STDOUT. I believe T::H::Straps currently picks these up as type "other" but it may change to "comment" later.

--
Michael G Schwern    [EMAIL PROTECTED]    http://www.pobox.com/~schwern/
You and your facts and your physics. Pah, I say.
    http://www.goats.com/archive/981221.html
Re: changes to T::H to enable continuous testing
On Fri, 6 Feb 2004 12:22:24 -0800, Michael G Schwern wrote:

> It wouldn't be Test::Harness, it would be a separate Test::Depends or something.

I could live with that, but why do you think it needs to be separate? The T::H documentation makes it quite clear that there are plans to check for additional keywords (beyond /^(not )?ok/ and "Bail out!") in the future. (See the "Anything else" section.)

By the way, aside from a desire to avoid lots of class layers, I stored the dependency data in the T::H::Straps object. It is that class that I'd need to augment, and since it is in alpha mode I figured now was the time.

>> P.S. Btw, I also will be requesting that the stderr output from tests be captured as well.
> Love to, but can't do it and still have T::H be cross-platform compatible. :( What you can do is have your tests print your diagnostics as lines beginning with a # to STDOUT. I believe T::H::Straps currently picks these up as type "other" but it may change to "comment" later.

I could replace Test::More::diag() with a version that uses STDOUT. Does that mean the goal of capturing STDERR listed in T::H's TODO list has been abandoned?

Scott
Re: changes to T::H to enable continuous testing
On Fri, Feb 06, 2004 at 03:14:33PM -0600, Scott Bolte wrote:

> On Fri, 6 Feb 2004 12:22:24 -0800, Michael G Schwern wrote:
>> It wouldn't be Test::Harness, it would be a separate Test::Depends or something.
> I could live with that, but why do you think it needs to be separate? The T::H documentation makes it quite clear that there are plans to check for additional keywords (beyond /^(not )?ok/ and "Bail out!") in the future. (See the "Anything else" section.)

Test::Harness parses 'ok' and 'not ok' and 'Bail out'... Test::* modules produce the output Test::Harness parses. So your extra logic to parse "depends on" would go into your Test::Harness extension, but the depends_on() function to produce it would go into a separate Test::* module.

Rule Of Thumb: Test::Harness should not be used in a test script.

>>> P.S. Btw, I also will be requesting that the stderr output from tests be captured as well.
>> Love to, but can't do it and still have T::H be cross-platform compatible. :( What you can do is have your tests print your diagnostics as lines beginning with a # to STDOUT. I believe T::H::Straps currently picks these up as type "other" but it may change to "comment" later.
> I could replace Test::More::diag() with a version that uses STDOUT. Does that mean the goal of capturing STDERR listed in T::H's TODO list has been abandoned?

Nope. Just means I don't know how to do it.

--
Michael G Schwern    [EMAIL PROTECTED]    http://www.pobox.com/~schwern/
Loon.
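For what it's worth, the mechanics of capturing a child test's two streams separately are simple when portability constraints are set aside. A sketch in Python (an illustration of the idea only, not a proposal for how Test::Harness should do it):

```python
import subprocess
import sys

def run_test(argv):
    """Run a child process, capturing STDOUT (the ok/not ok stream) and
    STDERR (diagnostics) separately, so a dashboard could cross-link
    a failed test to its diagnostics."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    return proc.stdout, proc.stderr

# A toy "test" that emits one result line and one diagnostic line.
out, err = run_test([sys.executable, "-c",
                     "import sys; print('ok 1'); print('# boom', file=sys.stderr)"])
print(out.strip())  # ok 1
print(err.strip())  # # boom
```

The hard part in the thread isn't the capture itself but doing it identically across every platform Test::Harness supports, which is why the suggested workaround is `#`-prefixed diagnostics on STDOUT.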
Re: recommendation for website HTML validation tool?
Thanks to a suggestion by David Wheeler, I was able to build a tool that works for me. Here's the simple testing script I came up with. You are free to use and modify it for your own purposes:

  #!/usr/bin/perl -w
  # Check all our static HTML pages in a www tree to see if they are made of valid HTML
  # originally by Mark Stosberg on 02/05/04
  # based on code by Andy Lester

  use Test::More;
  use strict;
  use XML::LibXML;
  use File::Spec;
  use File::Find::Rule;

  my $startpath = $ARGV[0] || die "usage: $0 path/to/www\n";

  my $rule = File::Find::Rule->new;
  $rule->or(
      $rule->new->directory->name('CVS')->prune->discard,
      # $rule->new->directory->name('Templates')->prune->discard,
      $rule->new->file->name('*.html')
  );
  my @html   = $rule->in( $startpath );
  my $nfiles = scalar @html;

  # Only try to run the tests if we have any static files
  if ($nfiles) {
      plan( tests => $nfiles );
      for my $filename ( @html ) {
          eval {
              my $parser = XML::LibXML->new;
              $parser->validation(1);
              $parser->parse_file($filename);
          };
          is( $@, '', "$filename is valid XHTML" );
      }
  }
  else {
      diag( "No static files found. No tests ran." );
  }

  __END__

Mark
--
. . . . . . . . . . . . . . . . . . . . . . . . . . .
Mark Stosberg            Principal Developer
[EMAIL PROTECTED]     Summersault, LLC
765-939-9301 ext 202     database driven websites
. . . . . http://www.summersault.com/ . . . . . . . .