Exposing MAX_SIZE via "our" will make it possible
to use it in tests, and to configure it, later.
Additionally, HTTP 500 is the wrong status for big files: it is
not an Internal Server Error, just a memory limit. Some browsers
also won't show our HTML response with the link to the raw file
in case of errors, so returning an error status works against us.
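The difference is the standard Perl distinction between a file-scoped
lexical and a package variable. A minimal sketch (the package name,
variable name, and helper sub here are illustrative, not the actual
public-inbox code):

```perl
package MyFetcher;   # hypothetical package, for illustration only
use strict;
use warnings;

# "our" puts MAX_SIZE in the package symbol table, so tests and
# configuration code can reach it as $MyFetcher::MAX_SIZE; a lexical
# "my $MAX_SIZE" would be invisible outside this file.
our $MAX_SIZE = 1024 * 1024;   # 1 MiB default

# illustrative helper: accept only payloads within the limit
sub fetch_ok { my ($len) = @_; return $len <= $MAX_SIZE }

package main;

# a test can tighten the limit temporarily, without editing the module:
{
	local $MyFetcher::MAX_SIZE = 10;
	print MyFetcher::fetch_ok(50) ? "ok\n" : "rejected\n"; # rejected
}
```

`local` restores the original limit when the block exits, which is
exactly the kind of override a test wants.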
Sorting makes it easier to review the generated result.
---
Makefile.PL | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/Makefile.PL b/Makefile.PL
index 3020f25a..33688095 100644
--- a/Makefile.PL
+++ b/Makefile.PL
@@ -83,13 +83,13 @@ $v->{rsync_xdocs} = [ @{$v->{gz_xdoc
These uses of file-local global variables make the *.t files
incompatible with run_script(). Instead, use anonymous subs,
"our", or pass the parameter as appropriate.
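The underlying issue is a general Perl one: when a script is compiled
into a sub and re-run in the same interpreter (as run_script() does),
a *named* sub that captures a file-level "my" variable does not stay
shared with it. A small sketch of the two fixes, with made-up variable
names:

```perl
use strict;
use warnings;

# Problematic pattern (perl warns "Variable ... will not stay shared"
# when the file is wrapped in a sub): a named sub capturing a
# file-level "my" variable can see a stale copy on later runs.

# Fix 1: an anonymous sub is a true closure over the current variable:
my $counter = 0;
my $bump = sub { ++$counter };
$bump->() for 1 .. 3;
print "counter=$counter\n";   # counter=3

# Fix 2: "our" resolves through the package symbol table at run time,
# which named subs always see:
our $limit = 42;
sub limit_ok { $_[0] <= $limit }
print limit_ok(7) ? "ok\n" : "not ok\n";   # ok
```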
---
t/httpd-corner.t | 8
t/indexlevels-mirror.t | 10 -
t/mda.t                | 46 +++
Spawning a new Perl interpreter for every test case
means Perl has to reparse and recompile every single file
it needs, costing us performance and development time.
Now that we've modified our code to avoid global state,
we can preload everything we need.
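The core of the idea can be sketched as follows. This is not the
actual "check-run" implementation, just an illustration: preload
shared modules once, then run each *.t file with "do" inside the same
interpreter instead of spawning "perl t/foo.t" per file:

```perl
use strict;
use warnings;

# preload heavyweight shared modules exactly once here, e.g.:
# require PublicInbox::TestCommon;

# run one test file in the current interpreter; true on success.
# "do" compiles and executes the file without forking a new perl,
# so shared modules are parsed only once per session.
sub run_in_process {
	my ($file) = @_;
	my $ret = do $file;
	warn "$file failed: ", ($@ || $!), "\n" unless defined $ret;
	return defined $ret;
}
```

A driver would then call `run_in_process($_) for @files`, amortizing
interpreter startup and compilation across the whole suite.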
The new "check-run" test target is now 20
Tests are now faster with the "make check-run" target by
avoiding Perl startup time for each *.t file.
"make check-run" is nearly 3x faster than "make check"
under 1.2.0 due to the dozens of internal improvements
and cleanups since 1.2.0.
I've also beefed up the "solver" tests to cover the ViewVC
We want to be able to use run_script() with *.t files, so
t/common.perl putting subs into the top-level "main" namespace
won't work. Instead, make it a module which uses Exporter,
like other libraries.
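The shape of the conversion is the standard Exporter idiom. A
hypothetical sketch (the module name and sub are illustrative, not the
real public-inbox API):

```perl
# a proper module instead of a file "do"-loaded into main::
package My::TestHelpers;   # hypothetical name, for illustration
use strict;
use warnings;
use Exporter 'import';        # gives callers a standard import() method
our @EXPORT_OK = qw(greet);   # callers must request each sub explicitly

sub greet { "hello, $_[0]" }

package main;
# what "use My::TestHelpers qw(greet)" would do at compile time:
My::TestHelpers->import('greet');
print greet('tests'), "\n";   # hello, tests
```

With @EXPORT_OK (rather than @EXPORT), nothing lands in the caller's
namespace unless asked for, so *.t files stay explicit about what they
use.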
---
MANIFEST | 2 +-
t/common.perl => lib/PublicInbox/TestC
We should support historical archives from the old days,
but I'm not sure how to best go about it, for now, given
how tricky correct handling of modern email addresses is.
We can deal with it if/when somebody decides to import some
ancient archives...
---
TODO | 2 ++
1 file changed, 2 insertions(+)