A. Costa wrote:
On the other hand you're right to notice that 'URLgrep' is too
ad hoc; that got me thinking that the flaw of 'URLgrep' is that it's
not general enough. A 'URLcat' would be much more general.
URLcat() { wget -o /dev/null --output-document=- "$1" | html2text -ascii -nobs ; }
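To make the generality point concrete, URLgrep can then be rebuilt as URLcat piped into an ordinary grep. This is a sketch, not from the original mail; the simple form below omits the numbering and context switches of the ad hoc version:

```shell
# URLcat as posted, with "$1" quoted so URLs containing spaces or
# glob characters survive expansion.
URLcat() { wget -o /dev/null --output-document=- "$1" | html2text -ascii -nobs ; }

# The generality point: URLgrep becomes plain composition.
# (A sketch; drops the first-match/next-line/numbering switches.)
URLgrep() { URLcat "$2" | grep "$1" ; }
```

e.g. `URLgrep moreutils http://packages.debian.org/moreutils` would print every rendered-text line containing "moreutils".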
On Fri, 10 Nov 2006 03:11:42 -0500
Joey Hess [EMAIL PROTECTED] wrote:
A. Costa wrote:
URLcat() { wget -o /dev/null --output-document=- "$1" | html2text -ascii -nobs ; }
Already available as lynx -dump url (formatting html), or w3m url
(formats html also of course, and can be used in
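The same idea with lynx or w3m doing both the fetch and the text rendering; a sketch with hypothetical function names (both tools really do take a `-dump` option that renders a page as plain text):

```shell
# URLcat variants built on the pager-style browsers mentioned above
# (assumes lynx and w3m are installed; -dump renders HTML to stdout).
URLcat_lynx() { lynx -dump "$1" ; }
URLcat_w3m()  { w3m -dump "$1" ; }
```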
On Wed, 8 Nov 2006 15:23:44 -0500
Joey Hess [EMAIL PROTECTED] wrote:
# usage: URLgrep pattern URL
# (ad hoc grep switches return first instance of 'pattern'
# in URL and next line, with numbered lines.)
URLgrep() { wget -o /dev/null --output-document=- "$2" | html2text
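The grep stage is cut off in the archive quote above. A minimal sketch of how it might continue, assuming GNU grep (-n numbers lines, -m 1 stops after the first match, -A 1 prints the following line, matching the comment in the quoted usage). The filter is split into its own hypothetical helper, urlgrep_filter, so it can be exercised without a network connection:

```shell
# Hypothetical reconstruction, not the original author's exact switches.
urlgrep_filter() {
  # -n: number lines; -m 1: first match only; -A 1: include the next line
  grep -n -m 1 -A 1 "$1"
}

URLgrep() {
  # usage: URLgrep pattern URL
  wget -o /dev/null --output-document=- "$2" \
    | html2text -ascii -nobs \
    | urlgrep_filter "$1"
}
```

For example, piping three lines of text through `urlgrep_filter foo` prints the first matching line prefixed `N:` and the following context line prefixed `N-`.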
A. Costa clumsily typed:
I think you're at half right...
s/at half/at least half/
--
A. Costa wrote:
Is there a text 'grep' for web pages? If not, that'd
be good.
I'm thinking something that acts like this ad hoc function, only better:
# usage: URLgrep pattern URL
# (ad hoc grep switches return first instance of 'pattern'
# in URL and next line, with
Package: moreutils
Version: 0.18
Severity: wishlist
Is there a text 'grep' for web pages? If not, that'd
be good.
I'm thinking something that acts like this ad hoc function, only better:
# usage: URLgrep pattern URL
# (ad hoc grep switches return first instance of 'pattern'
# in