Gisle Vanem <[EMAIL PROTECTED]> writes:
> Why the need for asprintf() in url.c:903? This function is missing
> on DOS/Win32 and nowhere to be found in ./lib.
Wget is supposed to use aprintf, which is defined in utils.c, and is
not specific to Unix.
It's preferable to use an asprintf-like functio
"Juon, Stefan" <[EMAIL PROTECTED]> writes:
> I just noticed these debug messages:
>
> **
> DEBUG output created by Wget 1.10.2 on cygwin.
You are of course aware that this is not the latest Wget (1.11.4)?
As mentioned before, recursive downl
Micah Cowan <[EMAIL PROTECTED]> writes:
> I don't see what you see wrt making the code harder to follow and reason
> about (true abstraction rarely does, AFAICT,
I was referring to the fact that adding an abstraction layer requires
learning about the abstraction layer, both its concepts and its
i
Micah Cowan <[EMAIL PROTECTED]> writes:
>> Or did you mean to write wget version of socket interface? i.e. to
>> write our version of socket, connect,write,read,close,bind,
>> listen,accept,,,? sorry I'm confused.
>
> Yes! That's what I meant. (Except, we don't need listen, accept; and
> we only
Micah Cowan <[EMAIL PROTECTED]> writes:
> My Name? wrote:
>> Hello,
>>
>> I was wondering if there was a way to prevent the title changing...
> wget is currently nested in another script, and would probably confuse

>> the user as to why the title says "wget file location" is it possible
>> to re
Alain Guibert <[EMAIL PROTECTED]> writes:
>> Maybe you could put a breakpoint in fnmatch and see what goes wrong?
>
> The for loop intended to eat several characters from the string also
> advances the pattern pointer. This one reaches the end of the pattern,
> and points to a NUL. It is not a '*'
Alain Guibert <[EMAIL PROTECTED]> writes:
> On Wednesday, April 2, 2008 at 23:09:52 +0200, Hrvoje Niksic wrote:
>
>> Micah Cowan <[EMAIL PROTECTED]> writes:
>>> It's hard for me to imagine an fnmatch that ignores FNM_PATHNAME
>
> The libc 5.4.33 fnmatc
Alain Guibert <[EMAIL PROTECTED]> writes:
> This old system does HAVE_WORKING_FNMATCH_H (and thus
> SYSTEM_FNMATCH). When #undefining SYSTEM_FNMATCH, the test still
> fails at the very same line. And then it also fails on modern
> systems. I guess this points at the embedded src/cmpt.c:fnmatch()
Micah Cowan <[EMAIL PROTECTED]> writes:
> I'm wondering whether it might make sense to go back to completely
> ignoring the system-provided fnmatch?
One argument against that approach is that it increases code size on
systems that do correctly implement fnmatch, i.e. on most modern
Unixes that we
Micah Cowan <[EMAIL PROTECTED]> writes:
>> It sounds like a libc problem rather than a gcc problem. Try
>> #undefing SYSTEM_FNMATCH in sysdep.h and see if it works then.
>
> It's hard for me to imagine an fnmatch that ignores FNM_PATHNAME: I
> mean, don't most shells rely on this to handle file g
Alain Guibert <[EMAIL PROTECTED]> writes:
> Hello Micah,
>
> On Monday, March 31, 2008 at 11:39:43 -0700, Micah Cowan wrote:
>
>> could you try to isolate which part of test_dir_matches_p is failing?
>
> The only failing src/utils.c test_array[] line is:
>
> | { { "*COMPLETE", NULL, NULL }, "
"mm w" <[EMAIL PROTECTED]> writes:
> #if SIZEOF_VOID_P > 4
> key += (key << 44);
> key ^= (key >> 54);
> key += (key << 36);
> key ^= (key >> 41);
> key += (key << 42);
> key ^= (key >> 34);
> key += (key << 39);
> key ^= (key >> 44);
> #endif
>
> this one is minor, the shift count
Charles <[EMAIL PROTECTED]> writes:
> On Thu, Mar 13, 2008 at 1:17 AM, Hrvoje Niksic <[EMAIL PROTECTED]> wrote:
>> > It assumes, though, that the preexisting index.html corresponds to
>> > the one that you were trying to download; it's unclear to me how
>
Micah Cowan <[EMAIL PROTECTED]> writes:
>> When I tried this in my wget, I got different behavior with wget 1.11
>> alpha and wget 1.10.2
>>
>> D:\>wget --proxy=off -r -l 1 -nc -np http://localhost/test/
>> File `localhost/test/index.html' already there; not retrieving.
>>
>>
>> D:\>wget110 --p
Micah Cowan <[EMAIL PROTECTED]> writes:
> Hrvoje Niksic wrote:
>> I agree that clock_getres itself isn't important. Still, Wget needs
>> to choose a clock that actually works out of several possible clocks
>> allowed by POSIX (and common extensions), so it'
Micah Cowan <[EMAIL PROTECTED]> writes:
>> 2) When I download files from a URL I get the following error:
>>
>> Cannot get REALTIME clock frequency: Invalid argument
>
> I can't tell you why that'd happen; Wget falls back to a clock id that
> should be guaranteed to exist. An erroneou
Micah Cowan <[EMAIL PROTECTED]> writes:
> The prerelease still has a potential for crashes: in the Czech locales
> it will tend to crash if the download is large (or slow) enough to push
> minutes into the three-digit zone (that is, if it would take > 1 hour
> and 40 minutes).
How can minutes get
Martin Paul <[EMAIL PROTECTED]> writes:
> Micah Cowan wrote:
>> Then, how was --http-user, --http-passwd working in the past? Those only
>> work with the underlying HTTP authentication protocol (the brower's
>> unattractive popup dialog), which AFAIK can't be affected by CGI forms
>> or JavaScript
Micah Cowan <[EMAIL PROTECTED]> writes:
> Yeah; and at some point there probably will be more widespread use
> (particularly if we do decide to do some transcoding, etc). I'm a
> little unhappy with the change just because it assumes that
> with_thousands_sep is only ever used for the progress bar
Micah Cowan <[EMAIL PROTECTED]> writes:
> Rather than disable NLS altogether if wcwidth or mbtowc are missing,
> I've opted to disable NLS support only for the progress bar itself:
Nice! Progress bar's usage of multibyte functions is quite localized,
so it makes sense to do this. My proposal as
Micah Cowan <[EMAIL PROTECTED]> writes:
> Also: the fix to the locale/progress-bar issues resulted in the
> added use of a couple wide-character/multibyte-related functions,
> mbtowc and wcwidth.
So far Wget has avoided explicit use of wc/mb functions on the account
of portability. Fortunately i
Micah Cowan <[EMAIL PROTECTED]> writes:
> Right. What I was meaning to prevent, though, is the need to do:
>
> foo[foo_data + foo_idx[i]]
>
> and instead do:
>
> foo[i]
That is why my example had a foo function, which turns foo[i] to
foo(i), but otherwise works the same. Using just foo[i] is
Micah Cowan <[EMAIL PROTECTED]> writes:
> Note that you could also do all the pointer maths up-front, leaving
> existing usage code the same, with something like:
>
> static const char foo_data[] = "one\0two\0three";
> static const char * const foo[] = {foo_data + 0, foo_data + 4,
> foo_data
"Diego 'Flameeyes' Pettenò" <[EMAIL PROTECTED]> writes:
> On 01/feb/08, at 09:12, Hrvoje Niksic wrote:
>
>> Even ignoring the fact that Wget is not a shared library, there are
>> ways to solve this problem other than turning all char *foo[] into
>>

"Diego 'Flameeyes' Pettenò" <[EMAIL PROTECTED]> writes:
> It is a micro-optimisation, I admit that, but it's not just the
> indirection the problem.
>
> Pointers, and structures containing pointers, need to be
> runtime-relocated for shared libraries and PIC code (let's assume
> that shared librar
"Christopher G. Lewis" <[EMAIL PROTECTED]> writes:
> On Vista, you probably have to run in an administrative command
> prompt.
You mean that you need to be the administrator to run Wget? If so,
why? Surely other programs managed to access the network without
administrator privileges.
"Hopkins, Scott" <[EMAIL PROTECTED]> writes:
> Interesting. Compiled that code and I get the following when running
> the resulting binary.
>
> /var/opt/prj/wget$ strdup_test
> 20001448
As I suspected. Such an obvious strdup bug would likely have been
detected sooner.
> I appear t
"Hopkins, Scott" <[EMAIL PROTECTED]> writes:
> Worked perfect. Thanks for the help.
Actually, I find it surprising that AIX's strdup would have such a
bug, and that it would go undetected. It is possible that the problem
lies elsewhere and that the change is just masking the real bug.
str
"Marcus" <[EMAIL PROTECTED]> writes:
> Is there some way I can WGET to work with a percentage sign in the password?
>
> I.e. WGET ftp://login:[EMAIL PROTECTED]/file.txt
Yes, escape the percentage as %25:
wget ftp://login:[EMAIL PROTECTED]/file.txt
(This is not specific to Wget; '%' is the hex e
Micah Cowan <[EMAIL PROTECTED]> writes:
> What's up with the -Y option?
IIRC it used to be the option to turn on the use of proxies. I
retained it for compatibility because many people were using `-Y on'
in their scripts. It might be the time to retire that option and only
leave the --no-proxy
Micah Cowan <[EMAIL PROTECTED]> writes:
>> I thought the code was refactored to determine the file name after
>> the headers arrive. It certainly looks that way by the output it
>> prints:
>>
>> {mulj}[~]$ wget www.cnn.com
>> [...]
>> HTTP request sent, awaiting response... 200 OK
>> Length: uns
If GnuTLS support will not be ready for the 1.11 release, may I
suggest that we not advertise it in NEWS? After all, it's badly
broken in that it doesn't support certificate validation, which is one
of the most important features of an SSL client. It also doesn't
support many of our SSL command-l
I've noticed that the NEWS file now includes contents that would
previously not have been included. NEWS was conceived as a resource
for end users, not for developers or distribution maintainers. (Other
GNU software seems to follow a similar policy.) I tried hard to keep
it readable by only incl
Micah Cowan <[EMAIL PROTECTED]> writes:
> Actually, the reason it is not enabled by default is that (1) it is
> broken in some respects that need addressing, and (2) as it is currently
> implemented, it involves a significant amount of extra traffic,
> regardless of whether the remote end actually
R Kimber <[EMAIL PROTECTED]> writes:
>> I agree that Wget should allow the caller to find out what
>> happened, but I don't think exit codes can be of much use there.
>> For one, they don't allow distinction between different
>> "successful" conditions, which is a problem in many cases.
>
> I'm no
Gerard <[EMAIL PROTECTED]> writes:
>> In particular, if Wget chooses not to download a file because the
>> local timestamp is still current, or because its size corresponds
>> to that of the remote file, these should result in an exit status
>> of zero.
>
> I disagree. If wget has not downloaded a
Micah Cowan <[EMAIL PROTECTED]> writes:
> Hrvoje Niksic wrote:
>> A Wget user showed me an example of Wget misbehaving.
>
> Hrvoje, do you know if this is a regression over 1.10.2?
I don't think so, but it's probably a regression over 1.9.x. In 1.10
Wget started
Mauro Tortonesi <[EMAIL PROTECTED]> writes:
>> I vote we stick with C. Java is slower and more prone to environmental
>> problems.
>
> not really. because of its JIT compiler, Java is often as fast as
> C/C++, and sometimes even significantly faster.
Not if you count startup time, which is crucia
Micah Cowan <[EMAIL PROTECTED]> writes:
>> The new Wget flags empty Set-Cookie as a syntax error (but only
>> displays it in -d mode; possibly a bug).
>
> I'm not clear on exactly what's possibly a bug: do you mean the fact
> that Wget only calls attention to it in -d mode?
That's what I meant.
Micah Cowan <[EMAIL PROTECTED]> writes:
> I was able to reproduce the problem above in the release version of
> Wget; however, it appears to be working fine in the current
> development version of Wget, which is expected to release soon as
> version 1.11.*
I think the old Wget crashed on empty Se
"Tony Lewis" <[EMAIL PROTECTED]> writes:
> Hrvoje Niksic wrote:
>> > And how is .tar.gz renamed? .tar-1.gz?
>> Ouch.
>
> OK. I'm responding to the chain and not Hrvoje's expression of pain. :-)
>
> What if we changed the semantics of --
Andreas Pettersson <[EMAIL PROTECTED]> writes:
> And how is .tar.gz renamed? .tar-1.gz?
Ouch.
Micah Cowan <[EMAIL PROTECTED]> writes:
>> It just occurred to me that this change breaks backward compatibility.
>> It will break scripts that try to clean up after Wget or that in any
>> way depend on the current naming scheme.
>
> It may. I am not going to commit to never ever changing the curr
Hrvoje Niksic <[EMAIL PROTECTED]> writes:
> Micah Cowan <[EMAIL PROTECTED]> writes:
>
>> Christian Roche has submitted a revised version of a patch to modify
>> the unique-name-finding algorithm to generate names in the pattern
>> "foo-n.html" rathe
Micah Cowan <[EMAIL PROTECTED]> writes:
> Christian Roche has submitted a revised version of a patch to modify
> the unique-name-finding algorithm to generate names in the pattern
> "foo-n.html" rather than "foo.html.n". The patch looks good, and
> will likely go in very soon.
foo.html.n has the
Micah Cowan <[EMAIL PROTECTED]> writes:
> I can't even begin to fathom why some system would fail to compile
> in such an event: _XOPEN_SOURCE is a feature request, not a
> guarantee that you'll get some level of POSIX.
Yes, but sometimes the system headers are buggy. Or sometimes they
work just
Micah Cowan <[EMAIL PROTECTED]> writes:
>> Or getting the definition requires defining a magic preprocessor
>> symbol such as _XOPEN_SOURCE. The man page I found claims that the
>> function is defined by XPG4 and links to standards(5), which
>> explicitly documents _XOPEN_SOURCE.
>
> Right. But w
Micah Cowan <[EMAIL PROTECTED]> writes:
> Note that curl provides the additional check for a macro version in
> the configure script, rather than in the source; we should probably
> do it that way as well. I'm not sure how that helps for this,
> though: if the above test is failing, then either it
Daniel Stenberg <[EMAIL PROTECTED]> writes:
>> It is quite possible that the Autoconf test for sigsetjmp yields a
>> false negative.
>
> I very much doubt it does, since we check for it in the curl
> configure script,
Note that I didn't mean "in general". Such bugs can sometimes show in
one prog
Micah Cowan <[EMAIL PROTECTED]> writes:
>> I know nothing of VMS. If it's sufficiently different from Unix that
>> it has wildly different alarm/signal facilities, or no alarm/signal at
>> all (as is the case with Windows), then it certainly makes sense for
>> Wget to provide a VMS-specific run_w
Micah Cowan <[EMAIL PROTECTED]> writes:
> Okay... but I don't see the logic of:
>
> 1. If the system has POSIX's sigsetjmp, use that.
> 2. Otherwise, just assume it has the completely unportable, and not
> even BSDish, siggetmask.
Are you sure siggetmask isn't BSD-ish? When I tested that cod
Micah Cowan <[EMAIL PROTECTED]> writes:
> I wasn't really expecting VMS to have sigprocmask(); but I expect
> future systems may conceivably have it and lack the BSD ones (and
> perhaps such systems are already in the wild). Anyway, we'll use
> what's available.
I think you're misunderstanding th
Micah Cowan <[EMAIL PROTECTED]> writes:
>> We ain't got no siggetmask(). None on VMS (out as far as V8.3),
>> either, should I ever get so far.
>
> siggetmask is an obsolete BSDism; POSIX has the sigprocmask function,
> which we should prefer.
We do prefer the POSIX way, which is to use sigset
Micah Cowan <[EMAIL PROTECTED]> writes:
> Steven Schweda has started some testing on Tru64, and uncovered some
> interesting quirks; some of them look like flaws I've introduced,
> and others are bugginess in the Tru64 environment itself. It's
> proving very helpful. :)
Is the exchange off-list o
Micah Cowan <[EMAIL PROTECTED]> writes:
> Could you be more specific? AFAICT, wget.h #includes the system headers
> it needs. Considering the config-post.h stuff went at the top of the
> sysdep.h, sysdep.h is already at the top of wget.h,
OK, it should work then. The reasoning behind my worrying
Micah Cowan <[EMAIL PROTECTED]> writes:
> Yes, that appears to work quite well, as long as we seed it right;
> starting with a consistent X₀ would be just as bad as trying them
> sequentially, and choosing something that does not change several times
> a second (such as time()) still makes it like
Micah Cowan <[EMAIL PROTECTED]> writes:
> Is there any reason we can't move the contents of config-post.h into
> sysdep.h, and have the .c files #include "wget.h" at the top, before any
> system headers?
wget.h *needs* stuff from the system headers, such as various system
types. If you take into
Micah Cowan <[EMAIL PROTECTED]> writes:
>> Note that, technically, those are not leaks in real need of
>> plugging because they get called only once, i.e. they do not
>> accumulate ("leak") unused memory. Of course, it's still a good
>> idea to remove them, if nothing else, then to remove false
>
Micah Cowan <[EMAIL PROTECTED]> writes:
> Alright; I'll make an extra effort to avoid non-portable Make
> assumptions then. It's just... portable Make _sucks_ (not that
> non-portable Make doesn't).
It might be fine to require GNU make if there is a good reason for it
-- many projects do. But re
Micah Cowan <[EMAIL PROTECTED]> writes:
> I may take liberties with the Make environment, and assume the
> presence of a GNU toolset, though I'll try to avoid that where it's
> possible.
Requiring the GNU toolset puts a large burden on the users of non-GNU
systems (both free and non-free ones).
Micah Cowan <[EMAIL PROTECTED]> writes:
> version.c: $(wget_SOURCES) $(LDADD)
> printf '%s' 'const char *version_string = "@VERSION@' > $@
> -hg log -r tip --template=' ({node|short})' >> $@
> printf '%s\n' '";' >> $@
"printf" is not portable to older systems, but that ma
Micah Cowan <[EMAIL PROTECTED]> writes:
>> Make my src changes, create a "changeset"... And then I'm lost...
>
> Alright, so you can make your changes, and issue an "hg diff", and
> you've basically got what you used to do with svn.
That is not quite true, because with svn you could also do "svn
"Tony Godshall" <[EMAIL PROTECTED]> writes:
> OK, so let's go back to basics for a moment.
>
> wget's default behavior is to use all available bandwidth.
And so is the default behavior of curl, Firefox, Opera, and so on.
The expected behavior of a program that receives data over a TCP
stream is t
Micah Cowan <[EMAIL PROTECTED]> writes:
> FYI, I've removed the PATCHES file. Not because I don't think it's
> useful, but because the information needed updating (now that we're
> using Mercurial rather than Subversion), I expect it to be updated
> again from time to time, and the Wgiki seems to
"Tony Godshall" <[EMAIL PROTECTED]> writes:
>> My point remains that the maximum initial rate (however you define
>> "initial" in a protocol as unreliable as TCP/IP) can and will be
>> wrong in a large number of cases, especially on shared connections.
>
> Again, would an algorithm where the rate
Micah Cowan <[EMAIL PROTECTED]> writes:
> Among other things, version.c is now generated rather than
> parsed. Every time "make all" is run, which also means that "make
> all" will always relink the wget binary, even if there haven't been
> any changes.
I personally find that quite annoying. :-(
"Tony Godshall" <[EMAIL PROTECTED]> writes:
>> > available bandwidth and adjusts to that. The usefullness is in
>> > trying to be unobtrusive to other users.
>>
>> The problem is that Wget simply doesn't have enough information to be
>> unobtrusive. Currently available bandwidth can and does cha
Jim Wright <[EMAIL PROTECTED]> writes:
> I think there is still a case for attempting percent limiting. I
> agree with your point that we can not discover the full bandwidth of
> the link and adjust to that. The approach discovers the current
> available bandwidth and adjusts to that. The usefu
Jim Wright <[EMAIL PROTECTED]> writes:
>> - --limit-rate will find your version handy, but I want to hear from
>> them. :)
>
> I would appreciate and have use for such an option. We often access
> instruments in remote locations (think a tiny island in the Aleutians)
> where we share bandwidth wi
Micah Cowan <[EMAIL PROTECTED]> writes:
> It is actually illegal to specify byte values outside the range of
> ASCII characters in a URL, but it has long been historical practice
> to do so anyway. In most cases, the intended meaning was one of the
> latin character sets (usually latin1), so Wget
"Tony Lewis" <[EMAIL PROTECTED]> writes:
> The Mozilla community (with a large base of Win32 programmers)
> rejected an open-source package that met their needs better than
> other packages because it didn't have good enough Win32 support? Why
> didn't they just add in the Win32 support so that th
--- Begin Message ---
Hi, I am using wget 1.10.2 on Windows 2003 and have the same problem as Cantara.
The file system is NTFS.
Well, I find my problem is that I wrote the command in Scheduled Tasks like this:
wget -N -i D:\virus.update\scripts\kavurl.txt -r -nH -P
d:\virus.update\kaspersky
well, after "w
Micah Cowan <[EMAIL PROTECTED]> writes:
> As Josh points out, the question remains whether this should be our
> behavior; I vote yes, as command-line arguments should always override
> rc files, in general. Of course, these values could well have come from
> .wgetrc and not the command-line; but e
"control H" <[EMAIL PROTECTED]> writes:
> After a few hours of headache I found out my --post-data option
> didn't work as I expected because the data I send has to be
> URL-escaped. This is not mentioned both in the manpage and inline
> help. A remark would be helpful.
Note that, in general, it
Esin Andrey <[EMAIL PROTECTED]> writes:
> Hi!
> I have downloaded wget-1.10.2 sources and try to compile it.
> I have some warnings:
>
> init.c: In function ‘cmd_spec_prefer_family’:
> init.c:1193: warning: dereferencing type-punned pointer will break
> strict-aliasing rules
>
Micah Cowan <[EMAIL PROTECTED]> writes:
> I have a question: why do we attempt to generate absolute paths and
> such and CWD to those, instead of just doing the portable
> string-of-CWDs to get where we need to be?
I think the original reason was that absolute paths allow crossing
from any direct
Micah Cowan <[EMAIL PROTECTED]> writes:
> Actually, I was wrong though: sometimes mmap() _is_ failing for me
> (did just now), which of course means that everything is in resident
> memory.
I don't understand why mmapping a regular file would fail on Linux. What
error code are you getting?
(Wget tri
Micah Cowan <[EMAIL PROTECTED]> writes:
> Yes, but when mmap()ping with MEM_PRIVATE, once you actually start
> _using_ the mapped space, is there much of a difference?
As long as you don't write to the mapped region, there should be no
difference between shared and private mapped space -- that's
Micah Cowan <[EMAIL PROTECTED]> writes:
> I agree that it's probably a good idea to move HTML parsing to a model
> that doesn't require slurping everything into memory;
Note that Wget mmaps the file whenever possible, so it's not actually
allocated on the heap (slurped). You need some memory to
Micah Cowan <[EMAIL PROTECTED]> writes:
> - Automated packaging and package-testing
What packaging does this refer to exactly?
> - Automatic support for a wide variety of configuration and build
> scenarios, such as configuring or building from a location other than
> the source directory tr
Micah Cowan <[EMAIL PROTECTED]> writes:
>> I don't know. The reason directories are matched separately from
>> files is because files often *don't* match the pattern you've chosen
>> for directories. For example, -X/etc should exclude anything under
>> /etc, such as /etc/passwd, but also /etc/fo
Micah Cowan <[EMAIL PROTECTED]> writes:
>> Converting from Info to man is harder than it may seem. The script
>> that does it now is basically a hack that doesn't really work well
>> even for the small part of the manual that it tries to cover.
>
> I'd noticed. :)
>
> I haven't looked at the scri
Micah Cowan <[EMAIL PROTECTED]> writes:
> I think we should either be a "stub", or a fairly complete "manual"
> (and agree that the latter seems preferable); nothing half-way
> between: what we have now is a fairly incomplete manual.
Converting from Info to man is harder than it may seem. The sc
Micah Cowan <[EMAIL PROTECTED]> writes:
> Yes, but -R has a lesser degree of control over the sorts of
> pathnames that it can constrain: for instance, if one uses
> -Rmyprefix*, it will match files myprefix-foo.html and
> myprefix-bar.mp3; but it will also match notmyprefix.js, which is
> probabl
Micah Cowan <[EMAIL PROTECTED]> writes:
> There seems to be a bit of confusion. For one: we already have a third
> digit (when appropriate); cf 1.10.2.
>
> Second, as with many other software projects, especially GNU ones,
> the version numbers are _not_ decimal numbers. 1.11 does not follow
> 1.1
Micah Cowan <[EMAIL PROTECTED]> writes:
> Someone just asked on the #wget IRC channel if there was a way to
> exclude files with certain names, and I recommended -X, without
> realizing that that option excludes directories, not files.
>
> My question is: why do we allow users to exclude directori
Micah Cowan <[EMAIL PROTECTED]> writes:
> I would like for devs to be able to avoid the hassle of posting
> non-trivial changes they make to the wget-patches list. To my mind,
> there are two ways of accomplishing this:
>
> 1. Make wget-patches a list _only_ for submitting patches for
> considerat
Micah Cowan <[EMAIL PROTECTED]> writes:
>> Mauro and I are subscribed to it. The list served its purpose while
>> Wget was actively maintained. It's up to you whether to preserve it
>> or replace it with a bug tracker patch submission process.
>
> Given the low incidence of patch submission, is
Micah Cowan <[EMAIL PROTECTED]> writes:
> What is the status of the wget-patches list: is it being actively
> used/monitored? Does it still serve its original purpose?
Mauro and I are subscribed to it. The list served its purpose while
Wget was actively maintained. It's up to you whether to pre
Rich Cook <[EMAIL PROTECTED]> writes:
> On Jul 5, 2007, at 11:08 AM, Hrvoje Niksic wrote:
>
>> Rich Cook <[EMAIL PROTECTED]> writes:
>>
>>> Trouble is, it's undocumented as to how to free the resulting
>>> string. Do I call free on it?
>>
Rich Cook <[EMAIL PROTECTED]> writes:
> Trouble is, it's undocumented as to how to free the resulting
> string. Do I call free on it?
Yes. "Freshly allocated with malloc" in the function documentation
was supposed to indicate how to free the string.
"Virden, Larry W." <[EMAIL PROTECTED]> writes:
> "Tony Lewis" <[EMAIL PROTECTED]> writes:
>
>> Wget has an `aprintf' utility function that allocates the result on
> the heap. Avoids both buffer overruns and
>> arbitrary limits on file name length.
>
> If it uses the heap, then doesn't that open
"Tony Lewis" <[EMAIL PROTECTED]> writes:
> There is a buffer overflow in the following line of the proposed code:
>
> sprintf(filecopy, "\"%.2047s\"", file);
Wget has an `aprintf' utility function that allocates the result on
the heap. Avoids both buffer overruns and arbitrary limits on fil
Daniel Stenberg <[EMAIL PROTECTED]> writes:
> I'm pretty sure the original NTLM code I contributed to wget _had_
> the ability to deal with proxies (as I wrote the support for both
> host and proxy at the same time). It should be fairly easy to bring
> back.
It's easy to bring back the code itse
Micah Cowan <[EMAIL PROTECTED]> writes:
> The GNU Project has appointed me as the new maintainer for wget,
Welcome!
If you need assistance regarding the workings of the internals or
design decisions, please let me know and I'll gladly help. I haven't
had much time to participate lately, but hop
Adrian Sandor <[EMAIL PROTECTED]> writes:
> Thanks a lot Steven,
>
>> Apparently there's more than a little code in src/cookies.c which is
>> not ready for NULL values in the "attr" and "value" members of the
>> "cookie" structure.
>
> Does that mean wget is buggy or does brinkster break the cooki
"George Pavlov" <[EMAIL PROTECTED]> writes:
>> > Permanent cookies are supposed to be present in cookies.txt, and
>> > Wget will use them. Session cookies will be missing (regardless
>> > of how they were set) from the file and therefore will not be
>> > picked up by Wget.
>
> This is not entirel
Poppa Pump <[EMAIL PROTECTED]> writes:
> I actually do know the cookies that are set. What I'd like to do is
> add it to cookies.txt. I attempted to edit the file, but when I load
> the cookies, the ones I've added doesn't show. It only shows the
> ones saved by wget. I'm not even sure what the fo
Poppa Pump <[EMAIL PROTECTED]> writes:
> Now I also need to load 2 more cookie values, but these are set
> using Javascript. Does anyone know how to set those cookies. I can't
> seem to find any info on this. Thanks for your help.
Wget doesn't really distinguish the cookies set by Javascript from
Greg Lindahl <[EMAIL PROTECTED]> writes:
> Host: kpic1 is an HTTP/1.1 feature. So this is nonsensical.
The `Host' header was widely used with HTTP/1.0, which is how it
entered the HTTP/1.1 spec.
For other reasons, Wget should really upgrade to using HTTP/1.1.