[collectd] [PATCH] curl_xml plugin: Check for a curl_easy_perform() error first
The value of CURLINFO_RESPONSE_CODE isn't valid otherwise. Also, use the
symbolic name CURLE_OK in all plugins where appropriate.
---
 src/apache.c     |  2 +-
 src/ascent.c     |  2 +-
 src/bind.c       |  2 +-
 src/curl.c       |  2 +-
 src/curl_json.c  |  2 +-
 src/curl_xml.c   | 13 ++++++------
 src/nginx.c      |  2 +-
 src/write_http.c |  2 +-
 8 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/src/apache.c b/src/apache.c
index e562138..8458ce1 100644
--- a/src/apache.c
+++ b/src/apache.c
@@ -612,7 +612,7 @@ static int apache_read_host (user_data_t *user_data) /* {{{ */
   assert (st->curl != NULL);
 
   st->apache_buffer_fill = 0;
-  if (curl_easy_perform (st->curl) != 0)
+  if (curl_easy_perform (st->curl) != CURLE_OK)
   {
     ERROR ("apache: curl_easy_perform failed: %s",
            st->apache_curl_error);
diff --git a/src/ascent.c b/src/ascent.c
index 2378386..94a3938 100644
--- a/src/ascent.c
+++ b/src/ascent.c
@@ -597,7 +597,7 @@ static int ascent_read (void) /* {{{ */
   }
 
   ascent_buffer_fill = 0;
-  if (curl_easy_perform (curl) != 0)
+  if (curl_easy_perform (curl) != CURLE_OK)
   {
     ERROR ("ascent plugin: curl_easy_perform failed: %s",
            ascent_curl_error);
diff --git a/src/bind.c b/src/bind.c
index 7857a67..ddde840 100644
--- a/src/bind.c
+++ b/src/bind.c
@@ -1415,7 +1415,7 @@ static int bind_read (void) /* {{{ */
   }
 
   bind_buffer_fill = 0;
-  if (curl_easy_perform (curl) != 0)
+  if (curl_easy_perform (curl) != CURLE_OK)
   {
     ERROR ("bind plugin: curl_easy_perform failed: %s",
            bind_curl_error);
diff --git a/src/curl.c b/src/curl.c
index cb352bf..c6e2ae9 100644
--- a/src/curl.c
+++ b/src/curl.c
@@ -645,7 +645,7 @@ static int cc_read_page (web_page_t *wp) /* {{{ */
   wp->buffer_fill = 0;
 
   status = curl_easy_perform (wp->curl);
-  if (status != 0)
+  if (status != CURLE_OK)
   {
     ERROR ("curl plugin: curl_easy_perform failed with staus %i: %s",
            status, wp->curl_errbuf);
diff --git a/src/curl_json.c b/src/curl_json.c
index 1494327..2635772 100644
--- a/src/curl_json.c
+++ b/src/curl_json.c
@@ -798,7 +798,7 @@ static int cj_curl_perform (cj_t *db, CURL *curl) /* {{{ */
   curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &url);
 
   status = curl_easy_perform (curl);
-  if (status != 0)
+  if (status != CURLE_OK)
   {
     ERROR ("curl_json plugin: curl_easy_perform failed with status %i: %s (%s)",
            status, db->curl_errbuf, (url != NULL) ? url : "<null>");
diff --git a/src/curl_xml.c b/src/curl_xml.c
index 747e461..6ffad42 100644
--- a/src/curl_xml.c
+++ b/src/curl_xml.c
@@ -676,6 +676,12 @@ static int cx_curl_perform (cx_t *db, CURL *curl) /* {{{ */
   db->buffer_fill = 0;
 
   status = curl_easy_perform (curl);
+  if (status != CURLE_OK)
+  {
+    ERROR ("curl_xml plugin: curl_easy_perform failed with status %i: %s (%s)",
+           status, db->curl_errbuf, url);
+    return (-1);
+  }
 
   curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_URL, &url);
   curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &rc);
@@ -688,13 +694,6 @@ static int cx_curl_perform (cx_t *db, CURL *curl) /* {{{ */
     return (-1);
   }
 
-  if (status != 0)
-  {
-    ERROR ("curl_xml plugin: curl_easy_perform failed with status %i: %s (%s)",
-           status, db->curl_errbuf, url);
-    return (-1);
-  }
-
   ptr = db->buffer;
 
   status = cx_parse_stats_xml(BAD_CAST ptr, db);
diff --git a/src/nginx.c b/src/nginx.c
index ecfb307..7568a2c 100644
--- a/src/nginx.c
+++ b/src/nginx.c
@@ -215,7 +215,7 @@ static int nginx_read (void)
     return (-1);
 
   nginx_buffer_len = 0;
-  if (curl_easy_perform (curl) != 0)
+  if (curl_easy_perform (curl) != CURLE_OK)
   {
     WARNING ("nginx plugin: curl_easy_perform failed: %s", nginx_curl_error);
     return (-1);
diff --git a/src/write_http.c b/src/write_http.c
index 6b1c64a..7f5943a 100644
--- a/src/write_http.c
+++ b/src/write_http.c
@@ -88,7 +88,7 @@ static int wh_send_buffer (wh_callback_t *cb) /* {{{ */
   curl_easy_setopt (cb->curl, CURLOPT_POSTFIELDS, cb->send_buffer);
 
   status = curl_easy_perform (cb->curl);
-  if (status != 0)
+  if (status != CURLE_OK)
   {
     ERROR ("write_http plugin: curl_easy_perform failed with status %i: %s",
-- 
1.7.10
___
collectd mailing list
collectd@verplant.org
http://mailman.verplant.org/listinfo/collectd
Re: Moving some files?
On Wed, Feb 06, 2013 at 11:17:21PM +0100, Daniel Stenberg wrote:

Thanks, I took your advice on a few of those and I'll move off the
MacOSX-Framework file too once Nick's patch is in.

IIRC, the Android.mk in its new location should still be automatically picked
up by the Android build system. I've just committed a small tweak to reflect
the move, but I'm not doing Android development at the moment so unfortunately
I'm not set up to test it.

Dan
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html
Re: libcurl: Problem when connect to a shared hosting server over ftp+ssl
On Tue, Feb 05, 2013 at 02:16:01PM +0700, chu ngoc hung wrote:
> Here is my log (when disable host/peer):
[...]
> * FTP 0x282dda0 state change from QUOTE to PASV
> * Connect data stream passively
> 229 Extended Passive mode OK (|||38012|)
> *   Trying 69.195.91.50...
> * Operation timed out
> * couldn't connect to host
> * got positive EPSV response, but can't connect. Disabling EPSV

This is a sign of a likely firewall, either at the client or host end. Strict
firewalls don't let arbitrary TCP connections through, and since this ftp
connection is encrypted, deep packet inspection doesn't help. Try the
--ftp-ssl-ccc option to send the control connection in the clear, or try the
--ftp-port option to make the data connection be initiated from the server
instead of the client.

Dan
Re: [collectd] [PATCH] Allow out-of-tree builds
On Sun, Feb 03, 2013 at 01:17:24PM +0100, Florian Forster wrote:
> > +AM_CPPFLAGS += -I$(top_builddir)/src/libcollectdclient/collectd
>
> Since binaries using the library need to get linked to it, and not all
> binaries are, I think I'd rather add this to the binary specific CPP flags,
> i.e.:
>
>   collectd{_nagios,ctl,_tg}_CPPFLAGS = $(AM_CPPFLAGS) -I$(top_builddir)/src/libcollectdclient/collectd

That makes perfect sense.

> I was wondering why we don't run into the same problem with the
> autogenerated src/config.h. I believe the reason is the AC_CONFIG_HEADERS()
> macro which, I believe, adds $(top_builddir)/src to the include list.

That sounds like a reasonable assumption. It does look like it's happening
automatically.

> My attempts to solving this problem using this macro failed, however,
> because it seems that the semantic of AC_CONFIG_HEADERS() and
> AC_CONFIG_FILES() differ quite dramatically: The former doesn't expand the
> @FOO@ placeholders, making the macro unusable for the use-case of
> lcc_features.h. It seems, because I fail to parse the documentation. It
> reads:
>
>   Make AC_OUTPUT create the file(s) in the blank-or-newline-separated list
>   header containing C preprocessor #define statements, and replace '@DEFS@'
>   in generated files with -DHAVE_CONFIG_H instead of the value of DEFS.
>
> Do you happen to have an idea if I'm doing something wrong here? I think
> adding the builddir to the include paths is probably going to be the easiest
> and most straight forward solution and I'll do that for now.

If your hypothesis is correct, then I don't see another way. It's automake
that's doing this, not autoconf, which means that it must be treating some
macros specially. If it doesn't give AC_CONFIG_FILES this special treatment,
then that path will have to be added manually. Which I don't think is an ugly
solution.

Dan
[collectd] [PATCH] Add protection from infinite redirect loops to curl-using plugins
---
 src/apache.c | 1 +
 src/ascent.c | 1 +
 src/bind.c   | 1 +
 src/curl.c   | 1 +
 src/nginx.c  | 1 +
 5 files changed, 5 insertions(+)

diff --git a/src/apache.c b/src/apache.c
index 202b73c..e562138 100644
--- a/src/apache.c
+++ b/src/apache.c
@@ -426,6 +426,7 @@ static int init_host (apache_t *st) /* {{{ */
   curl_easy_setopt (st->curl, CURLOPT_URL, st->url);
   curl_easy_setopt (st->curl, CURLOPT_FOLLOWLOCATION, 1L);
+  curl_easy_setopt (st->curl, CURLOPT_MAXREDIRS, 50L);
 
   if (st->verify_peer != 0)
   {
diff --git a/src/ascent.c b/src/ascent.c
index 3a7c393..2378386 100644
--- a/src/ascent.c
+++ b/src/ascent.c
@@ -562,6 +562,7 @@ static int ascent_init (void) /* {{{ */
   curl_easy_setopt (curl, CURLOPT_URL, url);
   curl_easy_setopt (curl, CURLOPT_FOLLOWLOCATION, 1L);
+  curl_easy_setopt (curl, CURLOPT_MAXREDIRS, 50L);
 
   if ((verify_peer == NULL) || IS_TRUE (verify_peer))
     curl_easy_setopt (curl, CURLOPT_SSL_VERIFYPEER, 1L);
diff --git a/src/bind.c b/src/bind.c
index 288949a..7857a67 100644
--- a/src/bind.c
+++ b/src/bind.c
@@ -1399,6 +1399,7 @@ static int bind_init (void) /* {{{ */
   curl_easy_setopt (curl, CURLOPT_ERRORBUFFER, bind_curl_error);
   curl_easy_setopt (curl, CURLOPT_URL, (url != NULL) ? url : BIND_DEFAULT_URL);
   curl_easy_setopt (curl, CURLOPT_FOLLOWLOCATION, 1L);
+  curl_easy_setopt (curl, CURLOPT_MAXREDIRS, 50L);
 
   return (0);
 } /* }}} int bind_init */
diff --git a/src/curl.c b/src/curl.c
index 69a5b95..30d6e45 100644
--- a/src/curl.c
+++ b/src/curl.c
@@ -378,6 +378,7 @@ static int cc_page_init_curl (web_page_t *wp) /* {{{ */
   curl_easy_setopt (wp->curl, CURLOPT_ERRORBUFFER, wp->curl_errbuf);
   curl_easy_setopt (wp->curl, CURLOPT_URL, wp->url);
   curl_easy_setopt (wp->curl, CURLOPT_FOLLOWLOCATION, 1L);
+  curl_easy_setopt (wp->curl, CURLOPT_MAXREDIRS, 50L);
 
   if (wp->user != NULL)
   {
diff --git a/src/nginx.c b/src/nginx.c
index b76f25b..ecfb307 100644
--- a/src/nginx.c
+++ b/src/nginx.c
@@ -144,6 +144,7 @@ static int init (void)
   }
 
   curl_easy_setopt (curl, CURLOPT_FOLLOWLOCATION, 1L);
+  curl_easy_setopt (curl, CURLOPT_MAXREDIRS, 50L);
 
   if ((verify_peer == NULL) || IS_TRUE (verify_peer))
   {
-- 
1.7.10
[collectd] [PATCH] curl* plugins: Added support for POST and arbitrary headers
These plugins can now be used for things like SOAP or XML-RPC calls.
---
 src/collectd.conf.pod | 41 -
 src/curl.c            | 29 +
 src/curl_json.c       | 28
 src/curl_xml.c        | 28
 4 files changed, 105 insertions(+), 21 deletions(-)

diff --git a/src/collectd.conf.pod b/src/collectd.conf.pod
index 70bc997..e70caad 100644
--- a/src/collectd.conf.pod
+++ b/src/collectd.conf.pod
@@ -944,6 +944,19 @@ File that holds one or more SSL certificates. If you want to use HTTPS you will
 possibly need this option. What CA certificates come bundled with C<libcurl>
 and are checked by default depends on the distribution you use.
 
+=item B<Header> I<Header>
+
+A HTTP header to add to the request. Multiple headers are added if this option
+is specified more than once.
+
+=item B<Post> I<Body>
+
+Specifies that the HTTP operation should be a POST instead of a GET. The
+complete data to be posted is given as the argument. This option will usually
+need to be accompanied by a B<Header> option to set an appropriate
+C<Content-Type> for the post body (e.g. to
+C<application/x-www-form-urlencoded>).
+
 =item B<MeasureResponseTime> B<true>|B<false>
 
 Measure response time for the request. If this setting is enabled, B<Match>
@@ -1002,31 +1015,15 @@ The following options are valid within B<URL> blocks:
 Sets the plugin instance to I<Instance>.
 
 =item B<User> I<Name>
-
-Username to use if authorization is required to read the page.
-
 =item B<Password> I<Password>
-
-Password to use if authorization is required to read the page.
-
 =item B<VerifyPeer> B<true>|B<false>
-
-Enable or disable peer SSL certificate verification. See
-L<http://curl.haxx.se/docs/sslcerts.html> for details. Enabled by default.
-
 =item B<VerifyHost> B<true>|B<false>
-
-Enable or disable peer host name verification. If enabled, the plugin checks if
-the C<Common Name> or a C<Subject Alternate Name> field of the SSL certificate
-matches the host name provided by the B<URL> option. If this identity check
-fails, the connection is aborted. Obviously, only works when connecting to a
-SSL enabled server. Enabled by default.
-
 =item B<CACert> I<file>
+=item B<Header> I<Header>
+=item B<Post> I<Body>
 
-File that holds one or more SSL certificates. If you want to use HTTPS you will
-possibly need this option. What CA certificates come bundled with C<libcurl>
-and are checked by default depends on the distribution you use.
+These options behave exactly equivalent to the appropriate options of the
+I<cURL> plugin. Please see there for a detailed description.
 
 =back
@@ -1100,9 +1097,11 @@ empty string (no plugin instance).
 
 =item B<VerifyPeer> B<true>|B<false>
 =item B<VerifyHost> B<true>|B<false>
 =item B<CACert> I<CA Cert File>
+=item B<Header> I<Header>
+=item B<Post> I<Body>
 
 These options behave exactly equivalent to the appropriate options of the
-I<cURL> and I<cURL-JSON> plugins. Please see there for a detailed description.
+I<cURL> plugin. Please see there for a detailed description.
 
 =item E<lt>B<XPath> I<XPath-expression>E<gt>
diff --git a/src/curl.c b/src/curl.c
index 30d6e45..cb352bf 100644
--- a/src/curl.c
+++ b/src/curl.c
@@ -60,6 +60,8 @@ struct web_page_s /* {{{ */
   int verify_peer;
   int verify_host;
   char *cacert;
+  struct curl_slist *headers;
+  char *post_body;
   int response_time;
 
   CURL *curl;
@@ -148,6 +150,8 @@ static void cc_web_page_free (web_page_t *wp) /* {{{ */
   sfree (wp->pass);
   sfree (wp->credentials);
   sfree (wp->cacert);
+  sfree (wp->post_body);
+  curl_slist_free_all (wp->headers);
 
   sfree (wp->buffer);
@@ -173,6 +177,23 @@ static int cc_config_add_string (const char *name, char **dest, /* {{{ */
   return (0);
 } /* }}} int cc_config_add_string */
 
+static int cc_config_append_string (const char *name, struct curl_slist **dest, /* {{{ */
+    oconfig_item_t *ci)
+{
+  if ((ci->values_num != 1) || (ci->values[0].type != OCONFIG_TYPE_STRING))
+  {
+    WARNING ("curl plugin: `%s' needs exactly one string argument.", name);
+    return (-1);
+  }
+
+  *dest = curl_slist_append(*dest, ci->values[0].value.string);
+  if (*dest == NULL)
+    return (-1);
+
+  return (0);
+} /* }}} int cc_config_append_string */
+
+
 static int cc_config_set_boolean (const char *name, int *dest, /* {{{ */
     oconfig_item_t *ci)
 {
@@ -405,6 +426,10 @@ static int cc_page_init_curl (web_page_t *wp) /* {{{ */
       wp->verify_host ? 2L : 0L);
   if (wp->cacert != NULL)
     curl_easy_setopt (wp->curl, CURLOPT_CAINFO, wp->cacert);
+  if (wp->headers != NULL)
+    curl_easy_setopt (wp->curl, CURLOPT_HTTPHEADER, wp->headers);
+  if (wp->post_body != NULL)
+    curl_easy_setopt (wp->curl, CURLOPT_POSTFIELDS, wp->post_body);
 
   return (0);
 } /* }}} int cc_page_init_curl */
@@ -466,6 +491,10 @@ static int cc_config_add_page (oconfig_item_t *ci) /* {{{ */
     else if (strcasecmp ("Match", child->key) == 0)
       /* Be liberal with failing matches => don't set `status'. */
       cc_config_add_match
[collectd] A couple of collectd crashes
I managed to get collectd to segfault in a couple of places while playing with
it a bit. The first is in the curl_xml module when the XPATH expression doesn't
quite match the input. The crash occurs on line 407 when
instance_node->nodeTab[0] is dereferenced. At this point, all members of
instance_node are 0, so dereferencing the array isn't a good idea. This patch
fixes the problem, although I'm not sure if this particular case actually
deserves its own error message:

diff --git a/src/curl_xml.c b/src/curl_xml.c
index 2a36608..2b1d247 100644
--- a/src/curl_xml.c
+++ b/src/curl_xml.c
@@ -385,7 +385,7 @@ static int cx_handle_instance_xpath (xmlXPathContextPtr xpath_ctx, /* {{{ */
     instance_node = instance_node_obj->nodesetval;
     tmp_size = (instance_node) ? instance_node->nodeNr : 0;
 
-    if ( (tmp_size == 0) && (is_table) )
+    if (tmp_size == 0)
     {
       WARNING ("curl_xml plugin: relative xpath expression for 'InstanceFrom' \"%s\" doesn't match

The second problem occurred once in stop_write_threads() during shutdown, in
this loop:

  for (q = write_queue_head; q != NULL; q = q->next)
  {
    plugin_value_list_free (q->vl);
    sfree (q);
    i++;
  }

Once q has been freed by sfree(), it's no longer safe to dereference in the
for statement. I'm attaching a fix for that. On a side note, the check for
NULL in sfree() isn't actually necessary--ANSI C specifies that free() must be
safe when given a NULL pointer.

Dan

From 43ed73d243635a86e5e72da434092f578d593269 Mon Sep 17 00:00:00 2001
From: Dan Fandrich <d...@coneharvesters.com>
Date: Mon, 4 Feb 2013 23:59:41 +0100
Subject: [PATCH] Fix a NULL pointer dereference during shutdown

---
 src/plugin.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/plugin.c b/src/plugin.c
index 7037234..942f8bf 100644
--- a/src/plugin.c
+++ b/src/plugin.c
@@ -810,10 +810,12 @@ static void stop_write_threads (void) /* {{{ */
   pthread_mutex_lock (&write_lock);
 
   i = 0;
-  for (q = write_queue_head; q != NULL; q = q->next)
+  for (q = write_queue_head; q != NULL; )
   {
+    write_queue_t *q1 = q;
     plugin_value_list_free (q->vl);
-    sfree (q);
+    q = q->next;
+    sfree (q1);
     i++;
   }
   write_queue_head = NULL;
-- 
1.7.10
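The pattern in that fix can be shown in isolation. Below is a minimal,
self-contained sketch of the traverse-while-freeing idiom; the node type and
function names are made up for illustration (they stand in for collectd's
write_queue_t, not its actual API):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical singly-linked list node, standing in for write_queue_t. */
struct node
{
  struct node *next;
};

/* Free every node in the list. The next pointer is saved *before* the
 * current node is freed, so the loop never dereferences freed memory. */
static size_t free_list (struct node *head)
{
  size_t freed = 0;
  while (head != NULL)
  {
    struct node *next = head->next; /* save before free */
    free (head);
    head = next;
    freed++;
  }
  return freed;
}
```

The original loop's bug was exactly that it read `q->next` in the for-step
after `sfree (q)` had already run; hoisting the read above the free is the
whole fix.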
Re: [collectd] [PATCH] curl* plugins: Added support for POST and arbitrary headers
On Mon, Feb 04, 2013 at 02:36:17PM -0800, Kimo Rosenbaum wrote:
> Would you mind creating a pull request
> (https://github.com/collectd/collectd/pulls)? (or attach as an attachment)
> I would like to try this patch but I think my email client is eating it up.

Here it is as an attachment.

Dan

From 70367b8a715d6e865066c5b0d8ca0ef23a9c46ff Mon Sep 17 00:00:00 2001
From: Dan Fandrich <d...@coneharvesters.com>
Date: Sat, 2 Feb 2013 23:50:13 +0100
Subject: [PATCH] curl* plugins: Added support for POST and arbitrary headers

These plugins can now be used for things like SOAP or XML-RPC calls.
---
 src/collectd.conf.pod | 41 -
 src/curl.c            | 29 +
 src/curl_json.c       | 28
 src/curl_xml.c        | 28
 4 files changed, 105 insertions(+), 21 deletions(-)
Re: [collectd] A couple of collectd crashes
On Tue, Feb 05, 2013 at 12:02:17AM +0100, Dan Fandrich wrote:
> I managed to get collectd to segfault in a couple of places while playing
> with it a bit. The first is in the curl_xml module when the XPATH expression
> doesn't quite match the input.

I've spotted some other questionable code around there.
cx_handle_instance_xpath() line 362 unconditionally clears the block starting
at vl->type_instance, but only at line 368 does it check if that's a NULL
pointer. Also, line 387 sets tmp_size to 0 if instance_node is NULL, then
exits the function on line 395. Then, way down at line 420, it checks if
instance_node is NULL and treats it differently, but it cannot be NULL at that
point. Clearly, the logic in this module needs some love.

I don't quite feel so bad now in failing to get my XPATH configuration right
on the first try!

Dan
[collectd] [PATCH] curl's numeric options are always at minimum long, never int
This can affect portability to some architectures.
---
 src/apache.c     | 12 ++++++------
 src/ascent.c     | 12 ++++++------
 src/bind.c       |  4 ++--
 src/curl.c       |  8 ++++----
 src/curl_json.c  |  6 +++---
 src/curl_xml.c   |  2 +-
 src/nginx.c      | 12 ++++++------
 src/write_http.c |  6 +++---
 8 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/src/apache.c b/src/apache.c
index c31dd87..202b73c 100644
--- a/src/apache.c
+++ b/src/apache.c
@@ -373,7 +373,7 @@ static int init_host (apache_t *st) /* {{{ */
     return (-1);
   }
 
-  curl_easy_setopt (st->curl, CURLOPT_NOSIGNAL, 1);
+  curl_easy_setopt (st->curl, CURLOPT_NOSIGNAL, 1L);
   curl_easy_setopt (st->curl, CURLOPT_WRITEFUNCTION, apache_curl_callback);
   curl_easy_setopt (st->curl, CURLOPT_WRITEDATA, st);
@@ -425,24 +425,24 @@ static int init_host (apache_t *st) /* {{{ */
   }
 
   curl_easy_setopt (st->curl, CURLOPT_URL, st->url);
-  curl_easy_setopt (st->curl, CURLOPT_FOLLOWLOCATION, 1);
+  curl_easy_setopt (st->curl, CURLOPT_FOLLOWLOCATION, 1L);
 
   if (st->verify_peer != 0)
   {
-    curl_easy_setopt (st->curl, CURLOPT_SSL_VERIFYPEER, 1);
+    curl_easy_setopt (st->curl, CURLOPT_SSL_VERIFYPEER, 1L);
   }
   else
   {
-    curl_easy_setopt (st->curl, CURLOPT_SSL_VERIFYPEER, 0);
+    curl_easy_setopt (st->curl, CURLOPT_SSL_VERIFYPEER, 0L);
   }
 
   if (st->verify_host != 0)
   {
-    curl_easy_setopt (st->curl, CURLOPT_SSL_VERIFYHOST, 2);
+    curl_easy_setopt (st->curl, CURLOPT_SSL_VERIFYHOST, 2L);
   }
   else
   {
-    curl_easy_setopt (st->curl, CURLOPT_SSL_VERIFYHOST, 0);
+    curl_easy_setopt (st->curl, CURLOPT_SSL_VERIFYHOST, 0L);
   }
 
   if (st->cacert != NULL)
diff --git a/src/ascent.c b/src/ascent.c
index 993e480..3a7c393 100644
--- a/src/ascent.c
+++ b/src/ascent.c
@@ -539,7 +539,7 @@ static int ascent_init (void) /* {{{ */
     return (-1);
   }
 
-  curl_easy_setopt (curl, CURLOPT_NOSIGNAL, 1);
+  curl_easy_setopt (curl, CURLOPT_NOSIGNAL, 1L);
   curl_easy_setopt (curl, CURLOPT_WRITEFUNCTION, ascent_curl_callback);
   curl_easy_setopt (curl, CURLOPT_USERAGENT, PACKAGE_NAME"/"PACKAGE_VERSION);
   curl_easy_setopt (curl, CURLOPT_ERRORBUFFER, ascent_curl_error);
@@ -561,17 +561,17 @@ static int ascent_init (void) /* {{{ */
   }
 
   curl_easy_setopt (curl, CURLOPT_URL, url);
-  curl_easy_setopt (curl, CURLOPT_FOLLOWLOCATION, 1);
+  curl_easy_setopt (curl, CURLOPT_FOLLOWLOCATION, 1L);
 
   if ((verify_peer == NULL) || IS_TRUE (verify_peer))
-    curl_easy_setopt (curl, CURLOPT_SSL_VERIFYPEER, 1);
+    curl_easy_setopt (curl, CURLOPT_SSL_VERIFYPEER, 1L);
   else
-    curl_easy_setopt (curl, CURLOPT_SSL_VERIFYPEER, 0);
+    curl_easy_setopt (curl, CURLOPT_SSL_VERIFYPEER, 0L);
 
   if ((verify_host == NULL) || IS_TRUE (verify_host))
-    curl_easy_setopt (curl, CURLOPT_SSL_VERIFYHOST, 2);
+    curl_easy_setopt (curl, CURLOPT_SSL_VERIFYHOST, 2L);
   else
-    curl_easy_setopt (curl, CURLOPT_SSL_VERIFYHOST, 0);
+    curl_easy_setopt (curl, CURLOPT_SSL_VERIFYHOST, 0L);
 
   if (cacert != NULL)
     curl_easy_setopt (curl, CURLOPT_CAINFO, cacert);
diff --git a/src/bind.c b/src/bind.c
index 5b7d7a0..288949a 100644
--- a/src/bind.c
+++ b/src/bind.c
@@ -1393,12 +1393,12 @@ static int bind_init (void) /* {{{ */
     return (-1);
   }
 
-  curl_easy_setopt (curl, CURLOPT_NOSIGNAL, 1);
+  curl_easy_setopt (curl, CURLOPT_NOSIGNAL, 1L);
   curl_easy_setopt (curl, CURLOPT_WRITEFUNCTION, bind_curl_callback);
   curl_easy_setopt (curl, CURLOPT_USERAGENT, PACKAGE_NAME"/"PACKAGE_VERSION);
   curl_easy_setopt (curl, CURLOPT_ERRORBUFFER, bind_curl_error);
   curl_easy_setopt (curl, CURLOPT_URL, (url != NULL) ? url : BIND_DEFAULT_URL);
-  curl_easy_setopt (curl, CURLOPT_FOLLOWLOCATION, 1);
+  curl_easy_setopt (curl, CURLOPT_FOLLOWLOCATION, 1L);
 
   return (0);
 } /* }}} int bind_init */
diff --git a/src/curl.c b/src/curl.c
index 2160b98..69a5b95 100644
--- a/src/curl.c
+++ b/src/curl.c
@@ -370,14 +370,14 @@ static int cc_page_init_curl (web_page_t *wp) /* {{{ */
     return (-1);
   }
 
-  curl_easy_setopt (wp->curl, CURLOPT_NOSIGNAL, 1);
+  curl_easy_setopt (wp->curl, CURLOPT_NOSIGNAL, 1L);
   curl_easy_setopt (wp->curl, CURLOPT_WRITEFUNCTION, cc_curl_callback);
   curl_easy_setopt (wp->curl, CURLOPT_WRITEDATA, wp);
   curl_easy_setopt (wp->curl, CURLOPT_USERAGENT, PACKAGE_NAME"/"PACKAGE_VERSION);
   curl_easy_setopt (wp->curl, CURLOPT_ERRORBUFFER, wp->curl_errbuf);
   curl_easy_setopt (wp->curl, CURLOPT_URL, wp->url);
-  curl_easy_setopt (wp->curl, CURLOPT_FOLLOWLOCATION, 1);
+  curl_easy_setopt (wp->curl, CURLOPT_FOLLOWLOCATION, 1L);
 
   if (wp->user != NULL)
   {
@@ -399,9 +399,9 @@ static int cc_page_init_curl (web_page_t *wp) /* {{{ */
     curl_easy_setopt (wp->curl, CURLOPT_USERPWD, wp->credentials);
   }
 
-  curl_easy_setopt
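The portability hazard behind this patch is the usual varargs one:
curl_easy_setopt() reads its third argument out of the variadic list as a long
(or a pointer), so passing a bare int literal is undefined behaviour wherever
sizeof(int) != sizeof(long). A self-contained sketch of the same mechanism,
using a hypothetical stand-in variadic function rather than libcurl itself:

```c
#include <assert.h>
#include <stdarg.h>

/* Stand-in for curl_easy_setopt: a variadic function that, like libcurl,
 * pulls its numeric argument back out as a long. */
static long read_long_arg (int opt, ...)
{
  va_list ap;
  long value;

  (void) opt;
  va_start (ap, opt);
  value = va_arg (ap, long); /* the callee assumes a long was passed */
  va_end (ap);
  return value;
}

/* Correct usage: the 1L literal really is a long, so va_arg(ap, long) is
 * well defined on every platform. Passing a bare 1 only happens to work on
 * platforms where int and long share a size and passing convention, which
 * is exactly why the patch adds the L suffix everywhere. */
```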
[collectd] [PATCH] Allow out-of-tree builds
The generated header file lcc_features.h and collectd.h cause problems
otherwise.
---
 src/Makefile.am                   | 1 +
 src/libcollectdclient/Makefile.am | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/Makefile.am b/src/Makefile.am
index f31c176..bd00029 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -16,6 +16,7 @@ AM_CPPFLAGS += -DPIDFILE='"${localstatedir}/run/${PACKAGE_NAME}.pid"'
 endif
 AM_CPPFLAGS += -DPLUGINDIR='"${pkglibdir}"'
 AM_CPPFLAGS += -DPKGDATADIR='"${pkgdatadir}"'
+AM_CPPFLAGS += -I$(top_builddir)/src/libcollectdclient/collectd
 
 sbin_PROGRAMS = collectd collectdmon
 bin_PROGRAMS = collectd-nagios collectdctl collectd-tg
diff --git a/src/libcollectdclient/Makefile.am b/src/libcollectdclient/Makefile.am
index 9fbf0d1..1d4dff5 100644
--- a/src/libcollectdclient/Makefile.am
+++ b/src/libcollectdclient/Makefile.am
@@ -11,7 +11,7 @@ nodist_pkgconfig_DATA = libcollectdclient.pc
 BUILT_SOURCES = collectd/lcc_features.h
 
 libcollectdclient_la_SOURCES = client.c network.c network_buffer.c
-libcollectdclient_la_CPPFLAGS = $(AM_CPPFLAGS)
+libcollectdclient_la_CPPFLAGS = $(AM_CPPFLAGS) -I$(top_builddir)/src/libcollectdclient/collectd -I$(top_srcdir)/src
 libcollectdclient_la_LDFLAGS = -version-info 1:0:0
 libcollectdclient_la_LIBADD =
 if BUILD_WITH_LIBGCRYPT
-- 
1.7.10
Re: [Mageia-dev] Cleaning up init
On Thu, Jan 31, 2013 at 10:00:36AM +, Colin Guthrie wrote:
> While I'm pretty sure no such init script exists and would break in other
> ways, it's still a valid point. I've changed it to "{}" now.

True, but if we haven't told our users what their symbolic links should look
like, we can't be going about deleting the wrong files.

> Anyone else have any comments on this?

The revised solution instead breaks when the link contains double quotes. How
about this instead:

  find -L /etc/rc.d/rc{0,1,2,3,4,5,6,7}.d -type l -exec rm -f {} +

Dan
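That one-liner can be sanity-checked in a scratch directory before pointing it
at the real rc.d tree (the paths below are a temp dir, not /etc/rc.d): with -L,
find follows symlinks, so a dangling link still tests as -type l while a
healthy link resolves to its target's type and survives, and `-exec ... {} +`
passes the names as real arguments, immune to spaces and shell metacharacters.

```shell
# Scratch demonstration of: find -L DIR -type l -exec rm -f {} +
dir=$(mktemp -d)
touch "$dir/target"
ln -s "$dir/target"  "$dir/S01good"           # healthy link: kept
ln -s "$dir/missing" "$dir/S02dangling"       # dangling link: removed
ln -s "$dir/missing" "$dir/S03 with spaces"   # dangling, awkward name: removed

# -L makes find evaluate -type against the link target, so only links that
# cannot be resolved still look like symlinks; -exec {} + avoids any shell
# re-parsing of the filenames.
find -L "$dir" -type l -exec rm -f {} +

ls "$dir"
```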
Re: [Mageia-dev] Cleaning up init
On Tue, Jan 29, 2013 at 10:19:46AM +, Colin Guthrie wrote:
> find /etc/rc.d/rc{0,1,2,3,4,5,6,7}.d -type l -exec sh -c 'if [ ! -e {} ]; then rm -f {}; fi' \;
>
> This does much the same job, but find is already used in multiple places so
> probably a better solution even if the command is more convoluted! If anyone
> spots any issues with this, please shout.

This solution has a problem handling files with embedded spaces and special
shell characters. It could potentially be bad if run on a system where a link
such as /etc/rc.d/rc0.d/S01Super * Star Program exists.

Dan
Re: nfs mount succeeds but apps don't find the files
On Sat, Jan 26, 2013 at 06:41:02PM +0100, Tsjoecha wrote:
> The mount succeeds perfectly, if I go in the terminal to /sdcard/Pictures
> and execute the ls command, I can see all my pics from my nas-device.
> Though, when starting apps, like EZ File Explorer, the folder seems to be
> empty. Other apps don't see the files neither. What can I do to resolve this
> issue? I'm using BusyBox 1.20.2

It sounds like it could be a file/mount permissions issue.

Dan
___
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox
Re: libcurl failed to create/upload file with digest authentication when header If-None-Match: * is added
On Fri, Jan 25, 2013 at 06:38:49PM +, Yingping Lu wrote:
> We'd like to make sure we don't overwrite an existing object when we upload
> a file to a server, so we put a header "If-None-Match: *", however, the
> upload operation always failed with digest authentication. I tracked the
> code a bit and found out that libcurl was sending two requests, the first
> request had content-length of 0, the server side responded with creating the
> object. The second request had the real length, but the operation failed
> since the request had header "If-None-Match: *", and the object was created
> by the first request, so the server side responded with http code 413, i.e.
> precondition failed. Is this the server's issue or libcurl's issue? Thanks.

Can you show actual options used and output trace that show what you're
seeing? There are some code paths where libcurl legitimately sends more than
one request, so it's important to know whether the behaviour you're seeing is
expected in this case or not. It sounds like not, but we need more details to
see what could be going on.

Dan
Re: Simple HTTP PUT - response content printed to screen
On Fri, Jan 25, 2013 at 05:37:11PM -0500, Joe Lorenz wrote:
> I'm having this problem when using libcurl in my own (Linux) app, but I can
> demonstrate the same problem using the curl tool. If a simple HTTP file
> upload (presumably a 'PUT'), and the file gets created at the destination,
> the response content is printed to screen:

Writing output to stdout is libcurl's default behaviour. If that's not what
you want, then install a CURLOPT_WRITEFUNCTION handler and do something else
with the data.

Dan
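For reference, a CURLOPT_WRITEFUNCTION handler is just a callback with the
shape below. This is a minimal sketch that silently discards the response body
instead of printing it; the function name is made up for illustration, and the
setopt calls that would install it are shown in a comment since they need
libcurl proper:

```c
#include <assert.h>
#include <stddef.h>

/* Matches the prototype libcurl expects for CURLOPT_WRITEFUNCTION.
 * Returning size * nmemb tells libcurl the whole chunk was consumed;
 * returning anything less aborts the transfer. */
static size_t discard_body (char *ptr, size_t size, size_t nmemb, void *userdata)
{
  (void) ptr;
  (void) userdata;
  return size * nmemb; /* claim the data was handled, print nothing */
}

/* Installation would look like this (requires <curl/curl.h>):
 *
 *   curl_easy_setopt (curl, CURLOPT_WRITEFUNCTION, discard_body);
 *   curl_easy_setopt (curl, CURLOPT_WRITEDATA, NULL);
 */
```

The userdata pointer set via CURLOPT_WRITEDATA is handed back as the fourth
argument, so the same callback can instead append into a caller-owned buffer.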
Re: Simple HTTP PUT - response content printed to screen
On Sat, Jan 26, 2013 at 06:54:25AM -0500, Joe Lorenz wrote: Oh. I hadn't considered that since a 201 or a 200 response doesn't get printed and nothing prints out for a GET or file transfers with other protocols. libcurl prints whatever is sent for (most) status codes; after all, that's generally the reason someone does a GET on a resource--to obtain the data there. Some HTTP status codes don't include a body, so naturally there's nothing to print there. Try taking a look at the headers in the cases where nothing is printed--you'll probably see a Content-Length: 0 (or equivalent) in the response. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: LibCurl Time out Issue
On Fri, Jan 25, 2013 at 05:39:16PM +0530, Venkatesh Prabu Narayanan wrote: I am new to this forum. I am using libcurl to download files from the server to the client. I am using the following options in my curl connection in client side, curl_easy_setopt(curlPtr, CURLOPT_LOW_SPEED_LIMIT, 1); curl_easy_setopt(curlPtr, CURLOPT_LOW_SPEED_TIME, 360L); curl_easy_setopt(curlPtr, CURLOPT_IGNORE_CONTENT_LENGTH, 1); curl_easy_setopt(curlPtr, CURLOPT_TCP_NODELAY, 1); Even though I have made it to time out only if no bytes was received for a period of 6 minutes (360 seconds), the operation seems to be timed out even after receiving response bytes within the stipulated time. [...] * Operation timed out after 36 milliseconds with 205512 bytes received * Closing connection #0 * Timeout was reached This is what would be expected if the client sent 205512 bytes and then the server stopped responding for 360 seconds. You didn't say how long it was from the start of the transfer before this timeout was reached, or how you know that the response bytes were actually being received. A network sniffer would tell you where the problem lies. Dan I am using the libcurl version 7.19.3. I know it is an older version. But please let me know what is went wrong here. Does upgrading to latest version help in my case. I would appreciate your answer. Why don't you try it and see? Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: anonymous ftp
On Mon, Jan 21, 2013 at 10:06:09AM -0500, Ashby, Nick wrote: I am using cURL on Linux and trying to both upload and download to different machines. I want to use the anonymous FTP. I have had success with using the command line curl -O ftp://anonymous@ipaddress/remotepath to receive files but I am having an issue going the other way. I have tried curl -T localpath ftp://anonymous@ipaddress/remotepath and get a curl: (25) Failed FTP upload: 550. If I use the same form and change it to sftp and actually use name and password it works. How can I use FTP and anonymous to successfully upload files? Thanks. That is the correct syntax to use. It's likely that the ftp server doesn't allow anonymous uploads, or requires that they be uploaded to a different directory. The -v option may show a more detailed message from the server. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Can I post my open source library?
On Thu, Jan 17, 2013 at 09:05:54PM +0800, John Kenedy wrote: Before I post I want to make sure admin is okay with it. So its like this, I have extend microsoft .NET's WebClient class and build a syntax parser that is specially for web scraping functions. Can I post my open source library link here, so everyone can take a look and probably inspire me to build more features? It is some what related to curl but not too much. Is it related to curl in that it uses libcurl or is it related to curl in that it uses HTTP too? If the latter, then there are better places to get such feedback than this list. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [PATCH] Addition of trailer headers in HTTP requests generated by libcurl
On Thu, Jan 17, 2013 at 02:35:12PM +0200, Chrysovaladis Datsios wrote: The feature is implemented as another option to the curl_easy_setopt() function. This new option is named CURLOPT_HTTPTRAILERHEADER and adds the trailer headers appended on a curl_slist. Example code snippet using the new feature:

struct curl_slist *trailer_http_hdrs = NULL;
trailer_http_hdrs = curl_slist_append(trailer_http_hdrs, "mytrailerheader: myvalue");
curl_easy_setopt(curl, CURLOPT_HTTPTRAILERHEADER, trailer_http_hdrs);

This interface is simple, but it obviates the entire need to have this feature at all if such headers have to be *defined* before the transfer. Trailer headers are designed for headers whose content isn't known, or isn't efficient to calculate, before the transfer has been performed. What a user of this really would want is a way to set such headers *after* the data has been transferred. Normally, libcurl requires all parameters of a transfer to be known and set before the curl_easy_perform() call. In other cases where libcurl needs application intervention during a transfer, it performs callbacks. That would be a natural way to get the actual trailer header data immediately before they're sent. But, admittedly, it would be simpler to implement a method just like you've done, but documented in such a way that the application is allowed to call curl_easy_setopt(CURLOPT_HTTPTRAILERHEADER, ...) again from within the data read callback once the actual header data are known. I'm not sure if there are any other curl_easy_setopt options that are allowed to be called during a transfer or not (I recall there may be one), which is why this seems less desirable to me. -- These are the diffs from the original library: curl-7.28.1. I hope it makes it to be included in a future version.
diff -ur curl_orig/include/curl/curl.h curl_new/include/curl/curl.h
--- curl_orig/include/curl/curl.h 2012-09-26 12:46:15.0 +0300
+++ curl_new/include/curl/curl.h 2013-01-17 12:31:24.000460704 +0200
@@ -1536,6 +1536,9 @@
   /* set the SMTP auth originator */
   CINIT(MAIL_AUTH, OBJECTPOINT, 217),

+  /* points to a linked list of trailer headers, struct curl_slist kind */
+  CINIT(HTTPTRAILERHEADER, OBJECTPOINT, 218),
+
   CURLOPT_LASTENTRY /* the last unused */
 } CURLoption;

diff -ur curl_orig/lib/http.c curl_new/lib/http.c
--- curl_orig/lib/http.c 2012-11-13 23:02:16.0 +0200
+++ curl_new/lib/http.c 2013-01-17 12:33:20.817372849 +0200
@@ -1524,6 +1524,9 @@
   char *ptr;
   struct curl_slist *headers=conn->data->set.headers;

+  char *tptr;

This declaration could be moved into the while loop below, where it's needed.

+  struct curl_slist *trailer_headers=conn->data->set.trailer_headers;
+
   while(headers) {
     ptr = strchr(headers->data, ':');
     if(ptr) {
@@ -1590,6 +1593,31 @@
     }
     headers = headers->next;
   }
+
+  while(trailer_headers) {
+    ptr = strchr(trailer_headers->data, ':');
+    if(ptr) {
+      tptr = --ptr; /* the point where the trailer header field ends */
+      ptr++; /* pass the colon */
+      while(*ptr && ISSPACE(*ptr))
+        ptr++;
+
+      if(*ptr) {
+        /* only send this if the contents was non-blank */
+
+        char tfield[CURL_MAX_HTTP_HEADER];
+        strncpy(tfield, trailer_headers->data, tptr - trailer_headers->data + 1);

This will overflow tfield given a long enough user-supplied header.

+        tfield[tptr - trailer_headers->data + 1] = '\0';
+        CURLcode result = Curl_add_bufferf(req_buffer, "Trailer: %s\r\n",
+                                           tfield);
+        tptr = NULL;

This could be moved after the next if statement.

+        if(result)
+          return result;
+      }
+    }
+    trailer_headers = trailer_headers->next;
+  }
+
   return CURLE_OK;
 }

diff -ur curl_orig/lib/transfer.c curl_new/lib/transfer.c
--- curl_orig/lib/transfer.c 2012-11-13 23:02:16.0 +0200
+++ curl_new/lib/transfer.c 2013-01-17 12:33:40.002719777 +0200
@@ -929,6 +929,34 @@
        that instead of reading more data */
   }

+    /* The last chunk has zero size of data i.e. "0\r\n" */
+    if(k->upload_chunky == true && data->req.upload_present == 5 &&
+       !strncmp(data->req.upload_fromhere, "0\r\n", 3) ) {

What happens when the user wants to send the actual data "0\r\n"? This will falsely trigger as the end of transfer. It's clearly a layering violation performing this check way down here in the guts of the send routine.

+
+      Curl_send_buffer *trailer_buffer = Curl_add_buffer_init();
+      result = Curl_add_bufferf(trailer_buffer, "0\r\n");
+      if(result)
+        return result;
+
+      char *ptr;
+      struct curl_slist *trailer_headers=conn->data->set.trailer_headers;
+      while(trailer_headers) {
+        ptr = strchr(trailer_headers->data, ':');
+        if(ptr) {
+          result = Curl_add_bufferf(trailer_buffer, "%s\r\n",
+
Re: Compiling libcurl with ssh2 library for Borland 5.5
On Mon, Jan 14, 2013 at 11:27:06AM -0500, Khaled El Manawhly wrote: I apologize for repeating my email, but I still didn't get a reply. I have still been unable to compile with Borland, and the access violation while compiling with MSVC is still there. Has anyone run into something similar? I've seen only a very small handful of people ever post to this list about the Borland compiler, so if you're looking for specific help, you might be hard-pressed to find it. But, if you provide lots of details, we'd love to help speculate as to the cause! You say above that you are unable to compile with Borland, but your earlier message implies that you were able to compile it, you just didn't get the expected results. Please start by explaining what the exact problem is that you're seeing, along with the inputs and outputs that you're getting. Since it seems that no one else has experienced exactly this problem before, we'll need specifics to try to help. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Prevent curl to trigger progressbar on 302 redirections
On Fri, Jan 11, 2013 at 09:38:13AM +0100, Alexandre Filgueira wrote: Sorry, I have an error in my writing: to now trigger I meant to say to not trigger 2013/1/11 Alexandre Filgueira alexfilgue...@cinnarch.com Content-Length: 369 My guess is this is the difference. The one redirect has a body, and the other doesn't: I see... so, if I want to bypass that to just show one progressbar, could I check if it is a redirection first, and knowing that tell curl to not trigger the progressbar when content-length > 0? Sure, that would work, at the cost of another 5 or so round-trips. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: CURL custom request
On Tue, Jan 08, 2013 at 11:12:14PM +0700, Dung Nguyen wrote: hi! I have server with i need to call the request like this GET /test HTTP/1.1 I'm assuming that this is the URL-decoded version of GET /test%0a%0a and not some other unprintable characters. Not that it really matters, because HTTP bans them all, and as far as NL (a.k.a. %0a) goes, libcurl appears to strip it out before sending the URI. The proper way to insert such characters into a URL is to percent encode them, which libcurl handles just fine. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [Mageia-dev] df lying?
On Mon, Jan 07, 2013 at 01:46:37PM -0500, Felix Miata wrote: So the OP questions remain: 1-how can the free block count be identical before and after adding 37,925,705K bytes in 7 files to that journal-free, 0 reserved for root filesystem? Are you absolutely positive that this is the right filesystem you're checking? A symbolic link in the path could easily change that if you don't explicitly check. Have you tried giving df an argument of the actual path to one of the downloaded files instead of the filesystem mount point? df is smart enough to extract the actual filesystem on which that file is stored and display information for that. Dan
Re: [bagder/curl] 13606b: build: make use of 93 lib/*.c renamed files
On Sat, Jan 05, 2013 at 02:29:19PM +0100, Gisle Vanem wrote: Steve Holme steve_ho...@hotmail.com wrote: I'm still more in favour of going back to the old names and reverting the change. Me too. I am as well. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: getinmemory.c problems
On Thu, Dec 27, 2012 at 07:36:00PM -0600, Derek Martin wrote: 1. It calls realloc() every time through the callback. This is terribly inefficient, though it appears to be somewhat required by libcurl, since (as I found out in my testing) it does not necessarily make the content-length available to you until after perform() returns. The down side of calling realloc() repeatedly is that it contributes to memory fragmentation and can cause your data to be copied around multiple times. Neither are especially conducive to good performance. getinmemory.c is designed to be an easy-to-understand demonstration program, not a production-ready one. But, it's not as inefficient as you might think given a reasonable realloc() implementation. The one glibc uses, for example, only actually reallocs at an O(log n) rate. Since you SHOULD be able to get the content length (from the Content-Length header) before you start reading the body, it should be possible to avoid doing a realloc() EVER. In practice, libcurl sometimes does, and sometimes does not make the content length available prior to reading the body via curl_easy_getinfo(). You either get the actual value, or zero. 2. If you point getinmemory.c at, say, www.google.com (or several other sites I tried, including one of my own), you get no results. It just prints 0 bytes retrieved and dies a happy death. Clearly, this is the Wrong Thing™. Though to be honest, it's not clear to me why it does that. It does work with most sites I tried... including others of my own. Have you looked at the headers from the server in these cases? You'll probably find that there is no Content-Length: header sent. The Web 2.0 world of dynamically-generated pages means that at many sites even the server doesn't know what the page length is going to be until it's completely sent. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: getinmemory.c problems
On Fri, Dec 28, 2012 at 10:54:14AM -0600, Derek Martin wrote: Fair enough... though it's still not clear to me why the example program retrieves no data. Other than memory allocation scheme, it does not appear to be substantively different from mine. getinmemory works fine for me when accessing www.google.com. What URL are you having trouble with? libcurl handles chunked transfers transparently, so a lack of Content-Length should be irrelevant for a simple transfer. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: http 403 fobidden from YouTube
On Thu, Dec 27, 2012 at 05:37:54PM +0800, 黎小辉 wrote: My mediaplay use curl for http stream download. When I play the video , often return the 403 fobidden error recently. I don't know whether I can get the 403 substatus error code using the libcurl. How to do it? Thanks! Are you looking for the CURLINFO_RESPONSE_CODE option to curl_easy_getinfo()? Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: CURLE_COULDNT_RESOLVE_HOST using libcurl easy interface
On Fri, Dec 21, 2012 at 08:41:44AM -0500, Dave Galloway wrote: Another piece of the puzzle. One of our other programmers has a CentOS box with our development environment. He built my test program on it and it runs fine on the CentOS system. It segfaults on the old RH system and says the kernel is too old. I did some digging into the build output and there are some messages about needing the shared libraries from glibc at runtime for gethostbyname, gethostbyname_r, etc. I am guessing that the glibc that we build against is incompatible with the newer CentOS libraries?? Is it possible to substitute the piece of glibc for internet-type calls on our build box with a newer one and not break things? glibc isn't designed for that kind of mixing-and-matching of pieces. If name resolving is an issue, you'd probably be better off either fixing the configuration on the remote system, or linking against another resolver (like c-ares), which can be done statically if necessary. But, C-ares uses some of the same configuration files (/etc/resolv.conf) so if that's the source of the problem on that other machine, that won't help. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: CURLE_COULDNT_RESOLVE_HOST using libcurl easy interface
On Wed, Dec 19, 2012 at 04:10:46PM -0500, Dave Galloway wrote: I must be losing it because I can't seem to see the problem. I have cut down my test program to a bare minimum. I have tried a newer version of libcurl, 7.21.4 instead of 7.19.6, that I had locally. If I call curl_easy_setopt(curl, CURLOPT_URL, "google.com") after initializations and such and then curl_easy_perform(curl) it fails. Note that google.com isn't a URL--it's a host name. libcurl tries to guess correctly what protocol you want in cases like this, but you shouldn't depend on it. If I call curl_easy_setopt(curl, CURLOPT_URL, "74.125.140.101") it works. The failure is on CentOS 5.4. Works fine both ways on the older Red Hat box and my Win 7 box. At the getaddrinfo level in curl_addrinfo.c I am getting a -2 (NONAME) error which bubbles back up and eventually returns the CURLE_COULDNT_RESOLVE_HOST error. I can ping google.com and 74.125.140.101 from the command line on all systems with no problem. Any good ideas? Next up is to build the latest libcurl. What name resolver back-end is your libcurl configured to use? Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Not able to cross-compile curl for ARM uClinux
On Fri, Dec 14, 2012 at 07:10:54PM +0100, Bernard Evensrud wrote: Thanks for your input. Coming from the microcontroller world, I must admit this is over my head. I do not understand these error messages. There might be a possible fault in the toolchain? That's what it looks like. It matches what I suspected originally, that *-elf-gcc toolchains aren't normally capable of compiling generic applications. I downloaded a complete toolchain package from the moxa website, the manufacturer for this embedded computer. Do I have any alternatives to try, or can I compile my own toolchain (when I do not know what I am doing)? You said you have used this toolchain to compile applications before. Were these POSIX applications or something specific to this embedded computer? What OS is this embedded computer running? What happens when you compile Hello World? The program that configure is complaining about is contained in the config.log file--can you repeat the results by compiling it manually? Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Not able to cross-compile curl for ARM uClinux
On Thu, Dec 13, 2012 at 02:43:06PM +0100, Bernard Evensrud wrote: I am using a Ubuntu 12.10 64-bit computer, trying to compile libcurl.a for use in a project with embedded uClinux. However, I only get an error message when running ./configure with arm-elf-gcc. Compiling a native version using GCC is no problem. Compiling other small projects with arm-elf-gcc also works very well. *-elf-gcc compilers are generally not set up to build normal applications, but rather low-level components like kernels and bootloaders. Typically, you want to use a compiler that has the OS and/or libc name in place of elf. But if this compiler works for other applications, then perhaps that's not the case here. This is what happens: bernard@bernard-home ~/curl-7.28.1 $ ./preconfig ./configure --disable-shared --disable-ares --disable-cookies --disable-crypto-auth --disable-ipv6 --disable-verbose --without-libidn --without-zlib --disable-ldap --disable-thread --disable-shared --host=arm-elf --build=x86_64 --without-ssl --without-libssh2 I would drop the --build option unless you actually need it. checking whether to enable maintainer-specific portions of Makefiles... no checking whether to enable debug build options... no checking whether to enable compiler optimizer... (assumed) yes checking whether to enable strict compiler warnings... no checking whether to enable compiler warnings as errors... no checking whether to enable curl debug memory tracking... no checking whether to enable hiding of library internal symbols... yes checking whether to enable c-ares for DNS lookups... no checking for sed... /bin/sed checking for grep... /bin/grep checking for egrep... /bin/grep -E checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... gawk checking whether make sets $(MAKE)... yes checking for arm-elf-strip... arm-elf-strip checking curl version... 7.28.1 checking build system type...
x86_64-pc-none checking host system type... arm-unknown-elf checking for style of include used by make... GNU checking for arm-elf-gcc... /usr/local/bin/arm-elf-gcc checking for C compiler default output file name... configure: error: in `/home/bernard/curl-7.28.1': configure: error: C compiler cannot create executables See `config.log' for more details. Did you see `config.log' for more details? Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [Mageia-dev] python-curl build fail in cauldron
can someone have a look at python-curl build fail in cauldron ? Looks like a missing -lcrypto in the link line, likely due to that being an implied dependency through libcurl. Without looking at the source, it might be as easy to fix as setting env LIBS=-lcrypto on the compile line. Dan
Re: [Mageia-dev] python-curl build fail in cauldron
On Wed, Dec 12, 2012 at 04:31:54PM +, Colin Guthrie wrote: Of course if that is the problem, then a proper patch to it's build system (autotools, cmake or whatever is used) and submitting the fix upstream is the correct course of action :) True, but the latest upstream release is 4 years old, so I wouldn't hold out for that. Dan
Re: Potential Fix to libcurl 7.28.1 HPUX11.11 rw_lock.h compiler error developed
On Wed, Dec 12, 2012 at 01:59:33AM -0500, Frank Chang wrote: Yang Tse and Dan Fandrich, Thank you for your help. I just developed a potential fix to the nasty rw_lock.h problem on HPUX 11.11. curl 7.28.1 compiles and links successfully if you use the following fix. For example, if you go to lib/gopher.c and add #define TRUE 1 before #include <net/if.h>:

#ifdef HAVE_NET_IF_H
#define TRUE 1
#include <net/if.h>
#endif

Yang Tse and Dan Fandrich, please give me your opinion of this potential fix to the HPUX 11.11 rw_lock.h compiler errors. Thank you again for your help. I don't have the file so I can't review the fix, but if it works for you--great! Another way would be to disable gopher from the libcurl build altogether. And finally, it looks like every one of the system #includes at the top of that file can actually be removed altogether. They were probably pasted from another file and aren't actually needed there. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Potential Fix to curl 7.28.1 for HPUX 11.11 CC compiler /usr/include/sys/rw_lock.h compiler error
On Wed, Dec 12, 2012 at 05:22:10AM -0500, Frank Chang wrote: Dan Fandrich, Thank you for your reply. Here is the /usr/include/sys/rw_lock.h file for HPUX 11.11 for your potential review if you have the time. The offending lines are around line 169. You may contact me at frank_chan...@hotmail.com or frankchan...@gmail.com if you have any questions. Thank you for all of your help the past week and a half. /* @(#) rw_lock.h $Date: 2001/02/02 16:37:40 $Revision: r11.11/1 PATCH_11.11 (PHKL_23302) */ /* * BEGIN_DESC * * File: * common/sys/rw_lock.h * * Purpose: * Header file for kernel read-write locks implemented in sys/rw_lock.c. * The HP-UX read-write locks were ported (with some modifications) * from OSF/1, hence all the copyrights from the OSF/1 code. I see a lot of copyright statements in this file, but no indication that you're allowed to post it to a public mailing list. Last time I checked, you had to pay HP $1K to get their OS. The source of the problem is pretty obvious by looking at the header; the compile error was deliberately introduced to catch an obvious problem with the TRUE and FALSE definition. In any case, Daniel just pushed a fix to gopher.c to remove the includes altogether, so this won't be a problem in future releases. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: 2. Re: Potential Fix to libcurl 7.28.1 HPUX11.11 rw_lock.h compiler developed(Daniel Stenberg)
On Wed, Dec 12, 2012 at 06:26:13AM -0500, Frank Chang wrote: Daniel Stenberg and Dan Fandrich, There are actually more libcurl 7.28.1 files that require the same change as Daniel Stenberg just made for gopher.c. They are 1) ./lib/tftp.c 2) ./lib/easy.c 3) ./lib/transfer.c 4) ./lib/telnet.c 5) ./lib/if2ip.c 6) ./lib/dict.c 7) ./lib/url.c 8) ./lib/url.c 9) ./lib/http.c 10) ./lib/file.c Thank you, Daniel Stenberg and Dan Fandrich, for all of your help. It's unlikely that there are many more curl source files where includes can simply be dropped. If you can identify some, please let us know. Alternately, if you can identify locations where include files are missing (i.e. they are dependencies of other include files and documented as such by SuS, C89 or otherwise), then please also let us know. That alternative is probably more likely. But without such data, I would just chalk this up to another vendor shipping a broken compile environment. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Potential Fix to libcurl 7.28.1 HPUX11.11 rw_lock.h compiler error developed
On Wed, Dec 12, 2012 at 04:21:51PM +0100, Yang Tse wrote: Compilation was failing on a line that reads "This is probably not a good thing". In order to make this happen _KERNEL must have been defined somewhere. In any case, I've just pushed https://github.com/bagder/curl/commit/f254c59dc7 which should solve this issue no matter if _KERNEL is defined or not on HP-UX. I really don't think HP-UX expects every application to have to define TRUE and FALSE macros in order to compile. It's much more likely that there's something borked in the compile environment, or HP-UX simply needs another header file to be included first. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Potential Fix to libcurl 7.28.1 HPUX11.11 rw_lock.h compiler error developed
On Wed, Dec 12, 2012 at 06:10:39PM +0100, Yang Tse wrote: Probably true. But given recent record history of OP not providing complete details, commit f254c59dc7 might be best workaround we can provide. Unless OP actually digs into the problem and provides a better fix than what was previously proposed. I'm still waiting for confirmation that those results were from pristine curl source built in the normal manner. Plus, there is evidence on the download page that others are building for HP-UX and I don't remember hearing from anyone else about problems compiling there. Not that there aren't any; I'd just prefer more evidence of the need for such a dreadful hack first. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Potential Fix to libcurl 7.28.1 HPUX11.11 rw_lock.h
On Wed, Dec 12, 2012 at 12:32:30PM -0500, Frank Chang wrote: Yang Tse and Dan Fandrich, I have plenty of detailed information on this problem and other questions I have emailed to the curl-library-developers list. However, many times when I try to submit error log information or other large files, the email-list moderator API seems not to like large attachments. That's an indication that you haven't put enough effort into finding the problem yourself first. We're generally a pretty friendly and helpful bunch around here, but it's not our job to debug your app--that's your job. Detailed information is good, but useless information (which is the majority of what a large attachment would consist of) is not. We'd like to see evidence that you've done your own due diligence before calling out for help. We'd also like to see relevant information about the problem at hand--the kind of information that a programmer would need to debug the problem. Some of your previous messages were lacking in both of these areas. The canonical link on this topic is http://www.catb.org/esr/faqs/smart-questions.html If there is a bona fide problem with curl on HP-UX, we'd certainly like to fix it. But, given that others are already building curl on HP-UX without complaint here, it's up to you to convince us that it's actually a curl problem, and not a problem in how you're trying to use curl. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Possible Curl-7.28.1 compiler error message on aCC compiler
On Tue, Dec 11, 2012 at 12:56:38PM -0500, Frank Chang wrote: Daniel Stenberg, On HPUX B.11.11 using the aCC compiler, we obtain the following compile error shown below, Error 20: /usr/include/sys/rw_lock.h, line 169 # ';' expected before 'probably'. This is probably not a good thing Could anyone suggest how we might get around this problem? Thank you. Very few if any other people reading this list have access to the file /usr/include/sys/rw_lock.h on your system causing this problem. You'll have to look into the file and figure out what it is that is going wrong. It's likely to be a conflicting macro, or a missing dependent #include or something along those lines. Porting to a new development environment is always fun! Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [PATCH] configure: fix cross pkg-lconfig detection
On Sun, Dec 09, 2012 at 02:20:17PM -0500, Frank Chang wrote: Daniel Stenberg, I just tested the latest libcurl-7.28.1.tar.gz download on Solaris UNIX. We ran the following sequence: 1) gunzip libcurl-7.28.1.tar.gz 2) tar -zvf libcurl-7.28.1.tar 3) ./configure 4) make and we observe lots of gcc --tag=CC compile directives. We were curious about whether Solaris CC == gcc --tag=CC. Thank you. Once again, you failed to provide an actual log of what you're describing, so I'm going to go out on a limb and guess at what you're actually trying to describe. curl's build system doesn't add --tag=CC directives to the gcc command line. It does, however, use libtool, and libtool uses the --tag=CC directive to control its operation. libtool operates as a wrapper around the compiler in use (whether gcc or cc or any other), and the --tag option (as well as --mode) informs libtool of the type of compile that's happening. You can find out more about libtool and its role in the build by reading the extensive libtool documentation (or http://www.gnu.org/software/libtool/libtool.html if you don't already have it installed). Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Problems with SMTP connection re-using (SSL read: error:00000000:lib(0):func(0):reason(0))
On Fri, Dec 07, 2012 at 11:28:14AM +0100, Volker Schmid wrote: I'm just having this problem. I successfully sent an email with libcurl and the connection stays open (first line). Then I try to send another one: ... 2012-12-06 15:33:49 : CURL: Connection #0 to host smtp.xx.com left intact 2012-12-06 15:34:59 : CURL: Re-using existing connection! (#0) with host (nil) 2012-12-06 15:34:59 : CURL: Connected to (nil) (80.67.29.4) port 25 (#0) 2012-12-06 15:34:59 : CURL: MAIL FROM:q...@xxx.com 2012-12-06 15:34:59 : CURL: SSL read: error:00000000:lib(0):func(0):reason(0), errno 54 2012-12-06 15:34:59 : CURL: SSL_write() returned SYSCALL, errno = 32 2012-12-06 15:34:59 : CURL: Closing connection #0 2012-12-06 15:34:59 : CURL: Failure when receiving data from the peer Any idea why it complains with SSL read: error:00000000:lib(0):func(0):reason(0), errno 54? You'd have to look up the OpenSSL documentation to see what errno 54 means, but if the next line can be believed and errno=32 is actually a syscall error, then it's EPIPE - Broken pipe. It sounds like it's just that the remote server has closed the connection. Does libcurl make a new connection and retry when this happens? If not, then it probably should. Am I doing something wrong here? But occasionally it is working, but I don't know the reason (same code, same smtp-server, same user): Different timeouts before the connection is closed, probably. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [Mageia-dev] failure to install a src.rpm in a user build directory
On Wed, Dec 05, 2012 at 11:50:05PM +0100, sardine wrote: I've installed cauldron in a vmware vm, and wanting to rebuild a rpm as a user in my /home I've setup the build environment as usual (I've done that a lot of times in the past and even a few weeks ago). For me everything is OK but a src.rpm install gives this output : [jerome@localhost Téléchargements]$ LC_ALL=C rpm -i getmail-4.35.0-1.mga3.src.rpm error: failed to create directory %{_topdir}: / /home/jerome/rpm: Permission denied error: getmail-4.35.0-1.mga3.src.rpm cannot be installed That message implies that it's trying to create a directory called / /home/jerome/rpm i.e. with a space, for which a Permission denied error would be expected. It looks to me like there's a configuration problem somewhere. Dan
Re: RedHat Linux libcurl 7.28.1 gcc compiler error log file
On Wed, Dec 05, 2012 at 07:04:50AM -0500, Frank Chang wrote: Daniel Stenberg and Libcurl 7.28.1 Developers, Per Daniel Stenberg's request, we are posting the first several hundred lines of the compile errors from Red Hat Enterprise Linux 4 for libcurl 7.28.1. Is there a potential workaround so that Red Hat Enterprise 4.XX OS licensees do not have to upgrade to Red Hat Enterprise Linux 5? Thank you. make[1]: Entering directory `/net/beige/export/marc/DQT/EmailLib/lirh5g_deb' errfn=asyn-thread.c ; gcc -g -Wall -W -Wno-unused -Wno-sign-compare -D_DEBUG -pthread -DTHREADSAFE -m32 -D_NO_GUI -DBUILDING_LIBCURL -I../Include -I../../cpswindows/Include -I../curl-7.28.1 -I../../FuzzyMatchingLib/Include -I../../util/mdLicense -I../../sqlite.3.7 -I../../util -fpic ../curl-7.28.1/asyn-thread.c -o asyn-thread.o -c >${errfn}.ERR 2>&1 ; res=$? ; if [ ${res} Just what did the configure command look like that you used before this compile? And what do the last 30 lines of configure's output say? curl doesn't use sqlite, or FuzzyMatchingLib or mdLicense, for example, which tells me this isn't an ordinary compile on a plain RHEL 4 machine. Those extra -I lines could certainly throw off the compile if there are things like setup.h files in them, as could stray CFLAGS or other environment variables. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: RedHat Linux libcurl 7.28.1 compiler errors
On Tue, Dec 04, 2012 at 02:22:50PM -0500, Frank Chang wrote: Daniel Stenberg, This morning, our Project Director and I tested the libcurl 7.28.1 ./configure file on RedHat Linux. There were over 50+ compiler errors. The RedHat Linux OS and GNU compiler information are shown below. [xyz@build01 EmailLib]$ gcc -v Using built-in specs. Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-libgcj-multifile --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --enable-plugin --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre --with-cpu=generic --host=x86_64-redhat-linux Where did those configure flags come from? Some of those just plainly do not make sense. Have you tried with plain ./configure? And as others have said, if you'd like specific help, please give specific information, like the actual error messages. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: HTTPS CURL get slow when calling at first time
On Tue, Nov 27, 2012 at 08:42:41PM +0100, Oscar Koeroo wrote: Have you Wiresharked an empty SSL handshake? It is interesting to see how fast your TCP/IP is handshaking and the latency from the SSL/TLS Client Hello and Server Hello to a complete session. IMHO guessing is inefficient, measuring is the start of science. Another big potential source of latency on startup is the DNS lookup. A few bad DNS servers in the lookup path, or a misconfigured name service switch config could potentially add up to 40 seconds of delay. Subsequent connects wouldn't see that delay because the results would be cached for a while. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: HTTPS CURL get slow when calling at first time
On Tue, Nov 27, 2012 at 11:59:51PM +0200, gokhansen...@gmail.com wrote: According to the logs, the client resolves the IP and connects to port 443 w/o delay. It is the start of the SSL handshake that fails to kick off for a long time. Is this a Windows platform? What SSL back-end is libcurl compiled to use? Have you tried another? Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: PATCH: curl-config --static-libs should output nothing when compiled with --disable-static
On Wed, Nov 21, 2012 at 05:17:39PM +0100, Wouter Van Rooy wrote: When configuring curl with --disable-static and then running curl-config --static-libs it still outputs the non-present libraries. Attached is a patch to fix this. The patch removes the non-present libraries, but outputting nothing is still (theoretically) a valid output. It should probably output an error message (to stderr) and exit with a nonzero return code in that case. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: testing for connectivity to the SMTP server using libcurl APIs
On Thu, Nov 22, 2012 at 11:52:39AM +, Chandran, Naveen wrote: Please let me know if there is a way to test for proper SMTP server connectivity using libcurl ‘without actually sending an email’. Although this could otherwise be done with a simple telnet EHLO (or HELO), I would like to know if it can be accomplished using the libcurl API(s). I'm not sure if it's supported already, but it makes sense to me that the CURLOPT_NOBODY option could be usurped for this purpose. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Does libcurl supports LDAP connections over SOCKS proxy?
On Thu, Nov 15, 2012 at 02:47:58PM +0400, Anton Malov wrote: Tried to set custom socket for LDAP connection using LDAP_OPT_DESC option http://msdn.microsoft.com/en-us/library/aa367019(VS.85).aspx This is not working because operation returns 0x35 which stands for LDAP_UNWILLING_TO_PERFORM The server is not willing to handle directory requests. Do you get a different error when you connect directly to the LDAP server? This error makes it sound like the connection to the LDAP server actually succeeded, but the server has trouble with the request. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Sleep and wakeup on http request AND communication socket number
On Thu, Oct 25, 2012 at 07:28:28PM +0800, JALINDAR wrote: Then how to get at least the opened socket for the handle used by libcurl, as I have to send this port number to the server? This is obviously not for one of the standard protocols that libcurl supports, then. There is a way to get the socket being used by an easy handle, and it can be used for nonstandard hacks: the curl_easy_getinfo option CURLINFO_LASTSOCKET. There are also a couple of callbacks that can be used to access sockets as they're created: CURLOPT_SOCKOPTFUNCTION and CURLOPT_OPENSOCKETFUNCTION. You're on your own if you use these to change the socket state from what libcurl believes it to be, though. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Sleep and wakeup on http request AND communication socket number
On Tue, Oct 23, 2012 at 08:55:50PM +0800, JALINDAR wrote: In my application I want to sleep() and want to wake up on an http connection request on an already open socket using a libcurl handle? You can use the multi interface to get total control over when network data is processed. Rather than using sleep, you'd use select or poll with a timeout. Read up on the multi interface and take a look at some of the sample programs to see how that works. Also I want to know the reason for the wake-up: time out or http connection request? select and poll will say which socket is ready or whether it timed out waiting. How can one get the socket number opened by a libcurl handle for communication? The multi functions provide several ways of getting these for the purposes of waiting on them. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: libcurl: resolver with HAVE_ALARM not thread safe?!
On Mon, Oct 22, 2012 at 02:24:01AM +0200, Thorben Thuermer wrote: on http://curl.haxx.se/libcurl/features.html we find: libcurl is designed and implemented entirely thread safe. libcurl uses certain system calls to obtain information. Some of the most crucial ones are the name resolution calls (the gethostby* family). I just debugged a problem in a multithreaded application ( https://github.com/volkszaehler/vzlogger , thread in German: http://volkszaehler.org/pipermail/volkszaehler-users/2012-October/000619.html ) that appears to be caused by libcurl's (ab)use of alarm() to timeout gethostbyname() not being thread-safe. (as described here: https://bugzilla.redhat.com/show_bug.cgi?id=539809 ). There appears to be a race condition when two threads run requests simultaneously, which reproducibly leads to a crash with the logging function being called with an invalid buffer/file, without any actual DNS timeout having occurred. (the application only accesses localhost, which is defined in /etc/hosts; curiously the problem persists if a numeric IP is used.) If the problem exists even with a numeric IP, then it's unlikely to be a DNS issue since numeric addresses bypass the external resolver. Are you making SSL connections by chance? There is more you may need to do in that case; http://curl.haxx.se/libcurl/c/libcurl-tutorial.html#Multi-threading The problem disappeared when I compiled libcurl without HAVE_ALARM. CURLOPT_NOSIGNAL ought to do the same at run-time, but it's odd that this occurs with numeric addresses (assuming SSL is not involved). I'm not sure if this is a libcurl bug or just a documentation issue, but it has cost us some time to track down... This is from curl_easy_setopt(3) http://curl.haxx.se/libcurl/c/curl_easy_setopt.html CURLOPT_NOSIGNAL Pass a long. If it is 1, libcurl will not use any functions that install signal handlers or any functions that cause signals to be sent to the process.
This option is mainly here to allow multi-threaded unix applications to still set/use all timeout options etc, without risking getting signals. (Added in 7.10) And this is from libcurl-tutorial(3) http://curl.haxx.se/libcurl/c/libcurl-tutorial.html#Multi-threading When using multiple threads you should set the CURLOPT_NOSIGNAL option to 1 for all handles. libcurl now gives a number of choices of DNS resolver back-end at compile time, some of which are immune to this problem. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [PATCH] Add compiler and configure info to mutt -v output (closes #3537)
On Thu, Oct 18, 2012 at 03:12:21PM -0500, David Champion wrote: Thanks for the reviews! Changes: * Uses a C txt2c instead of Python - no new build prereqs But it's going to break when cross-compiling, whereas the Python version doesn't. Dan
Re: fflush()ing libcurl's FILE* for the file:// protocol...
On Sun, Oct 07, 2012 at 01:06:15AM +0200, Sebastian Rasmussen wrote: If it's truly needed, then sync() could always be called in the application's write callback that occurs immediately after the file write. Now I'm not following your reasoning. What write callback? Sorry, I meant the read callback. Right after the fwrite() completes, the read callback is called again for more data. The progress callback could be used too, but that's rate limited. Ok, I'll take a look at this on Monday and see if I can figure out how this works and see if I can do something based on test 513. Thanks for the pointer, without it I would have been forever lost looking at the vast test code! I hardly ever write a new test from scratch. Cutting-and-pasting from the nearest existing test case saves so much writing of boilerplate! Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: fflush()ing libcurl's FILE* for the file:// protocol...
On Fri, Oct 05, 2012 at 07:03:47PM +0200, Sebastian Rasmussen wrote: Alright, how about something along the lines of the attached? Basically I'm adding a CURLOPT_UNBUFFERED_WRITES which causes libcurl to call setbuf() with NULL, and also to call fsync() to make sure that the written file data actually makes it to the file system (whether it makes it to disk is an entirely different matter). fsync() is completely orthogonal to unbuffered writes and shouldn't be included in the patch. On many journaled filesystems, fsync() essentially forces a complete sync(), affecting throughput of the entire system. It's a big stick that should be used sparingly. If it's truly needed, then sync() could always be called in the application's write callback that occurs immediately after the file write. Admittedly, that's not as efficient on systems where fsync() doesn't effectively equal sync(), but it should be needed in so few applications that I'm not too worried. I'm not confident enough about how curl's test harness works to add a test that sets this option and tests it. I'm not even sure what such a test can do apart from just making sure that the CURLOPT is accepted. But as you can see from the patch I'm hoping for the patch to eventually be included in 7.27.1. :) You could do it pretty easily with an external libtest program, using test 513 as a base. It might be tricky to make completely portable due to differing stdio buffering implementations, but it should be possible to make it work most places. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: fflush()ing libcurl's FILE* for the file:// protocol...
On Wed, Oct 03, 2012 at 02:46:18PM +0200, Sebastian Rasmussen wrote: So therefore I propose that a new return value for READFUNCTION is added which instructs libcurl to call fflush() on the FILE*, thereby forcing the data to the file system. Does anyone have any opinion on this? Also does this draft patch adhere to the design philosophy of libcurl? I'm happy to improve my draft patch to a state where it can be included, so I'd really like you review comments and opinions. :) I think a much more straightforward approach would be simply to have a CURLOPT that sets unbuffered mode for the FILE object (i.e. calls setbuf() with no buffer given). Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [PATCH]: curl-config CPPFLAG_CURL_STATICLIB
On Sat, Sep 29, 2012 at 09:22:35PM -0600, Kelly Anderson wrote: curl-config is not quite right, here's a patch. Thanks for the patch, but it's already been fixed in git. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: curl to try the SSL connection with previous connection session ID
On Thu, Sep 27, 2012 at 12:52:28PM -0400, bala suru wrote: Since I am newbie to the curl , can you provide some more information on how to re use the handle ..? You mean to say perform only the below step with the new request again..!! /* Perform the request, res will get the return code */ res = curl_easy_perform(curl); Right. You can of course set new options on the handle (such as a new URL) before reusing it. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: How to delete file from sftp server by libcurl
On Thu, Sep 20, 2012 at 11:29:00AM +0800, 黄心怡 wrote: I am trying to delete a file from an sftp server. Here is my code: CString cmd = "delete 2.key"; CURL *curl = curl_easy_init(); curl_easy_setopt(curl, CURLOPT_SSH_AUTH_TYPES, CURLSSH_AUTH_PASSWORD); curl_easy_setopt(curl, CURLOPT_USERPWD, myUserPass.c_str()); curl_easy_setopt(curl, CURLOPT_URL, myUrl.c_str()); curl_easy_setopt(curl, CURLOPT_PORT, atoi(m_port.c_str())); curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, cmd.c_str()); CURLOPT_CUSTOMREQUEST is documented to be valid for HTTP, FTP or POP3 only. What you are looking for is CURLOPT_QUOTE or the PRE or POST variant thereof. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: utmp fix for mips64 uClibc systems
On Mon, Sep 24, 2012 at 09:12:01PM +0200, Waldemar Brodkorb wrote: +if (access(filename, R_OK | W_OK) == -1) { +c = open(filename, O_WRONLY | O_CREAT, 0664); +if (c > 0) { 0 is a perfectly valid file descriptor. This should be c >= 0, or just use xopen(). +close(c); +} +} Dan ___ busybox mailing list busybox@busybox.net http://lists.busybox.net/mailman/listinfo/busybox
Re: Wrong behavior when activate the LOW_SPEED_LIMIT and LOW_SPEED_TIME
On Mon, Sep 24, 2012 at 11:31:07AM +0800, Jie He wrote: curl_easy_setopt(handle_curl, CURLOPT_MAX_RECV_SPEED_LARGE, 1024); curl_easy_setopt(handle_curl, CURLOPT_FOLLOWLOCATION, 1L); curl_easy_setopt(handle_curl, CURLOPT_LOW_SPEED_LIMIT, 1); curl_easy_setopt(handle_curl, CURLOPT_LOW_SPEED_TIME, 60); Note that these are using the wrong types. CURLOPT_MAX_RECV_SPEED_LARGE expects a curl_off_t, while CURLOPT_LOW_SPEED_LIMIT and CURLOPT_LOW_SPEED_TIME expect long arguments. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Patch to add CURLOPT_SSLENGINE_WITH_OPT for curl_easy_setopt
On Thu, Sep 20, 2012 at 02:31:08PM -0400, Andrew Prout wrote: The attached patch adds a new option for curl_easy_setopt: CURLOPT_SSLENGINE_WITH_OPT. It'd be nice if this feature could be merged into libcurl. It's a variation of CURLOPT_SSLENGINE that lets you set the pre and post engine init commands to be passed to OpenSSL. More info is available at: http://www.openssl.org/docs/crypto/engine.html#Advanced_configuration_support The patch was originally written for libcurl v7.22, but I've updated the option ID to avoid conflicts and it applies and compiles against v7.27. Below is a simplified example of a program that uses the dynamic engine to load a PKCS#11 based on the "Using Engine_pkcs11 with the openssl command" example from: http://www.opensc-project.org/engine_pkcs11/wiki/QuickStart -Andrew Prout -- CURL *ch = NULL; struct curl_sslengineinfo ei; char *preopts[] = { "SO_PATH", "/usr/lib64/openssl/engines/engine_pkcs11.so", "ID", "pkcs11", "LIST_ADD", "1", "LOAD", NULL, "MODULE_PATH", "/path/to/my/pkcs11.so", NULL }; char *CertID = "d3a805a58810fbe89ece27d9f5e3170e61eb3e2b"; // ID field from PKCS#11 library, use pkcs11-tool to discover ei.enginename = "dynamic"; ei.preopt = preopts; ei.postopt = NULL; curl_global_init(CURL_GLOBAL_ALL); ch = curl_easy_init(); curl_easy_setopt(ch, CURLOPT_URL, "https://localhost/restricted"); curl_easy_setopt(ch, CURLOPT_SSLENGINE_WITH_OPT, &ei); curl_easy_setopt(ch, CURLOPT_SSLCERTTYPE, "ENG"); curl_easy_setopt(ch, CURLOPT_SSLCERT, CertID); curl_easy_setopt(ch, CURLOPT_SSLKEYTYPE, "ENG"); curl_easy_setopt(ch, CURLOPT_SSLKEY, CertID); curl_easy_perform(ch); I can see the need for this option, but this patch stands out as not being in the same style as other libcurl options. Passing in a struct, creating a NULL-terminated pointer list, and setting three separate options at once are all examples of this. I suggest separating the pre and post options into two separate curl_easy_setopt calls and leaving the CURLOPT_SSLENGINE option alone.
And I suggest using one of the existing list types to store the name/value list pairs. The struct curl_slist type is the obvious one to use for this, but the fact that the contents are paired almost makes me want to abuse struct curl_httppost instead. There's actually a pretty good mapping between what's required for these options and the curl_httppost types; CURLFORM_COPYNAME would contain the name part of each option, and CURLFORM_COPYCONTENTS would contain the value part. As elegant as that would be, it may be abusing the intended use of this type a bit too much. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: use libcurl with polarssl, linking issue.
On Wed, Sep 19, 2012 at 02:54:56PM -0700, Emanuel Patern wrote: I tried to build curl with polarssl instead of openssl. Here's what I did: Libcurl version: 7.27.0 PolarSSL version: 1.1.4 To build PolarSSL I did: make CC=gcc APPS= make DESTDIR=/c/pssl install To build curl I did: perl Configure CFLAGS="-Os -ffunction-sections -fdata-sections -flto" LDFLAGS="-Wl,-s -Wl,-Bsymbolic -Wl,--gc-sections -flto -Os" --without-libidn,zlib,ldap-lib --disable-ipv6,manual,shared,verbose,debug,gopher --enable-hidden-symbols --enable-static --prefix=/c/curl --with-polarssl=/c/polarssl I don't know what perl Configure does but --without- and --disable- arguments can't be separated by commas like this with autoconf. The directory specified with the --with-polarssl= option is not the same as the one you show for the PolarSSL build above. If PolarSSL uses an autoconf-like build system, then DESTDIR isn't going to produce the right output directory structure; in autoconf-based builds, the --prefix option does that. Rather than adding all those optimization options right away, try getting a standard build working first. If one of those optimizations isn't compatible with your build, it's just going to get in the way. Now everything looks okay, I have added libcurl.a and libpolarssl.a into my project with -DCURL_STATICLIB defined. After compiling I get these errors during linking: To your project? I don't know what kind of build system you're using, but clearly, it's not libcurl's, which increases the chance dramatically that the problem is not in curl but rather your environment. = ...\libcurl.a(libcurl_la-polarssl.o):polarssl.c:(.text$polarssl_connect_step2+0x4b)||undefined reference to `ssl_handshake'| ...\libcurl.a(libcurl_la-polarssl.o):polarssl.c:(.text$polarssl_connect_step2+0xd6)||undefined reference to `ssl_get_ciphersuite'| What does the link command-line look like? It looks like the PolarSSL library isn't being linked.
It may be that the order of the libraries given on the link command-line is wrong. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: cURL on devices with unstable internet connection
On Sat, Sep 15, 2012 at 04:19:34PM +0300, Alexander Shashkevych wrote: I'm an engineer who is porting a webkit browser to a car head unit, and we are using curl as the http(s) backend in our project. We have the following use case: internet access is established via a mobile phone and it's unstable. In some cases the phone loses its internet connection while curl is still downloading some resources. Within 10-20 seconds the connection is restored, but curl is no longer downloading any resources. I read the FAQ page and it noted that curl is unable to detect situations like unplugged cables, so it looks similar to our use case with a broken connection. Among the curl error codes I also found none that could be informative about abnormal termination of the downloading process... As I understand it there is only one option: use timeouts? Maybe some other solutions exist to detect such unexpectedly broken connections instead of timeouts? Maybe I missed something? http://curl.haxx.se/docs/faq.html#Why_doesn_t_cURL_return_an_error There's no way in the general case to tell when an Internet connection is down. Is it considered down when someone unplugs an Ethernet cable? What if the device has a wireless connection too--is it still considered down? What if a bulldozer takes out some fibre on the other end of town? What about a bulldozer at the other end of the country? It's not a simple question to answer, so TCP/IP just keeps trying. In your case, you probably have it easier. You likely only have a single 3G Internet connection to the rest of the world and have direct access to the modem to tell when it's up or not. You can use that information if you want to tear down the interface when the link goes down, which ought to return a socket error and have libcurl exit with an error code. But, since the connection might go up again shortly afterward, you're probably better off playing with the timeouts instead.
Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Help needed with CURLOPT_HEADERDATA and friends
On Mon, Sep 10, 2012 at 06:43:03PM +0100, John Doe wrote: I'm afraid I can't make head or tail of the sparse documentation, probably because I haven't been programming in C/C++ long enough to be used to such sparseness. But I digress. Have you looked at http://curl.haxx.se/libcurl/? You may want to start with the tutorial, move to some of the example programs, then dive into the man pages. So far, I've cobbled together the code below, it picks up x-amzn-Requestid ok.. but then I can't get it to deal with the JSON error and the program segfaults. Unfortunately, you haven't provided enough code for us to say what is wrong. One obvious thing to check is that the callback functions are all static member functions, since libcurl has no concept of C++ objects. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Possible patch about http custom header is used for proxy connection
On Thu, Sep 06, 2012 at 04:37:05PM +0200, Daniel Stenberg wrote: I would like to introduce a complement to CURLOPT_HTTPHEADER that is used only for CONNECT requests. Called CURLOPT_PROXYHEADER or CURLOPT_CONNECTHEADER perhaps. The headers provided to CURLOPT_HTTPHEADER would then not be used at all in CONNECT requests. What do you think about using that approach? Backwards compatibility would be an issue. But if you keep the current behaviour until the app sets a CURLOPT_CONNECTHEADER (even to set a dummy or NULL header), it ought to work. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Port from URI is not taken
On Thu, Aug 30, 2012 at 06:30:03PM +, Sidde Gowda wrote: I have a port 8080 in the URI from where I am trying to fetch and store a file. But I see port 80 is taken from the logs. Do I need to use port option to specify? You can use the CURLOPT_PORT option, but the one in the URL should be used otherwise. What is the exact URL you are using? Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: One or more libs available at link-time are not available run-time
On Fri, Aug 17, 2012 at 11:59:45AM +0100, Jeremy Hughes wrote: Does 7.27.0 require additional libraries that aren't needed by earlier versions? Yes, it has optional features that can require new dependencies. You should still be able to disable those features in the configure script to return to the same feature set as earlier versions, though. Try running curl-config --libs on the installed earlier and newer versions and compare the results to see if there are new dependencies in your configuration. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Wrong http request size calculation when uploading /proc/cpuinfo
On Thu, Aug 09, 2012 at 11:13:13PM +0200, Florian Pritz wrote: So far I have only seen this happen for files read from some pseudo file system like /proc and those tend to be small. The same thing would happen, for example, on block devices, pipes or special character devices. I think you could load the file into ram and add a (configurable) upper limit. The default could be something like 50MiB. The user can increase as they see fit and 0 could mean unlimited. If you hit the limit you throw an error. It won't be perfect, but at least the error will give users a clue about what's going on. There's some danger in doing that. If the user is trying to send data directly from a pipe or a slow network socket or /dev/video, then it could take a *long* time to reach 50 MiB, at which time the program will just exit without doing anything. Meanwhile, all that data is gone forever. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [Mageia-dev] What to use instead of /dev/usb/lp*
On Wed, Jul 25, 2012 at 10:26:13AM -0700, Jeff Robins wrote: I'll give that a try. Is there a better/newer way to access the device? I'd like to update and fix the program if possible. Using CUPS would be the best. If the program is currently opening the lp device directly, then it might be as simple as replacing the call to fopen("/dev/lp", "w") with popen("lp", "w"). That will pipe the data to the lp program, which will spool it to the printer through CUPS. Dan
Re: [Mageia-dev] What to use instead of /dev/usb/lp*
On Wed, Jul 25, 2012 at 03:27:52PM -0700, Jeff Robins wrote: I'm not sure if that will work. CUPS doesn't have a driver for the device and it's not really a printer. It just uses the USB printer class. It's used to cut shapes into or out of paper. This includes moving the blade up and down. I looked at the program and I think it's sending G-code, which is used for CNC work. CUPS has a rule to send raw binary data directly to the device without interpreting it first. If this device emulates a printer at the USB level, it ought to work. At worst, you may have to add the -oraw option or add a rule to /usr/share/cups/mime/mime.convs to force cups to treat the data as raw binary (instead of text, for example) so it's sent directly to the printer instead of being rasterized first. Dan
Re: HTTP Pipelining Contributions
On Tue, Jul 24, 2012 at 03:01:26PM +, George Rizkalla wrote: The proposed algorithm involves balancing HTTP requests over multiple TCP sockets, while avoiding use of HTTP pipelining in instances where we believe errors are likely to occur, or where it is likely that there would be a performance hit if pipelining is used. This set of proposals sounds really useful, but I wonder if it's better kept in the application rather than in libcurl. Most of the suggestions can be done today without changing (and complicating) libcurl, using its existing interfaces and callbacks. And some of them only make sense with additional data that only the application can provide, which would require numerous additional options and interfaces to bring into libcurl. Really what this calls for is a layer sitting above libcurl that takes care of queuing handles into appropriate pipelines, creating new ones as necessary and optimizing the balance of requests to sockets as appropriate. The low-level requests can already be performed by libcurl--this would really be a value-added layer that would only be used by applications that need it. I guess what I've had in the back of my mind for a while is a new library, let's call it libcurlapp, sitting on top of libcurl. This could be a place for those numerous suggestions over the years for features that don't quite make sense in a low-level library like libcurl, but would still be really useful for some applications. Things like sophisticated proxy selection, memory and disk caching of results, handle pools, Metalink support, and now pipeline optimization. Those sort of features would IMHO be well-suited for a libcurlapp, even if not in libcurl itself. I'm not volunteering to lead such a project :-) or trying to dampen the enthusiasm for improved pipelining support in libcurl, just throwing the idea out there to consider.
I could see how such a feature set would be useful, but I'm not sure whether it could be done in such a way that's general enough for use by a wide range of different applications. But, if it's written to use stock libcurl underneath, then it ought to be able to be designed in such a way that apps could mix and match just those additional features they want out of the new library, while leaving the full power of libcurl available as well. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Unable to send email with libcurl (cross compiled for arm) application
On Mon, Jul 23, 2012 at 11:52:31PM -0700, usama yaseen wrote: yes previously ssl support was not enabled and https was not included in the protocols supported, i installed SSL from source and now my configure option is like this: ./configure --host=arm-none-linux-gnueabi --build=i686-linux CFLAGS='-Os' --enable-smtp --with-ssl after installing curl, when i type curl-config --protocols in the terminal, https is included in the list of supported protocols. But what does curl --version say? That's the more reliable indicator since it takes the run-time linking into account, which curl-config does not. But even after all this the output of sample application is the same, Exactly the same? When i try to open any site with the curl, i got this which is surely related to the certificate: root@am180x-evm:/home# curl https://www.gmail.com curl: (60) SSL certificate problem: certificate is not yet valid More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a bundle of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. Now how can i resolve this ? should i specify the certificate in while configuring ? and from where do i get the certificate of arm ? or will it be the same as of pc ? See http://curl.haxx.se/docs/sslcerts.html Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Using libcurl in win32 DLL in MVSC 2010
On Tue, Jul 24, 2012 at 09:51:09AM +0200, Bej Glee wrote: - Configuration properties-Linker-Input-Additional Dependencies: C:\cURL\lib\Release\curllib.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib - These are copied into my project directory: curllib.dll, libeay32.dll, openldap.dll, ssleay32.dll, libsasl.dll. I don't see any static OpenSSL libraries here--just the dynamic ones. If you're trying to create a single DLL, you'll need the static versions. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Using libcurl in win32 DLL in MVSC 2010
On Mon, Jul 23, 2012 at 11:00:48AM +0200, Bej Glee wrote: I want to create a win32 DLL that uses libcurl. My goal is: read an HTML page and get it back as a string. An external program (not C++) calls this DLL, and gets the HTML page as a string. I want to create ONE dll which contains the necessary resources (all dlls). Then you'll have to statically link libcurl and all its dependencies into your single DLL. The project builds successfully - the dll has been created, but when I call the getHTML() function I get error: cannot load library 'example.dll' (error 126). Can someone tell me what to set in the project? You didn't provide any information as to how you linked your DLL, so we can't tell what went wrong. Chances are, it's depending on another DLL (such as libcurl.dll) which isn't available. There are dependency walking tools available for Windows that should show you what dependencies are missing. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Unable to send email with libcurl (cross compiled for arm) application
On Wed, Jul 18, 2012 at 06:38:25AM -0700, usama yaseen wrote: I am using libcurl for am-1808 running linux [i need to send emails with my c-program], i have compiled the libcurl successfully for arm and the sample application gets compiled successfully, but when i run this application on the arm-board, i get the following output. * About to connect() to smtp.gmail.com port 587 (#0) * Trying 74.125.127.108... * 0x12008 is at send pipe head! * Connected to smtp.gmail.com (74.125.127.108) port 587 (#0) 220 mx.google.com ESMTP pf8sm6421301pbc.44 EHLO am180x-evm 250-mx.google.com at your service, [115.186.161.64] 250-SIZE 35882577 250-8BITMIME 250-STARTTLS 250 ENHANCEDSTATUSCODES STARTTLS 220 2.0.0 Ready to start TLS QUIT And then it remains stuck there, the output of the same program on my pc is, What SSL library is this build using? What is the output of curl --version? Can you connect to an https site using that curl binary? There is some issue with the certificates, i have tried setting ca-bundle while configuring lib-curl, but it didn't help. Here's my command for configuring libcurl. ./configure --host=arm-none-linux-gnueabi --build=i686-linux CFLAGS='-Os' --with-ca-bundle=/etc/ssl/certs/ca-certificates.crt --enable-smtp There's no --with-ssl on this configure line. Do you even have SSL support in that binary? If not, curl shouldn't even try an SSL connection, so maybe there's a libcurl bug there. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Library version.
On Wed, Jul 18, 2012 at 12:45:13PM +0300, Slav wrote: Hello. I downloaded libcurl from http://curl.haxx.se/download/ curl-7.26.0.tar.gz, built it with c-ares using both approaches (with symlink to ares sources and using already built ares) but curl_version_info( CURLVERSION_NOW )->version = 7.22.0, curl_version_info( CURLVERSION_NOW )->version_num = 0x71600 and (what is the most sad) curl_version_info( CURLVERSION_NOW )->ares = NULL. But LIBCURL_VERSION from curl/curlver.h is defined as 7.26.0. Why is it so? You're probably dynamically linking against an old libcurl.so.4 lying around on the system instead of the new one you just compiled. ldd ./curl should show you what libraries are being used (on Linux, anyway). Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Porting libcurl on Android with SSL Support
On Tue, Jul 17, 2012 at 11:39:00AM +0200, Tancho . wrote: On Tue, Jul 17, 2012 at 3:34 AM, Guenter li...@gknw.net wrote: there might be cases where you need such a makefile - different folks have different needs; unless Dan agrees it should be removed I think we keep it; and for those who're hasty and dont read ./docs/INSTALL to get informed of another possible way to go once they see an Android.mk in the root we can explicitely point to ./docs/INSTALL at the top of the makefile; wouldnt that help? I personally think that it's misleading, since it's close to impossible to compile using it. so maybe removing it is a bit too radical, but there should be a warning against it (or something) . That may be true for the standalone toolchain, but there's more than one way to compile for Android. Have you ever tried compiling against the NDK or from the full source release? You need an Android.mk when you do that. Unfortunately, I no longer do Android development so I can't keep it up to date, but it worked fine for me up to a few months ago. I think it's a great idea to point users to the INSTALL file from within Android.mk for a more detailed discussion of the Android options. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Library version.
On Sat, Jul 21, 2012 at 11:24:01PM +0300, Slav wrote: Hmm, that's really so. I was asserting that the new (7.26) curl was successfully installed by removing all curl.so files from the filesystem; after installation the new curl.so appeared, and the compilation command is `g++ ./main.cpp -L/usr/local/lib -lcurl -o test`. Why does the system build against some *.so.4 files? What the system builds your curl against and what it runs it against are two completely different things. One has to do with the gcc configuration and the other with the run-time linker configuration. You can deliberately choose to run a curl with a different library version at run-time, which is essentially what the system is doing on your behalf. If you want to run your curl against a specific libcurl at run-time, you can either configure the run-time linker to do that at run-time (LD_LIBRARY_PATH or /etc/ld.so.conf), configure the run-time linker at compile time (-rpath), put the libcurl.so.4 file where the run-time linker will use it by default (e.g. into /usr/lib), or statically link it. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [Mageia-dev] [changelog] [RPM] cauldron core/release libsoup-2.39.4.1-1.mga3
On Thu, Jul 19, 2012 at 11:36:02PM +0200, Nicolas Lécureuil wrote: in the devel ? i think a libsoup-l10n ( noarch package ) would be a better idea. This has been split into a -common package before, such as libexif12-common. Dan
Re: Unit tests with several responses
On Fri, Jul 20, 2012 at 02:15:46AM +, Joe Mason wrote: dataNUM Send back this contents instead of the data one. The num is set by: A) The test number in the request line is >10000 and this is the remainder of [test case number]%10000. So I have sections data1, data2, data3, and in the protocol section, GET requests for /10001, /10002 and /10003. If I understand this right, it should send back the response in data1 when it gets the request for /10001, etc. But I must not understand this, because when I run this, I get server returned nothing (no headers, no data). I also can't figure out how it interacts with The GET number in the example file you provided was incorrect. If test 2023 is supposed to request data1 then the GET request must be for http://%HOSTIP:%HTTPPORT/20230001 B) The request was HTTP and included digest details, which adds 1000 to NUM, and with swsbounce, subsequent requests will be for data1001 and so on. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [Mageia-dev] [changelog] [RPM] cauldron core/release libsoup-2.39.4.1-1.mga3
On Thu, Jul 19, 2012 at 05:28:41PM -0400, Charles A Edwards wrote: On Thu, 19 Jul 2012 23:12:34 +0200 Olav Vitters wrote: file /usr/share/locale/be/LC_MESSAGES/libsoup.mo from install of lib64soup2.4_1-2.39.4.1-1.mga3.x86_64 conflicts with file fr om package libsoup2.4_1-2.39.3-1.mga3.i586 Note that there are two different base versions here, visible when they're lined up: lib64soup2.4_1-2.39.4.1-1.mga3.x86_64 libsoup2.4_1-2.39.3-1.mga3.i586 It is miss-packaged. The locale files should be included in the devel rpm Not in the library rpm. No, they shouldn't--then the internationalization would only be available when the user installs the development packages, which doesn't make sense. Dan
Re: configure: disable OpenSSL check
On Mon, Jul 16, 2012 at 08:37:11PM +0200, pcworld wrote: I already successfully cross-compiled libcurl for a system running ARM. Now I'd like to add OpenSSL support by passing --with-ssl to configure, though there are some checks in the configure script that fail. Is there any way to disable these SSL checks and tell it that I'm sure it will work fine on the target OS? I'd prefer a solution that passes some additional commands to configure or sets an environment variable, to modifying the existing build scripts. There is some discussion on compiling with OpenSSL in the INSTALL file. Generally, it's preferred to use pkg-config instead of passing a path with --with-ssl. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
libexif project security advisory July 12, 2012
libexif project security advisory July 12, 2012 PROBLEM DESCRIPTION A number of remotely exploitable issues were discovered in libexif and exif, with effects ranging from information leakage to potential remote code execution. The issues are: CVE-2012-2812: A heap-based out-of-bounds array read in the exif_entry_get_value function in libexif/exif-entry.c in libexif 0.6.20 and earlier allows remote attackers to cause a denial of service or possibly obtain potentially sensitive information from process memory via an image with crafted EXIF tags. CVE-2012-2813: A heap-based out-of-bounds array read in the exif_convert_utf16_to_utf8 function in libexif/exif-entry.c in libexif 0.6.20 and earlier allows remote attackers to cause a denial of service or possibly obtain potentially sensitive information from process memory via an image with crafted EXIF tags. CVE-2012-2814: A buffer overflow in the exif_entry_format_value function in libexif/exif-entry.c in libexif 0.6.20 allows remote attackers to cause a denial of service or possibly execute arbitrary code via an image with crafted EXIF tags. CVE-2012-2836: A heap-based out-of-bounds array read in the exif_data_load_data function in libexif 0.6.20 and earlier allows remote attackers to cause a denial of service or possibly obtain potentially sensitive information from process memory via an image with crafted EXIF tags. CVE-2012-2837: A divide-by-zero error in the mnote_olympus_entry_get_value function while formatting EXIF maker note tags in libexif 0.6.20 and earlier allows remote attackers to cause a denial of service via an image with crafted EXIF tags. CVE-2012-2840: An off-by-one error in the exif_convert_utf16_to_utf8 function in libexif/exif-entry.c in libexif 0.6.20 and earlier allows remote attackers to cause a denial of service or possibly execute arbitrary code via an image with crafted EXIF tags. 
CVE-2012-2841: An integer underflow in the exif_entry_get_value function can cause a heap overflow and potentially arbitrary code execution while formatting an EXIF tag, if the function is called with a buffer size parameter equal to zero or one. CVE-2012-2845: An integer overflow in the function jpeg_data_load_data in the exif program could cause a data read beyond the end of a buffer, causing an application crash or leakage of potentially sensitive information when parsing a crafted JPEG file. There are no known public exploits of these issues. AFFECTED VERSIONS All of the described vulnerabilities affect libexif version 0.6.20, and most affect earlier versions as well. SOLUTION Upgrade to version 0.6.21 which is not vulnerable to these issues. CHECKSUMS Here are the MD5 sums of the released files: 0e744471b8c3b3b1534d5af38bbf6408 exif-0.6.21.tar.bz2 78b9f501fc19c6690ebd655385cd5ad6 exif-0.6.21.tar.gz 27339b89850f28c8f1c237f233e05b27 libexif-0.6.21.tar.bz2 9321c409a3e588d4a99d63063ef4bbb7 libexif-0.6.21.tar.gz aa208b40c853792ba57fbdc1eafcdc95 libexif-0.6.21.zip Here are the SHA1 sums of the released files: 74652e3d04d0faf9ab856949d7463988f0394db8 exif-0.6.21.tar.bz2 d23139d26226b70c66d035bbc64482792c9f1101 exif-0.6.21.tar.gz a52219b12dbc8d33fc096468591170fda71316c0 libexif-0.6.21.tar.bz2 4106f02eb5f075da4594769b04c87f59e9f3b931 libexif-0.6.21.tar.gz e5990860e9ec5a6aedde0552507a583afa989ca2 libexif-0.6.21.zip ACKNOWLEDGEMENTS Mateusz Jurczyk of Google Security Team reported the issues CVE-2012-2812, CVE-2012-2813 and CVE-2012-2814. Yunho Kim reported the issues CVE-2012-2836 and CVE-2012-2837. Dan Fandrich discovered the issues CVE-2012-2840, CVE-2012-2841 and CVE-2012-2845. REFERENCES http://libexif.sf.net pgp6L8xir1vkW.pgp Description: PGP signature
Re: how to force sslv3 using libcurl
On Wed, Jul 11, 2012 at 05:26:19PM -0400, Anu Shrestha wrote: just doesn’t work for me. How do you force SSL version? Any order it needs to be in? It doesn’t matter what I set to I get the handshake error(below is the code) and it only tries connecting to SSLv3. It’s been 5 days trying to fix the problem with no luck. Any help would be much much appreciated. Are you sure it's actually using SSLv3 or is that just Wireshark displaying the lowest common denominator version? Command line: = works curl -X POST -vvv --sslv3 -d @input_file https:// username:passw...@kmc03bit.xx.xx/invoke/processRequest --cacert /usr/local/etc/ssl/ca.pem You can use the --libcurl option to have curl write a little program that sets these options in the same way and see if that works any better. curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1); curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 1); Note that these have to be type long, not int. On 07/09/2012 10:58 PM, Anu Shrestha wrote: Summary: Handshaking between the client and host cipher(TLSV1/SSLV3) is not compatible with current version of curl 7.24. It used to work with 7.21. What changes between the version could have this? One significant one was commit db1a856b4f7cf6ae334fb0656b26a18eea317000, which has since been made configurable with the CURLSSLOPT_ALLOW_BEAST option. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: libcurl: Windows vs Mac
On Wed, Jul 11, 2012 at 12:38:04PM -0700, Igor Korot wrote: Running on Mac/Cocoa this executes fine. When run on Windows, however, I am getting CURLE_WRITE_ERROR and the errorMsg contains: Failed writing body (1416 != 1460). This error occurs when the write callback returns an error. Are you setting a write callback? Are you setting any write data? Windows has an issue with the default write callback due to issues sharing DLLs (see the tutorial for more info). Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Porting libcurl on Android with SSL Support
On Thu, Jul 05, 2012 at 07:17:01PM +0200, Guenter wrote: Am 05.07.2012 15:37, schrieb Tancho .: 2. the script I'm using is actually just setting the environment and then calling the curl config script, it's based on the explanations in the Android.mk file comments by the dev team you mean the Android.mk in libcurl source tree? Then sorry, I cant comment on this one since I never used it; but if you wait a bit I'm sure that Dan will comment on this ... If you use the standalone Android toolchain (from make-standalone-toolchain.sh), you don't need Android.mk or the instructions mentioned within it. I used Android.mk when creating Android ROMs that included libcurl built-in, but you will probably also need it when writing code using the NDK. If you can avoid it, do so, since getting the config.h file built correctly the first time using the instructions in Android.mk can be tricky and time consuming. You could also try creating config.h using the standalone toolchain, then using the NDK and Android.mk to build libcurl. I'm not positive that the compile and link options are the same in both environments, but the kind of things that configure looks for ought to be. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Session still remains even connection get closed
On Wed, Jul 04, 2012 at 06:28:34PM +0530, Swamy Mudhbasalar wrote: I tried with CURLOPT_FORBID_REUSE option, the connection is getting close but session is still there. I can see the session active in vSphere Client UI I don't know what vSphere Client UI is, nor what you mean by session in this context. HTTP has no concept of sessions, so I'm assuming you mean some kind of higher-layer application session. Actually i am downloading the vm file from ESX server via vCenter. I think there is some problem with libcurl 7.21.0 while interacting with vmware web service. I am suspecting that the libcurl is not doing - Log's out and Disconnects from the WebService. Libcurl is failing to do _service.Logout(_sic.sessionManager); _service.Dispose(); I have no idea what this is. Means is not logging out from the session manager Please let me know is there any option in libcurl to logout from the session manager. It sounds like this is some kind of session tracked by an application. If it uses standard HTTP to log out from the session manager, then libcurl can do it. All you need to do is find out what HTTP request is needed to do so. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Session still remains even connection get closed
On Tue, Jul 03, 2012 at 11:19:36AM +0530, Swamy Mudhbasalar wrote: I am downloading many files from the server. I used persistent connection and reusing the session. After getting all the files closing the connection using curl_easy_cleanup. Problem is still am seeing a session active on server side. Is there anyway i can close this session before curl_easy_cleanup. The CURLOPT_FORBID_REUSE option will automatically close the connection after every transfer. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: using libcurl for HTTP PUT - but curl_easy_perform always returns CURLE_OK
On Tue, Jul 03, 2012 at 05:35:33PM +0200, Pit Müller wrote: What is the official way for error handling here? Should i generally ignore the return value and always check only CURLINFO_RESPONSE_CODE? There are two levels of possible errors here: at the network level and at the HTTP level. The return value will tell you about any network level or configuration problems, while CURLINFO_RESPONSE_CODE will tell you about any higher-level problems. You can use the CURLOPT_FAILONERROR option to make libcurl convert some HTTP errors into CURLE-type errors, but read about the caveats first. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Chunked Transfer read callback pause
On Sat, Jun 30, 2012 at 06:47:38PM +0500, Usman Raza wrote: Hello all, I'm using curl for a chunked transfer in a separate thread, reading from a buffer. Could anyone tell me how I could pause it, so I could maybe switch to another thread (which will write to the buffer) and then switch back to resume the transfer? I did come across the curl_easy_pause(CURL *handle, int bitmask) function which could be used with returning CURL_READFUNC_PAUSE from a callback, but I'm not sure how I can use it to accomplish thread synchronization. Appreciate any help on this. Thanks. libcurl is inherently paused while in a callback, so if you're just using the easy interface and therefore have just a single transfer outstanding, you can ignore curl_easy_pause and just do your thread synchronization within the read callback. Simply don't return from that callback with valid data until it's available! You can do any sort of synchronization you want in that callback, just don't return before the data is ready. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: FTP response timeout
On Thu, Jun 28, 2012 at 09:33:44AM -0700, Jeff McKay wrote: How would I find out if CURLOPT_TCP_KEEPALIVE is supported on my platform (Windows)? If you enable logging, you should see a message Failed to set SO_KEEPALIVE on fd X if setting keepalive failed on a recent libcurl. So, there would be no solution if I am using a libcurl version prior to 7.25? I am trying to avoid --keepalive-time was added to the curl front-end in 7.18.0, but much later to libcurl. Your application can always enable TCP keepalive itself in the CURLOPT_SOCKOPTFUNCTION or CURLOPT_OPENSOCKETFUNCTION callback function even in older libcurl versions. the pain of doing an upgrade. Another bit of advice I got was to periodically send a NOOP instruction to the ftp server on the control channel, but I don't think I have this capability if libcurl is handling the ftp transfer, correct? Correct. Adding NOOP support on the ftp control channel has been suggested in the past, but nobody has stepped up and submitted any code that implements it. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: FTP response timeout
On Wed, Jun 27, 2012 at 02:16:14PM -0700, Jeff McKay wrote: I have seen an earlier response to the question of what error #28 in ftp means but didn't quite understand it. Looking at the log of my ftp session, it appears that libcurl uploads the entire file (about 350 megs) based on the data fetches received. At the end, I get FTP response timeout, control connection looks dead. This does not seem to happen with smaller files. Is this my problem, or with the ftp server? If my problem, what can I do about it? It sounds like the connection is going through a stateful router or NAT and the control connection times out due to inactivity and is closed by the router. You can try the --keepalive-time option (if it's supported on your platform) to force periodic traffic on the control channel to keep the connection alive. I am experimenting with CURLOPT_FTP_RESPONSE_TIMEOUT but have not gotten the results yet from my customer. From the documentation, it seems that without this option, libcurl defaults to CURLOPT_TIMEOUT which I do not set, therefore the wait is infinite? But I still get the ftp timeout error. libcurl uses a hard-coded 60 second timeout for getting the response after a ftp transfer. I'm not sure why this doesn't use CURLOPT_FTP_RESPONSE_TIMEOUT instead. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: [PATCH] blkid: recognize nilfs2
On Tue, Jun 26, 2012 at 08:32:05AM -0700, Sven-Göran Bergh wrote: Adds support for nilfs2 to blkid. Enabled with CONFIG_FEATURE_VOLUMEID_NILFS. Since this is nilfs2, not nilfs1, most (all?) of the references to nilfs should probably say nilfs2 instead. + // The scondary superblock is not always used, so ignore it for now. s/scondary/secondary/ Dan ___ busybox mailing list busybox@busybox.net http://lists.busybox.net/mailman/listinfo/busybox
Re: [Patch] Test for IPv6 cookies
On Tue, Jun 19, 2012 at 12:08:30PM +, Ghennadi Procopciuc wrote: I've made a test case for problem signaled by Andrei Cipu (my supervisor)[0]. Can you take a look ? It does seem like a big oversight that there aren't any cookie tests over IPv6--this looks like a good addition. It looks to me like you should be able to perform this test without resorting to a new libtest binary, though--just use the normal curl binary. That's faster and cleaner since the test system doesn't need to build and store yet another target for use by a single test. There should also be cookies and cookiejar keywords in the keywords section, and the test number should be the next unused sequential one in the general section (currently 1408). Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: CURL_DISABLE_COOKIES and CURLOPT_VERBOSE
On Tue, Jun 19, 2012 at 08:35:21AM -0700, Tom Bishop, Wenlin Institute wrote: As a relatively new user of libcurl, I made the silly mistake of trying to use cookies when CURL_DISABLE_COOKIES was defined. It took me a while to figure out what the problem was. To help others avoid the same mistake, maybe the following could be added in getinfo.c: The curl_easy_setopt call should have returned CURLE_UNKNOWN_OPTION when attempting to enable cookie handling in the first place. You are checking the return code, aren't you? Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html
Re: Source code analysis of curl v7.26.0
On Wed, Jun 06, 2012 at 01:11:04PM -0700, William Betts wrote: That was nice of you :). I'm guessing you'd want to send Daniel Stenberg an email off the list. I believe his email address is dan...@haxx.se. There's also the curl-security at haxx.se address, which goes to a small group of developers which is useful to get some feedback in case Daniel is on vacation or something. Dan --- List admin: http://cool.haxx.se/list/listinfo/curl-library Etiquette: http://curl.haxx.se/mail/etiquette.html