Re: Where is curlx_dyn_addn defined?

2024-05-28 Thread Dan Fandrich via curl-library
On Tue, May 28, 2024 at 05:52:08PM -0500, Bill Pierce via curl-library wrote:
> I like to figure things like this out myself. I find that it's the best way 
> to learn how things work. So, I
> grabbed the sources from github using Git Bash on May 24, 2024, but when I 
> tried to compile a test program
> with selected libcurl files, curlx_dyn_addn was undefined. I found that it is 
> called in many places in the
> libcurl sources using Windows Explorer's search feature, but I couldn't find 
> where it is defined.

It's called Curl_dyn_addn() in the source (in lib/dynbuf.c), but it's renamed 
to curlx_dyn_addn with a macro in lib/dynbuf.h.  It's a bit backwards, but 
there are reasons.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: OpenPGP card not available

2024-04-09 Thread Dan Fandrich
On Tue, Apr 09, 2024 at 12:11:31PM +0200, Werner Koch wrote:
> By default we are not using PC/SC on Linux but direct access to the
> reader via USB.  Now if pcscd is already running and has access to the
> reader scdaemon won't be able to access the reader via USB.
> 
> 2.2 falls back to PC/SC if it can't use the reader via USB.

That explains the difference nicely.


> Either shutdown pcscd or add
> 
> disable-ccid-driver
> 
> to ~/.gnupg/scdaemon.conf

Shutting down pcscd fixed it!  But I have other software that needs pcscd to
access the card, so I added "disable-ccid" to scdaemon.conf and gpg now works
even though pcscd is running.  Thanks for the help.
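For anyone else hitting this, the change amounts to one line in scdaemon's config file (the option is named disable-ccid here; check scdaemon(1) for the exact spelling in your GnuPG version):

```
# ~/.gnupg/scdaemon.conf
# Let pcscd keep direct access to the reader; scdaemon falls back to PC/SC
disable-ccid
```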

Dan

___
Gnupg-users mailing list
Gnupg-users@gnupg.org
https://lists.gnupg.org/mailman/listinfo/gnupg-users


OpenPGP card not available

2024-04-09 Thread Dan Fandrich
Running "gpg --card-status" with a configured Yubikey plugged in on an x86_64
Linux machine just gives me these errors when running 2.4.5:

gpg: selecting card failed: No such device
gpg: OpenPGP card not available: No such device

However, leaving everything else the same and just running 2.2.42 (& earlier
2.2.x) gives me the output I'd expect with that command.  I've tried some of
the advice I've found, such as adding "reader-port Yubico Yubi" and
"pcsc-shared" to scdaemon.conf, but it didn't make a difference. Enabling some
scdaemon logging shows
this interesting bit in the log file:

2024-04-08 16:45:28 scdaemon[62168] DBG: chan_7 <- SERIALNO
2024-04-08 16:45:28 scdaemon[62168] DBG: apdu_open_reader: BAI=70202
2024-04-08 16:45:28 scdaemon[62168] DBG: apdu_open_reader: new device=70202
2024-04-08 16:45:28 scdaemon[62168] ccid open error: skip
2024-04-08 16:45:28 scdaemon[62168] DBG: chan_7 -> ERR 100696144 No such device 


With 2.2.42, I see this (with an actual serial number) and all works well:

2024-04-08 16:38:43 scdaemon[36563] DBG: chan_7 <- SERIALNO
2024-04-08 16:38:43 scdaemon[36563] DBG: apdu_open_reader: BAI=70202
2024-04-08 16:38:43 scdaemon[36563] DBG: apdu_open_reader: new device=70202
2024-04-08 16:38:43 scdaemon[36563] ccid open error: skip
2024-04-08 16:38:43 scdaemon[36563] DBG: chan_7 -> S SERIALNO 
D000
2024-04-08 16:38:43 scdaemon[36563] DBG: chan_7 -> OK
...

Running "echo SERIALNO | scd/scdaemon --server" is enough.  I've tried both
pcsc-lite 1.9.9 and 2.0.3 without a difference.  I'm not sure how to drill
down to figure out further to figure out what else could be causing the
failure. One obvious difference is that the working version is linked against
libpthread.so.0 but the failing one is linked against libnpth.so.0, but that
seems to have to do with locking which I wouldn't expect to make difference
with a simple local test.

I was hoping to bisect to the problem, except that the 2.3 and 2.4 branches
fail at their .0 versions. Does anyone have a suggestion on how to debug
further?

Dan

___
Gnupg-users mailing list
Gnupg-users@gnupg.org
https://lists.gnupg.org/mailman/listinfo/gnupg-users


Re: Reproducing the release tarballs

2024-04-01 Thread Dan Fandrich via curl-library
On Sun, Mar 31, 2024 at 11:24:27AM +0200, Daniel Stenberg wrote:
> On Sat, 30 Mar 2024, Dan Fandrich via curl-library wrote:
> 
> > SPDX seems to be the standard SBOM format for this that tools are
> > starting to expect.  The format is able to handle complex situations,
> > but given the very limited scope needed in curl and for source releases
> > only, once you get a template file set up the first time filling in the
> > details for every release should be simple.
> 
> I can't but to feel that this is aiming (much) higher than what I want to
> do. If someone truly thinks SPDX is a better way to provide this information
> then I hope someone will step up and convert the scripts to instead use this
> format.
> 
> This is a SBOM for the tarball creation, not for curl.

Well, what is the tarball but the tarball of "curl"?  SPDX can provide
information on the files in the tarball as well as the files used to create the
tarball. How much you provide is up to you, but the more information available,
the more possibilities there are for others to use it.

> I rather start with something basic and simple, as we don't even know if
> anyone cares or wants this information.

That makes sense. SPDX is definitely heavier weight than a few version numbers
in an .md file. But, a lot more useful, too.

> > Even running "reuse spdx" in the curl tree (the same tool that's keeping
> > curl in REUSE compliance in that CI build) will output a SPDX file for
> > curl.
> 
> I tried it just now. It produces 86,000 lines of output! And yet I can't
> find a lot of helpful content within the output for our purpose here.

That example was just the first one I thought of that you might already have on
your system (due to the work in getting REUSE compliance some time ago). It
doesn't solve the problem at hand, but it shows what SPDX looks like and it
could still be integrated into a final curl SPDX file provided with each
release if we wanted it to. Few projects provide SPDX files right now, which is
why companies using SPDX only for license compatibility checking need to run
"reuse spdx" on the source code themselves. But if curl provided that SPDX file
already filled in with each release, including the additional information on
the dependencies used to create the tarball itself, that single file could
serve two purposes.  Even more purposes, actually, since it could additionally
be used for security scanning, such as finding that curl used a back-doored
autoconf m4 macro found only in the tarball (if that ever ends up happening one
day).

> It does not seem like a suitable tool for this.

Agreed. It just gives a flavour of one of the kinds of things a SPDX file can
provide, but could become part of a solution.

A tool that might actually do what you want is
https://pypi.org/project/distro2sbom/  That creates a SPDX file listing all the
packages in the current system (e.g. Debian packages on Debian).  You probably
don't want to run that on your personal system (way too many irrelevant
packages), but it could be run from a minimal container used just to create the
tarball, providing a more easily reproducible set of packages for anyone who
wants to completely reproduce that build process.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Reproducing the release tarballs

2024-03-30 Thread Dan Fandrich via curl-library
On Sat, Mar 30, 2024 at 06:29:48PM +0100, Daniel Stenberg via curl-library 
wrote:
> Any proposals for how to document the exact set of tools+versions I use for
> each release in case someone in the future wants to reproduce an ancient
> release tarball?

SPDX seems to be the standard SBOM format for this that tools are starting to
expect.  The format is able to handle complex situations, but given the very
limited scope needed in curl and for source releases only, once you get a
template file set up the first time filling in the details for every release
should be simple.

The spec is at https://spdx.dev/use/specifications/ but it's probably easier to
look at some simple examples to get a feel for it. Even running "reuse spdx" in
the curl tree (the same tool that's keeping curl in REUSE compliance in that CI
build) will output a SPDX file for curl. That one doesn't include the source
build dependencies that you're interested in (because that's not what that
particular tool does) but could be a start of something. The curl SBOM could
also include Debian package names+versions as dependencies.
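For illustration only, a minimal SPDX tag-value fragment along those lines. All names, versions and the namespace URL here are made up; a real file would list the actual release tooling:

```
SPDXVersion: SPDX-2.3
DataLicense: CC0-1.0
SPDXID: SPDXRef-DOCUMENT
DocumentName: curl-release-build-tools
DocumentNamespace: https://example.invalid/spdx/curl-release-build-tools
Creator: Tool: maketgz

PackageName: autoconf
SPDXID: SPDXRef-Package-autoconf
PackageVersion: 2.71
PackageDownloadLocation: NOASSERTION
PackageSupplier: Organization: Debian
```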

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: libcurl and s3/minio

2024-03-26 Thread Dan Fandrich via curl-library
On Tue, Mar 26, 2024 at 03:16:31PM -0600, R C via curl-library wrote:
> btw; you mentioned : "curl versions since 7.75.0 have AWS signature 
> calculation
> built-in, with the
> 
> --aws-sigv4 option."
> 
> is there something similar, a function,  in libcurl?

--libcurl tells me it's CURLOPT_AWS_SIGV4
(https://curl.se/libcurl/c/CURLOPT_AWS_SIGV4.html).
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: libcurl and s3/minio

2024-03-26 Thread Dan Fandrich via curl-library
On Tue, Mar 26, 2024 at 02:17:10PM -0600, R C via curl-library wrote:
> > >      -H "Host: $URL" \
> > This is seldom needed because curl adds it on its own.
> without it the script doesn't work with minio
[...]
> > >      ${PROTOCOL}://$URL${MINIO_PATH}

I don't know what minio is, but looking at how $URL is used in the second line
it appears to hold a host name, not a URL (it's a confusing name), so curl
should be setting the same thing already. But, I'm just guessing because I
don't know exactly what those variables hold. It can make things more brittle
to add low-level headers that aren't actually needed as when the script is
changed in the future, it can break things. It sounds like these headers are
being cargo-culted in so cleaning them up could save effort in the long term.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: libcurl and s3/minio

2024-03-26 Thread Dan Fandrich via curl-library
On Tue, Mar 26, 2024 at 11:36:07AM -0600, R C via curl-library wrote:
> I am trying to find out how to write something, using libcurl, to do some io
> with a minio object store (s3 compatible)
> 
> I did go a bit through the examples page (some I have used as an example for
> other projects), but couldn't really find what I was looking for. I did find a
> script that uses curl (the command) that seems to work
> 
> this is a code fragment, for what I try to write into C.

Do you know about the --libcurl option? It can write your code for you.

> curl --insecure \

--insecure is a bad idea, especially when you're sending credentials over the
wire. You should fix your certificate store so that it's not needed.

> 
>     -o "${OUT_FILE}" \
> 
>     -H "Host: $URL" \

This is seldom needed because curl adds it on its own. 

> 
>     -H "Date: ${DATE}" \

Date: on a request? I've never seen that before. Is that needed by AWS
signatures?

>     -H "Content-Type: ${CONTENT_TYPE}" \

This one doesn't make much sense on a GET request, because there is no content
being sent. Did you really want Accept:?

>     -H "Authorization: AWS ${USERNAME}:${SIGNATURE}" \

curl versions since 7.75.0 have AWS signature calculation built-in, with the
--aws-sigv4 option.

>     ${PROTOCOL}://$URL${MINIO_PATH}
> 
> I saw an example called httpcustomheader, which came closest to what I'm
> looking for I think.

This is a very simple request with one custom header, so simple.c will do fine
with the addition of CURLOPT_HTTPHEADER, which you can see how to use in many
other examples. But look at curl's built-in AWS support first.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: [gphoto-devel] gphoto2 auto detect errors

2024-03-13 Thread Dan Fandrich
On Wed, Mar 13, 2024 at 08:55:30PM +, david farrell wrote:
> Hi I am having some problems with gphoto2 auto detect errors, hope someone can
> help.
> 
> 1. gphoto auto detect error :[ 'Model Port ' , ' _ _ _ _ _ _ _' ]
> 2. gphoto - - auto detect error : Command ' gphoto - - auto detect ' returned
> non- zero exit status 139.
> 
> I keep getting these errors in my Web UI for my Creality Sonic Pad, I 
> contacted
> Creality but they don't seem to know the cause so I am hoping you can help ? 

It's going to take some more details to help. If this is a Linux system, exit
status 139 likely means the program ended with a SIGSEGV. It's unlikely there's
a backtrace log generated for this, but getting one would be the easiest way to
debug this further. That means logging onto the device and running the command
under a debugger or using other means of generating a stack trace.

Dan


___
Gphoto-devel mailing list
Gphoto-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/gphoto-devel


Re: M1 macOS | Memory leaks at SSL that is used by libcurl/8.1.2 (SecureTransport)

2024-01-30 Thread Dan Fandrich via curl-library
Is the code calling curl_global_cleanup() before checking for leaks? Does this
happen on the latest curl release (8.5.0)?
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: HTTP header validation

2024-01-29 Thread Dan Fandrich via curl-library
On Mon, Jan 29, 2024 at 08:59:03PM +, Stephen Booth via curl-library wrote:
> I eventually tracked the problem down to the bearer token being passed
> having an extra newline inserted at the end. This was copied through to
> the http request (adding a blank line and making the server ignore any
> subsequent http headers breaking the upload).

This is a case of GIGO. The man page even warns against this:

curl makes sure that each header you add/replace is sent with the proper
end-of-line marker, you should thus not add that as a part of the header
content: do not add newlines or carriage returns, they only mess things up
for you. curl passes on the verbatim string you give it without any filter
or other safe guards. That includes white space and control characters.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Seek problem with curl_formadd with CURLFORM_STREAM

2024-01-29 Thread Dan Fandrich via curl-library
On Mon, Jan 29, 2024 at 07:33:59PM +, Jeff Mears via curl-library wrote:
> I have code that’s attempting to use CURLFORM_STREAM with curl_formadd, and it
> is getting a CURLE_SEND_FAIL_REWIND error from the library.
> 
> Looking at the libcurl code, it looks like it might be a bug, but it’s hard 
> for
> me to tell for sure.  A full example of how the library is being used would
> take a while to construct.

If it's a bug, it's unlikely to get fixed because this API is deprecated. This
is your excuse to move to the supported API. See
https://github.com/curl/curl/commit/f0b374f662e28bee194038c3e7d5fae4cb498b06
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: CURL_FTP_HTTPSTYLE_HEAD

2024-01-18 Thread Dan Fandrich via curl-library
On Wed, Jan 17, 2024 at 03:28:11PM +0100, Nejc Drašček via curl-library wrote:
> I'm using ftp library ( github.com/embeddedmz/ftpclient-cpp ), which under
> the hood uses libcurl, and some requests are "polluting" stdout with http
> headers:
> 
> Last-Modified: Mon, 15 Jan 2024 14:32:44 GMT
> Content-Length: 0
> Accept-ranges: bytes
> 
> According to comment in lib/ftp.c this define is/was supposed to be removed
> at next so bump. The define is still enabled on master at the time of this
> message. Modifying libcurl source locally is not an option since we're using
> vcpkg to manage (external) libraries.
> 
> If there is a way to do this I would be much obliged to be pointed in that
> direction.

I zoomed right over the identifier in the subject and didn't see it. Those
#ifdefs were added 17 years ago, and given curl's goal of backward
compatibility and no SONAME bumps, they're unlikely to be removed in the next
17 years.

Having said that, this code writes to the write function, so what happens to it
is under the application's control. If an application doesn't want it written
to stdout, it shouldn't write it to stdout. But, if the application is
performing a NOBODY request over FTP, presumably it wants to get some metadata
for that URL and therefore some output. In this respect, ftp: is handled the
same as http:. Both:
  curl -I ftp://mirror2.tuxinator.org/robots.txt
and
  curl -I https://www.tuxinator.org/robots.txt
return similar kinds of information in similar ways.

I don't know what ftpclient-cpp does or wants to do with these requests, but it
sounds like it's not doing them in the way you want or expect. That's more
likely to be a problem with it rather than with curl.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: problem with unpaused connection

2024-01-18 Thread Dan Fandrich via curl-library
On Thu, Jan 18, 2024 at 01:46:34PM +0300, Sergey Bronnikov via curl-library 
wrote:
> Before Curl version 8.4.0 everything worked fine (exactly with Curl 8.3.0),
> but after updating Curl to 8.4.0 in our HTTP client

Have you tried 8.5.0? There have been some important HTTP/2 changes since that
version.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: CURL_FTP_HTTPSTYLE_HEAD

2024-01-17 Thread Dan Fandrich via curl-library
On Wed, Jan 17, 2024 at 03:28:11PM +0100, Nejc Drašček via curl-library wrote:
> I'm using ftp library ( github.com/embeddedmz/ftpclient-cpp ), which under
> the hood uses libcurl, and some requests are "polluting" stdout with http
> headers:
> 
> Last-Modified: Mon, 15 Jan 2024 14:32:44 GMT
> Content-Length: 0
> Accept-ranges: bytes
> 
> According to comment in lib/ftp.c this define is/was supposed to be removed

Which comment? Which define? There are well over 4000 lines in that file and I
don't see any relevant comment or define added since the last 3 releases.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Experimenting with parallel tests on Debian

2024-01-12 Thread Dan Fandrich via curl-library
On Thu, Jan 11, 2024 at 10:44:42AM -0300, Samuel Henrique via curl-library 
wrote:
> I have recently pushed an experimental build of curl with parallel test
> execution on Debian. This was done with the hopes of helping reporting issues
> and understanding if it's feasible to enable it for non-experimental builds.
> 
> We have quite a diverse set of supported architectures and different build
> hosts[0].
> 
> There were a few failures that went away after retries. I have not done any
> investigation other than noting the failed tests were not always the same and
> at least one failure occurred on a host with a high number of CPU threads (16,
> high-ish for non-server standards nowadays).

You've discovered why we haven't turned on parallel tests by default yet.
They're quite reliable when run on an unloaded machine, such as a developer's
PC, but CI and build machines (especially in the free CI tiers) tend to be
heavily oversubscribed. This results in highly variable timing and task
scheduling, and, unfortunately, some of the tests are fairly sensitive to this.
Some of the worst ones have keywords "flaky" and "timing-dependent" so they can
be easily skipped if desired.
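With the script in the curl source tree that looks something like the following; the keyword names are real, but adjust the path and job count for your setup (runtests.pl takes !keyword arguments, and the quoting protects the '!' from the shell):

```
cd tests
./runtests.pl -j7 '!flaky' '!timing-dependent'
```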

There are a couple of classes of issues still left in the tests that, if
solved, would eliminate some timing dependencies and make them more reliable.
For example, one of them has to do with sending data immediately before closing
a connection, which tends to make the final "QUIT" command in ftp tests
disappear. The reasons for most of these are hard to figure out, though, given
that they almost never fail locally when you try (although icing has a theory
about this particular one).

> All the builds were done following the suggestion of using 7 workers per CPU
> thread [1] and without valgrind.
> 
> Do note that I did not try a lower number of workers and I'm only sending this
> in case someone is interested in finding possible bugs. I plan to keep testing
> future releases and me or someone else from Debian might report something more
> concrete in the future.

I've found reducing the number of workers makes things better, but even at only
2 workers, you still see failures on the most oversubscribed hosts. If someone
could figure out how to consistently make a/some tests fail locally, it would
go a long way toward finding and fixing the cause.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: [gphoto-devel] I have a suggestion for gphoto2

2023-12-28 Thread Dan Fandrich
On Tue, Dec 26, 2023 at 09:07:34PM +, dan b wrote:
> Has canon 1300d / t6b compatibility 

It should work fine over USB.

> I was wonder/ would like to see the RPI 3B etc..
> 
> To connect either wirelessly through Bluetooth or
> 
> Through HDMI

Wirelessly through HDMI? I'm not sure how that would work. But 

> I’m trying to enable wireless RPI 3B touch screen to have a live view
> 
> And or control shutter release from the RPI 3B screen either astrophotography
> 
> Or nature where you can monitor the camera’s angle / focus

You can write a program using libgphoto2 to do this if you want; this kind of
highly-specific application is out of the scope of the gphoto2 project.
There may be a program out there that does something similar already—there are
lots of libgphoto2-using programs around.

Dan


___
Gphoto-devel mailing list
Gphoto-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/gphoto-devel


[akregator] [Bug 477891] Digest authentication failure

2023-12-16 Thread Dan Fandrich
https://bugs.kde.org/show_bug.cgi?id=477891

--- Comment #5 from Dan Fandrich  ---
For the record, the Qt issue seems to be
https://bugreports.qt.io/browse/QTBUG-98280

-- 
You are receiving this mail because:
You are the assignee for the bug.


[akregator] [Bug 477891] Digest authentication failure

2023-12-16 Thread Dan Fandrich
https://bugs.kde.org/show_bug.cgi?id=477891

Dan Fandrich  changed:

   What|Removed |Added

 Resolution|--- |UPSTREAM
 Status|REPORTED|RESOLVED

--- Comment #4 from Dan Fandrich  ---
I traced the Akregator code and found that it seems to use the QtNetwork
classes to perform HTTP requests. I created a standalone Qt application to
perform a similar request and discovered that it truncates the response field
to 128 bits as well. So, it seems to be a problem in Qt itself (I tried both
5.15.2 and 5.15.7).

-- 
You are receiving this mail because:
You are the assignee for the bug.


Re: Empty file name in CURLOPT_COOKIEFILE optimization

2023-12-13 Thread Dan Fandrich via curl-library
On Wed, Dec 13, 2023 at 09:49:07PM +, Dmitry Karpov via curl-library wrote:
> I propose to add a simple check for the cookie file name length and call 
> fopen() only if it is greater than zero like:

Sounds reasonable.

>if(data) {
>  FILE *fp = NULL;
> -if(file) {
> +if(file && strlen(file) > 0) {
>if(!strcmp(file, "-"))

This forces a traversal of the entire string, which isn't necessary. This would
be much faster:

if(file && *file) {

Are you able to turn this into a PR?

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Callback after http request has been submitted

2023-12-07 Thread Dan Fandrich via curl-library
On Thu, Dec 07, 2023 at 08:02:04PM +0100, Jeroen Ooms via curl-library wrote:
> I am looking for a way in libcurl to trigger a callback once, after a
> http request has been completely submitted (including upload if any),
> but before the server has responded. So basically when we have done
> our job, and we are waiting for a (potentially slow) http request to
> return a response status.

Why not call the callback function from within the CURLOPT_READFUNCTION? The
only difference is where the data about to be sent has been buffered: in the OS
or (potentially) within libcurl or the HTTP library. libcurl doesn't ask the OS
when the data has left the network interface, if that's what you're wanting.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


[akregator] [Bug 477891] Digest authentication failure

2023-12-02 Thread Dan Fandrich
https://bugs.kde.org/show_bug.cgi?id=477891

--- Comment #3 from Dan Fandrich  ---
Unfortunately, my server isn't public. I could probably come up with a
dockerfile to run a local server if you'd like.

-- 
You are receiving this mail because:
You are the assignee for the bug.


[akregator] [Bug 477891] Digest authentication failure

2023-12-01 Thread Dan Fandrich
https://bugs.kde.org/show_bug.cgi?id=477891

--- Comment #1 from Dan Fandrich  ---
One thing I just noted: the other clients respond with 64 hexadecimal
characters (i.e. 256 bits) in the "response" field of the Authorization:
header, but akregator responds with 32 hex characters (i.e. 128 bits). It doesn't
look like Akregator is responding properly to an algorithm=SHA-256
authorization.

-- 
You are receiving this mail because:
You are the assignee for the bug.


[akregator] [Bug 477891] New: Digest authentication failure

2023-12-01 Thread Dan Fandrich
https://bugs.kde.org/show_bug.cgi?id=477891

Bug ID: 477891
   Summary: Digest authentication failure
Classification: Applications
   Product: akregator
   Version: 5.24.3
  Platform: Flatpak
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: kdepim-b...@kde.org
  Reporter: d...@coneharvesters.com
  Target Milestone: ---

SUMMARY
Accessing a feed on a lighttpd server protected with HTTP Digest authentication
with algorithm=SHA-256 fails with a server error:

mod_auth.c.1334) digest: (a2ca643c55f46828b66002b5bed0e4e0): invalid format

akregator just silently fails to download the feed and shows the name in red.

STEPS TO REPRODUCE
1. Configure a feed served by a lighttpd server protected with SHA-256 Digest
authentication
2. Try to "Fetch feed"

OBSERVED RESULT
No feed and a red feed name

EXPECTED RESULT
Feed contents available for browsing

SOFTWARE/OS VERSIONS
Linux/KDE Plasma: 
KDE Frameworks Version: 5.111.0
Qt Version: 5.15.10

ADDITIONAL INFORMATION
The protected RSS feed link can be accessed fine (including authentication)
with Firefox, Chrome, curl and xh, so it's unlikely to be a server problem.
Running this from Flatpak will first hit #477889 before it gets to the point
where this bug is encountered.

-- 
You are receiving this mail because:
You are watching all bug changes.


[akregator] [Bug 477889] New: Cannot access password-protected feeds

2023-12-01 Thread Dan Fandrich
https://bugs.kde.org/show_bug.cgi?id=477889

Bug ID: 477889
   Summary: Cannot access password-protected feeds
Classification: Applications
   Product: akregator
   Version: 5.24.3
  Platform: Flatpak
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: kdepim-b...@kde.org
  Reporter: d...@coneharvesters.com
  Target Milestone: ---

SUMMARY
Trying to access a feed URL protected with HTTP Digest authentication fails.
For the feed I encountered this on the UI shows dialogs to accept a self-signed
certificate, then silently fails to download the feed and the feed name in the
list on the left turns red. The console shows these messages:

kf.kio.core: Can't communicate with kiod_kpasswdserver (for checkAuthInfo)!
kf.kio.core: Can't communicate with kiod_kpasswdserver (for queryAuthInfo)!

An older version (20.12.0) installed locally (not via Flatpak) works fine,
and requests the username and password from the user the first time, then
automatically uses those credentials on subsequent uses.

STEPS TO REPRODUCE
1. Configure a feed that requires Digest authentication
2. Try to "Fetch feed"
3. Go out and do some gardening because you aren't going to be reading RSS
feeds

OBSERVED RESULT
No feed and a red feed name

EXPECTED RESULT
Feed contents available for browsing

SOFTWARE/OS VERSIONS
Linux/KDE Plasma: 
KDE Plasma Version: 
KDE Frameworks Version: 5.111.0
Qt Version: 5.15.10

ADDITIONAL INFORMATION
The Flatpak permissions for org.kde.akregator do not include
org.kde.kpasswdserver in the [Session Bus Policy] section. Running "sudo
flatpak override org.kde.akregator --talk-name=org.kde.kpasswdserver" lets
akregator get past this problem (but then it encounters another, which I'll
open momentarily).

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: systemd-resolved support

2023-11-24 Thread Dan Fandrich via curl-library
On Fri, Nov 24, 2023 at 08:25:05PM +0100, Max Kellermann via curl-library wrote:
> For the long term, I was wondering whether libcurl would be interested
> in incorporating a systemd-resolved mode if I were to submit a pull
> request.

Wouldn't it work to simply switch to c-ares for resolving instead of adding a
new resolver back-end?
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: How to identify that extensions are not supported on SFTP server? [libcurl]

2023-11-22 Thread Dan Fandrich via curl-library
On Wed, Nov 22, 2023 at 07:26:15PM +, Nimit Dhulekar via curl-library wrote:
> We have been using statvfs as a CURLOPT_QUOTE command via libcurl to identify
> whether the entry on the SFTP server is a file or folder. Is there any way to
> know in advance (preferably through libcurl) that a certain command is not
> supported on an SFTP server?

libssh2 just throws away the list of SFTP protocol extensions sent by the
server that would allow it (or clients) to know whether statvfs would work.
I don't know how libssh deals with this info, but determining a
priori if statvfs will work using libssh2 would require changing libssh2 to
return this info to the client somehow, then writing a libssh2 program to query
that info before running curl, or somehow shoehorning that function into
libcurl (via curl_easy_getinfo() perhaps).

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: [gphoto-devel] Support for Canon PowerShot G5X Mark II

2023-10-31 Thread Dan Fandrich
On Tue, Oct 31, 2023 at 08:29:54PM +0100, Jens Lieder wrote:
> Hello I own a Canon PowerShot G5X Mark II
>
> I tried to use it for a photo booth, but it seems not to be supported.  May be
> it will be supported in future.

According to http://gphoto.org/doc/remote/, it is supported.

> *** Error ***  
> An error occurred in the io-library ('Could not claim the USB device'): Could
> not claim interface 0 (Device or resource busy). Make sure no other program
> (gvfs-gphoto2-volume-monitor) or kernel module (such as sdc2xx, stv680,
> spca50x) is using the device and you have read/write access to the device.
> *** Error (-53: 'Could not claim the USB device') ***  

Did you make sure no other program (gvfs-gphoto2-volume-monitor) or kernel
module (such as sdc2xx, stv680, spca50x) is using the device? Do you have
read/write access to the device?


___
Gphoto-devel mailing list
Gphoto-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/gphoto-devel


Re: Fwd: Adding IPFS Trustless Gateway Protocol Questions

2023-10-26 Thread Dan Fandrich via curl-library
On Thu, Oct 26, 2023 at 03:42:34PM +0200, Hugo Valtier via curl-library wrote:
> Instead of using the Path Gateway it uses the Trustless Gateway which
> answers with a stream of blocks and walks the merkle-tree, verifies
> hashes and deserializes it on the fly.
> This would make curl or libcurl capable of downloading ipfs:// content
> from any reachable IPFS node, not just a localhost trusted one.

I'm far from an expert in IPFS, but my understanding was that there were two
main ways to get files over IPFS: one is to get them via HTTP from an IPFS
gateway that knows about IPFS (what curl does now) and the other is to become a
full-fledged node in the IPFS network and speak the IPFS protocols to the
world.  What you describe sounds like a third method, where one may somehow
find a full IPFS node that happens to have your file and talk a subset of the
IPFS protocol to get that file. Is that an accurate assessment? If so, is that
really a mode that would be used by a significant number of people?  How do you
find an appropriate node for each file, for example?
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: "getaddrinfo() thread failed to start" under heavy load

2023-10-17 Thread Dan Fandrich via curl-library
On Tue, Oct 17, 2023 at 12:29:55PM +, m brandenberg via curl-library wrote:
> On Mon, 16 Oct 2023, Matt Toschlog via curl-library wrote:
> > I'm using libcurl in a voice server app running on Ubuntu 20.04 on an
> > Amazon AWS EC2 instance.  When I get around 500 users on the system I
> > start getting the error "getaddrinfo() thread failed to start" in my
> > curl_easy_perform() calls.
> > 
> > Memory doesn't seem to be an issue -- I'm not going above 15%
> > utilization.  Perhaps there's a Linux limitation (number of threads, for
> > example) that I'm running up against but I haven't found it.
> 
> Few ideas but I can confirm.  On Debian through Buster and libcurl
> 7.64, I've seen this on occasion.  ~1000 servers with 1000s of
> client connections each.  I'll get a small, micro-burst of resolver
> failures due to thread failure with a hint that resolver
> piggy-backing may not be working correctly.  Hosts are safe on
> memory, process and system fd limits, and process/thread fork
> limits.  Another resource seems involved but haven't got beyond that.
> Problem clears on retry a second or two later.

Switching to c-ares for resolving won't fix the underlying issue but there's a
good chance it will avoid it altogether. It's more resource efficient than the
threaded resolver.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Test server SSL handshake using libcurl

2023-10-06 Thread Dan Fandrich via curl-library
On Fri, Oct 06, 2023 at 02:54:22PM +, Taw via curl-library wrote:
> Hi, I am trying to use libcurl to test a handshake with an internal server.
> Unfortunately GET/HEAD methods do not work, I get a 404 error from the server.
> Practically I would like the cURL equivalent of this command: "openssl 
> s_client
> -connect : -cert="
> I can use OpenSSL lib to do it, but cURL is more elegant and OpenSSL is not
> that friendly.

You should be able to do that with CURLOPT_CONNECT_ONLY.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Handling Cloudfare issues

2023-09-26 Thread Dan Fandrich via curl-library
On Tue, Sep 26, 2023 at 11:29:16PM +0200, Mac-Fly via curl-library wrote:
> To rant a little: I don't know what's wrong with the internet these days
> and why such checks are required at all. I am sure they break a lot of
> applications like mine! (Rant off.)

You're preaching to the choir here.

> I am sure I am not the only one and now I am searching here for answers
> because I believe many curl users are affected, too. Please help me! :-)

There's a project called curl-impersonate that uses a patched version of curl
to exactly impersonate browsers in how they talk to sites; headers, cookies,
TLS negotiation flags, etc. That has worked for me to get around this problem,
but I'm sure it won't be long before even that won't be enough.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: 8.3.0: test1474 fails every time

2023-09-23 Thread Dan Fandrich via curl-library
On Sat, Sep 23, 2023 at 08:16:42PM +0200, Christian Weisgerber via curl-library 
wrote:
>   So in the end this regress test is built on assumptions and is
>   therefore non-portable and prone to fail.

That basically verifies my guess as to what was happening. Thanks for following
up on this.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: curl_multi_perform creates new thread

2023-09-21 Thread Dan Fandrich via curl-library
On Thu, Sep 21, 2023 at 12:08:33PM -0400, Anass Meskini via curl-library wrote:
> Thanks Dan for the clarification. 
> I think it might be worth mentioning in the doc that this function might 
> create
> a thread.

It is called the "threaded resolver" after all, and that name is found in 8
different documentation files in the source tree, including in
libcurl-thread(3) which is the go-to location for information about threading
in libcurl. Where else would you like to see it? curl_multi_perform(3) seems
like a poor place for it, because *absolutely everything* happens in a call to 
curl_multi_perform and we can't document everything there.

> Would setting CURLOPT_RESOLVE result in curl_multi_perform never creating a 
> new
> thread?

I'm pretty sure that the name resolve cache is checked before the system
resolver thread is created, so this should work.

> Also when I run multi-app.c in helgrind, I see:
> 
> ==339231== 
> ---Thread-Announcement--
> ==339231==
> ==339231== Thread #1 is the program's root thread
> ==339231==
> ==339231== 
> 
> ==339231==
> ==339231== Thread #1: pthread_mutex_destroy with invalid argument
> ==339231==    at 0x483FC96: ??? (in /usr/lib/x86_64-linux-gnu/valgrind/
> vgpreload_helgrind-amd64-linux.so)
> ==339231==    by 0x572D16F: ??? (in /usr/lib/x86_64-linux-gnu/
> libp11-kit.so.0.3.0)
> ==339231==    by 0x4011F6A: _dl_fini (dl-fini.c:138)
> ==339231==    by 0x49468A6: __run_exit_handlers (exit.c:108)
> ==339231==    by 0x4946A5F: exit (exit.c:139)
> ==339231==    by 0x4924089: (below main) (libc-start.c:342)
> ==339231==
> ==339231== 
> 
> ==339231==
> ==339231== Thread #1: pthread_mutex_destroy with invalid argument
> ==339231==    at 0x483FC96: ??? (in /usr/lib/x86_64-linux-gnu/valgrind/
> vgpreload_helgrind-amd64-linux.so)
> ==339231==    by 0x4011F6A: _dl_fini (dl-fini.c:138)
> ==339231==    by 0x49468A6: __run_exit_handlers (exit.c:108)
> ==339231==    by 0x4946A5F: exit (exit.c:139)
> ==339231==    by 0x4924089: (below main) (libc-start.c:342)
> 
> 
> Are these false positives?

Not that I'm aware. Note that libcurl uses pthread mutexes aside from
threading, so these are not (necessarily) related to the threaded resolver. Other
libraries like OpenSSL, GnuTLS and libssh also use mutexes, so this might not
ever be an issue in libcurl.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: curl_multi_perform creates new thread

2023-09-20 Thread Dan Fandrich via curl-library
On Wed, Sep 20, 2023 at 09:11:32PM -0400, Anass Meskini via curl-library wrote:
> I compiled curl from the github repository with --with-openssl then I compiled
> multi-app.c.
> When I run the program in gdb and add a breakpoint for pthread_create I see:

curl will use the threaded resolver option by default, so yes, this is
expected. You can configure with the --disable-threaded-resolver or
--enable-ares option to avoid this.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: 8.3.0: test1474 fails every time

2023-09-19 Thread Dan Fandrich via curl-library
On Tue, Sep 19, 2023 at 01:47:45PM +0200, Daniel Stenberg wrote:
> Maybe we should consider adding a way to
> disable/enable tests based on the OS name where it runs?

There's already the "win32" feature for that platform since it's needed often
because of its "special" behaviour.  For finer-grained detection (e.g. msys but
not Cygwin) a number of tests are using ; this is the first time I
can find that we're skipping a test based on another OS. If that starts
happening more often it might be worthwhile adding that feature, otherwise
 works fine, which I've done in PR#11888.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: 8.3.0: test1474 fails every time

2023-09-19 Thread Dan Fandrich via curl-library
On Mon, Sep 18, 2023 at 01:13:34PM +0200, Christian Weisgerber via curl-library 
wrote:
> Dan Fandrich:
> > I wanted to try this patch on NetBSD to see if it's related
> > to Nagle's algorithm, but couldn't get to the point where I could try it:
> 
> > -http://%HOSTIP:%HTTPPORT/we/want/%TESTNUMBER -T 
> > %LOGDIR/test%TESTNUMBER.txt --limit-rate 64K --expect100-timeout 0.001
> > +http://%HOSTIP:%HTTPPORT/we/want/%TESTNUMBER -T 
> > %LOGDIR/test%TESTNUMBER.txt --limit-rate 64K --expect100-timeout 0.001 
> > --tcp-nodelay
> 
> This makes no difference, the test fails the same way.

After much pain I was finally able to install OpenBSD under KVM and reproduce
this problem. I verified that curl was correctly sending the first 65536 bytes
of data to the socket in a call to send(2), but the OS reports that only 32716
bytes were actually sent. The test assumes that the OS will send all 65536
bytes, so the test fails.

I tried adjusting all the relevant sysctl parameters I could find, and couldn't
change this behaviour. Typically, the pattern is that the first three 64 KiB sends
only actually send 32716 bytes each, then the next couple send 65432 bytes, then
finally the kernel sends all 65536 bytes, as requested (with the occasional
65432 one). I also tried using write(2) instead of send(2) with no effect.

The OS is free to do this of course, but the test depends on the OS sending
all the data at once in order to set up the specific conditions needed to make
the test work. I don't see any reasonable alternative to just disabling this
test on NetBSD and OpenBSD.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: libcurl usage - memory error

2023-09-18 Thread Dan Fandrich via curl-library
On Mon, Sep 18, 2023 at 09:26:36AM -0400, Anass Meskini via curl-library wrote:
> When I run my program in valgrind, I see memory errors. What am I doing wrong?

Both these instances occur in GnuTLS, which deliberately uses some undefined
memory in its operation. It has code to mark these areas as undefined, however,
to stop Valgrind complaining, but this support may not have been enabled in the
GnuTLS library you're using, or it might be inadequate.  You might need to
compile your own to make sure it's enabled (_gnutls_memory_mark_defined is the
function) and then ask on a GnuTLS forum if you can't get it going.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: 8.3.0: test1474 fails every time

2023-09-17 Thread Dan Fandrich via curl-library
On Sun, Sep 17, 2023 at 10:43:17PM +0200, Christian Weisgerber via curl-library 
wrote:
> The comment for test1474 says "This test is quite timing dependent and
> tricky to set up."  On OpenBSD, it fails every time for me.  And this
> is not an overloaded machine.

I've noticed failures on the NetBSD autobuilds as well, but I've been unable to
successfully install NetBSD in a VM to test it. Maybe I'll have more luck with
OpenBSD. What I noticed in the NetBSD logs is that rather than receiving an
initial 64 KiB block of data, it looks like on NetBSD it's only receiving
closer to 48 KiB. I wanted to try this patch on NetBSD to see if it's related
to Nagle's algorithm, but couldn't get to the point where I could try it:

diff --git a/tests/data/test1474 b/tests/data/test1474
index 848f15211..24b349b19 100644
--- a/tests/data/test1474
+++ b/tests/data/test1474
@@ -82,7 +82,7 @@ http
 HTTP PUT with Expect: 100-continue and 417 response during upload
  
  
-http://%HOSTIP:%HTTPPORT/we/want/%TESTNUMBER -T %LOGDIR/test%TESTNUMBER.txt 
--limit-rate 64K --expect100-timeout 0.001
+http://%HOSTIP:%HTTPPORT/we/want/%TESTNUMBER -T %LOGDIR/test%TESTNUMBER.txt 
--limit-rate 64K --expect100-timeout 0.001 --tcp-nodelay
 
 # Must be large enough to trigger curl's automatic 100-continue behaviour
 

It's a finicky test (which is why it's marked flaky) but it should work fine on
an unloaded system.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: LibSSH2 password expire issue

2023-09-08 Thread Dan Fandrich via libssh2-devel
On Thu, Sep 07, 2023 at 01:18:51PM +0600, Amirul Islam via libssh2-devel wrote:
> I am having a little problem, I checked around online, but could not find a 
> reasonable explanation. I am trying to authenticate a session with a linux 
> box. The user password is expired, but the functions 
> "libssh2_userauth_password_ex" or "libssh2_userauth_publickey_fromfile_ex" 
> never fails with LIBSSH2_ERROR_PASSWORD_EXPIRED, following is my code.

There is a specific SSH protocol message code (60) that the server needs to
send for this to happen. Based on the logs, it seems the server does not send
it. Instead, it sends SSH_MSG_USERAUTH_SUCCESS (i.e. it allows the login) then
asks the user to change it interactively instead.

Dan
-- 
libssh2-devel mailing list
libssh2-devel@lists.haxx.se
https://lists.haxx.se/mailman/listinfo/libssh2-devel


Re: Curl Configuration Weirdness for libz.a

2023-09-01 Thread Dan Fandrich via curl-library
On Fri, Sep 01, 2023 at 01:53:27PM -0400, rsbec...@nexbridge.com wrote:
> Slight change, please. The i386 should be x86 (and eventually x86_64 when I
> get the 64-bit builds working).

i386 is a historical tag that basically means 32-bit Intel x86 architecture
these days.  We should probably change them all to say x86 since there aren't
very many actual i386 binaries available in the world any longer.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Curl Configuration Weirdness for libz.a

2023-08-31 Thread Dan Fandrich via curl-library
On Thu, Aug 31, 2023 at 01:50:07PM -0400, rsbec...@nexbridge.com wrote:
> On Thursday, August 31, 2023 1:41 PM, Dan Fandrich wrote:
> >On Thu, Aug 31, 2023 at 11:09:58AM -0400, Jeffrey Walton via curl-library
> wrote:
> >> I think you should change strategies. You should use sed to change
> >> references from -lz to libz.a (and friends).
> >
> >While that would work, devs shouldn't need to do this. curl's configure is
> simply doing
> >the wrong thing. I'll work on a patch.
> 
> Thanks.

https://github.com/curl/curl/pull/11778

I've tested that a zlib.pc file with a static library path now works fine.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Curl Configuration Weirdness for libz.a

2023-08-31 Thread Dan Fandrich via curl-library
On Thu, Aug 31, 2023 at 11:09:58AM -0400, Jeffrey Walton via curl-library wrote:
> I think you should change strategies. You should use sed to change
> references from -lz to libz.a (and friends).

While that would work, devs shouldn't need to do this. curl's configure is
simply doing the wrong thing. I'll work on a patch.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Curl Configuration Weirdness for libz.a

2023-08-30 Thread Dan Fandrich via curl-library
On Wed, Aug 30, 2023 at 06:08:38PM -0400, rsbec...@nexbridge.com wrote:
> Unfortunately, the packaging team for the platform did not help on this one. 
> zlib.h is in the /usr/coreutils/include directory, the zlib.a, zlib.so, 
> zlib.so.1.2.11 are in /usr/coreutils/lib (which collide). The zlib.pc file 
> does not help particularly:
> 
> prefix=/usr/coreutils
> exec_prefix=${prefix}
> libdir=${exec_prefix}/lib
> sharedlibdir=${libdir}
> includedir=${prefix}/include
> 
> Name: zlib
> Description: zlib compression library
> Version: 1.2.11
> 
> Requires:
> Libs: -L${libdir} -L${sharedlibdir} -lz
> Cflags: -I${includedir}

You could try hacking a copy of zlib.pc and replacing "-lz" with 
"/usr/coreutils/lib/libz.a"
then force configure to use it with PKG_CONFIG_PATH=/path/to/hacked/file, but
I'm pretty sure that even that won't completely get rid of the use of -lz.
Running 'make ZLIB_LIBS=' after the configure should get rid of one lingering
instance of it but there's another one that will still show up.
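For illustration, the hacked copy of the zlib.pc quoted above would end up
looking something like this (the static library path is the one from this
particular system; nothing else changes):

```
prefix=/usr/coreutils
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
sharedlibdir=${libdir}
includedir=${prefix}/include

Name: zlib
Description: zlib compression library
Version: 1.2.11

Requires:
Libs: -L${libdir} -L${sharedlibdir} /usr/coreutils/lib/libz.a
Cflags: -I${includedir}
```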

If you can confirm that behaviour, then IMHO, configure should be changed to 
stop doing
that. If pkg-config has successfully found zlib, then configure shouldn't be
adding its own libraries and link flags to what pkg-config says is correct.

> which really will force zlib.so.1.2.11 being selected, and I cannot use that 
> for packaging curl for the general population as that DLL is only available 
> on the minority of machines. (On that subject, can you change the ref on 
> https://curl.se/download.html from my name to ITUGLIB - which is the 
> volunteer org who would take over if I get hit by a bus - but I'm glad we're 
> listed there and it is otherwise correct).

Done.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Curl Configuration Weirdness for libz.a

2023-08-30 Thread Dan Fandrich via curl-library
On Wed, Aug 30, 2023 at 05:03:34PM -0400, rsbec...@nexbridge.com wrote:
> Actually, there is no libtool on the platform, so upgrading will be
> difficult. No LIB, INCLUDES, or other compile-related environment variables.

Then it will be using the built-in libtool, which should be fairly recent. But,
if there are NonStop-specific changes that aren't upstream, you won't get them.
I think this is unlikely to be the issue here, though.

> For the OpenSSL 3.0 build:
> CFLAGS="-c99" CPPFLAGS="-Wnowarn=2040 -D_XOPEN_SOURCE_EXTENDED=1
> -WIEEE_float -I/usr/coreutils/include -I/usr/local-ssl3.0/openssl/include"
> LDFLAGS="/usr/coreutils/lib/libz.a -L/usr/coreutils/lib
> -L/usr/local-ssl3.0/lib" ./configure --prefix=/usr/local-ssl3.0
> --with-ssl=/usr/local-ssl3.0 --with-ca-path=/usr/local-ssl3.0/ssl/certs
> --disable-pthreads --disable-threaded-resolver --enable-ipv6
> --with-zlib=/usr/coreutils/lib/libz.a

--with-zlib doesn't work this way. It's intended to receive the path to a zlib
installation such as would be created after 'make install' when building zlib.
Specifically, there should be …/include/ and …/lib/ directories underneath this
path.  If there isn't such an install path on your system or it contains both
libz.so and libz.a, it won't work. In that case, use --with-zlib and set
PKG_CONFIG_PATH to a location of a zlib.pc file that only contains information
on a static libz. Failing even that, then you'll likely have to resort to
setting things like LIBS=/path/to/libz.a and CPPFLAGS=-I/path/to/zlib-include/
and hope the existing libz.so doesn't get in the way.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Curl Configuration Weirdness for libz.a

2023-08-30 Thread Dan Fandrich via curl-library
On Wed, Aug 30, 2023 at 03:27:34PM -0400, Randall via curl-library wrote:
> ln: failed to create hard link '.libs/libcurl.lax/lt1-libz.a' =>
> '/usr/coreutils/lib/libz.a': Cross-device link

This looks like a bad assumption on the part of libtool that a hard link is
possible. I don't know why it's trying to do this in the first place.  Have you
tried updating libtool?  What configure options and environment variables are
you giving when this happens?

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Adding a new --option to the tool

2023-08-28 Thread Dan Fandrich via curl-library
On Mon, Aug 28, 2023 at 08:11:14PM +0300, Florents Tselai via curl-library 
wrote:
> Is there any documentation / how to on the process I’d need to follow to add a
> new option to the tool ?
> Particularly the sequence of files / function / macros I’d need to 

There is a lot of documentation in the docs/ directory, including general
development tips, contribution guidelines and something more specifically on
adding a new protocol, but I don't think there's something exactly on that.
What I would do is just search for a particular option in the code (--etag-save
is probably a good one because there shouldn't be many false positives) and
following the trail starting with what you find.

> Specifically I’m working on a new --warc-file flag which should be similar
> to --output or --etag-save.

I suggest bringing up a proposal to this list (or the curl tool list) to get 
some
feedback before spending too much time developing it if your goal is to push it
upstream. I personally think WARC is a good idea but it would be a shame if after
finishing it you found pushback because it's not a good fit for the curl tool,
or were asked for major design changes because of reasons (like we'd prefer
WACZ instead, for example).
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Cirrus CI limiting free compute time

2023-08-22 Thread Dan Fandrich via curl-library
On Tue, Aug 22, 2023 at 04:06:17PM +0200, Jimmy Sjölund via curl-library wrote:
> Considering that Cirrus CI lists curl and use the logo on their first page

I didn't notice that before!  Usually, it's the companies that pay to show up
on the curl sponsors page. curl has become such a trusted brand that companies
now want to have it on their own pages!

> they might be open for some kind of sponsorship, if contacted?

I'm guessing if this were an option they would have reached out to us ahead of
time so I don't have high hopes, but it doesn't hurt to ask.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Cirrus CI limiting free compute time

2023-08-21 Thread Dan Fandrich via curl-library
The curl Cirrus CI pages now link[1] to a notice that they're limiting their
free CI tier starting next week. The new limit will be "50 compute credits" per
month, which seems to buy us about 260 hours of compute time.  Unfortunately,
curl has been using about 6000 hours of compute time per month lately[2]. At
that rate, our free time will be used up on the first day.

They have an easy-to-use credit-card entry form for us to buy credits, but it
looks to me like that would cost us almost $3500 per month (presumably USD).

Another option is to rent one or more virtual servers somewhere and hook them up
to Cirrus CI for only $10 per month. To replace our current usage would require
at least 8 virtual servers, though, so still several hundred dollars per month.

Finally, we could migrate all but the FreeBSD jobs to one of the CI services
still offering a reasonable free tier (Azure and GHA). We can almost squeeze in
our FreeBSD builds within the Cirrus CI monthly credits, and I'm not aware
of another CI service that offers FreeBSD servers, so that's probably the
cheapest way forward. But it probably means more latency and slower build
results as we load the other services even more.

Dan

[1] https://cirrus-ci.org/blog/2023/07/17/limiting-free-usage-of-cirrus-ci/
[2] https://cirrus-ci.com/settings/github/curl (must be logged in)

-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: NSS and gskit are getting the axe

2023-07-18 Thread Dan Fandrich via curl-library
On Wed, Jul 19, 2023 at 12:30:02AM +0200, Patrick Monnerat via curl-library 
wrote:
> No tests on OS400: would need perl among other features. These are available
> under PASE which is an AIX emulation, but certainly not native OS400.

If the tests won't even run there, then maybe you can convince our BDFL that a
simple compile test would be good enough. That could be self-contained on OS400
and feed trivially into the autobuilds (https://curl.se/dev/builds.html), but that
doesn't provide much visibility. It could also tie into another server that
does the GitHub interfacing to bring the results into the GitHub PR interface,
such as how Dagobert uses BuildBot to run Solaris builds (e.g.
https://buildfarm.opencsw.org/buildbot/builders/curl-unthreaded-solaris10-sparc).
You don't need to turn OS400 into a GitHub runner to accomplish this.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Help with AutoGPT installation - Library does not load

2023-07-08 Thread Dan Fandrich via curl-library
On Sat, Jul 08, 2023 at 08:18:26PM +, Ligya Fernandes via curl-library 
wrote:
> Thanks in advance for the help, Dan! Do you know where I can find a
> compatible libcurl library? Could you point me to it with a link, if that's
> not asking too much?

My point is that you shouldn't have to find your own library. Whoever supplied
your git binary should have supplied the library, too.  You can find a bunch of
libraries at https://curl.se/download.html#Win64 but it will be hard to find
which one is compatible with your git binary since some important compiler
flags need to be the same between the two.  I would try just reinstalling your
git rather than go this path.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Help with AutoGPT installation - Library does not load

2023-07-08 Thread Dan Fandrich via curl-library
On Fri, Jul 07, 2023 at 08:21:33PM +, Ligya Fernandes via curl-library 
wrote:
> Fatal: failed to load library "libcurl -4.dll

This is an indication that your git installation is corrupt. Whatever way you 
installed
git should have also installed a compatible libcurl library.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Bug#1038608: rust-coretuils: FTBFS: error: no matching package named `remove_dir_all` found

2023-06-19 Thread Dan Fandrich
This dependency was removed upstream in v0.0.18 (and v0.0.19).




Re: Goal: for every setopt, have at least one stand-alone example

2023-06-09 Thread Dan Fandrich via curl-library
On Fri, Jun 09, 2023 at 10:35:45AM +0200, Daniel Stenberg via curl-library 
wrote:
> The idea is simple: for every existing option we have to curl_easy_setopt(),
> there should be at least one full-size stand-alone example (in
> docs/examples/) showing how it could be used.

This would be really useful for some of the more complicated options that
interact with callbacks or other options in order to work properly. But does it
really add anything to have a standalone example for, e.g.
CURLOPT_FTP_USE_EPSV?  It's just a boolean that's either set or it isn't, and a
standalone example program doesn't show what effect it has or how it helps.

Every option's man page already has an "Example" section showing how to use
each option properly, which can easily be cut and pasted into a program. I think
a bigger concern is that having all kinds of simple example programs cluttering
up docs/examples/ will make it harder to find the truly useful ones.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Parallel curl tests

2023-06-08 Thread Dan Fandrich via curl-library
On Thu, Jun 08, 2023 at 05:40:04PM +0200, Daniel Stenberg via curl-library 
wrote:
> Just a few days ago, Dan Fandrich merged the necessary commits into master 
> that now lets us try out running the curl tests in parallel as compared to 
> the old serial way.

It's been a long road of refactoring the test suite to get here, but there are 
finally some visible benefits. The biggest remaining issues to fix before 
enabling it by default have to do with general test flakiness.  Running tests 
in parallel changes test timing significantly, so timing-related test failures 
now occur more often. I've made some improvements in this area recently that 
also benefit sequential tests (one of my recent PRs actually went 
green!) and there are more to come. Still, running tests on your local 
system should usually go fine, so try it out!

I'm also interested in finding out what the optimum number of test runners is, 
so if you experiment on your own system and find the number that reduces 
test times the most, let me know what it is and what CPU you're using 
and I'll use that data to set an optimum value when it's time to enable 
it by default.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: multi interface with hsts cache

2023-05-29 Thread Dan Fandrich via curl-library
On Mon, May 29, 2023 at 11:41:04PM +0200, Przemysław Sobala via curl-library 
wrote:
> If I understand the documentation correctly, the HSTS cache is applied to each
> curl easy handle and it's read and written on each easy handle open and close
> action.
> I'd like to use the in-memory cache as reading and writing a cache file on
> every easy handle is redundant in my opinion and can slow down my service.
> 1. How can I configure the in-memory cache for easy handles in Multi 
> Interface?

You're probably looking for the share interface:
https://curl.se/libcurl/c/CURLSHOPT_SHARE.html
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: WebSocket custom port name as in JavaScript websockets

2023-05-25 Thread Dan Fandrich via curl-library
On Thu, May 25, 2023 at 05:23:58PM +0200, Johny Bravo via curl-library wrote:
> I have tried websocket API, but I cannot get it working and receive message.
> If I use the ws in JavaScript, I have:
> 
> var socket = new WebSocket( "wss://some_url", "example");
> 
> However, I dont know, how to set "example" port in libCURL API. I have tried

It sounds like you're trying to use libcurl to talk to a browser. My
understanding is that this will never work, because WebSockets is for
client-to-server communication, not peer-to-peer, and libcurl (like a browser)
provides only a client-side WebSockets implementation.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: curl tests now use perl module Memoize.pm

2023-05-17 Thread Dan Fandrich via curl-library
On Wed, May 17, 2023 at 09:48:39AM +0200, Rainer Jung via curl-library wrote:
> I just wanted to note, that the test suite now uses the perl module
> Memoize.pm. That module is contained in the perl base package eg. for RHEL
> 7, but for RHEL 8 must be installed as perl-Memoize.

I had assumed this would be available everywhere in a base perl installation 
and therefore safe to use, especially since nobody complained until now.
Memoize just improves test performance and isn't critical, so if it's going to
cause issues it could be made optional. Are there any other perl distributions
that relegate it to a separate package?

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Progress meter always there

2023-04-19 Thread Dan Fandrich via curl-library
On Wed, Apr 19, 2023 at 01:26:13PM +, Arnaud Compan via curl-library wrote:
> Is there a way to silence the internal progress meter ?

There's an opt for that: CURLOPT_NOPROGRESS
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Help using libcurl with HTTP proxy on Android device

2023-04-12 Thread Dan Fandrich via curl-library
On Wed, Apr 12, 2023 at 03:08:02PM -0700, David Castillo via curl-library wrote:
> What permissions does OpenSSL need to read the certificates?

I'm guessing the app would need the READ_EXTERNAL_STORAGE permission.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: curl/openssl crash

2023-03-24 Thread Dan Fandrich via curl-library
On Fri, Mar 24, 2023 at 05:45:18PM +, Philippe Lefebvre via curl-library 
wrote:
> we are having some crashes when using the CURL library. We are in a
> multithreaded environment, and these crashes mostly happen on heavy/loaded
> processes (lots
> of data, lots of Get/Post requests).

The first question to ask in a case like this: have you read
https://curl.se/libcurl/c/threadsafe.html ?
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Parallel curl testing project

2023-03-23 Thread Dan Fandrich via curl-library
On Thu, Mar 23, 2023 at 01:24:18PM -0400, Jeffrey Walton wrote:
> You can run particular self tests rather than the entire test suite.

The problem is, I'm changing the test suite itself so I need to run everything
to get a better chance of hitting the edge cases. Stefan is similarly working
on low-level connection code that affects most (if not all) of the protocols,
so he's doing the same. I particularly like the keyword method of choosing
tests when I can, so if I'm modifying the POP3 code, selecting test "POP3"
lets the test harness choose all the relevant tests for me instead of having to
think about it. But, if I'm working fixing a specific bug, giving a test number
is definitely the way to go.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Parallel curl testing project

2023-03-23 Thread Dan Fandrich via curl-library
On Thu, Mar 23, 2023 at 08:57:20AM +0100, Stefan Eissing wrote:
> very happy that you will work on this. I like to run the test suite locally 
> before a large push and the time it takes on my machine is around 10 minutes. 
> I'd very much appreciate that to go down!

I've been hit pretty hard with this myself lately as I've been working on
implementing this.  I've taken to having a window open that basically
continuously runs the test suite and when it fails, I look back to what I was
doing 10 minutes ago to figure out what went wrong. Surely, there's a better
way! ;-)
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: On more stable curl releases

2023-03-22 Thread Dan Fandrich via curl-library
On Wed, Mar 22, 2023 at 04:10:32PM -0700, bch via curl-library wrote:
> This is a curl binary, or a release tarball

The daily tar balls are available at https://curl.se/snapshots/

> (how much processing *does* go on
> from a repo checkout -> curl-x.y.z.tar.gz?)?

I think it's just running autoreconf and maketgz. The latter does things like
updating man pages, creating the MSVC build files, generating the CHANGES file
and running "make dist" to build the tar ball.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Parallel curl testing project

2023-03-22 Thread Dan Fandrich via curl-library
On the long road to 8.0.0, curl has taken on close to 1600 test cases 
[1] that help verify that it stays running correctly. These tests are 
one of the ways that curl stays operating reliably year after year, but 
the downside is that they can take an annoyingly long time to run.  
Normal test runs in the CI builds take between 6 and 25 minutes, and 
that's not including Valgrind or torture runs which take much longer 
than that. The test suite runs tests sequentially, so running them on a 
multi-core CPU makes hardly any difference to the run time.


Several CI services we rely on run builds sequentially, so it can take many 
hours between submitting a PR and seeing the final results. Developers 
working on their own machines are also slowed down when testing adds 10 
minutes to an edit-compile-run cycle. Speeding up a test run would make 
developers' lives that much better.


I looked into running tests in parallel a few years ago as a way to 
speed them up [2], but the testing infrastructure had various 
assumptions baked-in that would have required a commitment to do some 
major refactoring.  Since then, at least one of the hurdles has already 
been overcome (running servers on random ports [3]) and the number 
of test cases being added keeps increasing. As CPUs advance more by 
increasing the number of cores rather than making each one faster, the 
test suite's serial nature is becoming more of a bottleneck that needs 
to be addressed.


I'm glad that I'm finally going to be able to tackle this problem. I'll 
be working on parallelizing the test suite over the next few weeks, 
funded by the curl project itself.  I've put together an outline of what I 
intend to do [4] and would welcome comments as I dive in. Commits 
will reference this issue [5] if you want to follow along.


Dan Fandrich

[1]: https://curl.se/dashboard1.html#tests
[2]: https://curl.se/mail/lib-2018-10/0004.html
[3]: https://github.com/curl/curl/pull/5247
[4]: 
https://github.com/curl/curl/files/11023995/curl.parallel.testing.proposal.pdf
[5]: https://github.com/curl/curl/issues/10818
--
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: On more stable curl releases

2023-03-22 Thread Dan Fandrich via curl-library
On Wed, Mar 22, 2023 at 09:17:48AM +0100, Daniel Stenberg via curl-library 
wrote:
> So, how about this for adjusted release cycle and release management:
> 
>  - Increase the post-release ("cool down") margin before we open the feature
>window. We currently have it 5 days, we could double it to 10 days. That
>would then reduce the feature window with the same amount of days leaving
>us with 18 days to merge features.
> 
>  - Within the cool down period, we are only allowed to merge bug-fixes.
> 
>  - Lower the bar for a patch-release. Even "less important" regressions should
>be considered reason enough to do follow-up releases. And if they are not
>reported within the cool down period, chances are they are not important
>enough.
> 
>  - After a follow-up release, we start over with a new cool down period of 10
>days.
> 
>  - If we decide to do a patch release due to a regression, we set that release
>day N days into the future so that we can accumulate a few more fixes and
>get our ducks in order before we ship it. N is probably at least 7.

That looks pretty workable to me. The trick is going to be deciding when a
patch release is worthwhile. If we look at git now, 2 days after the last
release, there are already 7 commits. Two are CI adjustments (not
release-worthy), one fixes a problem introduced 9 years ago (not
release-worthy), one improves detection of bugs in debug builds (not
release-worthy) and one improves error detection in testing (not
release-worthy). So far it's pretty easy.

The last two changes fix compile problems in two platforms, OS/400 and Haiku.
Normally, I'd say that would be enough to trigger another release: users can't
build curl when they used to be able to, but these are super marginal
platforms. Are there even a dozen people out there compiling curl for them? If
the situation is such that we could send them all personal e-mails with the
patches then maybe it's not worth doing an entire new release. I really have no
idea how many people there are using this support, though. Haiku's official
curl package is only 6 releases old, so there's some development happening
there.  There have also been at least 5 people contributing to OS/400 support
over the last few years, so maybe it's more than a dozen.

Perhaps the answer will be clear in another 8 days.

Dan
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: On more stable curl releases

2023-03-21 Thread Dan Fandrich via curl-library

On Tue, Mar 21, 2023 at 12:40:28PM -0400, Timothe Litt via curl-library wrote:

I expect that with frequent patch releases, curl would end up in the situation
of most M$ releases, whose strategy is: "wait for the other people to find the
bugs and only take the nth patch release."  And with fewer users, the bugs
would take longer to turn up... which is a worsening spiral.


That's true, but it's no worse than the situation we have now.  I'm 
truly hoping that the big distros will continue releasing every version 
and not skip the regular releases or there's much less benefit to doing 
it this way.



To make a more frequent patch release scheme work, you'd need to find a group
of aggressive users/testers who'd find and report bugs early.   And be willing
to take the intermediate releases. 


It would be great if we can find more people to test pre-release curl 
versions, but my proposal essentially turns everyone who runs the 
regularly-scheduled release into such a tester, from a certain point of 
view.  Some projects that do infrequent release might post an rc1 and 
rc2 release to get wider testing before releasing the final version.  
curl, embracing the release early, release often mantra and with its 
fast pace of development, doesn't really need to do that.  Now, I'm 
deliberately mischaracterizing the proposal, but you could look at each 
regular .0 release as an rc1 and the patch release (if there is one) as 
the "real" release.  In our case we truly believe based on our tests 
that the .0 is production-ready and not an RC, but stuff happens.



And if you can do that, you can also pull them into the regular release
process...


The kind of things that usually seem to go wrong are the marginal 
features that take a large user base to find. I don't know if there 
would be enough users in the early testing pools to make a big 
difference. If we're only leaving the patch release window open for a 
week, there's also not much time for said testers to do their testing 
thing. Maybe we should label the daily snapshot one week before a 
scheduled release as "rc1" and promote it that way and see what happens?


Dan
--
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: On more stable curl releases

2023-03-21 Thread Dan Fandrich via curl-library

On Wed, Mar 22, 2023 at 12:10:56AM +0100, Daniel Stenberg wrote:
BTW, "regression" is just another word for "test coverage gap", since 
if we had tested the thing we would've detected the problem and the 
bug would not have been shipped. It is important that we learn from 
the regressions and improve the tests every time.


That's a good way of looking at it! On a related note, what's the 
current code coverage?  I haven't tried myself in a looong time, and 
there hasn't been a Coveralls build in 5 years.  That would be a great 
graph to see on https://curl.se/dashboard.html  But with all the 
different build configurations it would be hard to get a single 
meaningful number out of it; maybe that's why it hasn't happened since 
then.


I understand your thinking with your proposal, but I am scared of going that 
route: because either we need to do the patch release management in a 
separate branch, so that we can allow the main branch to merge features for 
the next feature release and then we get a huge test and maintenance 
challenge: meaning we need to do everything that in two branches. Gone are 
the days of simple git history.


My idea was to continue development in master as normal, but if 
something comes up that necessitates a point release, a point branch 
would be created and only the relevant commits would be cherry-picked 
from master into it.  There would be nothing new in such a branch so 
people wouldn't need to look at it to figure out development history.  
But, you're right, you wouldn't be able to just run git log and search 
for the point release tags any more. IIRC, there's at least one point 
release in curl's history like that already.


Or we don't do different branches because the testing would be too 
difficult to handle.


We can configure most (all?) of the CI services to run on specific 
branches, so we should be able to have testing happen automatically on 
point branches.  Worst case, someone would need to create a dummy PR 
before release, wait for the results then delete the PR. Not nice, but 
probably also not necessary.


like we would basically not merge lots of things into the master 
branch for several weeks after a release. Also not ideal.


That would definitely slow the pace of development too much.


I instead propose this much smaller and simpler change:

We lower the bar for doing follow-up patch releases for regressions reported 
already within the period between the release and the re-opening of the 
feature window. For such we can do another release again already within a 
week or two. It is usually good to not stress them too much since then we 
get the chance to also find and fix other issues in the patch release.


That might be enough of a change to improve things. It's a minimal tweak 
to the existing workflow but with the improvement that more people get 
to eventually benefit from the quiet period after a release.


I am a bit worried about the point Timothe brings up that if too many 
people (especially distributions) skip the regular releases and just 
wait for the point release, problems won't be found in time and people 
won't get a more stable point release. Still, that's really no worse 
than the place we're in now so it shouldn't stop us from trying.


Dan
--
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


On more stable curl releases

2023-03-21 Thread Dan Fandrich via curl-library
ust wait for the .1 (or timeout 
waiting).


Dan Fandrich
--
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Drop support for building with mingw v1?

2023-03-02 Thread Dan Fandrich via curl-library
On Thu, Mar 02, 2023 at 05:51:07PM +0100, Daniel Stenberg via curl-library 
wrote:
> Today I fell into an issue with PR #10651 where everything builds fine,
> execpt on Windows with mingw v1.
> 
> There's really nothing unusual with that while working on a PR, but this
> time it struck that I should at least ask the question:
> 
> Can we drop support for building with mingw v1?

Is this from msys (not msys2)? The job configuration hints at it. My
understanding is that msys is well and truly obsolete and I doubt anyone would
notice if it were dropped. Additionally, it uses gcc 9.2 which is 4 years old
and no longer supported upstream. But, I'm far from a Windows toolchain subject
expert.
-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: CURLOPT_XOAUTH2_BEARER use?

2023-02-20 Thread Dan Fandrich via curl-library
On Tue, Feb 21, 2023 at 03:19:12AM +, Matthew Bobowski wrote:
> No cast is necessary.
> 
> #define CURLAUTH_BEARER   (((unsigned long)1)<<6)

Ah, good. Many of the other constants (like CURLSSH_AUTH_* and CURLFTPAUTH_*)
*do* need that cast.
-- 
Unsubscribe: https://lists.haxx.se/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: CURLOPT_XOAUTH2_BEARER use?

2023-02-20 Thread Dan Fandrich via curl-library
On Tue, Feb 21, 2023 at 03:01:53AM +, Matthew Bobowski via curl-library 
wrote:
> c = curl_easy_setopt(pCurl, CURLOPT_HTTPAUTH, CURLAUTH_BEARER);

Don't forget to cast this to a long; this makes a difference in some
environments.

Dan
-- 
Unsubscribe: https://lists.haxx.se/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Roadmap 2023 ? -- Enhance security of curl's release

2023-02-09 Thread Dan Fandrich via curl-library
On Thu, Feb 09, 2023 at 05:14:12PM +0100, Daniel Stenberg via curl-library 
wrote:
> On Thu, 9 Feb 2023, Diogo Sant'Anna via curl-library wrote:
> > Checking https://curl.se/dev/release-procedure.html, it seems the
> > project's release is still managed manually. Have you considered
> > migrating it to an automated release — e.g., through GitHub Actions,
> > Google Cloud Build, or any other hosted build environment? This would
> > protect against human error and potentially building with incorrect
> > dependencies.
> 
> Not the strongest argument. I have made 212 curl releases to date. Not once
> have I made a mistake like that in a release. Probably because I make
> releases with the same machine and environment I use to build and develop
> curl on.

The other point to consider is that a "curl release" is not much more than
packaging the source code that's in git into a tar ball. It doesn't involve
gathering multiple library dependencies, compiling against them, then building
an installer that includes all the above, so there is not a lot that can go
wrong. Fully automating the signing step is especially tricky in order to
maintain an adequate level of security.  You can read about the release
procedure in docs/RELEASE-PROCEDURE.md

That said, these days there is a Windows binary released in parallel with the
source code that does involve at least some of those steps, and I don't know
the details of how it's generated. It's certainly not done manually, even if it
might be triggered manually.

Dan
-- 
Unsubscribe: https://lists.haxx.se/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Close sockets asynchronously when using libdispatch (GCD)

2023-01-31 Thread Dan Fandrich via curl-library
On Tue, Jan 31, 2023 at 09:58:13AM +0100, Frederik Seiffert wrote:
> Could you please explain what you mean by "compiling with a different 
> resolver"? I didn’t see any build options like that. Do you maybe mean 
> building with "CURL_DISABLE_SOCKETPAIR"?

I mean using the --disable-threaded-resolver or --enable-ares configure options
(or whatever the cmake equivalent is) to use a different DNS resolver. I
thought that it was only the default threaded resolver that used a socketpair,
but in looking through the code it seems that curl_multi_poll() also uses it, so
that function would need to be avoided to avoid its use.
-- 
Unsubscribe: https://lists.haxx.se/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: HTTP/2 PING : how can execute curl_easy_perform function only to connect to the server side

2023-01-18 Thread Dan Fandrich via curl-library
Are you looking for CURLOPT_CONNECT_ONLY?
-- 
Unsubscribe: https://lists.haxx.se/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Close sockets asynchronously when using libdispatch (GCD)

2023-01-16 Thread Dan Fandrich via curl-library
On Mon, Jan 16, 2023 at 04:30:48PM +0100, Frederik Seiffert via curl-library 
wrote:
> When receiving CURL_POLL_REMOVE, I call dispatch_source_cancel() [2] to stop 
> the dispatch source. As this is done asynchronously, it is required to wait 
> for the cancellation handler before closing the socket according to the 
> documentation:
> 
> > The cancellation handler is submitted to the source's target queue when the 
> > source's event handler has finished, indicating that it is safe to close 
> > the source's handle (file descriptor or mach port).
> 
> However libcurl closes the socket immediately after calling the socket 
> function, and at least on Windows this causes GCD to sometimes crash because 
> WSAEventSelect() returns WSAENOTSOCK ("Socket operation on nonsocket") here: 
> [3].
> 
> Does anyone have a suggestion as to how to work around this? The only thing I 
> can think of is to use CURLOPT_CLOSESOCKETFUNCTION and wait for the 
> cancellation handler before closing the socket. Would this be the recommended 
> approach? I’m fairly new to this topic, so I might be missing something 
> obvious.

IMHO, that sounds like a good approach. curl assumes POSIX semantics on sockets
which allow them to be closed at any time. If your environment doesn't allow
that, then hooking in to CURLOPT_CLOSESOCKETFUNCTION sounds like a good way
to maintain those semantics.

> I found that building libcurl with "CURL_DISABLE_SOCKETPAIR" fixes most these 
> crashes, but this seems like a poor workaround and some crashes remain.

I agree. That option just stops one source of socket closes, but others will
remain.

Dan
-- 
Unsubscribe: https://lists.haxx.se/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Accidental debug-enabled version

2023-01-11 Thread Dan Fandrich via curl-library
On Wed, Jan 11, 2023 at 02:24:57PM +0100, Daniel Stenberg via curl-library 
wrote:
> Like this:
> 
> $ curl -V
> WARNING: this libcurl is Debug-enabled, do not use in production

Writing the same thing as the first line of -v output would double the chance
it's actually seen.
-- 
Unsubscribe: https://lists.haxx.se/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


Re: Time to deprecate gskit

2023-01-06 Thread Dan Fandrich via curl-library
On Fri, Jan 06, 2023 at 01:44:48PM -0400, Calvin Buckley via curl-library wrote:
>   - I have capacity on a shared system intended for open source 
>   development. I should be able to set up some kind of CI runner here.  
>   The very annoying part is Go isn't supported (bar an experimental  
>   port of 1.16), so the stock GH Actions runner et al is likely to 
>   cause trouble. I know Jenkins' runner works fine.

It might be easier to integrate this into the autobuilds at 
https://curl.se/dev/builds.html  There are Solaris and NetBSD builds by 
third parties going on there right now (among others). But, they're not 
integrated into pull requests and you have to manually go looking there 
for failures after a merge, so it's not a great substitute. It's better 
than no builds at all, though.

Dan
-- 
Unsubscribe: https://lists.haxx.se/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html


[tellico] [Bug 463717] New: Internet Search on IMDB always returns a single, bogus result with Title "Find - IMDb" and nothing else

2023-01-02 Thread Dan Fandrich
https://bugs.kde.org/show_bug.cgi?id=463717

Bug ID: 463717
   Summary: Internet Search on IMDB always returns a single, bogus
result with Title "Find - IMDb" and nothing else
Classification: Applications
   Product: tellico
   Version: 3.4.5
  Platform: Flatpak
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: ro...@periapsis.org
  Reporter: d...@coneharvesters.com
  Target Milestone: ---

SUMMARY
Internet Search on IMDB always returns a single, bogus result with Title "Find
- IMDb" and nothing else.

STEPS TO REPRODUCE
1. File→Collection→New Video collection
2. Settings→Configure toolbars, then add an "Internet Search" button to the
toolbar.
3. Press said button.
4. Use anything as a Title search term (e.g. "anything"), select  "Internet
Movie Database" as the source, then press Search.

OBSERVED RESULT
A single result, with title "Find - IMDb". Clicking on the entry shows some
information that looks like some kind of raw search results under "Plot
Summary:", e.g.
","title_main_storyline_link_plotSynopsis":"Plot
synopsis","title_main_storyline_title":"Storyline","title_main_techSpecs_title":"Technical
specs","title_main_techspec_aspectratio":"Aspect
ratio","title_main_techspec_camera":"Camera","title_main_techspec_color":"Color","title_main_techspec_runtime":"Runtime","title_main_techspec_soundmix":"Sound
mix","title_main_tuneInMessage_label_comingSoon":"Coming
soon","title_main_tuneInMessage_label_dateOnly":"{date}","title_main_tuneInMessage_label_expected":"Expected
{date}","title_main_tuneInMessage_label_inTheatersNow":"In Theaters
Now","title_main_tuneInMessage_label_inTheatersThursday":"In Theaters
Thursday","title_main_tuneInMessage_label_inTheatersTomorrow":"In Theaters …

EXPECTED RESULT
Dozens of movies listed in the results table with "anything" in the title, such
as what happens if TMDb is selected as the source.

SOFTWARE/OS VERSIONS
Windows: 
macOS: 
Linux/KDE Plasma: 4.14.38
(available in About System)
KDE Plasma Version: 4.14.38
KDE Frameworks Version: 5.101.0 (in Flatpak)
Qt Version: 5.15.7 (in Flatpak)

ADDITIONAL INFORMATION
Collection→Update Entry→Internet Movie Database on a sample entry also silently
does not update it, whereas Collection→Update Entry→TheMovieDB (English) does.

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: verbose Log from libcurl to a file

2022-12-30 Thread Dan Fandrich via curl-library
On Fri, Dec 30, 2022 at 05:24:46PM +, Samantray Bhuyan, Madhusudan (GE 
Digital) via curl-library wrote:
> How to I redirect libcurl verbose output to a log file . I found https://
> stackoverflow.com/questions/38720602/
> getting-verbose-information-from-libcurl-to-a-file but the log file is empty
> for me.
> 
> Using curl 7.79.1 on windows

Are you compiling libcurl yourself or are you using a pre-built DLL? As I
recall, Windows has a problem when using stdio when two different CRTs are
involved, one for libcurl and one for your application. The two versions may
have incompatible structures, so creating a FILE* in your application with one
CRT and passing it to libcurl that uses it with another might not work.

If you don't want to compile your own libcurl, then you'll probably have to
install your own CURLOPT_DEBUGFUNCTION so you have complete control over which
functions are used to write to the log file.

Dan


Re: option to disallow IDN ?

2022-12-16 Thread Dan Fandrich via curl-library
On Fri, Dec 16, 2022 at 01:18:12PM -0500, Timothe Litt via curl-library wrote:
> And/or the callback registration could specify "all domain names", "Just IDN" 
> -

The browsers (at least Firefox) do something subtle but pretty useful for
avoiding spoofing.  Based on the name registration policies of the TLD being
used, they either show the IDN as expected in the URL bar, or just show the
ugly punycode version of the name. TLDs whose registration policies forbid
confusable names (homographic attacks) get the desired behaviour of showing
the IDN name, but those with no such policy, or with one that still allows
confusion, get the punycode version, making it obvious that some spoofing
may have gone on to get you to that web page. Mozilla's original
policy can be seen here:
https://www-archive.mozilla.org/projects/security/tld-idn-policy-list

They've amended that policy since to allow displaying IDN in some cases even on
those TLDs with bad or nonexistent policies. This only happens if all the
characters in the TLD come from the same script. If a TLD mixes, for example,
Cyrillic and Latin characters, it's displayed as punycode, but all Cyrillic is
shown in all its Unicode glory. The idea is that people (who can read that
script) will recognize the different characters within that script and be able
to tell them apart, and there won't be any mixing of similar-looking characters
within a single domain name. That policy can be seen at
https://wiki.mozilla.org/IDN_Display_Algorithm

Lots of thought has been given to this problem already (Mozilla seems to have
implemented the first policy 17 years ago), and curl could take advantage of
that. But, since it's not a browser it can't use the same means of notifying
the user (displaying punycode in the URL bar), but some viable alternatives
to that have already been brought up here.

Dan


Re: credentials in memory

2022-11-19 Thread Dan Fandrich via curl-library
On Fri, Sep 30, 2022 at 09:43:39AM +0200, Daniel Stenberg via curl-library 
wrote:
> libcurl hold credentials (passwords for servers and proxies) in memory in
> clear text, potentially for a long time. If something goes wrong and that
> memory is accessed by an external party, things would be bad.
> 
> Is it worth doing something about?

It's important to define what exactly the threat is we want to protect against. 
These are the main ones I can think of:

1. local attackers with access to the process & its memory
2. local attackers with access to memory, core dumps, swap space or hibernation
files
3. remote attackers tricking curl into returning secrets from memory (via a
Heartbleed-style attack or by returning uninitialized stack or heap space, for
example)

In case 1, if an attacker can access the process's resources and memory,
there's not much left to protect. If the attacker has the same rights as the
process, then everything that curl can access, including passwords, the
attacker can, too. Even if secrets are encrypted, the attacker has access to
the same decryption keys as the real process so he can just decrypt them. I
don't see any way to protect against this in the general case.

However, if the attacker somehow only has access to the memory and not the rest
of the process's assets (case 2), then use of a hardware security device can
protect the keys from being stolen directly. But there will be some times that
curl needs the raw secrets in order to pass them to other dependencies or write
them into buffers, and while they're in memory, the attacker can still get them.
And if they're in memory, they can end up on disk in a core or hibernation file
where the attacker can read them.

Case 3 is the most interesting one that this proposal could help mitigate.
A Heartbleed-style bug that gives arbitrary memory to an attacker could return
memory containing a secret. If secrets are not stored in plaintext in RAM, then
it becomes much harder to obtain those secrets. But it's still not perfect.

Here are some possible mitigations we could implement in curl:

1. Clearing secrets with memset once their need is over. If the secrets are
out of RAM, they won't end up in core files, swap files or process debuggers.
However, curl needs some secrets for the entire lifetime of the process (e.g.
proxy credentials) so it can't clear all of them all the time. Also, curl needs
them in RAM for a short time to use them before they're cleared, so an attacker
could just grab them at that time.
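As an aside on mitigation 1, the clearing step is trickier than a plain
memset(): compilers may eliminate a memset whose buffer is never read again.
A common workaround (a sketch only; explicit_bzero() or memset_s() are
preferable where available) routes the call through a volatile function
pointer:

```c
#include <string.h>
#include <stddef.h>

/* A dead-store-resistant wipe: calling memset through a volatile
   function pointer keeps the compiler from proving the store is
   unused and optimizing it away. */
static void *(*const volatile secure_memset)(void *, int, size_t) = memset;

static void wipe_secret(void *buf, size_t len)
{
  secure_memset(buf, 0, len);
}
```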

2. Creating a random session key at startup and encrypting secrets as soon as
possible using that key. This suffers from the same problem mentioned before,
in that the secrets have to be decrypted to use them, even if only for a short
time. Also, encrypting many secrets using the same key makes it easier for an
attacker to guess that key. So, a Heartbleed-style attack that is able to
obtain many encrypted secrets could still obtain the decryption key.
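As a toy sketch of mitigation 2's encrypt-in-RAM idea (illustration only; a
real implementation would use an authenticated cipher keyed from a CSPRNG,
not this scheme), masking and unmasking are the same XOR operation:

```c
#include <stddef.h>

/* XOR the secret in place with a same-length random mask generated at
   startup; applying the same mask a second time recovers the plaintext,
   so the secret spends most of its lifetime obscured in RAM. */
static void xor_mask(unsigned char *secret, const unsigned char *mask,
                     size_t len)
{
  for(size_t i = 0; i < len; i++)
    secret[i] ^= mask[i];
}
```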

3. Creating a new random key for each secret. This avoids the key reuse problem
in mitigation 2. by creating a new, random key for each and every new secret
that needs to be protected. The overhead is greater but it's more secure.
However, those random keys are still stored in RAM so a Heartbleed-style attack
that obtains the encrypted secrets might at some point also obtain the
decryption keys as well, exposing those secrets.

4. Storing the random keys from 3. in a hardware security module. This avoids
the possibility that the attacker might obtain the decryption keys by storing
them in an HSM. Unfortunately, curl still needs to decrypt them eventually and
they'll be in RAM for at least a short time while that happens, during which
time they're still subject to being obtained by an attacker.

So, no matter how much we are able to protect those keys, they'll still be
vulnerable to an attacker. All we can do is reduce the time that they're
vulnerable, which just increases the time an attacker needs to get them, not
whether or not they *can* get them. Still, reducing the time could turn a
practical, but slow, attack into an impractical one.

And, unless applications do the same that curl does, these mitigations are of
limited use since the application itself becomes the weak link in the chain.
But, if libcurl does something then we can at least point our fingers elsewhere
if the application doesn't take the same amount of care.

We also need to consider the costs of implementing a solution: implementation
costs, maintenance costs, increased difficulty in debugging, performance
degradation, portability, etc.

This isn't my area of expertise, and I'm sure there have been many PhD
theses written on exactly what I'm speculating about above. It's a complicated
topic, and we should have a clear idea of the benefits of retrofitting some
kind of protection into curl before doing it.

Dan

Re: Slow Performance with libcurl when changing Ip Address

2022-11-01 Thread Dan Fandrich via curl-library

On Tue, Nov 01, 2022 at 08:42:45PM +0800, frankfreak via curl-library wrote:

curl 7.70.0 (x86_64-pc-linux-gnu) libcurl/7.29.0 NSS/3.53.1 zlib/1.2.7 libidn/
1.28 libssh2/1.8.0


You're using mismatched versions: an over 2-year-old CLI (7.70.0) with an
almost 10-year-old library (7.29.0). While that combination should technically
work, you're going to find it hard to find someone here willing to help
debug it. Try using the latest version (7.86.0) and ask again if there
are still problems.



Re: Tgz file is not copying properly

2022-10-28 Thread Dan Fandrich via libssh2-devel
On Fri, Oct 28, 2022 at 12:54:55PM +0530, Gautham Kumar via libssh2-devel wrote:
> We have a device using curl with the libssh2 to copy some files from the 
> device
> to the server. When I try to use the curl with SCP option the file gets
> corrupted. This issue happens only with SCP on curl.

Is it a curl bug then? Does it happen with (an appropriately modified)
libssh2/example/scp.c ?

> -bash-4.2$ tar -tf bundle_file.tgz
> 
> gzip: stdin: invalid compressed data--format violated
> tar: Unexpected EOF in archive
> tar: Error is not recoverable: exiting now

How do the bytes of the bad file differ from the good file?

Dan
-- 
libssh2-devel mailing list
libssh2-devel@lists.haxx.se
https://lists.haxx.se/listinfo/libssh2-devel


Re: [RELEASE] curl 7.86.0

2022-10-26 Thread Dan Fandrich via curl-library
On Wed, Oct 26, 2022 at 10:26:40AM -0400, Randall via curl-library wrote:
> If we build under 64-bit, which is scheduled later in the
> year, then there is no need to override the defaults. Do you want a PR for
> this?

Keep in mind that this will cause an ABI break on this platform.


Re: On CURLOPT_AUTOREFERER privacy

2022-10-17 Thread Dan Fandrich via curl-library
On Mon, Oct 17, 2022 at 04:34:05PM +0200, Daniel Stenberg via curl-library 
wrote:
> On Mon, 17 Oct 2022, Timothe Litt via curl-library wrote:
> 
> > > My initial PR for this work: https://github.com/curl/curl/pull/9750
> > > 
> > Why change the default behavior?
> 
> For improved privacy. Because the browsers sort of do it like this.

I agree with Timothe that this doesn't seem worth breaking backward
compatibility. I discovered only recently that browsers have changed their
behaviour in this area, when a site that was depending on receiving the full URL
broke.  If someone is going to the trouble of enabling this option, then
they're doing so for a good reason and there's a reasonable chance they need
the full URL. I'm all for adding the host-only behaviour as an
option, but not for making it the default. I could probably be convinced to
change it in curl 8, when there's an expectation of some changes in behaviour.

Dan


Re: Undefined reference of a new libcurl function

2022-10-13 Thread Dan Fandrich via curl-library
On Thu, Oct 13, 2022 at 08:46:20AM +, Arnaud Compan via curl-library wrote:
> In details, I've added the function in lib/multi.c:
>   void my_test(struct Curl_easy *data)
>   {
>   }
> And in include/curl/multi.h:
>   CURL_EXTERN void my_test(CURL *curl_handle);

The function signatures do not match. This will cause a problem with some
compilers. Try making the arguments in both cases the same type.
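A sketch of the fix, under the assumption that libcurl's public CURL typedef
resolves to the internal struct Curl_easy (my_test is the hypothetical
function from the question; CURL_EXTERN is omitted here for brevity):

```c
/* Inside libcurl, CURL is a typedef for the internal easy handle;
   using the same typedef in both the header declaration and the
   definition keeps the two signatures identical. */
typedef struct Curl_easy CURL;

/* include/curl/multi.h (declaration) */
void my_test(CURL *curl_handle);

/* lib/multi.c (definition, same parameter type as the declaration) */
void my_test(CURL *curl_handle)
{
  (void)curl_handle; /* placeholder body */
}
```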


Re: App fails when rebuilt with newer library, but only when MTU is small

2022-09-29 Thread Dan Fandrich via curl-library
On Thu, Sep 29, 2022 at 09:58:58PM +, Mark Fanara wrote:
>> Some servers have an issue with 100-continue, and I don't recall which 
>> version
>> of libcurl enabled it by default. You could try disabling it and see what
>> happens. It theoretically shouldn't have anything to do with MTU, though.
>> What happens in this case when the MTU is greater? How do the client and 
>> server
>> handle this 100-continue?
> 
> I don't know how to disable it. Change the timeout to 0ms ?

It's a FAQ: https://curl.se/docs/faq.html#My_HTTP_POST_or_PUT_requests_are

> When the MTU is larger, the TLS packet sequence continues

This seems unlikely to me to be a curl issue. It sounds more likely to be
something lower in the networking stack that's interfering. Try using Wireshark
to sniff the traffic to see if the short application data packets are getting
through, and if they are, if they're being acked.

Dan


Re: App fails when rebuilt with newer library, but only when MTU is small

2022-09-29 Thread Dan Fandrich via curl-library
On Thu, Sep 29, 2022 at 08:05:00PM +, Mark Fanara wrote:
> Sorry if my response is not per best practices as far as formatting goes. I 
> will respond to a number of your questions here rather than inline.

See https://curl.se/mail/etiquette.html#Do_Not_Top_Post

> As to MTU - I of course can't increase the MTU on the fielded system, so I 
> have a test system that does not include the wireless link. I have a wired 
> router in place of the wireless router. I have set the MTU of the outside 
> interface to value matching the wireless router. When I do that, the file is 
> not sent. When I simply change the MTU back to default 1500, all works fine. 
> I can use this setup with and old client/sending device and it works 
> regardless of MTU.

Ok, it sounds like you're pretty confident that MTU is the issue. Did the old
OS use the same MTU?

> *  SSL certificate verify result: unable to get local issuer certificate 
> (20), continuing anyway.

I assume you're deliberately not verifying certificates to get around this
issue, but it's bad for security.

> Expect: 100-continue
> 
> * Expire in 1 ms for 0 (transfer 0x10823a80)    I changed this value 
> from 1000ms to 1ms to see if it made any difference
> * Done waiting for 100-continue

Some servers have an issue with 100-continue, and I don't recall which version
of libcurl enabled it by default. You could try disabling it and see what
happens. It theoretically shouldn't have anything to do with MTU, though.
What happens in this case when the MTU is greater? How do the client and server
handle this 100-continue?

> * OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
> * Closing connection 0
> 
> libcurl: (56) ERROR : OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104

errno 104 is ECONNRESET "Connection reset by peer", which makes it look like
the server is closing the connection while waiting for the rest of the message
after the 100-continue. I might have thought it could be bad timing on a slow
wireless link, except that you tried changing that timeout without a difference
in behaviour.

It just occurred to me that the 564 byte MTU your link has is less than the
RFC 791 specified minimum 576 byte datagrams for IPv4. But, that is after
reassembly, and 564 bytes is more than the 68 byte minimum datagram size, so
you should actually be OK here. But, it's possible something is in the path
that assumes 576 byte minimums and is improperly messing things up.

Dan


Re: App fails when rebuilt with newer library, but only when MTU is small

2022-09-29 Thread Dan Fandrich via curl-library
On Thu, Sep 29, 2022 at 01:58:23PM +, Mark Fanara via curl-library wrote:
> Recently the device vendor updated the OS image to Debian Buster. 

The subject of this message doesn't match this line. If it's an OS upgrade that
happened, then it's much more than just a newer libcurl that's changed. In
effect, absolutely everything other than the hardware (and possibly your
application) has changed, including the OS, drivers, configuration files, libc,
libcurl as well as every other library in the system. Figuring out which of
those actually made things stop working is your challenge.

> After rebuilding the application, it is no longer able to push the data to
> the endpoint

How is it no longer able? What are the symptoms? Is it a client issue? Server
issue? What is failing and how?

> If I increase the MTU (bypass the wireless link), the push is completed.

Bypassing the wireless link sounds like it would cause a lot more changes than
just altering the MTU. How do you know that the MTU is the part that matters?
Have you tried lowering the MTU on the working networking transport to see if
it stops working in the same way?

> - Upgrading to newest libcurl is not feasible because of reported library 
> dependencies. i.e. newer version is dependent upon newer version of libc 
> which I am unable to update.

You can compile a recent libcurl very easily and link your application against
that.  Even if you decide not to release it that way, it can give you valuable
data for testing. There have been 2481 bug fixes in curl since 7.68.0, after
all.

> Are there any known changes to libcurl (or other dependent libraries) that 
> would be MTU sensitive? 

There were 2503 bug fixes and 162 other changes between versions 7.38.0
and 7.64.0.  libcurl's dependencies have likely changed similarly. You
are welcome to check yourself if any of those are relevant in your case.

> Any suggestions on where to go from here?

I'd first figure out what exactly is causing the push to fail. Is the
application getting an error code? Is it getting a remote server failure code?
Is it timing out? That will help drive following steps. Next, enable libcurl
logging to see if something useful is coming out of that. It's probably also
worthwhile trying to run the old application (linked to the old libraries) on
the new system. Copy the binary and all the necessary libraries to a directory
on the new system and run the binary using LD_LIBRARY_PATH to point it to the
old libraries. This isn't failsafe, since the old libraries will be reading
configuration files intended for newer versions, but there's a good chance
they'll be compatible enough to run, and it's easy to try.

Dan


Re: [EXTERNAL] Re: Feature request: provide ability to set a global callback function telling libcurl if IPv6 works on the system

2022-09-23 Thread Dan Fandrich via curl-library
On Thu, Sep 22, 2022 at 11:48:58PM +, Dmitry Karpov via curl-library wrote:
> And for me the biggest problem is that I just can't change the code of 
> certain curl-based components used in my application.
> They are written by some other developers and closed for any modifications.

That is what I suspected, and it's why I described the underlying issue as a
"software engineering" one. 

Dan


Re: [EXTERNAL] Re: Feature request: provide ability to set a global callback function telling libcurl if IPv6 works on the system

2022-09-21 Thread Dan Fandrich via curl-library
On Wed, Sep 21, 2022 at 06:21:08PM -0700, Dan Fandrich wrote:
> On Thu, Sep 22, 2022 at 12:24:57AM +, Dmitry Karpov via curl-library 
> wrote:
> >  > If Curl_ipv6works() were not called in the CURL_IPRESOLVE_V6 case, would
> >  > that solve the issues that are remaining?
> > 
> > Your question was:
> > "So, if libcurl eliminated that call in the CURL_IPRESOLVE_V4 case, would 
> > it fix your problem?"
> 
> That was my previous question. The question above refers to CURL_IPRESOLVE_V6.
> 
> > YES, adding a callback doesn't fix anything on its own. But it allows curl 
> > applications to work around problems/regressions caused by default 
> > Curl_ipv6works() behavior.
> 
> I'm betting that this problem can be fixed without adding a new callback,
> especially one that goes against the way libcurl works, namely that callbacks
> are used to change behaviour during a transfer.

You also didn't answer this question:

> Since you're calling it a regression, then where did that regression occur? 
> Was
> it in libcurl or in the kernel? Maybe this problem needs to be solved
> elsewhere.

