[update] geo/traccar 5.12

2024-01-30 Thread Renaud Allard

Hello,

Here is an incredibly short diff for traccar 5.12.
This solves some bugs which were introduced in 5.11 with the rewrite of 
parts of the code.


Best Regards

Index: Makefile
===
RCS file: /cvs/ports/geo/traccar/Makefile,v
retrieving revision 1.36
diff -u -p -r1.36 Makefile
--- Makefile	16 Jan 2024 08:58:47 -	1.36
+++ Makefile	30 Jan 2024 08:06:29 -
@@ -1,5 +1,5 @@
 COMMENT =	modern GPS tracking platform
-V =		5.11
+V =		5.12
 PKGNAME =	traccar-${V}
 DISTNAME =	traccar-other-${V}
 EXTRACT_SUFX =	.zip
Index: distinfo
===
RCS file: /cvs/ports/geo/traccar/distinfo,v
retrieving revision 1.23
diff -u -p -r1.23 distinfo
--- distinfo	16 Jan 2024 08:58:47 -	1.23
+++ distinfo	30 Jan 2024 08:06:29 -
@@ -1,2 +1,2 @@
-SHA256 (traccar-other-5.11.zip) = unR2zEmZ8yYwdaBzV64DMFYvwjMjaH/2NggcsLyavCM=
-SIZE (traccar-other-5.11.zip) = 142491091
+SHA256 (traccar-other-5.12.zip) = JaYb6p1G1cGyY8mlP72yg0wMPvHA2qVK6ZV5wKtKzN8=
+SIZE (traccar-other-5.12.zip) = 142494458
Index: pkg/PLIST
===
RCS file: /cvs/ports/geo/traccar/pkg/PLIST,v
retrieving revision 1.26
diff -u -p -r1.26 PLIST
--- pkg/PLIST	16 Jan 2024 08:58:47 -	1.26
+++ pkg/PLIST	30 Jan 2024 08:06:29 -
@@ -1252,7 +1252,7 @@ share/traccar/modern/
 share/traccar/modern/apple-touch-icon-180x180.png
 share/traccar/modern/assets/
 share/traccar/modern/assets/alarm-zNGFGtq_.mp3
-share/traccar/modern/assets/index-3K2c-a3q.js
+share/traccar/modern/assets/index-KtPqO9zj.js
 share/traccar/modern/assets/index-bRgzYBc7.css
 share/traccar/modern/assets/roboto-cyrillic-300-normal--po7MILF.woff2
 share/traccar/modern/assets/roboto-cyrillic-300-normal-FF-TwrnM.woff




Re: Trying to install Apache 2.4 with OpenSSL 1.1 instead of LibreSSL

2024-01-30 Thread Giovanni Bechis
On Mon, Jan 29, 2024 at 07:45:27PM +, Stuart Henderson wrote:
> On 2024/01/29 09:51, giova...@paclan.it wrote:
> > On 1/26/24 23:11, Tim wrote:
> > > I'm trying to troubleshoot an issue where Chrome/Chromium browsers
> > > randomly fail to correctly use SSL against my web server.
> > > 
> > This is a known issue, see 
> > https://marc.info/?l=openbsd-ports&m=167449054903277&w=2
> > 
> > > So I am trying to compile and install an apache-http port with OpenSSL 1.1
> > > library instead of LibreSSL.
> > > 
> > > I have managed to compile and install this custom port, however, I
> > > don't know if I ultimately succeeded because when it starts it still
> > > says this in the log file:
> > > 
> > > [Fri Jan 26 14:02:57.131803 2024] [mpm_prefork:notice] [pid 67010] 
> > > AH00163: Apache/2.4.58 (Unix) LibreSSL/3.8.2 configured -- resuming 
> > > normal operations
> > > 
> > > Is this message wrong?  Or am I still ending up with an Apache2
> > > compiled against LibreSSL instead of OpenSSL?
> 
> > you can find it by running "ldd /usr/local/lib/apache2/mod_ssl.so".
> 
> That will show the libraries used but not the headers. (It is possible
> to compile with openssl libraries but libressl headers - that will cause
> problems too).
> 
> I didn't check where httpd gets this version number in the log entry
> from, but it can either be a function in one of the libraries
> (libssl/libcrypto), or from the opensslv.h header.
> 
> Even if you get apache-httpd built against the correct libraries, some
> of the other libraries which it pulls in are built using libressl
> libraries. Those will need to be rebuilt using openssl too. This
> includes apr-util and curl - but curl is used widely in the ports tree
> and you're likely to cause problems for other installed packages if you
> change that.
> 
> Basically: building against a non-default version of a widely used
> library is a hard problem and really best avoided.
> 
> If your setup is reasonably simple, you may be able to use the
> workaround of a single cert with a bunch of additional hostnames in
> subjectAltName. In that case, SNI is not needed for the site to work,
> and that will almost certainly be the easiest way...
> 
> Another possible approach (untested)...
> 
what about this one so I can commit it upstream as well ?
 Giovanni

Index: modules/ssl/ssl_private.h
===
--- modules/ssl/ssl_private.h   (revision 1915475)
+++ modules/ssl/ssl_private.h   (working copy)
@@ -249,7 +249,7 @@
 #endif
 
 /* ALPN Protocol Negotiation */
-#if defined(TLSEXT_TYPE_application_layer_protocol_negotiation)
+#if !defined(LIBRESSL_VERSION_NUMBER) && defined(TLSEXT_TYPE_application_layer_protocol_negotiation)
 #define HAVE_TLS_ALPN
 #endif
 

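For the version-number question quoted above, one minimal standalone check
(an illustration only, not part of the patch) is to compare what the headers
say at compile time with what the linked libcrypto reports at run time;
OPENSSL_VERSION_TEXT and OpenSSL_version() are available in OpenSSL 1.1 and
in recent LibreSSL:

/*
 * Illustrative only: print the TLS library version seen by the headers
 * at compile time and the version reported by the linked libcrypto at
 * run time.  If the two disagree, the build mixed one library's headers
 * with the other library's .so.
 */
#include <stdio.h>
#include <openssl/crypto.h>
#include <openssl/opensslv.h>

int
main(void)
{
        printf("headers (compile time): %s\n", OPENSSL_VERSION_TEXT);
        printf("library (run time):     %s\n",
            OpenSSL_version(OPENSSL_VERSION));
        return 0;
}

Build it with the same -I/-L flags as the port in question and it shows
whether headers and library actually match.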

Re: Trying to install Apache 2.4 with OpenSSL 1.1 instead of LibreSSL

2024-01-30 Thread Theo Buehler
On Tue, Jan 30, 2024 at 11:01:24AM +0100, Giovanni Bechis wrote:
> On Mon, Jan 29, 2024 at 07:45:27PM +, Stuart Henderson wrote:
> > On 2024/01/29 09:51, giova...@paclan.it wrote:
> > > On 1/26/24 23:11, Tim wrote:
> > > > I'm trying to troubleshoot an issue where Chrome/Chromium browsers
> > > > randomly fail to correctly use SSL against my web server.
> > > > 
> > > This is a known issue, see 
> > > https://marc.info/?l=openbsd-ports&m=167449054903277&w=2
> > > 
> > > > So I am trying to compile and install an apache-http port with OpenSSL 
> > > > 1.1
> > > > library instead of LibreSSL.
> > > > 
> > > > I have managed to compile and install this custom port, however, I
> > > > don't know if I ultimately succeeded because when it starts it still
> > > > says this in the log file:
> > > > 
> > > > [Fri Jan 26 14:02:57.131803 2024] [mpm_prefork:notice] [pid 67010] 
> > > > AH00163: Apache/2.4.58 (Unix) LibreSSL/3.8.2 configured -- resuming 
> > > > normal operations
> > > > 
> > > > Is this message wrong?  Or am I still ending up with an Apache2
> > > > compiled against LibreSSL instead of OpenSSL?
> > 
> > > you can find it by running "ldd /usr/local/lib/apache2/mod_ssl.so".
> > 
> > That will show the libraries used but not the headers. (It is possible
> > to compile with openssl libraries but libressl headers - that will cause
> > problems too).
> > 
> > I didn't check where httpd gets this version number in the log entry
> > from, but it can either be a function in one of the libraries
> > (libssl/libcrypto), or from the opensslv.h header.
> > 
> > Even if you get apache-httpd built against the correct libraries, some
> > of the other libraries which it pulls in are built using libressl
> > libraries. Those will need to be rebuilt using openssl too. This
> > includes apr-util and curl - but curl is used widely in the ports tree
> > and you're likely to cause problems for other installed packages if you
> > change that.
> > 
> > Basically: building against a non-default version of a widely used
> > library is a hard problem and really best avoided.
> > 
> > If your setup is reasonably simple, you may be able to use the
> > workaround of a single cert with a bunch of additional hostnames in
> > subjectAltName. In that case, SNI is not needed for the site to work,
> > and that will almost certainly be the easiest way...
> > 
> > Another possible approach (untested)...
> > 
> what about this one so I can commit it upstream as well ?

Please do not.

>  Giovanni
> 
> Index: modules/ssl/ssl_private.h
> ===
> --- modules/ssl/ssl_private.h (revision 1915475)
> +++ modules/ssl/ssl_private.h (working copy)
> @@ -249,7 +249,7 @@
>  #endif
>  
>  /* ALPN Protocol Negotiation */
> -#if defined(TLSEXT_TYPE_application_layer_protocol_negotiation)
> +#if !defined(LIBRESSL_VERSION_NUMBER) && defined(TLSEXT_TYPE_application_layer_protocol_negotiation)
>  #define HAVE_TLS_ALPN
>  #endif
>  



Re: [Fwd: Re: net/i2pd: move login.conf(5) bits from README to i2pd.login]

2024-01-30 Thread beecdaddict
I see the confusion I made, I am sorry: when I said routers crash I meant
actual ISP hardware routers.

I am not sure how torrenting with i2pd should increase the risk of crashing,
as connections are pre-made with I2P; qBittorrent, I think, is just using a
proxy, so the connections being made shouldn't increase the FD count? I am not
sure exactly, but something tells me that the connections made by torrenting,
and thus the increased FD count, are already handled by i2pd, since i2pd has
connections in place already. I am not sure, but this makes some sense.

OpenBSD should be the ideal OS choice, because security shouldn't come with
compromises, and those that do are incapable of divine intellect creation!
I think OpenBSD should suffice for the use case, not sure.
If something crashes or is not at 100% potential, something should be fixed.

Like I asked and no one answered: where can I check the HARD LIMIT of my
computer? What does it depend on, the CPU? Where is the utility that shows the
maximum number of FDs, plus per-running-process FD usage and its maximum
setting? If this does not exist, why not?
If the user has to set FD limits manually and know the potential of programs,
OpenBSD and the hardware, where is the utility to help with that? I did search
on the internet, all shit..

- best regards, and with hope, because I thought no one was interested

On Mon, January 29, 2024 10:23 pm, open...@systemfailure.net wrote:
> As I implied in another message, this file limit problem, causing your i2pd
> instance to crash, is not related to i2pd itself but to torrenting. I guess
> OpenBSD, with its strict security defaults, may not be the ideal operating
> system for high volume torrenting...
>
>
> On Sunday, January 28th, 2024 at 11:41 AM, beecdaddict at danwin1210.de
>  wrote:
>
>
>> and that doesn't cover routers crashing/rebooting? is there anything to be
>> done about that? the router also crashes with high normal clearnet traffic
>> torrenting.. a little off topic so sorry, perhaps the router ran out of file
>> descriptors xd
>>
>
>> On Sat, January 27, 2024 10:34 pm, open...@systemfailure.net wrote:
>>
>>
>
>>> i2pd has always been working fine for me with the port's default values
>>> of openfiles-cur=8192, openfiles-max=8192 and kern.maxfiles=16000. These
>>> values are probably even overkill according to i2pd's documentation.
>>>
>
>>> But I'm not using it for torrenting, and my router is not a floodfill. I
>>> guess that torrenting may exhaust available file descriptors pretty
>>> quickly.
>>>
>
>>> My 2 cents.
>>>
>>>
>
>>> On 2024-01-27 19:29 beecdadd...@danwin1210.de wrote:
>>>
>>>
>
 -- Original Message
 --
 Subject: Re: net/i2pd: move login.conf(5) bits from README to i2pd.login
  From: beecdadd...@danwin1210.de
 Date: Sat, January 27, 2024 7:16 pm
 To: "Stuart Henderson" s...@spacehopper.org
 
 
 --

>>>
>
 this software crashes all lower-bandwidth routers I tried using it on.
 my computer crashed a few times, but probably not because of what you
 said.. I did have kern.maxfiles set to 65565 or something like that,
 which probably was able to cause the crash.. so I ask how can someone
 check how many openfiles are supported? What depends on how many you can
 have?
>>>
>
 i2pd is something similar to torrenting, but anonymous meaning it
 protects us from anyone including abusive governments and people you
 make connection to routers(other peers runing I2P software like i2pd)
 and do it so many times how many connections you make depends on how
 many tunnels you allow (default
 5000) and probably speed bandwidth

>>>
>
 It can use as much as someone allows it.. which be tricky on openbsd
 because user has to set openfiles, cannot be flexible at runtime. and no
 idea what counts as openfile in i2pd, tunnels? routers maybe, too? so by
 default if tunnels 5000 unchange from i2pd.conf, could up to 15k
 openfiles, who knows? But default speed is I think 32 KB/sec, which is
 very low, so almost everyone increases it.
>>>
>
 would love to know how to find out what best number your computer can
 handle openfiles, what about shminfo? maxproc? maxvnodes? somaxconn?
>>>
>
 how can find out max connections my router can handle? maybe router
 overheat? he does same with qbittorrent, internet connection goes
 goodbye
>>>
>
 i2pd very very good project, worked on by Russians, they have no
 freedom of speech
>>>
>
 I updated to -current and I still have to set /etc/login.conf.d/i2pd
 manually, otherwise I2Pd status is "no descriptors"
>>>
>
 so yes 8192 seems low, not excessive, is similar to running webserver
 maybe
>>>
>
 and if OpenBSD crashes because of whoops no openfiles to give, CRASH,
 that is bad need fix
>>>
>
 hope this helps, thanks for maintenance.
>>
>
>>
>
>> [ REDACTED ]



Re: [Fwd: Re: net/i2pd: move login.conf(5) bits from README to i2pd.login]

2024-01-30 Thread Stuart Henderson
On 2024/01/30 10:53, beecdadd...@danwin1210.de wrote:
> I see the confusion I made I am sorry, when I said routers crash I meant
> actual ISP hardware routers.

For an ISP "customer premises equipment" router (home/office router)?
That often means you made too many connections and exceeded the size of
the NAT/firewall state table that they can cope with. Also for ISPs with
CGN, you might have a limited port-range that you're allowed to use and
can't make more connections once that has been exceeded.

> like I asked and no one answered: where can I check HARD LIMIT of my computer?

you can't really. you can try increasing until you run into problems and
back off a bit, but it probably depends on what else the kernel is
doing. usual approach is to restrict the software to using the resources
that you expect it to actually need and restrict it from making more
demands than that to protect the rest of the system.

> what it depends on, on CPU? where is utility that shows max FDs, and
> per-running-process FD usage and their max setting?
> if this does not exist, I think why not?
> I think if user has to manually set FD limits and know potential of programs
> and OpenBSD and hardware, where is utility to help with that? I did search on
> the internet, all shit..

fstat shows per-process FD use, but the kernel backend for it is a bit
buggy and can sometimes crash the kernel, so it is best to avoid running
it on an important system.
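As a small aside, a program can at least read the limits that currently apply
to it: the per-process soft and hard descriptor limit via getrlimit(2) and the
system-wide cap via the kern.maxfiles sysctl. A minimal sketch (these are
configured, administrative limits, not the hardware ceiling being asked about):

/*
 * Sketch: print the per-process file descriptor limits (what the
 * login.conf openfiles-cur/openfiles-max settings end up as) and the
 * system-wide kern.maxfiles value.
 */
#include <sys/types.h>
#include <sys/resource.h>
#include <sys/sysctl.h>
#include <stdio.h>

int
main(void)
{
        struct rlimit rl;
        int mib[2] = { CTL_KERN, KERN_MAXFILES };
        int maxfiles;
        size_t len = sizeof(maxfiles);

        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
                printf("openfiles-cur %llu, openfiles-max %llu\n",
                    (unsigned long long)rl.rlim_cur,
                    (unsigned long long)rl.rlim_max);
        if (sysctl(mib, 2, &maxfiles, &len, NULL, 0) == 0)
                printf("kern.maxfiles %d\n", maxfiles);
        return 0;
}

From the shell, ulimit -n and sysctl kern.maxfiles show the same numbers.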



Re: Trying to install Apache 2.4 with OpenSSL 1.1 instead of LibreSSL

2024-01-30 Thread Stuart Henderson
On 2024/01/30 11:09, Theo Buehler wrote:
> > what about this one so I can commit it upstream as well ?
> 
> Please do not.

Agreed, it is very much a quick hack to sidestep the problem, I do not
recommend committing upstream, and am a bit unsure about even just
putting it in ports (it disables ALPN, needed by h2).

It's nice that this experimental code in Chrome found a bug, but it
would have been nicer if, rather than WONTFIX, they had adapted it
slightly to enforce the ordering of SNI and ALPN to bypass the problem
and worked with others to get the server code fixed...

> > Index: modules/ssl/ssl_private.h
> > ===
> > --- modules/ssl/ssl_private.h   (revision 1915475)
> > +++ modules/ssl/ssl_private.h   (working copy)
> > @@ -249,7 +249,7 @@
> >  #endif
> >  
> >  /* ALPN Protocol Negotiation */
> > -#if defined(TLSEXT_TYPE_application_layer_protocol_negotiation)
> > +#if !defined(LIBRESSL_VERSION_NUMBER) && defined(TLSEXT_TYPE_application_layer_protocol_negotiation)
> >  #define HAVE_TLS_ALPN
> >  #endif
> >  
> 



(changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread beecdaddict
On Tue, January 30, 2024 11:23 am, Stuart Henderson wrote:
> On 2024/01/30 10:53, beecdadd...@danwin1210.de wrote:
>
>> I see the confusion I made I am sorry, when I said routers crash I meant
>> actual ISP hardware routers.
>
> For an ISP "customer premises equipment" router (home/officr router)?
> That often means you made too many connections and exceeded the size of
> NAT/firewall state table that they can cope with. Also for ISPs with
> CGN, you might have a limited port-range that you're allowed to use and
> can't make more connections once that has been exceeded.

is there a way to verify it's the first thing, which could be fixed with a
custom router, yes?
any computer with 2 NICs can be an OpenBSD router, yes? I have seen people do
that, it is cool

>
>> like I asked and no one answered: where can I check HARD LIMIT of my
>> computer?
>
> you can't really. you can try increasing until you run into problems and back
> off a bit, but it probably depends on what else the kernel is doing. usual
> approach is to restrict the software to using the resources that you expect it
> to actually need and restrict it from making more demands than that to orotect
> the rest of the system.

this sounds like a bug to me
the hard limit must be known, else it is like playing cards, you never know
when you lose (you crash)
and no one answered my question yet about i2pd's connections to other routers,
which can well surpass 8192, up to +3 connections, and if I am right then
each connection needs a FD? I worked with networking and programming a little,
so this makes sense to me, can anyone verify?
if yes, then yes this is a bug and I am disappointed that the only way is to
run blindly and trust before a crash

>
>> what it depends on, on CPU? where is utility that shows max FDs, and
>> per-running-process FD usage and their max setting? if this does not exist,
>> I think why not?
>> I think if user has to manually set FD limits and know potential of programs
>>  and OpenBSD and hardware, where is utility to help with that? I did search
>> on the internet, all shit..
>
> fstat shows per-process FD use, but the kernel backend for it is a bit buggy
> and can sometimes crash the kernel, so it is best to avoid running it on an
> important system.
>
>

oh really
I probably cannot verify the usage of i2pd if it exceeds 8192, because my
router goes stupid and crashes, can you?
if you can't, I'll give it a try, please tell me if you can.. I would try
increasing the bandwidth speed to X and transit tunnels to maybe 10k, and try
with a floodfill maybe, too.. because even with many tunnels, there can be
many to one i2pd peer (i2pd router), which translates to one FD, right?
and if you go to the web console of i2pd and go to the Transit Tunnels tab,
you can see => [some number like ID] 5.0 KiB, and then you see more of the
same, but the arrow '=>' is not there, so that maybe indicates it's the same
peer/i2pd router that the following tunnels are to/from.. most have 1 tunnel,
some have 6 tunnels, a lot have 2 tunnels

but I am not getting the FD count with fstat, the number is not the same as
'Routers' in the web console of i2pd, so maybe I was wrong
or maybe i2pd recycles FDs to be much better at efficiency,
so it has Routers' addresses stored somewhere, and makes connections only if
needed (which take up FD slots)




- best regards, I like talking to you, you care about this and want to help,
it can be seen



Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread beecdaddict
> I probably cannot verify the usage of i2pd if it exceeds 8192 because my
> router goes stupid and crashes, can you?

sorry, I meant the hardware router crashes; it is the stupid i2p term 'router',
which means 'i2p router'




Re: Trying to install Apache 2.4 with OpenSSL 1.1 instead of LibreSSL

2024-01-30 Thread Theo Buehler
On Fri, Jan 26, 2024 at 02:11:52PM -0800, Tim wrote:
> I'm trying to troubleshoot an issue where Chrome/Chromium browsers
> randomly fail to correctly use SSL against my web server.

This version of a diff from jsing for libssl (it applies with slight
offsets to 7.4-stable) should fix this issue.

Could you please try this with an unpatched apache-httpd?

Index: s3_lib.c
===
RCS file: /cvs/src/lib/libssl/s3_lib.c,v
diff -u -p -r1.248 s3_lib.c
--- s3_lib.c29 Nov 2023 13:39:34 -  1.248
+++ s3_lib.c30 Jan 2024 11:34:10 -
@@ -1594,6 +1594,7 @@ ssl3_free(SSL *s)
tls1_transcript_hash_free(s);
 
free(s->s3->alpn_selected);
+   free(s->s3->alpn_wire_data);
 
freezero(s->s3->peer_quic_transport_params,
s->s3->peer_quic_transport_params_len);
@@ -1659,6 +1660,9 @@ ssl3_clear(SSL *s)
free(s->s3->alpn_selected);
s->s3->alpn_selected = NULL;
s->s3->alpn_selected_len = 0;
+   free(s->s3->alpn_wire_data);
+   s->s3->alpn_wire_data = NULL;
+   s->s3->alpn_wire_data_len = 0;
 
freezero(s->s3->peer_quic_transport_params,
s->s3->peer_quic_transport_params_len);
Index: ssl_local.h
===
RCS file: /cvs/src/lib/libssl/ssl_local.h,v
diff -u -p -r1.12 ssl_local.h
--- ssl_local.h 29 Dec 2023 12:24:33 -  1.12
+++ ssl_local.h 30 Jan 2024 11:34:10 -
@@ -1209,6 +1209,8 @@ typedef struct ssl3_state_st {
 */
uint8_t *alpn_selected;
size_t alpn_selected_len;
+   uint8_t *alpn_wire_data;
+   size_t alpn_wire_data_len;
 
/* Contains the QUIC transport params received from our peer. */
uint8_t *peer_quic_transport_params;
Index: ssl_tlsext.c
===
RCS file: /cvs/src/lib/libssl/ssl_tlsext.c,v
diff -u -p -r1.137 ssl_tlsext.c
--- ssl_tlsext.c28 Apr 2023 18:14:59 -  1.137
+++ ssl_tlsext.c30 Jan 2024 11:34:10 -
@@ -86,33 +86,48 @@ tlsext_alpn_check_format(CBS *cbs)
 }
 
 static int
-tlsext_alpn_server_parse(SSL *s, uint16_t msg_types, CBS *cbs, int *alert)
+tlsext_alpn_server_parse(SSL *s, uint16_t msg_type, CBS *cbs, int *alert)
 {
-   CBS alpn, selected_cbs;
-   const unsigned char *selected;
-   unsigned char selected_len;
-   int r;
+   CBS alpn;
 
if (!CBS_get_u16_length_prefixed(cbs, &alpn))
return 0;
-
if (!tlsext_alpn_check_format(&alpn))
return 0;
+   if (!CBS_stow(&alpn, &s->s3->alpn_wire_data, &s->s3->alpn_wire_data_len))
+   return 0;
+
+   return 1;
+}
+
+static int
+tlsext_alpn_server_process(SSL *s, uint16_t msg_type, int *alert)
+{
+   const unsigned char *selected;
+   unsigned char selected_len;
+   CBS alpn, selected_cbs;
+   int cb_ret;
 
if (s->ctx->alpn_select_cb == NULL)
return 1;
 
+   if (s->s3->alpn_wire_data == NULL) {
+   *alert = SSL_AD_INTERNAL_ERROR;
+   return 0;
+   }
+   CBS_init(&alpn, s->s3->alpn_wire_data, s->s3->alpn_wire_data_len);
+
/*
 * XXX - A few things should be considered here:
 * 1. Ensure that the same protocol is selected on session resumption.
 * 2. Should the callback be called even if no ALPN extension was sent?
 * 3. TLSv1.2 and earlier: ensure that SNI has already been processed.
 */
-   r = s->ctx->alpn_select_cb(s, &selected, &selected_len,
+   cb_ret = s->ctx->alpn_select_cb(s, &selected, &selected_len,
CBS_data(&alpn), CBS_len(&alpn),
s->ctx->alpn_select_cb_arg);
 
-   if (r == SSL_TLSEXT_ERR_OK) {
+   if (cb_ret == SSL_TLSEXT_ERR_OK) {
CBS_init(&selected_cbs, selected, selected_len);
 
if (!CBS_stow(&selected_cbs, &s->s3->alpn_selected,
@@ -125,7 +140,7 @@ tlsext_alpn_server_parse(SSL *s, uint16_
}
 
/* On SSL_TLSEXT_ERR_NOACK behave as if no callback was present. */
-   if (r == SSL_TLSEXT_ERR_NOACK)
+   if (cb_ret == SSL_TLSEXT_ERR_NOACK)
return 1;
 
*alert = SSL_AD_NO_APPLICATION_PROTOCOL;
@@ -1972,6 +1987,7 @@ struct tls_extension_funcs {
int (*needs)(SSL *s, uint16_t msg_type);
int (*build)(SSL *s, uint16_t msg_type, CBB *cbb);
int (*parse)(SSL *s, uint16_t msg_type, CBS *cbs, int *alert);
+   int (*process)(SSL *s, uint16_t msg_type, int *alert);
 };
 
 struct tls_extension {
@@ -2123,6 +2139,7 @@ static const struct tls_extension tls_ex
.needs = tlsext_alpn_server_needs,
.build = tlsext_alpn_server_build,
.parse = tlsext_alpn_server_parse,
+   .process = tlsext_alpn_server_process,
},
},
{
@@ -2391,6 +2408,14 @@ tlsext_cl

Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread Bruce Jagid
>>> like I asked and no one answered: where can I check HARD LIMIT of my
>>> computer?
>>
>> you can't really. you can try increasing until you run into problems and
>> back off a bit, but it probably depends on what else the kernel is doing.
>> usual approach is to restrict the software to using the resources that you
>> expect it to actually need and restrict it from making more demands than
>> that to protect the rest of the system.

> this sounds like a bug to me
> hard limit must be known, else is like playing cards, you never know when
> you lose (you crash)
> and no one answered my question yet about i2pd's connections to other
> routers which can well surpass 8192 up to +3 connections, and if I am right
> then each connection needs a FD? I worked with networking and programming a
> little, so this makes sense to me can anyone verify?
> if yes, then yes this is a bug and I am disappointed that the only way is
> to run blindly and trust before crash

I might be out of line here since I’m new to OS dev stuff, but what you’re
asking doesn’t really make sense to me. A file descriptor is a software
abstraction built on top of the hardware, and the exact implementation changes
from case to case depending on the hardware. It’s like if I asked my doctor
“give me the exact limit of bicep curls I can do in an hour.” In the same
way the body has no conception of a bicep curl (only the fatigue from
moving), the hardware doesn’t know what you mean by a file descriptor (only
the residual resources needed to maintain one), and there are like 20 ways of
doing a bicep curl, so demanding such a concrete hard limit number makes no
sense.

- Bruce

On Tue, Jan 30, 2024 at 6:52 AM  wrote:

> On Tue, January 30, 2024 11:23 am, Stuart Henderson wrote:
> > On 2024/01/30 10:53, beecdadd...@danwin1210.de wrote:
> >
> >> I see the confusion I made I am sorry, when I said routers crash I meant
> >> actual ISP hardware routers.
> >
> > For an ISP "customer premises equipment" router (home/officr router)?
> > That often means you made too many connections and exceeded the size of
> > NAT/firewall state table that they can cope with. Also for ISPs with
> > CGN, you might have a limited port-range that you're allowed to use and
> > can't make more connections once that has been exceeded.
>
> is there way to verify it's the 1st thing, which can be fixed by custom
> router, yes?
> any computer with 2 NICs can be a OpenBSD router, yes? I seen people do
> that,
> is cool
>
> >
> >> like I asked and no one answered: where can I check HARD LIMIT of my
> >> computer?
> >
> > you can't really. you can try increasing until you run into problems and
> back
> > off a bit, but it probably depends on what else the kernel is doing.
> usual
> > approach is to restrict the software to using the resources that you
> expect it
> > to actually need and restrict it from making more demands than that to
> orotect
> > the rest of the system.
>
> this sounds like a bug to me
> hard limit must be known, else is like playing cards, you never know when
> you
> lose (you crash)
> and no one answered my question yet about i2pd's connections to other
> routhers
> with can well surpass 8192 up to +3 connections, and if I am right then
> each connection needs a FD? I worked with networking and programming a
> little,
> so this makes sense to me can anyone verify?
> if yes, then yes this is a bug and I am disappointed that the only way is
> to
> run blindly and trust before crash
>
> >
> >> what it depends on, on CPU? where is utility that shows max FDs, and
> >> per-running-process FD usage and their max setting? if this does not
> exist,
> >> I think why not?
> >> I think if user has to manually set FD limits and know potential of
> programs
> >>  and OpenBSD and hardware, where is utility to help with that? I did
> search
> >> on the internet, all shit..
> >
> > fstat shows per-process FD use, but the kernel backend for it is a bit
> buggy
> > and can sometimes crash the kernel, so it is best to avoid running it on
> an
> > important system.
> >
> >
>
> oh really
> I probably cannot verify the usage of I2Pd if it exceeds 8192 because my
> router goes stupid and crashes, can you?
> if you can't I'll give it a try, please tell me if you can.. I would try
> increasing bandwidth speed to X and transit tunnels to maybe 10k, try with
> a
> floodfill maybe, too.. because even many tunnels - there can be many to 1
> i2pd
> peer(i2pd router) which translates to 1 FD, right?
> and if you go to web console of i2pd and go to Transit Tunnels tab, you
> can see
> => [some number like ID] 5.0 KiB, and then you see more of same, but the
> arrow
> '=>' is not there, so that maybe indicates it's the same peer/i2pd router
> that
> the following tunnels are to/from.. most have 1 tunnel, some have 6
> tunnels, a
> lot have 2 tunnels
>
> but I am not getting FD count with fstat, the number is not the same with
> 'Routers' in web console of i2pd, so maybe I was wrong
> or may

Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread beecdaddict
I'm also not an OS dev.
cannot the OS do some testing/benchmarking to get a grasp on what the limit
could be?
YOU are the OS in your example, and you would know the limit when you would do
curls more slowly, and maybe you would get more and more pain..
and a crash in your example would be your muscle being in such pain you
wouldn't be able to do anything with your arm/whatever

so you're saying the only fucking way to know a true hardware limit is the
worst that could be - a crash???
what if the crash doesn't happen right away? in my case the hardware ISP
router could be limiting the potential of the i2pd software or torrenting
software
boom: corrupted data, processes, uncompleted important work, lost important
work, pain in ass, etc
literally, couldn't that corrupt the entire system, a crash?

tell me I am worrying too much, but even then a crash is the worst thing
someone can rely on, I think it's unprofessional that the OS allows for that
sort of insecurity
if all you said and I said is correct, I consider that to be a security
vulnerability at least, not to mention other vulnerabilities

On Tue, January 30, 2024 1:32 pm, Bruce Jagid wrote:


 like I asked and no one answered: where >>>can I check HARD LIMIT of my
  computer?
>>>
>>> you can't really. you can try increasing >>until you run into problems
> and back
>>> off a bit, but it probably depends on what >>else the kernel is doing.
> usual
>>> approach is to restrict the software to >>using the resources that you
> expect it
>>> to actually need and restrict it from making >>more demands than that to
> orotect
>>> the rest of the system.
>
>> this sounds like a bug to me hard limit must be known, else is like playing
>> >cards, you never know when
> you
>> lose (you crash) and no one answered my question yet about >i2pd's
>> connections to other
> routhers
>> with can well surpass 8192 up to +3 >connections, and if I am right
> then
>> each connection needs a FD? I worked with >networking and programming a
> little,
>> so this makes sense to me can anyone >verify? if yes, then yes this is a bug
>> and I am >disappointed that the only way is
> to
>> run blindly and trust before crash
>
> I might be out of line here since I’m new to OS dev stuff, but what you’re
> asking doesn’t really make sense to me. A file descriptor is a software
> abstraction built onto the hardware and the exact implementation changes from
> case to case dependent on hardware. It’s like if I asked my doctor “give me
> the exact limit of bicep curls I can do in an hour.” In the same way the body
> has no conception of a bicep curl(only the fatigue from moving), the hardware
> doesn’t know what you mean by a file descriptor(only the residual resources
> needed to maintain one), and there’s like 20 ways of doing a bicep curl, so
> demanding such a concrete hard limit number makes no sense.
>
> - Bruce
>
>
> On Tue, Jan 30, 2024 at 6:52 AM  wrote:
>
>
>> On Tue, January 30, 2024 11:23 am, Stuart Henderson wrote:
>>
>>> On 2024/01/30 10:53, beecdadd...@danwin1210.de wrote:
>>>
>>>
 I see the confusion I made I am sorry, when I said routers crash I
 meant actual ISP hardware routers.
>>>
>>> For an ISP "customer premises equipment" router (home/officr router)?
>>> That often means you made too many connections and exceeded the size of
>>> NAT/firewall state table that they can cope with. Also for ISPs with
>>> CGN, you might have a limited port-range that you're allowed to use and
>>> can't make more connections once that has been exceeded.
>>
>> is there way to verify it's the 1st thing, which can be fixed by custom
>> router, yes? any computer with 2 NICs can be a OpenBSD router, yes? I seen
>> people do that, is cool
>>
>>>
 like I asked and no one answered: where can I check HARD LIMIT of my
 computer?
>>>
>>> you can't really. you can try increasing until you run into problems and
>> back
>>> off a bit, but it probably depends on what else the kernel is doing.
>> usual
>>> approach is to restrict the software to using the resources that you
>> expect it
>>> to actually need and restrict it from making more demands than that to
>> orotect
>>> the rest of the system.
>>
>> this sounds like a bug to me hard limit must be known, else is like playing
>> cards, you never know when you lose (you crash) and no one answered my
>> question yet about i2pd's connections to other routhers with can well surpass
>> 8192 up to +3 connections, and if I am right then
>> each connection needs a FD? I worked with networking and programming a
>> little, so this makes sense to me can anyone verify? if yes, then yes this is
>> a bug and I am disappointed that the only way is to run blindly and trust
>> before crash
>>
>>>
 what it depends on, on CPU? where is utility that shows max FDs, and
 per-running-process FD usage and their max setting? if this does not
>> exist,
 I think why not?
 I think if user has to manually set FD limits and know potential of

>> programs

Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread Bruce Jagid
> I'm also not an OS dev.
> cannot the OS do some testing/benchmarking to get a grasp on what the limit
> could be?
> YOU are the OS in your example, and you would know the limit when you would
> do curls more slowly, and maybe you would get more and more pain..
> and a crash in your example would be your muscle being in such pain you
> wouldn't be able to do anything with your arm/whatever

So your body automatically benchmarks how many bicep curls you can do in an
hour without you having to think about it? You use your body to measure the
bicep curls it can do, it doesn’t automatically do that. You can use your
OS to perform the benchmark, but to expect the OS to designate resources
automatically to benchmark itself is equal portions naïve and obtuse. You
have a very specific use-case, you should do the work to find your answer.


On Tue, Jan 30, 2024 at 10:20 AM  wrote:

> I'm also not a OS dev
> cannot the OS do some testing/benchmarking to get a grasp on what the limit
> could be?
> YOU are the OS in your example, and you would know the limit when you
> would do
> curls slower and maybe you would get more and more pain..
> and crash in your example would be your muscle being in such pain you
> wouldn't
> be able to do anything with your arm/whatever
>
> so you're saying the only fucking way to know a true hardware limit is the
> worst that could be - a crash???
> what if crash doesn't happen right away? in my case hardware ISP router
> could
> be limiting the potential of i2pd software or torrenting software
> boom corrupted data, processes, uncompleted important work, lost important
> work, pain in ass, etc
> literally couldn't that corrupt the entire system, a crash?
>
> tell me I am worrying too much, but even then a crash is the worst thing
> someone can rely on, I think it's unprofessional that the OS allows for
> that
> sort of insecurity
> if all you said and I said is correct, I consider that to be a security
> vulnerability at least, not to mention other vulnerabilities
>
> On Tue, January 30, 2024 1:32 pm, Bruce Jagid wrote:
> 
>
>  like I asked and no one answered: where >>>can I check HARD LIMIT of
> my
>   computer?
> >>>
> >>> you can't really. you can try increasing >>until you run into problems
> > and back
> >>> off a bit, but it probably depends on what >>else the kernel is doing.
> > usual
> >>> approach is to restrict the software to >>using the resources that you
> > expect it
> >>> to actually need and restrict it from making >>more demands than that
> to
> > orotect
> >>> the rest of the system.
> >
> >> this sounds like a bug to me hard limit must be known, else is like
> playing
> >> >cards, you never know when
> > you
> >> lose (you crash) and no one answered my question yet about >i2pd's
> >> connections to other
> > routhers
> >> with can well surpass 8192 up to +3 >connections, and if I am right
> > then
> >> each connection needs a FD? I worked with >networking and programming a
> > little,
> >> so this makes sense to me can anyone >verify? if yes, then yes this is
> a bug
> >> and I am >disappointed that the only way is
> > to
> >> run blindly and trust before crash
> >
> > I might be out of line here since I’m new to OS dev stuff, but what
> you’re
> > asking doesn’t really make sense to me. A file descriptor is a software
> > abstraction built onto the hardware and the exact implementation changes
> from
> > case to case dependent on hardware. It’s like if I asked my doctor “give
> me
> > the exact limit of bicep curls I can do in an hour.” In the same way the
> body
> > has no conception of a bicep curl(only the fatigue from moving), the
> hardware
> > doesn’t know what you mean by a file descriptor(only the residual
> resources
> > needed to maintain one), and there’s like 20 ways of doing a bicep curl,
> so
> > demanding such a concrete hard limit number makes no sense.
> >
> > - Bruce
> >
> >
> > On Tue, Jan 30, 2024 at 6:52 AM  wrote:
> >
> >
> >> On Tue, January 30, 2024 11:23 am, Stuart Henderson wrote:
> >>
> >>> On 2024/01/30 10:53, beecdadd...@danwin1210.de wrote:
> >>>
> >>>
>  I see the confusion I made I am sorry, when I said routers crash I
>  meant actual ISP hardware routers.
> >>>
> >>> For an ISP "customer premises equipment" router (home/officr router)?
> >>> That often means you made too many connections and exceeded the size of
> >>> NAT/firewall state table that they can cope with. Also for ISPs with
> >>> CGN, you might have a limited port-range that you're allowed to use and
> >>> can't make more connections once that has been exceeded.
> >>
> >> is there way to verify it's the 1st thing, which can be fixed by custom
> >> router, yes? any computer with 2 NICs can be a OpenBSD router, yes? I
> seen
> >> people do that, is cool
> >>
> >>>
>  like I asked and no one answered: where can I check HARD LIMIT of my
>  computer?
> >>>
> >>> you can't really. you can try increasing until you run into problems
> and
> >> back
> >>> off 

Re: NEW: games/cromagrally

2024-01-30 Thread Thomas Frohwein
On Tue, Jan 30, 2024 at 01:46:49AM -0600, izder456 wrote:
> 
> Hey ports@ w//ckies,
> 
> If it wasn't clear enough already, I love these games. Given that (in
> theory) OpenBSD/macppc has 3D-Acceleration on the r128(4) driver, it
> would be wonderful to run this on an era-accurate PPC iMac.
> 
> TL;DR:
> I want to import my port of CroMagRally, which is yet another Pangea
> Software title originally for the PPC macs. I think it's been three
> I've submitted now... :)
> 
> the 3.0.0 GH_RELASE has a bug with byteswapping terrain textures, so i
> just pointed this port against the latest commit hash. unsure if I can
> still refer to this as "3.0.0", thoughts?
> 
> As normal, I did some patchwork to allow the binary to be run from
> anywhere so core files can be properly dumped again. (referencing
> Omar's patch of Nanosaur2)
> 
> Attached is the port, OK to import?
> 
> -- 
> 
> -iz

TLDR:
Thanks, looks generally good, builds and runs. Now supertuxkart has
some competition. Attached port with small modifications, ok thfr@.

Longer reply... Regarding the versioning:

See packages-specs(7) for guidance on picking a version. There isn't a
100% established way when there are upstream improvements without a new
release. The one aspect that seems certain is to not ignore the last
(or next) release version number. After that, there are the following
options up for debate from what I have seen and what packages-specs(7)
offers:

1. Add patch-level to version number (3.0.0pl0).

2. Add REVISION (3.0.0p0).

3. Treat it as a precursor to the next release (e.g. 3.0.1alpha0).

The risk with 1 and 3 is that it could collide with upstream's
numbering of future versions. Option 2 goes a bit against the grain,
in that REVISION is usually for when the port itself is changed (a change
in build options etc.).

I am personally in favor of option 1, but open to hearing if there are
arguments for a different default approach to this common situation. I
have updated cromagrally accordingly and attached it.

I replaced your Makefile alignment with tabs as this is most commonly
used in ports in my experience (VARIABLE=value).




Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread beecdaddict
On Tue, January 30, 2024 3:25 pm, Bruce Jagid wrote:
>> I'm also not an OS dev.
>> cannot the OS do some testing/benchmarking to get a grasp on what the limit
>> could be? YOU are the OS in your example, and you would know the limit when
>> you would do curls more slowly, and maybe you would get more and more pain..
>> and a crash in your example would be your muscle being in such pain you
>> wouldn't be able to do anything with your arm/whatever
>
> So your body automatically benchmarks how many bicep curls you can do in an
> hour without you having to think about it? You use your body to measure the
> bicep curls it can do, it doesn’t automatically do that. You can use your OS
> to perform the benchmark, but to expect the OS to designate resources
> automatically to benchmark itself is equal portions naïve and obtuse. You have
> a very specific use-case, you should do the work to find your answer.

it can know the limit more or less, yes, based on earlier curls

maybe not automatically, but there could be a utility that does this for you
and that you can run once after each hardware change to find out, but I am not
sure; you say it depends on the use case, I do not understand what you mean

if you read my earlier replies, you would find that I said I already tried
searching online for about an hour; there is some sort of crazy formula, one
dude did a lot of math, snippets from code, is that what you mean?
because what you say sounds like there are multiple types of FDs, maybe
network FDs and normal FDs?

- best regards

>
>
> On Tue, Jan 30, 2024 at 10:20 AM  wrote:
>
>
>> I'm also not a OS dev
>> cannot the OS do some testing/benchmarking to get a grasp on what the limit
>> could be? YOU are the OS in your example, and you would know the limit when
>> you would do curls slower and maybe you would get more and more pain.. and
>> crash in your example would be your muscle being in such pain you wouldn't be
>> able to do anything with your arm/whatever
>>
>> so you're saying the only fucking way to know a true hardware limit is the
>> worst that could be - a crash??? what if crash doesn't happen right away? in
>> my case hardware ISP router could be limiting the potential of i2pd software
>> or torrenting software boom corrupted data, processes, uncompleted important
>> work, lost important work, pain in ass, etc literally couldn't that corrupt
>> the entire system, a crash?
>>
>> tell me I am worrying too much, but even then a crash is the worst thing
>> someone can rely on, I think it's unprofessional that the OS allows for that
>>  sort of insecurity if all you said and I said is correct, I consider that
>> to be a security vulnerability at least, not to mention other
>> vulnerabilities
>>
>> On Tue, January 30, 2024 1:32 pm, Bruce Jagid wrote:
>>
>>
>>
>> like I asked and no one answered: where >>>can I check HARD LIMIT
>> of
>> my
>> computer?
>
> you can't really. you can try increasing >>until you run into
> problems
>>> and back
> off a bit, but it probably depends on what >>else the kernel is
> doing.
>>> usual
> approach is to restrict the software to >>using the resources that
> you
>>> expect it
> to actually need and restrict it from making >>more demands than that
>
>> to
>>> orotect
> the rest of the system.
>>>
 this sounds like a bug to me hard limit must be known, else is like
>> playing
> cards, you never know when
>>> you
 lose (you crash) and no one answered my question yet about >i2pd's
 connections to other
>>> routhers
 with can well surpass 8192 up to +3 >connections, and if I am right

>>> then
 each connection needs a FD? I worked with >networking and programming a

>>> little,
 so this makes sense to me can anyone >verify? if yes, then yes this is
>> a bug
 and I am >disappointed that the only way is
>>> to
 run blindly and trust before crash
>>>
>>> I might be out of line here since I’m new to OS dev stuff, but what
>>>
>> you’re
>>> asking doesn’t really make sense to me. A file descriptor is a software
>>> abstraction built onto the hardware and the exact implementation changes
>> from
>>> case to case dependent on hardware. It’s like if I asked my doctor “give
>> me
>>> the exact limit of bicep curls I can do in an hour.” In the same way the
>> body
>>> has no conception of a bicep curl(only the fatigue from moving), the
>> hardware
>>> doesn’t know what you mean by a file descriptor(only the residual
>> resources
>>> needed to maintain one), and there’s like 20 ways of doing a bicep curl,
>> so
>>> demanding such a concrete hard limit number makes no sense.
>>>
>>> - Bruce
>>>
>>>
>>>
>>> On Tue, Jan 30, 2024 at 6:52 AM  wrote:
>>>
>>>
>>>
 On Tue, January 30, 2024 11:23 am, Stuart Henderson wrote:


> On 2024/01/30 10:53, beecdadd...@danwin1210.de wrote:
>
>
>
>> I see the confusion I made I am sorry, when I said routers crash I
>> meant actual ISP hardwa

Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread beecdaddict
I'm sorry, it felt applicable for reasons outside of OpenBSD.
I have got no problem with swearing back at me.

I felt kernel crashes are off-topic; I thought it would be fine because I
didn't know this topic would go on for so long.

of course it is not your problem that I am crashing a non-OpenBSD el-cheapo
home router, but the OpenBSD guys know networking and maybe routers the best,
and it could maybe benefit others, so do I do this on misc@ ?

On Tue, January 30, 2024 3:26 pm, Ian Darwin wrote:
> On 1/30/24 10:20, beecdadd...@danwin1210.de wrote:
>
>> so you're saying the only fucking way to know a true hardware limit is the
>> worst that could be - a crash???
>
> Once you start swearing, most people will tune you out. Others will
> swear back at you.
>
> Neither is very productive.
>
>
> Anyway, discussion of kernel crashes belongs on tech@, and discussion of
> crashing your non-OpenBSD el-cheapo home router is not our problem anyway.
>




Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread beecdaddict
the human body changes: different energy levels, tiredness, sore muscles,
adrenaline, the weight of the curls, the type of curl like you said
a computer has the same exact hardware every time unless it is changed, like I
mentioned, nothing changes
most servers have different and changing software programs on them, yes,
but we are talking about the system hard limit, not soft limits; the hard
limit should stay the same

of course you're done, you make no sense to me, maybe because you know more or
maybe you misunderstand me

I think this is far too off-topic and not for ports@, so let's end this topic
and I can go maybe to tech@ and misc@

On Tue, January 30, 2024 3:39 pm, Bruce Jagid wrote:
> no, YOU know more or less based on earlier curls, just like YOU know more or
> less based on other programs you’ve run on your OS. And that guess would be
> incredibly inaccurate. You can’t just ask for a concrete hard limit and then
> relax the conditions such that it becomes a guesstimate. You don’t even
> believe your own bs, I’m done arguing.
>
> On Tue, Jan 30, 2024 at 10:33 AM  wrote:
>
>
>> On Tue, January 30, 2024 3:25 pm, Bruce Jagid wrote:
>>
 I'm also not a OS dev
 cannot the OS do some testing/benchmarking >to get a grasp on what the
>>> limit
 could be? YOU are the OS in your example, and you >would know the limit

>> when
 you
>>> would do
 curls slower and maybe you would get more >and more pain.. and crash in

>> your
 example would be your >muscle being in such pain you
>>> wouldn't
 be able to do anything with your >arm/whatever
>>>
>>> So your body automatically benchmarks how many bicep curls you can do in
>>>
>> an
>>> hour without you having to think about it? You use your body to measure
>> the
>>> bicep curls it can do, it doesn’t automatically do that. You can use
>> your OS
>>> to perform the benchmark, but to expect the OS to designate resources
>>> automatically to benchmark itself is equal portions naïve and obtuse.
>> You have
>>
>>> a very specific use-case, you should do the work to find your answer.
>>
>> it can know limit more-less, yes, based on earlier curls
>>
>> maybe not automatically, but having a utility that does this for you and you
>>  can run it once after each hardare change to find out, but I am not sure
>> you say it depends on use-case, I do not understand what you mean
>>
>> if you read my earlier replies, you would find out that I said I already
>> tried searching online for like 1 hour, there is some sort of crazy formula
>> one dude did a lot of math, snipets from code, is that what you mean? because
>> what you say sound like there are multiple types of FDs, maybe network FDs
>> and normal FDs?
>>
>> - best regards
>>
>>
>>>
>>>
>>> On Tue, Jan 30, 2024 at 10:20 AM  wrote:
>>>
>>>
>>>
 I'm also not a OS dev
 cannot the OS do some testing/benchmarking to get a grasp on what the
>> limit
 could be? YOU are the OS in your example, and you would know the limit
>> when
 you would do curls slower and maybe you would get more and more pain..
>> and
 crash in your example would be your muscle being in such pain you
>> wouldn't be
 able to do anything with your arm/whatever

 so you're saying the only fucking way to know a true hardware limit is
>> the
 worst that could be - a crash??? what if crash doesn't happen right
>> away? in
 my case hardware ISP router could be limiting the potential of i2pd
>> software
 or torrenting software boom corrupted data, processes, uncompleted
>> important
 work, lost important work, pain in ass, etc literally couldn't that
>> corrupt
 the entire system, a crash?

 tell me I am worrying too much, but even then a crash is the worst
 thing someone can rely on, I think it's unprofessional that the OS
 allows for
>> that
 sort of insecurity if all you said and I said is correct, I consider
>> that
 to be a security vulnerability at least, not to mention other
 vulnerabilities

 On Tue, January 30, 2024 1:32 pm, Bruce Jagid wrote:




 like I asked and no one answered: where >>>can I check HARD
 LIMIT
 of
 my
 computer?
>>>
>>> you can't really. you can try increasing >>until you run into
>>> problems
> and back
>>> off a bit, but it probably depends on what >>else the kernel is
>>> doing.
> usual
>>> approach is to restrict the software to >>using the resources
>>> that you
> expect it
>>> to actually need and restrict it from making >>more demands than
>>> that
>>>
 to
> orotect
>>> the rest of the system.
>
>> this sounds like a bug to me hard limit must be known, else is like
>>
 playing
>>> cards, you never know when
> you
>> lose (you crash) and no one answered my question yet about >i2pd's
>> connections to other
> routhers
>> with can well surpass 8192 up to +3 >connections, and if I am
>> ri

Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread Theo de Raadt
beecdadd...@danwin1210.de wrote:

> maybe not automatically, but having a utility that does this for you and you
> can run it once after each hardare change to find out, but I am not sure you
> say it depends on use-case, I do not understand what you mean
> 
> if you read my earlier replies, you would find out that I said I already tried
> searching online for like 1 hour, there is some sort of crazy formula one dude
> did a lot of math, snipets from code, is that what you mean?
> because what you say sound like there are multiple types of FDs, maybe network
> FDs and normal FDs?


You are failing to understand that the operating system is intended to be a
"sharing" environment -- it is sharing limited resources among multiple
consumers.

A large number of heuristics exist to defend this sharing, rather than
making resources available to just the 1 piece of software you want.

What you want isn't how it works.





Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread beecdaddict
I know the system shares all resources including FDs
as far as I know there's what the kernel/OS needs and is using, and then the
rest of the users, including but not limited to the staff and daemon
users/programs like i2pd
all I was wondering is the limit or amount of FDs and other resources the rest
of the users or daemons can use
in my head there is a total amount, which apparently is unknown (I have been
told why, but how can anyone work with that? it's like relying on someone
mentally unstable), which is then divided: the kernel/OS gets all that it
needs, users and daemons get the rest, which IS DIVIDED (in my head) until
there is no more to divide/give away/share
am I close?

okay, maybe making all available resources available to 1 program is not how
it works, but why not, if that's the only program that's running?
I do not understand if it's even possible to do what I'm asking or
questioning; I am not an OS dev because of reasons, but I like discussing such
things because I like OS dev

and just because what I ask isn't how it works doesn't mean it's bad? it could
mean

- best regards, my man

On Tue, January 30, 2024 3:45 pm, Theo de Raadt wrote:
> beecdadd...@danwin1210.de wrote:
>
>> maybe not automatically, but having a utility that does this for you and
>> you can run it once after each hardare change to find out, but I am not sure
>> you say it depends on use-case, I do not understand what you mean
>>
>> if you read my earlier replies, you would find out that I said I already
>> tried searching online for like 1 hour, there is some sort of crazy formula
>> one dude did a lot of math, snipets from code, is that what you mean? because
>> what you say sound like there are multiple types of FDs, maybe network FDs
>> and normal FDs?
>
>
> You are failing to understand the operating system is intending to be a
> "sharing" environment -- it is sharing limited resources among multiple
> consumers.
>
> A large number of heuristics exist to defend this sharing, rather than
> making resources available to just the 1 piece of software you want.
>
> What you want isn't how it works.
>
>
>
>
>




Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread Theo de Raadt
beecdadd...@danwin1210.de wrote:

> I know system shares all resources including FDs
> as far as I know there's what kernel/OS needs and is using and the rest of
> users including but not limited to staff and daemon users/programs like i2pd
> all I was wondering is the limit or amount of FDs and other resources the rest
> of users of daemon can use
> in my head is a total amount which apparently is unknown (I have been told
> why, but how can anyone work with that? it's like relying on someone mentally
> unstable) which is then devided, kernel/OS gets all that it needs, users and
> daemons get the rest which IS DIVIDED (in my head) until there is no more to
> divide/give away/share
> am I close?
> 
> okay maybe not make all available resources to 1 program is not how it works
> but why not if that's the only programs that's running?
> I do not understand if it's even possible to do what I'm asking or
> questioning, I am not a OS dev because of reasons, but I like discussing such
> because I like OS-dev
> 
> and just because what I ask isn't how it works doesn't mean it's bad? it could
> mean

You've been provided with all the source code.

Where is your attempt to change things?



mail/opensmtpd-extras: use imsg_get_fd()

2024-01-30 Thread Omar Polo
This should make opensmtpd-extras work with a future imsg.fd removal.

m_forward() is not used at all in -extras, so I could have also used -1
there, it doesn't matter.

The queues are doing imsg passing, so the second hunk is actually
needed, even if I doubt anyone is using them?  anyway, the diff is
simple enough that I'm confident I'm not breaking anything.

There is still one hit of 'imsg->fd' in api/filter_api.c, but that file
is not used anymore, and so I haven't touched it.
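
For context, imsg_get_fd() hands the descriptor that arrived with the message
to the caller instead of the caller reading imsg.fd directly. A minimal sketch
of a receive path using it (not code from the port, just the pattern the
patches below switch to):

/*
 * Sketch of the receiving pattern (not from opensmtpd-extras itself):
 * the descriptor passed along with an imsg is obtained through
 * imsg_get_fd() rather than by touching imsg.fd.
 */
#include <sys/types.h>
#include <sys/queue.h>
#include <sys/uio.h>
#include <imsg.h>
#include <unistd.h>

static void
handle_one(struct imsgbuf *ibuf)
{
        struct imsg imsg;
        int fd;

        if (imsg_get(ibuf, &imsg) <= 0)
                return;
        fd = imsg_get_fd(&imsg);        /* -1 if no fd accompanied the message */
        if (fd != -1) {
                /* ... use the descriptor ... */
                close(fd);
        }
        imsg_free(&imsg);
}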

Index: Makefile
===
RCS file: /home/cvs/ports/mail/opensmtpd-extras/Makefile,v
diff -u -p -r1.37 Makefile
--- Makefile26 Sep 2023 12:28:13 -  1.37
+++ Makefile30 Jan 2024 15:55:03 -
@@ -11,8 +11,11 @@ PKGNAME-mysql=   opensmtpd-extras-mysql-$
 PKGNAME-pgsql= opensmtpd-extras-pgsql-${V}
 PKGNAME-python=opensmtpd-extras-python-${V}
 PKGNAME-redis= opensmtpd-extras-redis-${V}
-REVISION-mysql=0
-REVISION-pgsql=0
+REVISION-main= 0
+REVISION-mysql=1
+REVISION-pgsql=1
+REVISION-python=   0
+REVISION-redis=0
 EPOCH= 0
 
 CATEGORIES=mail
Index: patches/patch-api_mproc_c
===
RCS file: patches/patch-api_mproc_c
diff -N patches/patch-api_mproc_c
--- /dev/null   1 Jan 1970 00:00:00 -
+++ patches/patch-api_mproc_c   30 Jan 2024 15:57:35 -
@@ -0,0 +1,14 @@
+use imsg_get_fd()
+
+Index: api/mproc.c
+--- api/mproc.c.orig
 api/mproc.c
+@@ -306,7 +306,7 @@ void
+ m_forward(struct mproc *p, struct imsg *imsg)
+ {
+   imsg_compose(&p->imsgbuf, imsg->hdr.type, imsg->hdr.peerid,
+-  imsg->hdr.pid, imsg->fd, imsg->data,
++  imsg->hdr.pid, imsg_get_fd(imsg), imsg->data,
+   imsg->hdr.len - sizeof(imsg->hdr));
+ 
+   log_trace(TRACE_MPROC, "mproc: %s -> %s : %zu %s (forward)",
Index: patches/patch-api_queue_api_c
===
RCS file: patches/patch-api_queue_api_c
diff -N patches/patch-api_queue_api_c
--- /dev/null   1 Jan 1970 00:00:00 -
+++ patches/patch-api_queue_api_c   30 Jan 2024 15:57:35 -
@@ -0,0 +1,14 @@
+use imsg_get_fd
+
+Index: api/queue_api.c
+--- api/queue_api.c.orig
 api/queue_api.c
+@@ -171,7 +171,7 @@ queue_msg_dispatch(void)
+   log_warn("warn: queue-api: mkstemp");
+   }
+   else {
+-  ifile = fdopen(imsg.fd, "r");
++  ifile = fdopen(imsg_get_fd(&imsg), "r");
+   ofile = fdopen(fd, "w");
+   m = n = 0;
+   if (ifile && ofile) {



Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread beecdaddict
Oh, Theo, if I were to start changing things toward the perfect OS
security-wise, it wouldn't even look like OpenBSD code anymore; but OpenBSD
is still the best the world has to offer.

So do you agree with my logic? At least give me that.
At least tell me whether this is how things stand FD-wise/limit-wise/whatever;
you probably know best out of everyone I've e-mailed with.

No attempt yet; in future, I hope. It's not an "I am too lazy" reason.

On Tue, January 30, 2024 3:58 pm, Theo de Raadt wrote:
> beecdadd...@danwin1210.de wrote:
>
>> I know system shares all resources including FDs
>> as far as I know there's what kernel/OS needs and is using and the rest of
>> users including but not limited to staff and daemon users/programs like
>> i2pd all I was wondering is the limit or amount of FDs and other resources
>> the rest of users of daemon can use in my head is a total amount which
>> apparently is unknown (I have been told why, but how can anyone work with
>> that? it's like relying on someone mentally unstable) which is then devided,
>> kernel/OS gets all that it needs, users and daemons get the rest which IS
>> DIVIDED (in my head) until there is no more to
>> divide/give away/share am I close?
>>
>> okay maybe not make all available resources to 1 program is not how it
>> works but why not if that's the only programs that's running? I do not
>> understand if it's even possible to do what I'm asking or questioning, I am
>> not a OS dev because of reasons, but I like discussing such because I like
>> OS-dev
>>
>>
>> and just because what I ask isn't how it works doesn't mean it's bad? it
>> could mean
>
> You've been provided with all the source code.
>
>
> Where is your attempt to change things?
>
>
>




Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread Theo de Raadt
I'm out of here.

beecdadd...@danwin1210.de wrote:

> oh, Theo, if I were to start changing thing to the perfect OS security-wise,
> it wouldn't even look like OpenBSD code anymore, but OpenBSD still best what
> world have to offer
> 
> so do you agree with my logic? at least give me that
> at least tell me is this how things stand FD-wise/limit-wise/whatever, you
> probably know the best out of all I e-mailed with
> 
> no attempt yet,in future I hope,it's not a I am too lazy reason
> 
> On Tue, January 30, 2024 3:58 pm, Theo de Raadt wrote:
> > beecdadd...@danwin1210.de wrote:
> >
> >> I know system shares all resources including FDs
> >> as far as I know there's what kernel/OS needs and is using and the rest of
> >> users including but not limited to staff and daemon users/programs like
> >> i2pd all I was wondering is the limit or amount of FDs and other resources
> >> the rest of users of daemon can use in my head is a total amount which
> >> apparently is unknown (I have been told why, but how can anyone work with
> >> that? it's like relying on someone mentally unstable) which is then 
> >> devided,
> >> kernel/OS gets all that it needs, users and daemons get the rest which IS
> >> DIVIDED (in my head) until there is no more to
> >> divide/give away/share am I close?
> >>
> >> okay maybe not make all available resources to 1 program is not how it
> >> works but why not if that's the only programs that's running? I do not
> >> understand if it's even possible to do what I'm asking or questioning, I am
> >> not a OS dev because of reasons, but I like discussing such because I like
> >> OS-dev
> >>
> >>
> >> and just because what I ask isn't how it works doesn't mean it's bad? it
> >> could mean
> >
> > You've been provided with all the source code.
> >
> >
> > Where is your attempt to change things?
> >
> >
> >
> 
> 



Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread Bruce Jagid
If you actually thought you knew what you were talking about, you wouldn’t
feel the need to insert “I’m not an OS Dev” after everything you say

On Tue, Jan 30, 2024 at 11:05 AM  wrote:

> oh, Theo, if I were to start changing thing to the perfect OS
> security-wise,
> it wouldn't even look like OpenBSD code anymore, but OpenBSD still best
> what
> world have to offer
>
> so do you agree with my logic? at least give me that
> at least tell me is this how things stand FD-wise/limit-wise/whatever, you
> probably know the best out of all I e-mailed with
>
> no attempt yet,in future I hope,it's not a I am too lazy reason
>
> On Tue, January 30, 2024 3:58 pm, Theo de Raadt wrote:
> > beecdadd...@danwin1210.de wrote:
> >
> >> I know system shares all resources including FDs
> >> as far as I know there's what kernel/OS needs and is using and the rest
> of
> >> users including but not limited to staff and daemon users/programs like
> >> i2pd all I was wondering is the limit or amount of FDs and other
> resources
> >> the rest of users of daemon can use in my head is a total amount which
> >> apparently is unknown (I have been told why, but how can anyone work
> with
> >> that? it's like relying on someone mentally unstable) which is then
> devided,
> >> kernel/OS gets all that it needs, users and daemons get the rest which
> IS
> >> DIVIDED (in my head) until there is no more to
> >> divide/give away/share am I close?
> >>
> >> okay maybe not make all available resources to 1 program is not how it
> >> works but why not if that's the only programs that's running? I do not
> >> understand if it's even possible to do what I'm asking or questioning,
> I am
> >> not a OS dev because of reasons, but I like discussing such because I
> like
> >> OS-dev
> >>
> >>
> >> and just because what I ask isn't how it works doesn't mean it's bad? it
> >> could mean
> >
> > You've been provided with all the source code.
> >
> >
> > Where is your attempt to change things?
> >
> >
> >
>
>
>


Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread beecdaddict
I don't know. I told you, all I worked with is what you guys told me, what
made sense, and what I could find online.
I did not read the code because I am not an OS dev and don't have as much
time as I would like, so this is the best I could do. Theo didn't tell me
whether I was wrong or right; he told me to make changes to the source code,
and that he is out... does that mean I am right, or that I should just read
the source code?

Maybe I don't always make sense, but neither do you guys.
I am as friendly as I can be; I said what I tried and didn't try, and who I
am and who I am not.

- best regards, I hope we can make friendships regardless of our differences
and knowledge; enemies are easy to make

On Tue, January 30, 2024 4:08 pm, Bruce Jagid wrote:
> If you actually thought you knew what you were talking about, you wouldn’t
> feel the need to insert “I’m not an OS Dev” after everything you say
>
> On Tue, Jan 30, 2024 at 11:05 AM  wrote:
>
>
>> oh, Theo, if I were to start changing thing to the perfect OS security-wise,
>>  it wouldn't even look like OpenBSD code anymore, but OpenBSD still best
>> what world have to offer
>>
>> so do you agree with my logic? at least give me that at least tell me is
>> this how things stand FD-wise/limit-wise/whatever, you probably know the
>> best out of all I e-mailed with
>>
>> no attempt yet,in future I hope,it's not a I am too lazy reason
>>
>> On Tue, January 30, 2024 3:58 pm, Theo de Raadt wrote:
>>
>>> beecdadd...@danwin1210.de wrote:
>>>
 I know system shares all resources including FDs
 as far as I know there's what kernel/OS needs and is using and the rest
>> of
 users including but not limited to staff and daemon users/programs like
  i2pd all I was wondering is the limit or amount of FDs and other
>> resources
 the rest of users of daemon can use in my head is a total amount which
 apparently is unknown (I have been told why, but how can anyone work
>> with
 that? it's like relying on someone mentally unstable) which is then
>> devided,
 kernel/OS gets all that it needs, users and daemons get the rest which
>> IS
>>
 DIVIDED (in my head) until there is no more to
 divide/give away/share am I close?

 okay maybe not make all available resources to 1 program is not how it
 works but why not if that's the only programs that's running? I do not
 understand if it's even possible to do what I'm asking or questioning,
>> I am
>>
 not a OS dev because of reasons, but I like discussing such because I
>> like
 OS-dev



 and just because what I ask isn't how it works doesn't mean it's bad?
 it could mean
>>>
>>> You've been provided with all the source code.
>>>
>>>
>>>
>>> Where is your attempt to change things?
>>>
>>>
>>>
>>>
>>
>>
>>
>




Re: (changed subject) Re: net/i2pd: FD talk and limits and ISP routers too weak maybe

2024-01-30 Thread Raul Miller
Probably worth mentioning here, since it's apparently not obvious enough:

Changing everything all at once can never be progress - and
"discussions" with that aim are noise, at best (wholesale destruction
if attempted).

-- 
Raul


On Tue, Jan 30, 2024 at 11:09 AM Bruce Jagid  wrote:
>
> If you actually thought you knew what you were talking about, you wouldn’t
> feel the need to insert “I’m not an OS Dev” after everything you say
>
> On Tue, Jan 30, 2024 at 11:05 AM  wrote:
>
> > oh, Theo, if I were to start changing thing to the perfect OS
> > security-wise,
> > it wouldn't even look like OpenBSD code anymore, but OpenBSD still best
> > what
> > world have to offer
> >
> > so do you agree with my logic? at least give me that
> > at least tell me is this how things stand FD-wise/limit-wise/whatever, you
> > probably know the best out of all I e-mailed with
> >
> > no attempt yet,in future I hope,it's not a I am too lazy reason
> >
> > On Tue, January 30, 2024 3:58 pm, Theo de Raadt wrote:
> > > beecdadd...@danwin1210.de wrote:
> > >
> > >> I know system shares all resources including FDs
> > >> as far as I know there's what kernel/OS needs and is using and the rest
> > of
> > >> users including but not limited to staff and daemon users/programs like
> > >> i2pd all I was wondering is the limit or amount of FDs and other
> > resources
> > >> the rest of users of daemon can use in my head is a total amount which
> > >> apparently is unknown (I have been told why, but how can anyone work
> > with
> > >> that? it's like relying on someone mentally unstable) which is then
> > devided,
> > >> kernel/OS gets all that it needs, users and daemons get the rest which
> > IS
> > >> DIVIDED (in my head) until there is no more to
> > >> divide/give away/share am I close?
> > >>
> > >> okay maybe not make all available resources to 1 program is not how it
> > >> works but why not if that's the only programs that's running? I do not
> > >> understand if it's even possible to do what I'm asking or questioning,
> > I am
> > >> not a OS dev because of reasons, but I like discussing such because I
> > like
> > >> OS-dev
> > >>
> > >>
> > >> and just because what I ask isn't how it works doesn't mean it's bad? it
> > >> could mean
> > >
> > > You've been provided with all the source code.
> > >
> > >
> > > Where is your attempt to change things?
> > >
> > >
> > >
> >
> >
> >



Re: NEW: games/cromagrally

2024-01-30 Thread Omar Polo
On 2024/01/30 10:26:05 -0500, Thomas Frohwein  wrote:
> On Tue, Jan 30, 2024 at 01:46:49AM -0600, izder456 wrote:
> > 
> > Hey ports@ w//ckies,
> > 
> > If it wasn't clear enough already, I love these games. Given that (in
> > theory) OpenBSD/macppc has 3D-Acceleration on the r128(4) driver, it
> > would be wonderful to run this on an era-accurate PPC iMac.
> > 
> > TL;DR:
> > I want to import my port of CroMagRally, which is yet another Pangea
> > Software title originally for the PPC macs. I think it's been three
> > I've submitted now... :)
> > 
> > the 3.0.0 GH_RELASE has a bug with byteswapping terrain textures, so i
> > just pointed this port against the latest commit hash. unsure if I can
> > still refer to this as "3.0.0", thoughts?
> > 
> > As normal, I did some patchwork to allow the binary to be ran from
> > anywhere so core files can be properly dumped again. (referencing
> > Omar's patch of Nanosaur2)
> > 
> > Attached is the port, OK to import?
> > 
> > -- 
> > 
> > -iz
> 
> TLDR:
> Thanks, looks generally good, builds and runs. Now supertuxcart has
> some competition. Attached port with small modifications, ok thfr@.
> 
> Longer reply... Regarding the versioning:
> 
> See packages-specs(7) for guidance on picking a version. There isn't a
> 100% established way when there are upstream improvements without a new
> release. The one aspect that seems certain is to not ignore the last
> (or next) release version number. After that, there are the following
> options up for debate from what I have seen and what packages-specs(2)
> offers:
> 
> 1. Add patch-level to version number (3.0.0pl0).
> 
> 2. Add REVISION (3.0.0p0).
> 
> 3. Treat it as a precursor to the next release (e.g. 3.0.1alpha0).
> 
> The risk with 1 and 3 is that it could collide with upstream's
> numbering of future versions. Option 2 goes a bit against the grain
> that REVISION is usually for when the port is changed (change in
> build options etc.).
> 
> I am personally favor of option 1, but open to hear if there are
> arguments for a different default approach to this common situation. I
> have updated cromagrally accordingly and attached it.
> 
> I replaced your Makefile alignment with tabs as this is most commonly
> used in ports in my experience (VARIABLE=value).

ok op@ with NO_TEST removed (it is only needed when `make test' would
fail due to the absence of a regress suite; in this case it just prints
'no tests', so it is fine) and with libsamplerate removed


Thanks,

Omar Polo

--- Makefile.orig   Tue Jan 30 18:25:04 2024
+++ MakefileTue Jan 30 18:25:31 2024
@@ -24,13 +24,9 @@
 
 MODULES =  devel/cmake
 
-BUILD_DEPENDS =audio/libsamplerate
 LIB_DEPENDS =  devel/sdl2
-RUN_DEPENDS =  audio/libsamplerate \
-   devel/desktop-file-utils \
+RUN_DEPENDS =  devel/desktop-file-utils \
x11/gtk+4,-guic
-
-NO_TEST =  Yes
 
 CFLAGS +=  -I${X11BASE}/include
 CXXFLAGS +=-I${X11BASE}/include



Re: [maintainer update] fdupes 2.2.1 -> 2.3.0

2024-01-30 Thread Björn Ketelaars
On Sun 28/01/2024 07:18, Martin Ziemer wrote:
> This patch updates fdupes from 2.2.1 to 2.3.0.
> 
> Tested on amd64.

This update picks up sqlite3 [0]. If this is intended then you need to
add sqlite3 to WANTLIB and databases/sqlite3 to LIB_DEPENDS.
Alternatively you could set CONFIGURE_ARGS+= --without-sqlite.

[0] 
https://github.com/adrianlopezroche/fdupes/commit/ab5ef95e2b2633d0ca1a5ae0b8ac41abde160100



Re: [maintainer update] fdupes 2.2.1 -> 2.3.0

2024-01-30 Thread Stuart Henderson
On 2024/01/30 19:05, Björn Ketelaars wrote:
> On Sun 28/01/2024 07:18, Martin Ziemer wrote:
> > This patch updates fdupes from 2.2.1 to 2.3.0.
> > 
> > Tested on amd64.
> 
> This update picks up sqlite3 [0]. If this is intended then you need to
> add sqlite3 to WANTLIB and databases/sqlite3 to LIB_DEPENDS.
> Alternatively you could set CONFIGURE_ARGS+= --without-sqlite.
> 
> [0] 
> https://github.com/adrianlopezroche/fdupes/commit/ab5ef95e2b2633d0ca1a5ae0b8ac41abde160100

The sqlite cache seems useful to me.



[MAINTAINER UPDATE] www/azorius 0.3.2 -> 0.3.3

2024-01-30 Thread Horia Racoviceanu
Upgrade to v0.3.3
- Unbreak
- Remove modules.inc (not needed leftover)
- Add pkg/MESSAGE (for database format upgrade hint)

changelog

### 0.3.3 Terrific Triplicate

+ Fix 32 bit support.

+ Close database to give the wal file a chance to checkpoint.

+ Reply notif links to comment.

+ Collapse and expand threads.

+ Dedupe posts across groups.
Index: Makefile
===
RCS file: /cvs/ports/www/azorius/Makefile,v
diff -u -p -r1.6 Makefile
--- Makefile3 Jan 2024 14:14:15 -   1.6
+++ Makefile30 Jan 2024 17:53:14 -
@@ -1,9 +1,6 @@
-# "cannot use uint64(dir.Fd()) (value of type uint64) as uint32 value in struct literal" in vendor/humungus.tedunangst.com/r/gonix/kqueue.go
-ONLY_FOR_ARCHS =   ${LP64_ARCHS}
-
 COMMENT =  link aggregator and comment site via ActivityPub
 
-DISTNAME = azorius-0.3.2
+DISTNAME = azorius-0.3.3
 CATEGORIES =   www
 
 HOMEPAGE = https://humungus.tedunangst.com/r/azorius
Index: distinfo
===
RCS file: /cvs/ports/www/azorius/distinfo,v
diff -u -p -r1.3 distinfo
--- distinfo3 Jan 2024 09:13:46 -   1.3
+++ distinfo30 Jan 2024 17:53:14 -
@@ -1,2 +1,2 @@
-SHA256 (azorius-0.3.2.tgz) = PWIO9xLZ3hYGEDnJJ8tzpvVLriKk3GnU+GvXEb82/KI=
-SIZE (azorius-0.3.2.tgz) = 311179
+SHA256 (azorius-0.3.3.tgz) = q2bl2bRbCnPqfG4IYKtyGD3AsKhpOGd8daTKW64kzKw=
+SIZE (azorius-0.3.3.tgz) = 311536
Index: modules.inc
===
RCS file: modules.inc
diff -N modules.inc
Index: pkg/MESSAGE
===
RCS file: pkg/MESSAGE
diff -N pkg/MESSAGE
--- /dev/null   1 Jan 1970 00:00:00 -
+++ pkg/MESSAGE 30 Jan 2024 17:53:14 -
@@ -0,0 +1 @@
+The database has changed since version 0.3.2. See the pkg-readme.
Index: pkg/PLIST
===
RCS file: /cvs/ports/www/azorius/pkg/PLIST,v
diff -u -p -r1.4 PLIST
--- pkg/PLIST   3 Jan 2024 09:13:47 -   1.4
+++ pkg/PLIST   30 Jan 2024 17:53:14 -
@@ -115,6 +115,8 @@ share/examples/azorius/views/reply.html
 @sample ${LOCALSTATEDIR}/azorius/views/reply.html
 share/examples/azorius/views/report.html
 @sample ${LOCALSTATEDIR}/azorius/views/report.html
+share/examples/azorius/views/script.js
+@sample ${LOCALSTATEDIR}/azorius/views/script.js
 share/examples/azorius/views/searchhelp.html
 @sample ${LOCALSTATEDIR}/azorius/views/searchhelp.html
 share/examples/azorius/views/style.css
Index: pkg/README
===
RCS file: /cvs/ports/www/azorius/pkg/README,v
diff -u -p -r1.2 README
--- pkg/README  3 Sep 2023 05:49:40 -   1.2
+++ pkg/README  30 Jan 2024 17:53:14 -
@@ -34,7 +34,7 @@ Azorius at https://azorius.example.com
 Database Upgrade
 
 
-If you are upgrading from a version before 0.2.0, you will need to upgrade
+If you are upgrading from a version before 0.3.2, you will need to upgrade
 the database format:
 
 Stop the old azorius process.


Re: [maintainer update] fdupes 2.2.1 -> 2.3.0

2024-01-30 Thread Martin Ziemer
Am Tue, Jan 30, 2024 at 06:07:49PM + schrieb Stuart Henderson:
> On 2024/01/30 19:05, Björn Ketelaars wrote:
> > On Sun 28/01/2024 07:18, Martin Ziemer wrote:
> > > This patch updates fdupes from 2.2.1 to 2.3.0.
> > > 
> > > Tested on amd64.
> > 
> > This update picks up sqlite3 [0]. If this is intended then you need to
> > add sqlite3 to WANTLIB and databases/sqlite3 to LIB_DEPENDS.
> > Alternatively you could set CONFIGURE_ARGS+= --without-sqlite.
> > 
> > [0] 
> > https://github.com/adrianlopezroche/fdupes/commit/ab5ef95e2b2633d0ca1a5ae0b8ac41abde160100
> 
> The sqlite cache seems useful to me.
I agree.

The diff below has the two changes from Björn Ketelaars included.

Index: Makefile
===
RCS file: /cvs/ports/sysutils/fdupes/Makefile,v
retrieving revision 1.17
diff -u -p -r1.17 Makefile
--- Makefile27 Sep 2023 17:16:25 -  1.17
+++ Makefile30 Jan 2024 19:01:28 -
@@ -1,6 +1,6 @@
 COMMENT=   identify or delete duplicate files
 
-V= 2.2.1
+V= 2.3.0
 DISTNAME=  fdupes-$V
 CATEGORIES=sysutils
 
@@ -12,11 +12,12 @@ MAINTAINER =Martin Ziemer 
Index: distinfo
===
RCS file: /cvs/ports/sysutils/fdupes/distinfo,v
retrieving revision 1.7
diff -u -p -r1.7 distinfo
--- distinfo9 Sep 2022 12:11:19 -   1.7
+++ distinfo30 Jan 2024 19:01:28 -
@@ -1,2 +1,2 @@
-SHA256 (fdupes-2.2.1.tar.gz) = hGu3nKPwFXhWqpPtULSSF/62jhs1ImGTtrxXi+DFaY0=
-SIZE (fdupes-2.2.1.tar.gz) = 144719
+SHA256 (fdupes-2.3.0.tar.gz) = YXDWSn5WXuMUzKTdJaEj5gqh4/67EeVweL67nB2n4Bk=
+SIZE (fdupes-2.3.0.tar.gz) = 154700



UPDATE: net/nextcloudclient-3.11.1

2024-01-30 Thread Adriano Barbosa
Hi.
Update for net/nextcloudclient v3.11.1
Changelog:
https://github.com/nextcloud/desktop/releases/v3.11.1
https://github.com/nextcloud/desktop/releases/v3.11.0

Thank you!
--
Adriano


Index: Makefile
===
RCS file: /cvs/ports/net/nextcloudclient/Makefile,v
retrieving revision 1.57
diff -u -p -r1.57 Makefile
--- Makefile9 Dec 2023 15:39:07 -   1.57
+++ Makefile30 Jan 2024 19:46:01 -
@@ -2,7 +2,7 @@ USE_WXNEEDED =  Yes
 
 COMMENT =  desktop sync client for Nextcloud
 
-V =3.10.2
+V =3.11.1
 DISTNAME = nextcloudclient-${V}
 
 GH_ACCOUNT =   nextcloud
@@ -13,8 +13,8 @@ CATEGORIES =  net
 
 HOMEPAGE = https://nextcloud.com
 
-SHARED_LIBS +=  nextcloudsync 15.0  # 3.10.2
-SHARED_LIBS +=  nextcloud_csync   7.0   # 3.10.2
+SHARED_LIBS +=  nextcloudsync 16.0  # 3.11.1
+SHARED_LIBS +=  nextcloud_csync   8.0   # 3.11.1
 SHARED_LIBS +=  nextcloudsync_vfs_suffix  2.0   # 3.10.2
 
 MAINTAINER =   Adriano Barbosa 
Index: distinfo
===
RCS file: /cvs/ports/net/nextcloudclient/distinfo,v
retrieving revision 1.46
diff -u -p -r1.46 distinfo
--- distinfo9 Dec 2023 15:39:07 -   1.46
+++ distinfo30 Jan 2024 19:46:01 -
@@ -1,2 +1,2 @@
-SHA256 (nextcloudclient-3.10.2.tar.gz) = 6BmYAZf6UFmaQo4oFegwEE0OfHJFcGWvoTV+bSZ/w8A=
-SIZE (nextcloudclient-3.10.2.tar.gz) = 13523354
+SHA256 (nextcloudclient-3.11.1.tar.gz) = n2CmcH0x6CNXeA3/ERaO3ojR+FlIGiLy0EyCfmVvP7o=
+SIZE (nextcloudclient-3.11.1.tar.gz) = 13598448
Index: patches/patch-CMakeLists_txt
===
RCS file: /cvs/ports/net/nextcloudclient/patches/patch-CMakeLists_txt,v
retrieving revision 1.12
diff -u -p -r1.12 patch-CMakeLists_txt
--- patches/patch-CMakeLists_txt15 Jun 2023 07:33:44 -  1.12
+++ patches/patch-CMakeLists_txt30 Jan 2024 19:46:01 -
@@ -10,7 +10,7 @@ Index: CMakeLists.txt
  
  include(ECMCoverageOption)
  
-@@ -293,4 +293,4 @@ elseif(BUILD_CLIENT)
+@@ -300,4 +300,4 @@ elseif(BUILD_CLIENT)
  configure_file(sync-exclude.lst bin/sync-exclude.lst COPYONLY)
  endif()
  



Re: NEW: games/cromagrally

2024-01-30 Thread Thomas Frohwein
On Tue, Jan 30, 2024 at 06:34:52PM +0100, Omar Polo wrote:

[...]

> > I replaced your Makefile alignment with tabs as this is most commonly
> > used in ports in my experience (VARIABLE=value).
> 
> ok op@ with NO_TEST removed (it is needed for when `make test' would
> fail due to the absence of a regress suite, in this case it just prints
> 'no tests', so it is fine) and with libsamplerate removed
> 
> 
> Thanks,
> 
> Omar Polo
> 
> --- Makefile.orig Tue Jan 30 18:25:04 2024
> +++ Makefile  Tue Jan 30 18:25:31 2024
> @@ -24,13 +24,9 @@
>  
>  MODULES =devel/cmake
>  
> -BUILD_DEPENDS =  audio/libsamplerate
>  LIB_DEPENDS =devel/sdl2
> -RUN_DEPENDS =audio/libsamplerate \
> - devel/desktop-file-utils \
> +RUN_DEPENDS =devel/desktop-file-utils \
>   x11/gtk+4,-guic
> -
> -NO_TEST =Yes
>  
>  CFLAGS +=-I${X11BASE}/include
>  CXXFLAGS +=  -I${X11BASE}/include
> 

I committed it with those changes, thanks!



Re: [MAINTAINER UPDATE] www/azorius 0.3.2 -> 0.3.3

2024-01-30 Thread Horia Racoviceanu
- Clean some nonsense, thank you Josh Rickmar

On 1/30/24, Horia Racoviceanu  wrote:
> Upgrade to v0.3.3
> - Unbreak
> - Remove modules.inc (not needed leftover)
> - Add pkg/MESSAGE (for database format upgrade hint)
>
> changelog
>
> ### 0.3.3 Terrific Triplicate
>
> + Fix 32 bit support.
>
> + Close database to give the wal file a chance to checkpoint.
>
> + Reply notif links to comment.
>
> + Collapse and expand threads.
>
> + Dedupe posts across groups.
>
Index: Makefile
===
RCS file: /cvs/ports/www/azorius/Makefile,v
diff -u -p -r1.6 Makefile
--- Makefile3 Jan 2024 14:14:15 -   1.6
+++ Makefile30 Jan 2024 20:29:53 -
@@ -1,9 +1,6 @@
-# "cannot use uint64(dir.Fd()) (value of type uint64) as uint32 value in struct literal" in vendor/humungus.tedunangst.com/r/gonix/kqueue.go
-ONLY_FOR_ARCHS =   ${LP64_ARCHS}
-
 COMMENT =  link aggregator and comment site via ActivityPub
 
-DISTNAME = azorius-0.3.2
+DISTNAME = azorius-0.3.3
 CATEGORIES =   www
 
 HOMEPAGE = https://humungus.tedunangst.com/r/azorius
@@ -31,12 +28,9 @@ EXAMPLESDIR =${PREFIX}/share/examples/
 post-install:
${INSTALL_MAN} ${MODGO_WORKSPACE}/src/${ALL_TARGET}/docs/azorius.8 \
${PREFIX}/man/man8
-.for p in 1 7 8
-   rm ${MODGO_WORKSPACE}/src/${ALL_TARGET}/docs/*.${p}
-.endfor
${INSTALL_DATA_DIR} ${DOCDIR}
${INSTALL_DATA} \
-   ${MODGO_WORKSPACE}/src/${ALL_TARGET}/{LICENSE,README,docs/*} \
+   ${MODGO_WORKSPACE}/src/${ALL_TARGET}/{LICENSE,README,docs/*.html} \
${DOCDIR}/
${INSTALL_DATA_DIR} ${EXAMPLESDIR}/views
${INSTALL_DATA} ${MODGO_WORKSPACE}/src/${ALL_TARGET}/views/* \
Index: distinfo
===
RCS file: /cvs/ports/www/azorius/distinfo,v
diff -u -p -r1.3 distinfo
--- distinfo3 Jan 2024 09:13:46 -   1.3
+++ distinfo30 Jan 2024 20:29:53 -
@@ -1,2 +1,2 @@
-SHA256 (azorius-0.3.2.tgz) = PWIO9xLZ3hYGEDnJJ8tzpvVLriKk3GnU+GvXEb82/KI=
-SIZE (azorius-0.3.2.tgz) = 311179
+SHA256 (azorius-0.3.3.tgz) = q2bl2bRbCnPqfG4IYKtyGD3AsKhpOGd8daTKW64kzKw=
+SIZE (azorius-0.3.3.tgz) = 311536
Index: modules.inc
===
RCS file: modules.inc
diff -N modules.inc
Index: pkg/MESSAGE
===
RCS file: pkg/MESSAGE
diff -N pkg/MESSAGE
--- /dev/null   1 Jan 1970 00:00:00 -
+++ pkg/MESSAGE 30 Jan 2024 20:29:53 -
@@ -0,0 +1 @@
+The database has changed since version 0.3.2. See the pkg-readme.
Index: pkg/PLIST
===
RCS file: /cvs/ports/www/azorius/pkg/PLIST,v
diff -u -p -r1.4 PLIST
--- pkg/PLIST   3 Jan 2024 09:13:47 -   1.4
+++ pkg/PLIST   30 Jan 2024 20:29:53 -
@@ -115,6 +115,8 @@ share/examples/azorius/views/reply.html
 @sample ${LOCALSTATEDIR}/azorius/views/reply.html
 share/examples/azorius/views/report.html
 @sample ${LOCALSTATEDIR}/azorius/views/report.html
+share/examples/azorius/views/script.js
+@sample ${LOCALSTATEDIR}/azorius/views/script.js
 share/examples/azorius/views/searchhelp.html
 @sample ${LOCALSTATEDIR}/azorius/views/searchhelp.html
 share/examples/azorius/views/style.css
Index: pkg/README
===
RCS file: /cvs/ports/www/azorius/pkg/README,v
diff -u -p -r1.2 README
--- pkg/README  3 Sep 2023 05:49:40 -   1.2
+++ pkg/README  30 Jan 2024 20:29:53 -
@@ -34,7 +34,7 @@ Azorius at https://azorius.example.com
 Database Upgrade
 
 
-If you are upgrading from a version before 0.2.0, you will need to upgrade
+If you are upgrading from a version before 0.3.2, you will need to upgrade
 the database format:
 
 Stop the old azorius process.


Using git mirror instead of CVS for working with ports?

2024-01-30 Thread Johannes Thyssen Tishman
Subject says it all. I'm wondering if using the git conversion of the ports 
tree[0] is regarded as a good alternative to CVS for working with ports. Are 
the conversion updates frequent enough to not cause any issues? Do any of you 
porters use it instead of CVS? Any issues?

For the record, I've been using CVS just fine without any problems. I just feel 
more comfortable with git.

[0] https://github.com/openbsd/ports

-- 
Johannes Thyssen Tishman
https://www.thyssentishman.com


Re: Using git mirror instead of CVS for working with ports?

2024-01-30 Thread Stuart Henderson
On 2024/01/30 22:58, Johannes Thyssen Tishman wrote:
> Subject says it all. I'm wondering if using the git conversion of the ports 
> tree[0] is regarded as a good alternative to CVS for working with ports. Are 
> the conversion updates frequent enough to not cause any issues? Do any of you 
> porters use it instead of CVS? Any issues?
> 
> For the record, I've been using CVS just fine without any problems. I just 
> feel more comfortable with git.
> 
> [0] https://github.com/openbsd/ports
> 
> -- 
> Johannes Thyssen Tishman
> https://www.thyssentishman.com

They are fairly frequent (currently run hourly, though this may change
if they start taking too long to run), but don't include the most recent
commit (CVS commits are not atomic, and the conversion tool is looking
for a different commit before it will treat the previous one as done)
so at certain times (especially during tree locks for release) you can
be waiting a while for a commit to show up.

Also there are no tools which successfully managed to convert branches
and tags in the OpenBSD CVS repo (we tried everything we could find
at the time when it was set up, everything which handles them had
some problem or other, and the range of software has not really expanded
since) - so the git conversion is limited to dealing with -current only
and there's no way to work with -stable or releases.



Re: Trying to install Apache 2.4 with OpenSSL 1.1 instead of LibreSSL

2024-01-30 Thread Theo Buehler
On Tue, Jan 30, 2024 at 01:30:32PM +0100, Theo Buehler wrote:
> On Fri, Jan 26, 2024 at 02:11:52PM -0800, Tim wrote:
> > I'm trying to troubleshoot an issue where Chrome/Chromium browsers
> > randomly fail to correctly use SSL against my web server.
> 
> This version of a diff from jsing for libssl (it applies with slight
> offsets to 7.4-stable) should fix this issue.
> 
> Could you please try this with an unpached apache-httpd?

I got a positive test report for this off-list.

We will land a version of this fix in libssl in the coming weeks, so
there should be no need to patch apache-httpd at all.
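
For anyone wondering where applications come in: the code path in the diff
ends up calling the ALPN selection callback that a server registers with
SSL_CTX_set_alpn_select_cb(); the bug was in how libssl stored and parsed
the client's protocol list around that callback, not in the callback
contract itself. A minimal sketch of such a callback, using the standard
libssl API and not taken from apache-httpd:

/*
 * Sketch only: prefer "h2", fall back to "http/1.1".  The protocol
 * lists are length-prefixed, exactly as they appear on the wire.
 */
#include <openssl/ssl.h>

static int
alpn_select_cb(SSL *ssl, const unsigned char **out, unsigned char *outlen,
    const unsigned char *in, unsigned int inlen, void *arg)
{
	static const unsigned char prefs[] = "\x02h2\x08http/1.1";
	unsigned char *selected;

	if (SSL_select_next_proto(&selected, outlen, prefs,
	    sizeof(prefs) - 1, in, inlen) != OPENSSL_NPN_NEGOTIATED)
		return SSL_TLSEXT_ERR_NOACK;

	*out = selected;
	return SSL_TLSEXT_ERR_OK;
}

void
enable_alpn(SSL_CTX *ctx)
{
	SSL_CTX_set_alpn_select_cb(ctx, alpn_select_cb, NULL);
}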



Re: Trying to install Apache 2.4 with OpenSSL 1.1 instead of LibreSSL

2024-01-30 Thread TimH
On Tue, 30 Jan 2024 13:30:32 +0100
Theo Buehler  wrote:

> This version of a diff from jsing for libssl (it applies with slight
> offsets to 7.4-stable) should fix this issue.
> 
> Could you please try this with an unpached apache-httpd?
> 
> Index: s3_lib.c
> ===
> RCS file: /cvs/src/lib/libssl/s3_lib.c,v
> diff -u -p -r1.248 s3_lib.c
> --- s3_lib.c  29 Nov 2023 13:39:34 -  1.248
> +++ s3_lib.c  30 Jan 2024 11:34:10 -
> @@ -1594,6 +1594,7 @@ ssl3_free(SSL *s)
>   tls1_transcript_hash_free(s);
>  
>   free(s->s3->alpn_selected);
> + free(s->s3->alpn_wire_data);
>  
>   freezero(s->s3->peer_quic_transport_params,
>   s->s3->peer_quic_transport_params_len);
> @@ -1659,6 +1660,9 @@ ssl3_clear(SSL *s)
>   free(s->s3->alpn_selected);
>   s->s3->alpn_selected = NULL;
>   s->s3->alpn_selected_len = 0;
> + free(s->s3->alpn_wire_data);
> + s->s3->alpn_wire_data = NULL;
> + s->s3->alpn_wire_data_len = 0;
>  
>   freezero(s->s3->peer_quic_transport_params,
>   s->s3->peer_quic_transport_params_len);
> Index: ssl_local.h
> ===
> RCS file: /cvs/src/lib/libssl/ssl_local.h,v
> diff -u -p -r1.12 ssl_local.h
> --- ssl_local.h   29 Dec 2023 12:24:33 -  1.12
> +++ ssl_local.h   30 Jan 2024 11:34:10 -
> @@ -1209,6 +1209,8 @@ typedef struct ssl3_state_st {
>*/
>   uint8_t *alpn_selected;
>   size_t alpn_selected_len;
> + uint8_t *alpn_wire_data;
> + size_t alpn_wire_data_len;
>  
>   /* Contains the QUIC transport params received from our peer. */
>   uint8_t *peer_quic_transport_params;
> Index: ssl_tlsext.c
> ===
> RCS file: /cvs/src/lib/libssl/ssl_tlsext.c,v
> diff -u -p -r1.137 ssl_tlsext.c
> --- ssl_tlsext.c  28 Apr 2023 18:14:59 -  1.137
> +++ ssl_tlsext.c  30 Jan 2024 11:34:10 -
> @@ -86,33 +86,48 @@ tlsext_alpn_check_format(CBS *cbs)
>  }
>  
>  static int
> -tlsext_alpn_server_parse(SSL *s, uint16_t msg_types, CBS *cbs, int *alert)
> +tlsext_alpn_server_parse(SSL *s, uint16_t msg_type, CBS *cbs, int *alert)
>  {
> - CBS alpn, selected_cbs;
> - const unsigned char *selected;
> - unsigned char selected_len;
> - int r;
> + CBS alpn;
>  
>   if (!CBS_get_u16_length_prefixed(cbs, &alpn))
>   return 0;
> -
>   if (!tlsext_alpn_check_format(&alpn))
>   return 0;
> + if (!CBS_stow(&alpn, &s->s3->alpn_wire_data, &s->s3->alpn_wire_data_len))
> + return 0;
> +
> + return 1;
> +}
> +
> +static int
> +tlsext_alpn_server_process(SSL *s, uint16_t msg_type, int *alert)
> +{
> + const unsigned char *selected;
> + unsigned char selected_len;
> + CBS alpn, selected_cbs;
> + int cb_ret;
>  
>   if (s->ctx->alpn_select_cb == NULL)
>   return 1;
>  
> + if (s->s3->alpn_wire_data == NULL) {
> + *alert = SSL_AD_INTERNAL_ERROR;
> + return 0;
> + }
> + CBS_init(&alpn, s->s3->alpn_wire_data, s->s3->alpn_wire_data_len);
> +
>   /*
>* XXX - A few things should be considered here:
>* 1. Ensure that the same protocol is selected on session resumption.
>* 2. Should the callback be called even if no ALPN extension was sent?
>* 3. TLSv1.2 and earlier: ensure that SNI has already been processed.
>*/
> - r = s->ctx->alpn_select_cb(s, &selected, &selected_len,
> + cb_ret = s->ctx->alpn_select_cb(s, &selected, &selected_len,
>   CBS_data(&alpn), CBS_len(&alpn),
>   s->ctx->alpn_select_cb_arg);
>  
> - if (r == SSL_TLSEXT_ERR_OK) {
> + if (cb_ret == SSL_TLSEXT_ERR_OK) {
>   CBS_init(&selected_cbs, selected, selected_len);
>  
>   if (!CBS_stow(&selected_cbs, &s->s3->alpn_selected,
> @@ -125,7 +140,7 @@ tlsext_alpn_server_parse(SSL *s, uint16_
>   }
>  
>   /* On SSL_TLSEXT_ERR_NOACK behave as if no callback was present. */
> - if (r == SSL_TLSEXT_ERR_NOACK)
> + if (cb_ret == SSL_TLSEXT_ERR_NOACK)
>   return 1;
>  
>   *alert = SSL_AD_NO_APPLICATION_PROTOCOL;
> @@ -1972,6 +1987,7 @@ struct tls_extension_funcs {
>   int (*needs)(SSL *s, uint16_t msg_type);
>   int (*build)(SSL *s, uint16_t msg_type, CBB *cbb);
>   int (*parse)(SSL *s, uint16_t msg_type, CBS *cbs, int *alert);
> + int (*process)(SSL *s, uint16_t msg_type, int *alert);
>  };
>  
>  struct tls_extension {
> @@ -2123,6 +2139,7 @@ static const struct tls_extension tls_ex
>   .needs = tlsext_alpn_server_needs,
>   .build = tlsext_alpn_server_build,
>   .parse = tlsext_alpn_server_parse,
> + .process = tlsext_alpn_server_process,
>   },
>   },
>   {
> @@ -2391,6 +2408,14 @@ tlsext_clie

Re: Using git mirror instead of CVS for working with ports?

2024-01-30 Thread Niklas Hallqvist



On 2024-01-31 00:20, Stuart Henderson wrote:

On 2024/01/30 22:58, Johannes Thyssen Tishman wrote:

Subject says it all. I'm wondering if using the git conversion of the ports 
tree[0] is regarded as a good alternative to CVS for working with ports. Are 
the conversion updates frequent enough to not cause any issues? Do any of you 
porters use it instead of CVS? Any issues?

For the record, I've been using CVS just fine without any problems. I just feel 
more comfortable with git.

[0] https://github.com/openbsd/ports

--
Johannes Thyssen Tishman
https://www.thyssentishman.com

They are fairly frequent (currently run hourly, though this may change
if they start taking too long to run), but don't include the most recent
commit (CVS commits are not atomic, and the conversion tool is looking
for a different commit before it will treat the previous one as done)
so at certain times (especially during tree locks for release) you can
be waiting a while for a commit to show up.

Also there are no tools which successfully managed to convert branches
and tags in the OpenBSD CVS repo (we tried everything we could find
at the time when it was set up, everything which handles them had
some problem or other, and the range of software has not really expanded
since) - so the git conversion is limited to dealing with -current only
and there's no way to work with -stable or releases.

I have been manually tagging and branching the stable branches for a
couple of years, as a basis for my personal fork of src, xenocara and
ports.  I also have some scripts trying to carry over the commits made to
the stable branches, but they are not perfect.  I guess I could push the
tags and branches to my GitHub; currently they are only in a private
GitLab.
I won't commit to supporting these branches, but if someone would make
use of them, in the state they are in, I will push them.


/Niklas




Re: net/torsocks: update to 2.4.0

2024-01-30 Thread Klemens Nanni
On Thu, Jan 18, 2024 at 10:57:58PM +, Klemens Nanni wrote:
> Upstream changed sites, release is from may 2022, that one fclose(3) is
> effectively merged, others remain.
> 
> https://gitlab.torproject.org/tpo/core/torsocks/-/releases
> 
> While here, bump AUTO*_VERSION, sync DESCR and capitalise COMMENT.
> 
> No shared lib, WANTLIB or PLIST change.
> 100% amd64 tests pass, works for me.
> Feedback? OK?

Ping.

Index: Makefile
===
RCS file: /cvs/ports/net/torsocks/Makefile,v
diff -u -p -r1.18 Makefile
--- Makefile11 Nov 2023 11:51:22 -  1.18
+++ Makefile18 Jan 2024 22:47:05 -
@@ -1,13 +1,14 @@
-COMMENT =  socks proxy for use with tor
+COMMENT =  SOCKS proxy for use with Tor
 
-DISTNAME = torsocks-2.3.0
-REVISION = 0
+V =2.4.0
+DISTNAME = torsocks-v${V}
+PKGNAME =  ${DISTNAME:S/v//}
 
 SHARED_LIBS =  torsocks2.0 # 0.0
 
 CATEGORIES =   net
 
-HOMEPAGE = https://gitweb.torproject.org/torsocks.git/
+HOMEPAGE = https://gitlab.torproject.org/tpo/core/torsocks
 
 MAINTAINER =   Pascal Stumpf 
 
@@ -16,10 +17,10 @@ PERMIT_PACKAGE =Yes
 
 WANTLIB += pthread
 
-SITES= https://gitweb.torproject.org/torsocks.git/snapshot/
+SITES=	https://gitlab.torproject.org/tpo/core/torsocks/-/archive/v${V}/
 
-AUTOCONF_VERSION=  2.69
-AUTOMAKE_VERSION=  1.15
+AUTOCONF_VERSION=  2.71
+AUTOMAKE_VERSION=  1.16
 
 USE_LIBTOOL =  gnu
 
@@ -31,6 +32,5 @@ CONFIGURE_STYLE = autoreconf autoheader
 
 pre-configure:
${SUBST_CMD} ${WRKSRC}/src/bin/torsocks.in
-
 
 .include 
Index: distinfo
===
RCS file: /cvs/ports/net/torsocks/distinfo,v
diff -u -p -r1.4 distinfo
--- distinfo1 Jun 2022 12:35:11 -   1.4
+++ distinfo18 Jan 2024 22:35:05 -
@@ -1,2 +1,2 @@
-SHA256 (torsocks-2.3.0.tar.gz) = gXwUPoqdIX9BoiOoUTnGyijhuZVWxUf820xy28Fwtsk=
-SIZE (torsocks-2.3.0.tar.gz) = 118033
+SHA256 (torsocks-v2.4.0.tar.gz) = wBtHHYntqfPI3LhaRI6AZmktBwf5/4sqx+ZlpgIpG4c=
+SIZE (torsocks-v2.4.0.tar.gz) = 118991
Index: patches/patch-src_common_compat_h
===
RCS file: /cvs/ports/net/torsocks/patches/patch-src_common_compat_h,v
diff -u -p -r1.3 patch-src_common_compat_h
--- patches/patch-src_common_compat_h   1 Jun 2022 12:35:11 -   1.3
+++ patches/patch-src_common_compat_h   18 Jan 2024 22:38:40 -
@@ -20,7 +20,7 @@ Index: src/common/compat.h
  
  #if defined(__linux__)
  #include 
-@@ -196,7 +197,8 @@ void tsocks_once(tsocks_once_t *o, void (*init_routine
+@@ -204,7 +205,8 @@ void tsocks_once(tsocks_once_t *o, void (*init_routine
  
  #endif /* __linux__ */
  
@@ -30,7 +30,7 @@ Index: src/common/compat.h
  
  #include 
  #include 
-@@ -215,7 +217,7 @@ void tsocks_once(tsocks_once_t *o, void (*init_routine
+@@ -223,7 +225,7 @@ void tsocks_once(tsocks_once_t *o, void (*init_routine
  #define TSOCKS_NR_LISTENSYS_listen
  #define TSOCKS_NR_RECVMSG   SYS_recvmsg
  
Index: patches/patch-src_lib_fclose_c
===
RCS file: patches/patch-src_lib_fclose_c
diff -N patches/patch-src_lib_fclose_c
--- patches/patch-src_lib_fclose_c  11 Mar 2022 19:47:53 -  1.2
+++ /dev/null   1 Jan 1970 00:00:00 -
@@ -1,19 +0,0 @@
-Unbreak funopen usage with libtorsocks - always call the libc fclose
-function, even when fd < 0.
-
-Index: src/lib/fclose.c
 src/lib/fclose.c.orig
-+++ src/lib/fclose.c
-@@ -64,11 +64,9 @@ LIBC_FCLOSE_RET_TYPE tsocks_fclose(LIBC_FCLOSE_SIG)
-   connection_put_ref(conn);
-   }
- 
-+error:
-   /* Return the original libc fclose. */
-   return tsocks_libc_fclose(fp);
--
--error:
--  return -1;
- }
- 
- /*
Index: patches/patch-src_lib_syscall_c
===
RCS file: /cvs/ports/net/torsocks/patches/patch-src_lib_syscall_c,v
diff -u -p -r1.4 patch-src_lib_syscall_c
--- patches/patch-src_lib_syscall_c 11 Nov 2023 11:51:22 -  1.4
+++ patches/patch-src_lib_syscall_c 18 Jan 2024 22:38:40 -
@@ -3,7 +3,7 @@ Don't attempt to intercept syscall(2) if
 Index: src/lib/syscall.c
 --- src/lib/syscall.c.orig
 +++ src/lib/syscall.c
-@@ -442,6 +442,7 @@ static LIBC_SYSCALL_RET_TYPE handle_memfd_create(va_li
+@@ -483,6 +483,7 @@ static LIBC_SYSCALL_RET_TYPE handle_passthrough(long n
  /*
   * Torsocks call for syscall(2)
   */
@@ -11,7 +11,7 @@ Index: src/lib/syscall.c
  LIBC_SYSCALL_RET_TYPE tsocks_syscall(long int number, va_list args)
  {
LIBC_SYSCALL_RET_TYPE ret;
-@@ -594,7 +595,9 @@ LIBC_SYSCALL_DECL
+@@ -636,7 +637,9 @@ LIBC_SYSCALL_DECL
  
return ret;
  }
@@ -21,7 +21,7 @@ Index: src/lib/syscall.c
  /* Only used for *BSD systems

UPDATE vaultwarden-1.30.2 and vaultwarden-web-2024.1.2

2024-01-30 Thread Bjorn Ketelaars
The diff below updates security/vaultwarden to 1.30.2 and
www/vaultwarden-web to 2024.1.2. An overview of the changes can be found at
https://github.com/dani-garcia/vaultwarden/releases/tag/1.30.2 and
https://github.com/dani-garcia/bw_web_builds/releases/tag/v2024.1.2.

Changes to port of vaultwarden:
- No need any more for using a vendored tarball of rocket as upstream
  moved to rocket-0.5.0
- Switched to DIST_TUPLE
- Reordered Makefile a bit (use Makefile.template as guide)

Changes to port of vaultwarden-web:
- Added HOMEPAGE
- Reordered Makefile a bit

Run tested on amd64.

Comments/OK?


diff --git security/vaultwarden/Makefile security/vaultwarden/Makefile
index 486e3e0c616..27520b15b2d 100644
--- security/vaultwarden/Makefile
+++ security/vaultwarden/Makefile
@@ -5,10 +5,7 @@ BROKEN-i386 =  raw-cpuid-10.2.0/src/lib.rs:80:37 "could not find `arch` in `self
 
 COMMENT =  unofficial bitwarden compatible server
 
-GH_ACCOUNT =   dani-garcia
-GH_PROJECT =   vaultwarden
-GH_TAGNAME =   1.30.1
-REVISION = 0
+DIST_TUPLE =   github dani-garcia vaultwarden 1.30.2 .
 
 CATEGORIES =   security
 
@@ -17,26 +14,22 @@ MAINTAINER =Aisha Tammy 
 # AGPLv3 only
 PERMIT_PACKAGE =   Yes
 
-FLAVORS =  mysql postgresql
-FLAVOR ?=
-
-WANTLIB += ${MODCARGO_WANTLIB} crypto m ssl
-
-SITES.deps =   https://files.bsd.ac/openbsd-distfiles/
-DISTFILES.deps +=  vaultwarden-deps-${GH_TAGNAME}.tgz
+WANTLIB =  ${MODCARGO_WANTLIB} crypto m ssl
 
 MODULES =  devel/cargo
+MODCARGO_CRATES_KEEP = libsqlite3-sys
+MODCARGO_FEATURES =sqlite
 
-CONFIGURE_STYLE =  cargo
+BUILD_DEPENDS =security/rust-ring
+RUN_DEPENDS =  www/vaultwarden-web
 
 SEPARATE_BUILD =   Yes
 
-BUILD_DEPENDS =security/rust-ring
+CONFIGURE_STYLE =  cargo
 
-RUN_DEPENDS =  www/vaultwarden-web
+FLAVORS =  mysql postgresql
+FLAVOR ?=
 
-MODCARGO_CRATES_KEEP +=libsqlite3-sys
-MODCARGO_FEATURES =sqlite
 .if ${FLAVOR:Mmysql}
 MODCARGO_FEATURES +=   mysql
 WANTLIB += mariadb
@@ -48,13 +41,6 @@ WANTLIB +=   pq
 LIB_DEPENDS += databases/postgresql
 .endif
 
-SUBST_VARS +=  WRKSRC
-
-post-configure:
-   mv ${WRKDIR}/myvendordir ${WRKSRC}
-   ${SUBST_CMD} -m 644 -c ${FILESDIR}/config.vendor ${WRKDIR}/config.vendor
-   cat ${WRKDIR}/config.vendor >> ${WRKDIR}/.cargo/config
-
 do-install:
${INSTALL_DATA_DIR} ${PREFIX}/share/doc/vaultwarden
${INSTALL_DATA} ${WRKSRC}/.env.template ${PREFIX}/share/doc/vaultwarden
diff --git security/vaultwarden/crates.inc security/vaultwarden/crates.inc
index f1bca2a4e4f..7718075f069 100644
--- security/vaultwarden/crates.inc
+++ security/vaultwarden/crates.inc
@@ -1,77 +1,78 @@
 MODCARGO_CRATES += addr2line   0.21.0  # Apache-2.0 OR MIT
 MODCARGO_CRATES += adler   1.0.2   # 0BSD OR MIT OR Apache-2.0
-MODCARGO_CRATES += ahash   0.8.6   # MIT OR Apache-2.0
+MODCARGO_CRATES += ahash   0.8.7   # MIT OR Apache-2.0
 MODCARGO_CRATES += aho-corasick1.1.2   # Unlicense OR MIT
 MODCARGO_CRATES += alloc-no-stdlib 2.0.4   # BSD-3-Clause
 MODCARGO_CRATES += alloc-stdlib0.2.2   # BSD-3-Clause
 MODCARGO_CRATES += allocator-api2  0.2.16  # MIT OR Apache-2.0
 MODCARGO_CRATES += android-tzdata  0.1.1   # MIT OR Apache-2.0
 MODCARGO_CRATES += android_system_properties   0.1.5   # MIT/Apache-2.0
-MODCARGO_CRATES += argon2  0.5.2   # MIT OR Apache-2.0
+MODCARGO_CRATES += argon2  0.5.3   # MIT OR Apache-2.0
 MODCARGO_CRATES += async-channel   1.9.0   # Apache-2.0 OR MIT
-MODCARGO_CRATES += async-channel   2.1.0   # Apache-2.0 OR MIT
-MODCARGO_CRATES += async-compression   0.4.4   # MIT OR Apache-2.0
-MODCARGO_CRATES += async-executor  1.6.0   # Apache-2.0 OR MIT
-MODCARGO_CRATES += async-global-executor   2.3.1   # Apache-2.0 OR MIT
+MODCARGO_CRATES += async-channel   2.1.1   # Apache-2.0 OR MIT
+MODCARGO_CRATES += async-compression   0.4.6   # MIT OR Apache-2.0
+MODCARGO_CRATES += async-executor  1.8.0   # Apache-2.0 OR MIT
+MODCARGO_CRATES += async-global-executor   2.4.1   # Apache-2.0 OR MIT
 MODCARGO_CRATES += async-io1.13.0  # Apache-2.0 OR MIT
-MODCARGO_CRATES += async-io2.2.0   # Apache-2.0 OR MIT
+MODCARGO_CRATES += async-io2.3.0   # Apache-2.0 OR MIT
 MODCARGO_CRATES += async-lock  2.8.0   # Apache-2.0 OR MIT
-MODCARGO_CRATES += async-lock  3.1.0   # Apache-2.0 OR MIT
+MODCARGO_CRATES += async-lock  3.3.0   # Apache-2.0 OR MIT
 MODCARGO_CRATES += async-process   1.8.1   # Apache-2.0 OR MIT
 MODCARGO_CRATES += async-signal0.2.5   # Apache-2.0 OR MIT
 MODCARGO_CRATES += async-std   1.12.0  # Apache-2.0/MIT
 MODCARGO_CRATES += async-stream0.3.5   # MIT
 MODCARGO_CRATES += async-str