Re: [gentoo-user] Re: python 3.12 update

2024-06-05 Thread Rich Freeman
On Wed, Jun 5, 2024 at 3:33 PM Wols Lists  wrote:
>
> On 05/06/2024 20:15, Meowie Gamer wrote:
> > I must've taken too long to join the mailing list because I missed the 
> > first part of whatever's happening here. How did this turn from python 3.12 
> > to a conversation about USE?
> >
>
> Because they're using USE or whatever to force packages to stay on 3.11,
> because they won't build with 3.12.
>
> So it's not necessarily about USE, but about the tactics people use to
> make emerge work the way they want. It might be MASK, or any of the
> other package... directories.

It is a bit more than that.  Even if you never touch
/etc/portage/package.use, you effectively will have USE flags set on
packages that involve python simply due to the profile (and the change
therein).  That is why the news item has you put -* at the start of
the setting if you're overriding it - otherwise you'll just be
appending to the profile setting.

While these do end up setting USE flags, you should still set the USE
expand variables as directed in the news item and documentation, and
not manipulate the USE flags directly.
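For reference, a minimal sketch of that kind of override (the file name
under /etc/portage/package.use/ is my choice, not prescribed anywhere):

```
# /etc/portage/package.use/python-targets
# The leading -* clears the profile's defaults instead of appending to them
*/* PYTHON_TARGETS: -* python3_11 python3_12
*/* PYTHON_SINGLE_TARGET: -* python3_11
```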

-- 
Rich



Re: [gentoo-user] Re: python 3.12 update

2024-06-05 Thread Rich Freeman
On Wed, Jun 5, 2024 at 6:41 AM Dr Rainer Woitok  wrote:
>
> And no,  I don't buy  the point of view  that it's the responsibility of
> the developers  when my personal set  of USE flags suddenly  causes con-
> flicts.
>

Agree, but keep in mind that having personal sets of USE flags is
basically a necessity in Gentoo, and not a choice, because Gentoo does
not dynamically manage USE dependencies.

On any distro, including Gentoo, if you manually installed the
dependencies of every package you use explicitly, you'd probably end
up with a hopeless mess of dependency conflicts within a few months.
Everybody who has used Linux for any period of time understands that
this is a bad practice, and you should explicitly install the packages
you directly use, and let the package manager dynamically manage their
dependencies.  Then when a dependency becomes obsolete and is
replaced, the updates will happen automatically without the user
needing to worry about them.

Gentoo does this with package/version dependencies, but not with USE
flag dependencies.  If a package requires some other package to be
built with a particular USE flag, then portage will output an error,
and you will need to put an entry in package.use to manually specify
the USE configuration, and that will resolve the conflict. Then 5
years later you'll get some mysterious error due to that USE flag
setting being obsolete, and you have no idea why you even had it in
the first place unless you took meticulous notes, because the setting
doesn't reflect your own preference, but the requirements of some
package, which might be so far down the dependency tree that you don't
even know what it does.

Python version settings are just fancy ways of expressing USE
dependencies.  Unless you develop things in Python, you probably don't
care what versions of Python you have installed, and it is reasonable
to expect that the package manager or distro just takes care of this
for you.  Gentoo does not.

Implementing dynamic USE management would take somebody a fair bit of
effort, and for all I know it would make every emerge you run take an
hour to recompute the dependency tree.  The ability to configure USE
flags, along with the ability to dynamically decide the version of
dynamically linked packages, makes Gentoo have a dependency tree that
is MUCH larger than basically any other distro out there.  This is why
portage takes so long to decide what to install compared to basically
everything else.

It is this clash of expectations vs reality that causes much of the
frustration, and this is understandable.  That said, improving the
situation is a lot of work, whether this is in the form of a lot of
coordination to deal with the lack of dynamic USE dependencies, or the
effort to implement this feature in the package manager (which has
been discussed here and there for a decade or so).  You can't fault
volunteers for not working on things that they aren't interested in
working on.  That said, I do appreciate the frustration people have,
personally.  This is just one of those things you need to understand
about Gentoo, and then weigh the pros vs the cons when you choose what
distro to use.  If you want a distro that will just accept daily
updates with zero fuss, that isn't Gentoo.

-- 
Rich



Re: [gentoo-user] python 3.12 update

2024-06-05 Thread Rich Freeman
On Tue, Jun 4, 2024 at 11:41 PM Marco Minutoli  wrote:
>
> What I believe is in the realm of reasonable is to ask to be notified when 
> important (as in popular) packages that are currently missing support get 
> updated with a stable version supporting python 3.12 so that we can take 
> action on our side.

Well, anything system-critical (as in you won't be able to boot or use
the package manager/etc) would have been updated before the change.
Actually, that stuff would have been checked before it was even
possible to install 3.12 most likely.

Popular is another matter.  Part of the issue is that the people who
maintain python do not necessarily maintain or use the stuff that is
popular.  The stuff that is popular might not even be maintained at
all in some cases.

Probably wouldn't have hurt to file bugs before everything started
breaking, but I get that even automated bug filing takes work, and it
isn't like I was volunteering to do it...

-- 
Rich



Re: [gentoo-user] python 3.12 update

2024-06-05 Thread Rich Freeman
On Wed, Jun 5, 2024 at 4:27 AM hitachi303
 wrote:
>
> the news item lists in the safe upgrade section those lines:
>  */* PYTHON_TARGETS: -* python3_11 python3_12
>  */* PYTHON_SINGLE_TARGET: -* python3_11
>
> This is mixing python 3.11 and 3.12 on the same system. This should be
> working as well, shouldn't it?

Not only should this work, but it should be less likely to cause
problems than the default upgrade process.  The news item says as
much.  This is the configuration I'm currently running (except I did
the next step and am setting python_single_target to 3.12).  When I get
back from travelling in a week I will probably see what happens when I
remove my package.use overrides.

> Shouldn't portage be able to deal with
> this, at least until python 3.11 gets deprecated?

This should mostly work fine until 3.11 is removed.  The issue is with
python_single_target packages.  Any particular package can only use
one version of this, and if it has deps that use this setting those
must match.

If you're having issues, I would suggest either just waiting a few
weeks and trying again, or following the gradual migration
instructions.  When switching the single target to 3.12 if you have
some packages that don't work, then just override those packages in
package.use.

For example, I had this line in package.use:
<=sys-devel/distcc-3.4-r3 PYTHON_SINGLE_TARGET: -* python3_11

distcc hadn't been updated when I did the migration a few weeks ago.
So I told that package to use 3.11, but you'll note I put a version in
the atom so that when it got updated it would automatically try the
new version.  A few packages had errors when they updated and in those
cases I just updated the version in the override so that they would
continue to work.  In this case distcc -r4 added 3.12 support, and so
now that package is on 3.12.

This isn't a desktop-oriented system and I don't have a ton of python
packages on it, but I had 5 overrides at one point.  Now I'm down to
zero.  I haven't dropped 3.11 yet - I'm about to travel so that will
wait until I get back.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Rich Freeman
On Tue, Jun 4, 2024 at 2:34 PM Dale  wrote:
>
> https://www.ebay.com/itm/116051832382
>
> How many SATA drives that allow?  It says 8 ports.  I think each port is
> 4 drives.  8 x 4 = 32.  Am I right?

It has two SAS ports, each of which breaks out to 4 SATA drives, for 8
total.  Marketing...

I can't vouch for compatibility - I'd do some google research, but if
you don't find bad news this would fit your purposes.  A 4x slot
should handle 8 HDDs just fine.

-- 
Rich



Re: [gentoo-user] python 3.12 update

2024-06-04 Thread Rich Freeman
On Tue, Jun 4, 2024 at 11:40 AM Joost Roeleveld  wrote:
>
> Those steps do not just work.
> The news item actually specifically states that portage will "just do
> the update" if you have not set any python_targets stuff.
> I have those not set, but it fails on ALL my systems.
>
> There are also still over 280 packages that are STILL not supporting
> python 3.12 according to bugs.gentoo.org.

That probably isn't even all of them.  On Sat somebody pointed out bug
933383 which isn't even linked to the tracker.  I'm sure it would have
been, but I probably closed it before it got that far.

I don't want to be too picky as the core language maintainers have A
LOT of work to do, but part of the challenge is that it isn't very
obvious to a package maintainer (or anybody else) what packages
actually have problems.  It is obvious if you actually look at a
package, but many maintainers have a large number of packages and some
are co-maintained, and knowing that one of your 100 packages wasn't
updated yet isn't easy.  I'm not sure if somebody has a tool for
finding these packages in the tree, but some kind of reporting on that
would help.  Such reports need to be indexed to maintainers as well,
because getting a list of 1000 package names that need updating
doesn't help a maintainer to notice that they happen to maintain the
one on line 387.

--
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Rich Freeman
On Tue, Jun 4, 2024 at 8:03 AM Dale  wrote:
>
> I wish I could get one that comes with cables
> that go from the card to SATA drives as a set.  That way I know that
> part is right.

I'm pretty sure there are only two standards - one for external, and
one for internal.  Maybe there are different connector sizes.  Just
look up the datasheet on the HBA and it will tell you what connectors
it has.  Then google for the breakout to SATA cable.

> At this point, it seems any LSI card will likely work.

I wouldn't make that claim.  They're definitely one of the biggest
brands, but some of their fancier products can be less oriented towards
what you're doing.  Some might require weird disk formats even if
you're exposing individual disks, making the disks unreadable with
other controllers.

> I still wish that mobo had more PCIe slots. It just gives more options
> down the road.  The speed and memory will be nice tho.  When I finally
> hit the order or pay button.  :/

Unfortunately, an abundance of DIMM and PCIe slots has become the
hallmark of workstation and server motherboards.  I think this is due
to the adoption of the SOC model on the CPUs.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Rich Freeman
On Tue, Jun 4, 2024 at 7:56 AM Dale  wrote:
>
> Rich Freeman wrote:
> > On Tue, Jun 4, 2024 at 2:44 AM Dale  wrote:
> >
> >> The little SATA controllers I currently use tend to only need PCIe x1.
> >> That is slower but at least it works.
> > The LSI cards will work just as well in a 1x.  That is, assuming you
> > only plug as many drives into it as you do one of those SATA HBAs and
> > don't expect miraculous bandwidth out of them.
>
> So the LSI controllers are a option, just a little slower.  Cool.

No, they're just as fast if not faster than the SATA controllers.

The LSI HBA in a 1x slot will be slower than the LSI HBA in a 4x slot,
which will be slower than an LSI HBA in an 8x slot.  That is, assuming
you're bandwidth-limited.

The SATA controller you're used to can't put any more data through a
1x slot than an LSI HBA.  The reason the HBA has an 8x slot is so that
it can move larger amounts of data.  The SATA board is marketed to
consumers who care more about cramming more drives into their system
than whether those drives operate at full performance.  They also tend
to have fewer ports - if you're only adding 2 SATA ports, then a 1x
slot isn't a huge bottleneck.

The bottom line is that if you are using an older v2 HBA, then it can
transfer 500MB/s per PCIe lane.  If you're only transferring 100MB/s
then it doesn't matter how many lanes you have.  If you're
transferring 4GB/s then the number of lanes becomes critical if you
need to sustain that bandwidth.  If you're using the HBA to cram 4
5400 RPM HDDs into your system that is a very different demand
compared to adding 16 Micron enterprise SSDs.

> The
> main reason I wanted to go with the SAS to SATA controller, number of
> drives I can connect.  Keep in mind, I need to get to almost 20 drives
> but with only one card.

Yup, getting 8/16 SATA disks on one HBA isn't a problem.  Depending on
how fast those drives are and your data access patterns, the number of
PCIe lanes might or might not be an issue.  Just add up your total
data transfer rate and look up the PCIe specs.
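As a rough sketch of that arithmetic (the drive speeds below are
illustrative assumptions; the 500MB/s-per-lane figure for PCIe v2 is
from earlier in this thread):

```python
# Rough sanity check: will the PCIe link, not the drives, be the bottleneck?
def link_limited(num_drives, drive_mbps, lanes, per_lane_mbps=500):
    """True if total drive throughput exceeds the link bandwidth.

    per_lane_mbps=500 corresponds roughly to PCIe v2.
    """
    return num_drives * drive_mbps > lanes * per_lane_mbps

print(link_limited(8, 150, 4))   # 8 HDDs at ~150MB/s on a v2 4x link: False
print(link_limited(16, 550, 4))  # 16 SATA SSDs at ~550MB/s: True
```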

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Rich Freeman
On Tue, Jun 4, 2024 at 6:19 AM Dale  wrote:
>
> You ever seen one of these?
>
> https://www.ebay.com/itm/274651287952
>
> Is that taking one of those fast USB ports and converting it to a PCIe
> x1 slot?  Am I seeing that right?
>

No, it is taking a PCIe 1x slot, using a USB cable, and converting it
into 4 PCIe 16x slots (likely wired at 1x).  I doubt that it is using
standard USB though it might be.  Thunderbolt can be used to do PCIe
over a USB-C form factor I think, and there is some chance this device
is making use of that.

I've used a similar device to connect an 8x LSI HBA to a 1x PCIe slot
on an rk3399 ARM SBC (I needed the riser to provide additional power).
In that case it was just one slot to one slot, so it didn't need a
switch.  To run 4 slots off of a single 1x would require a PCIe
switch, which that board no doubt uses.

PCIe is not unlike ethernet - there is quite a bit you can do with it.
The main problem is that there just isn't much consumer-oriented
hardware floating around - lots of specialized solutions with chips
embedded in them, which are harder to adapt to other problems.
Another issue is that cases/etc rarely provide convenient mounting
solutions when you start using this stuff.

Take the motherboard you're using.  That has PCIe v5 in one of the M.2
slots, and PCIe v4 in most of the rest of the interfaces.  There is no
reason you couldn't run all that into a switch and have a TON of 8x
PCIe v2 slots to use with older HBAs and such.  That one M.2 v5 slot
could run 4 8x PCIe v2 slots just on its own.  You just need the right
adapter, and those are hard to find from reputable companies.  There
is all manner of stuff on AliExpress and so on, but now you're mining
forums to figure out if they actually work before you buy them.
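A quick sketch of the lane arithmetic behind that claim (the per-lane
numbers are approximate round figures; each PCIe generation roughly
doubles per-lane bandwidth):

```python
# Approximate per-lane bandwidth in GB/s.
V2_LANE = 0.5   # PCIe v2
V5_LANE = 4.0   # PCIe v5 (8x the v2 figure)

m2_v5_slot = 4 * V5_LANE   # a 4-lane M.2 v5 slot: ~16 GB/s
hba_v2_8x = 8 * V2_LANE    # an 8-lane PCIe v2 HBA: ~4 GB/s

# Enough raw bandwidth for four such HBA slots off one M.2 v5 slot:
print(m2_v5_slot / hba_v2_8x)
```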

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-04 Thread Rich Freeman
On Tue, Jun 4, 2024 at 2:44 AM Dale  wrote:
>
> I did some more digging.  It seems that all the LSI SAS cards I found
> need a PCIe x8 slot.  The only slot available is the one intended for
> video.

The board you linked has 2 4x slots that are physically 16x, so the
card should work fine in those, just at 4x speed.

> I'd rather not
> use it on the new build because I've thought about having another
> monitor added for desktop use so I would need three ports at least.

You actually could put the video card in one of those 4x slots if you
wanted to prioritize IO for the HBA, though I would only do that if
you weren't playing games, and had a lot of SSDs plugged into the HBA.
Crypto miners routinely run GPUs in 1x slots.

> The little SATA controllers I currently use tend to only need PCIe x1.
> That is slower but at least it works.

The LSI cards will work just as well in a 1x.  That is, assuming you
only plug as many drives into it as you do one of those SATA HBAs and
don't expect miraculous bandwidth out of them.

The only gotcha is that on the board you linked the 1x slot doesn't
say it can accommodate cards larger than 1x, so you might need a riser
and someplace to put the card.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Rich Freeman
On Mon, Jun 3, 2024 at 11:17 AM Dale  wrote:
>
> When you say HBA.  Is this what you mean?
>
> https://www.ebay.com/itm/125486868824
>

Yes.  Typically they have mini-SAS interfaces, and you can get a
breakout cable that will attach one of those to 4x SATA ports.

Some things to keep in mind when shopping for HBAs:
1. Check for linux compatibility.  Not every card has great support.
2. Flashing the firmware may require windows, and this may be
necessary to switch a card between RAID mode and IT mode, the latter
being what you almost certainly want, and the former being what most
enterprise admins tend to have them flashed as.  IT mode basically
exposes all the drives that are attached as a bunch of standalone
drives, while RAID mode will just expose a limited number of virtual
interfaces and the card bundles the disks into arrays (and if the card
dies, good luck ever reading those disks again until you reformat
them).
3. Be aware they often use a ton of power.
4. Take note of internal vs external ports.  You can get either.  They
need different cables, and if your disks are inside the case having
the ports on the outside isn't technically a show-stopper but isn't
exactly convenient.
5. Take note of the interface speed and size.  The card you linked is
(I think) an 8x v2 card.  PCIe will auto-negotiate down, so if you
plug that card into your v4 4x slot it will run at v2 4x, which is
2GB/s bandwidth.  That's half of what it is capable of, but probably
not a big issue.  If you want to plug 16 enterprise SSDs into it then
you'll definitely hit the PCIe bottleneck, but if you plug 16 consumer
7200RPM HDDs into it you're only going to hit 2GB/s under fairly ideal
circumstances, and with fewer HDDs you couldn't hit it at all.  If you
pay more you'll get a newer PCIe revision, which means more bandwidth
for a given number of lanes.
6. Check for hardware compatibility too.  Stuff from 1st parties like
Dell/etc might be fussy about wanting to be in a Dell server with
weird firmware interactions with the motherboard.  A 3rd party card
like LSI probably is less of an issue here, but check.
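The negotiation in point 5 can be sketched as follows (per-lane numbers
are approximate, after encoding overhead):

```python
# Effective bandwidth after PCIe auto-negotiation: both ends drop to
# the lower generation and the lower lane count.
PER_LANE_GBPS = {2: 0.5, 3: 1.0, 4: 2.0, 5: 4.0}  # approximate

def negotiated_gbps(card_gen, card_lanes, slot_gen, slot_lanes):
    gen = min(card_gen, slot_gen)
    lanes = min(card_lanes, slot_lanes)
    return PER_LANE_GBPS[gen] * lanes

# The 8x v2 card from the message, plugged into a v4 4x slot:
print(negotiated_gbps(2, 8, 4, 4))  # 2.0 GB/s, as stated above
```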

Honestly, part of why I went the distributed filesystem route (Ceph
these days) is to avoid dealing with this sort of nonsense.  Granted,
now I'm looking to use more NVMe and if you want high capacity NVMe
that tends to mean U.2, and dealing with bifurcation and PCIe
switches, and just a different sort of nonsense.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Rich Freeman
On Mon, Jun 3, 2024 at 7:06 AM Dale  wrote:
>
> I still wish it had more PCIe slots.  I'm considering switching to a SAS
> card and then with cables change that to SATA.  I think I can get one
> card and have most if not all of the drives the Fractal case will hold
> hooked to it.
> ...
> Honestly, I wouldn't mind one m.2 for the OS.  I could just as well use
> a SATA SSD to do that tho.

First, I will point out that an M.2 gen5 (or even gen4) NVMe will
perform VASTLY better than a SATA SSD.  That 990 Evo (which isn't an
enterprise drive) boasts 800k IOPS.  The fastest SATA SSD I could find
tops out at around 90k IOPS.  There is simply no comparison between
SATA and NVMe, though whether that IOPS performance matters to you is
another matter.

As far as IO goes, your motherboard has the following PCIe interfaces:
16x PCIe v4
2 4x PCIe v4
1 M.2 v5
2 M.2 v4

4x should be enough for an HBA if you're running hard drives, so with
bifurcation, risers, and so on, you could get 9 HBAs into that system,
with 8-16 SATA ports on each, and with hard drives I imagine they'd
perform as well as hard drives possibly can.  It would be a mess of
adapters and cables, but you can certainly do it if you have the room
in the case for that mess.

Those 3 PCIe slots are all 16x physically, so you could easily get 3
HBAs into the system without even having to resort to risers.  That's
already 24-48 SATA ports.

Sure, it isn't quite as convenient as the IO options of the past, but
it isn't like you can't get PCIe in a system that has all those lanes.
The motherboard has already switched most of the v5 down to v4 in
exchange for more lanes, which honestly is a better option for you
anyway, as pretty much only gaming GPUs can use v5 effectively.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-03 Thread Rich Freeman
On Mon, Jun 3, 2024 at 5:12 AM Dale  wrote:
>
> Graphics Capabilities
>
> Graphics Model  AMD Radeon™ Graphics
> Graphics Core Count 2
> Graphics Frequency 2200 MHz
>
> That said, I have a little 4 port graphics card I'd like to use anyway.

The CPU you picked indeed has integrated graphics.  I didn't check,
but I suspect the integrated graphics are way better in every way than
the little 4-port graphics card you'd prefer.  Unless you really need
those extra outputs, I'd use the integrated graphics, and then you
have a 16x slot you can use for IO.

> The more I think on this, the more I don't like spending this much money
> on a mobo I don't really like at all.  It seems all the mobo makers want
> is flashy crap.  I want a newer and faster machine but with options to
> expand like my current rig.

Ok, what EXACTLY are you looking for, as you didn't really elaborate
on what this board is missing.  It sounds like the ability to
interface more drives?  You have a free 16x slot.  Stick an HBA in it
and you can interface a whole bunch of SATA drives.  With the right
board you could even put NVMe in there.

Any board you buy is going to be expensive.  They went the LGA route
which makes the boards more expensive, and for whatever reason the
prices have all been creeping up.

Most of the IO on consumer CPUs these days tends to be focused on USB3
and maybe a few M.2 drives.  They're very expandable, but not for the
sorts of things you want to use them for.

You might be happier with a server/workstation motherboard, but
prepare to pay a small fortune for those unless you buy used, and the
marketing is a bit cryptic as they tend to be sold through
integrators.

-- 
Rich



Re: [gentoo-user] Mobo, CPU, memory and a m.2 thingy. This work together?

2024-06-02 Thread Rich Freeman
On Sun, Jun 2, 2024 at 9:27 AM Dale  wrote:
>
> I thought of something on the m.2 thing.  I plan to put my OS on it.  I
> usually use tmpfs and compile in memory anyway but do have some set to
> use spinning rust. Once I get 128GB installed, I should be able to do
> that with all packages anyway but still, I had a question.  Should I put
> the portage work directory on a spinning rust drive to save wear and
> tear on the SSD or have they got to the point now that doesn't matter
> anymore?  I know all the SSD devices have improved a lot since the first
> ones came out.

So, as with most things the answer is it depends.

The drive you're using is a consumer drive, rated for 600TB in writes.
Now, small random writes will probably wear it out faster, and large
sequential ones will probably wear it out slower, but that's basically
what you're working with.  That's about 0.3 DWPD, which isn't a great
endurance level.

Often these drives can be over-provisioned to significantly increase
their life - if you're using discard/trim properly and keep maybe
1/3rd of the drive empty you'll get a lot more life out of it.  In
fact, the difference between different models of drives with different
write endurances is often nothing more than the drive having more
internal storage than advertised and doing the same thing behind the
scenes.

Obviously temp file use is going to eat into your endurance, but it
will GREATLY improve your build performance as well, so you should
probably do the math on just how much writing we're talking about.  If
a package has 20GB in temp files, you have to build it 30k times to
wear out your disk by the official numbers.
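The endurance arithmetic above, as a sketch - note that the 1TB
capacity and 5-year window are my assumptions for illustration, not
figures from the thread:

```python
# Back-of-envelope SSD endurance math.
endurance_tbw = 600          # rated total writes in TB (from the message)
drive_tb = 1                 # assumed drive capacity
days = 5 * 365               # assumed warranty window

dwpd = endurance_tbw / drive_tb / days
print(round(dwpd, 2))        # ~0.33 drive writes per day

temp_gb = 20                 # temp files for one large package build
builds = endurance_tbw * 1000 / temp_gb
print(builds)                # 30000.0 builds to hit the rated writes
```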

Of course, proper use of discard/trim requires setting your config
files correctly, and it might reduce performance on consumer drives.
When you buy enterprise NVMe you're paying for a couple of things that
are relevant to you:
1. A higher endurance rating.
2. Firmware that doesn't do dumb things when you trim/discard properly
3. Power loss protection (you didn't bring this topic up, but flash
storage is not kind to power loss and often performance is sacrificed
to make writes safer with internal journals).
4. Sustained write performance.  If you do sustained writes to a
consumer drive you'll see the write speed fall off a cliff after a
time, and this won't happen on an enterprise drive - the cache/etc is
optimized for sustained write loads.

Of course enterprise flash is pretty expensive unless you buy it used,
and obviously if you do that try to get something whose health is
known at time of purchase.

-- 
Rich



[gentoo-commits] repo/gentoo:master commit in: net-misc/s3cmd/

2024-06-01 Thread Richard Freeman
commit: a11cf9af5177ae3a70eabb154d114ff7f2844eb2
Author: Hank Leininger  korelogic  com>
AuthorDate: Sat Jun  1 22:12:59 2024 +
Commit: Richard Freeman  gentoo  org>
CommitDate: Sat Jun  1 22:36:33 2024 +
URL:https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=a11cf9af

net-misc/s3cmd: enable py3.12

Closes: https://bugs.gentoo.org/933383
Closes: https://github.com/gentoo/gentoo/pull/36961
Signed-off-by: Hank Leininger  korelogic.com>
Signed-off-by: Richard Freeman  gentoo.org>

 net-misc/s3cmd/s3cmd-2.4.0.ebuild | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net-misc/s3cmd/s3cmd-2.4.0.ebuild 
b/net-misc/s3cmd/s3cmd-2.4.0.ebuild
index 4dcbaecc0089..a3e56368e7f9 100644
--- a/net-misc/s3cmd/s3cmd-2.4.0.ebuild
+++ b/net-misc/s3cmd/s3cmd-2.4.0.ebuild
@@ -3,7 +3,7 @@
 
 EAPI=8
 
-PYTHON_COMPAT=( python3_{10..11} )
+PYTHON_COMPAT=( python3_{10..12} )
 PYTHON_REQ_USE="xml(+)"
 DISTUTILS_USE_PEP517=setuptools
 



[Bug 2067307] Re: Sound skips when mouse is moved

2024-05-30 Thread Freeman Montgomery
I'm a bit of a novice, and it would have taken a bit more advanced
terminal skills than I have, but I did find out it was using the older
of two (0,31 vs 0,35) lowlatency kernels. I did a reboot, and in
advanced settings switched to the newest kernel, and that seems to have
fixed the issue. Thank you for the suggestion.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2067307

Title:
  Sound skips when mouse is moved

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2067307/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-29 Thread Rob Freeman
On Wed, May 29, 2024 at 9:37 AM Matt Mahoney  wrote:
>
> On Tue, May 28, 2024 at 7:46 AM Rob Freeman  
> wrote:
>
> > Now, let's try to get some more detail. How do compressors handle the
> > case where you get {A,C} on the basis of AB, CB, but you don't get,
> > say AX, CX? Which is to say, the rules contradict.
>
> Compressors handle contradictory predictions by averaging them

That's what I thought.

> > "Halle (1959, 1962) and especially Chomsky (1964) subjected
> > Bloomfieldian phonemics to a devastating critique."
> >
> > Generative Phonology
> > Michael Kenstowicz
> > http://lingphil.mit.edu/papers/kenstowicz/generative_phonology.pdf
> >
> > But really it's totally ignored. Machine learning does not address
> > this to my knowledge. I'd welcome references to anyone talking about
> > its relevance for machine learning.
>
> Phonology is mostly irrelevant to text prediction.

The point was it invalidated the method of learning linguistic
structure by distributional analysis at any level. If your rules for
phonemes contradict, what doesn't contradict?

Which is a pity. Because we still don't have a clue what governs
language structure. The best we've been able to come up with is crude
hacks like dragging a chunk of important context behind like a ball
and chain in LSTM, or multiplexing pre-guessed "tokens" together in a
big matrix, with "self-attention".

Anyway, your disinterest doesn't invalidate my claim that this result,
pointing to contradiction produced by distributional analysis learning
procedures for natural language, is totally ignored by current machine
learning, which implicitly or otherwise uses those distributional
analysis learning procedures.
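For the curious, here is a toy illustration (mine, not from any of the
papers cited) of how distributional analysis groups items by shared
context, and how a single extra observation makes the grouping
contradict itself:

```python
# Toy distributional analysis: items that occur with the same set of
# right-contexts are grouped into one substitution class.
from collections import defaultdict

def substitution_classes(pairs):
    """Group left items by the exact set of right items they occur with."""
    contexts = defaultdict(set)
    for left, right in pairs:
        contexts[left].add(right)
    classes = defaultdict(set)
    for left, ctx in contexts.items():
        classes[frozenset(ctx)].add(left)
    # Only classes with more than one member generalize anything.
    return [sorted(cls) for cls in classes.values() if len(cls) > 1]

# A and C share context B, so they are classed together...
print(substitution_classes([("A", "B"), ("C", "B")]))            # [['A', 'C']]
# ...but observing AX without CX splits them: the class dissolves.
print(substitution_classes([("A", "B"), ("C", "B"), ("A", "X")]))  # []
```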

> Language evolved to be learnable on neural networks faster than our
> brains evolved to learn language. So understanding our algorithm is
> important.
>
> Hutter prize entrants have to prebuild a lot of the model because
> computation is severely constrained (50 hours in a single thread with
> 10 GB memory). That includes a prebuilt dictionary. The human brain
> takes 20 years to learn language on a 10 petaflop, 1 petabyte neural
> network. So we are asking quite a bit.

Neural networks may have finally gained close to human performance at
prediction. A problem where you can cover a multitude of sins with raw
memory. Something at which computers trivially exceed humans by as
many orders of magnitude as you can stack server farms. You can just
remember each contradiction including the context which selects it. No
superior algorithm required, and certainly none in evidence. (Chinese
makes similar trade-offs, swapping internal mnemonic sound structure
within tokens, with prodigious memory requirements for the tokens
themselves.) Comparing 10 GB with 1 petabyte seems disingenuous. I
strongly doubt any human can recall as much as 10GB of text. (All of
Wikipedia currently ~22GB compressed, without media? Even to read it
all is estimated at 47 years, including 8hrs sleep a night
https://www.reddit.com/r/theydidthemath/comments/80fi3w/self_how_long_would_it_take_to_read_all_of/.
So forget 20 years to learn it, it would take 20 years to read all the
memory you give Prize entrants.) But I would argue our prediction
algorithms totally fail to do any sort of job with language structure.
Whereas you say babies start to structure language before they can
walk? (Walking being something else computers still have problems
with.) And far from stopping at word segmentation, babies go on to
build quite complex structures, including new ones all the time.

Current models do nothing with structure, not at human "data years"
8-10 months, not 77 years (680k hours of audio to train "Whisper" ~77
years? 
https://www.thealgorithmicbridge.com/p/8-features-make-openais-whisper-the.
Perhaps some phoneme structure might help there...) The only structure
is "tokens". I don't even think current algorithms do max entropy to
find words. They just start out with "tokens". Guessed at
pre-training. Here's Karpathy and LeCun talking about it:

Yann LeCun
@ylecun·Feb 21
Text tokenization is almost as much of an abomination for text as it
is for images. Not mentioning video.
...
Replying to @karpathy
We will see that a lot of weird behaviors and problems of LLMs
actually trace back to tokenization. We'll go through a number of
these issues, discuss why tokenization is at fault, and why someone
out there ideally finds a way to delete this stage entirely.

https://x.com/ylecun/status/1760315812345176343

By the way, talking about words. That's another thing which seems to
have contradictory structure in humans, e.g. native Chinese speakers
agree on what constitutes a "word" less than 70% of the time:

"Sproat et. al. (1996) give empirical results showing that native
speaker

[Bug 2067307] Re: Sound skips when mouse is moved

2024-05-28 Thread Freeman Montgomery
the mouse is wireless. The audio is HDMI to the monitor.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2067307

Title:
  Sound skips when mouse is moved

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/2067307/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-28 Thread Rob Freeman
Matt,

Nice break down. You've actually worked with language models, which
makes it easier to bring it back to concrete examples.

On Tue, May 28, 2024 at 2:36 AM Matt Mahoney  wrote:
>
> ...For grammar, AB predicts AB (n-grams),

Yes, this looks like what we call "words". Repeated structure. No
novelty. And nothing internal we can equate to "meaning" either. Only
meaning by association.

> and AB, CB, CD, predicts AD (learning the rule
> {A,C}{B,D}).

This is the interesting one. It actually kind of creates new meaning.
You can think of "meaning" as a way of grouping things which makes
good predictions. And, indeed, those gap filler sets {A,C} do pull
together sets of words that we intuitively associate with similar
meaning. These are also the sets that the HNet paper identifies as
having "meaning" independent of any fixed pattern. A pattern can be
new, and so long as it makes similar predictions {B,D}, for any set
{B,D...}, {X,Y...}..., we can think of it as having "meaning", based
on the fact that arranging the world that way, makes those shared
predictions. (Even moving beyond language, you can say the atoms of a
ball, share the meaning of a "ball", based on the fact they fly
through the air together, and bounce off walls together. It's a way of
defining what it "means" to be a "ball".)

Now, let's try to get some more detail. How do compressors handle the
case where you get {A,C} on the basis of AB, CB, but you don't get,
say AX, CX? Which is to say, the rules contradict. Sometimes A and C
are the same, but not other times. You want to trigger the "rule" so
you can capture the symmetries. But you can't make a fixed "rule",
saying {A,C}, because the symmetries only apply to particular sub-sets
of contexts.

You get a lot of this in natural language. There are many such shared
context symmetries in language, but they contradict. Or they're
"entangled". You get one by ordering contexts one way, and another by
ordering contexts another way, but you can't get both at once, because
you can't order contexts both ways at once.
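To make the contradiction concrete, here is a toy sketch (the data is invented for illustration). Grouping words by shared context abduces the category {A, C} from context B, and the substitution rule then predicts the unseen pair A-D; but the same logic would wrongly predict C-X, because A also occurs in context X where C never does. The two orderings of the data yield categories that can't both hold at once.

```python
from collections import defaultdict

# Toy (word, context) observations, chosen so the abduced category
# {A, C} holds for context B but breaks for context X.
pairs = [("A", "B"), ("C", "B"), ("C", "D"), ("A", "X")]

by_context = defaultdict(set)
for word, ctx in pairs:
    by_context[ctx].add(word)

# Category abduced from context B is {A, C}; from context X it is {A} only.
print(dict(by_context))

# Substitution rule: A and C share context B, and C occurs with D,
# so predict the unseen pair A-D. Applying it in the other direction
# would predict C-X, which never occurs: the "rule" contradicts itself.
contexts_of_C = {c for w, c in pairs if w == "C"}
contexts_of_A = {c for w, c in pairs if w == "A"}
predicted_for_A = contexts_of_C - contexts_of_A
print(predicted_for_A)  # {'D'}
```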

I later learned these contradictions were observed even at the level
of phonemes, and this was crucial to Chomsky's argument that grammar
could not be "learned", back in the '50s. That this essentially broke
consensus in the field of linguistics. Which remains in squabbling
sub-fields over this result, to this day. That's why theoretical
linguistics contributes essentially nothing to contemporary machine
learning. Has anyone ever wondered? Why don't linguists tell us how to
build language models? Even the Chomsky hierarchy cited by James'
DeepMind paper from the "learning" point of view is essentially a
misapprehension of what Chomsky concluded (that observable grammar
contradicts, so formal grammar can't be learned.)

A reference available on the Web I've been able to find is this one:

"Halle (1959, 1962) and especially Chomsky (1964) subjected
Bloomfieldian phonemics to a devastating critique."

Generative Phonology
Michael Kenstowicz
http://lingphil.mit.edu/papers/kenstowicz/generative_phonology.pdf

But really it's totally ignored. Machine learning does not address
this to my knowledge. I'd welcome references to anyone talking about
its relevance for machine learning.

I'm sure all the compression algorithms submitted to the Hutter Prize
ignore this. Maybe I'm wrong. Have any addressed it? They probably
just regress to some optimal compromise, and don't think about it too
much.

If we choose not to ignore this, what do we do? Well, we might try to
"learn" all these contradictions, indexed on context. I think this is
what LLMs do. By accident. That was the big jump, right, "attention",
to index context. Then they just enumerate vast numbers of (an
essentially infinite number of?) predictive patterns in one enormous
training time. That's why they get so large.

No-one knows, or wonders, why neural nets work for this, and symbols
don't, viz. the topic post of this thread. But this will be the
reason.

In practice LLMs learn predictive patterns, and index them on context
using "attention", and it turns out there are a lot of those different
predictive "embeddings", indexed on context. There is no theory.
Everything is a surprise. But if you go back in the literature, there
are these results about contradictions to suggest why it might be so.
And the conclusion is still either Chomsky's one, that language can't
be learned, consistent rules exist, but must be innate. Or, what
Chomsky didn't consider, that complexity of novel patterns defying
abstraction, might be part of the solution. It was before the
discovery of chaos when Chomsky was looking at this, so perhaps it's
not fair to blame him for not considering it.

But then it becomes a complexity issue. Just how many unique orderings
of contexts with useful predictive symmetries are there? Are you ever
at an end of finding different orderings of contexts, which specify
some useful new predictive symmetry or other? The example of

[Bug 2067307] [NEW] Sound skips when mouse is moved

2024-05-27 Thread Freeman Montgomery
Public bug reported:

When you are playing a video or audio stream in a browser, if you move
your mouse, it makes audio playback "choppy" and requires not moving the
mouse for several seconds for it to go away. It makes doing anything
else while watching YouTube or listening to a radio station stream
nearly impossible. I have to use my tablet and Bluetooth while using
Google for other websites. It started after I upgraded from 23.10 to
24.04.

ProblemType: Bug
DistroRelease: Ubuntu 24.04
Package: xorg 1:7.7+23ubuntu3
ProcVersionSignature: Ubuntu 6.8.0-31.31.1-lowlatency 6.8.1
Uname: Linux 6.8.0-31-lowlatency x86_64
ApportVersion: 2.28.1-0ubuntu3
Architecture: amd64
BootLog: Error: [Errno 13] Permission denied: '/var/log/boot.log'
CasperMD5CheckMismatches: 
./pool/main/l/linux-firmware/linux-firmware_20230919.git3672ccab-0ubuntu2.1_amd64.deb
CasperMD5CheckResult: fail
CompositorRunning: None
CurrentDesktop: ubuntu:GNOME
Date: Mon May 27 13:25:33 2024
DistUpgraded: 2024-05-22 10:28:03,976 DEBUG migrateToDeb822Sources()
DistroCodename: noble
DistroVariant: ubuntu
ExtraDebuggingInterest: Yes, if not too technical
GraphicsCard:
 Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics 
Controller [8086:0412] (rev 06) (prog-if 00 [VGA controller])
   Subsystem: Lenovo Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics 
Controller [17aa:309e]
InstallationDate: Installed on 2024-02-03 (115 days ago)
InstallationMedia: Ubuntu 23.10.1 "Mantic Minotaur" - Release amd64 (20231016.1)
MachineType: LENOVO 10AY001RUS
ProcEnviron:
 LANG=en_US.UTF-8
 PATH=(custom, no user)
 SHELL=/bin/bash
 XDG_RUNTIME_DIR=
ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-6.8.0-31-lowlatency 
root=UUID=89aa50b5-b941-4598-8994-fab7aad177cd ro quiet splash threadirqs 
vt.handoff=7
SourcePackage: xorg
Symptom: display
UpgradeStatus: Upgraded to noble on 2024-05-22 (5 days ago)
dmi.bios.date: 10/24/2014
dmi.bios.release: 1.51
dmi.bios.vendor: LENOVO
dmi.bios.version: FHKT51AUS
dmi.board.name: SHARKBAY
dmi.board.vendor: LENOVO
dmi.board.version: 0B98401 WIN
dmi.chassis.type: 3
dmi.chassis.vendor: To Be Filled By O.E.M.
dmi.chassis.version: To Be Filled By O.E.M.
dmi.ec.firmware.release: 1.36
dmi.modalias: 
dmi:bvnLENOVO:bvrFHKT51AUS:bd10/24/2014:br1.51:efr1.36:svnLENOVO:pn10AY001RUS:pvrThinkCentreM73:rvnLENOVO:rnSHARKBAY:rvr0B98401WIN:cvnToBeFilledByO.E.M.:ct3:cvrToBeFilledByO.E.M.:skuLENOVO_MT_10AY:
dmi.product.family: To be filled by O.E.M.
dmi.product.name: 10AY001RUS
dmi.product.sku: LENOVO_MT_10AY
dmi.product.version: ThinkCentre M73
dmi.sys.vendor: LENOVO
version.compiz: compiz N/A
version.libdrm2: libdrm2 2.4.120-2build1
version.libgl1-mesa-dri: libgl1-mesa-dri 24.0.5-1ubuntu1
version.libgl1-mesa-glx: libgl1-mesa-glx N/A
version.xserver-xorg-core: xserver-xorg-core 2:21.1.12-1ubuntu1
version.xserver-xorg-input-evdev: xserver-xorg-input-evdev N/A
version.xserver-xorg-video-ati: xserver-xorg-video-ati 1:22.0.0-1build1
version.xserver-xorg-video-intel: xserver-xorg-video-intel 
2:2.99.917+git20210115-1build1
version.xserver-xorg-video-nouveau: xserver-xorg-video-nouveau 1:1.0.17-2build1

** Affects: xorg (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug noble performance ubuntu wayland-session

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2067307

Title:
  Sound skips when mouse is moved

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/xorg/+bug/2067307/+subscriptions



Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-27 Thread Rob Freeman
James,

I think you're saying:

1) Grammatical abstractions may not be real, but they can still be
useful abstractions to parameterize "learning".

2) Even if after that there are "rules of thumb" which actually govern
everything.

Well, you might say why not just learn the "rules of thumb".

But the best counter against the usefulness of the Chomsky hierarchy
for parameterizing machine learning, might be that Chomsky himself
dismissed the idea it might be learned. And his most damaging
argument? That learned categories contradict. "Objects" behave
differently in one context, from how they behave in another context.

I see it a bit like our friend the Road Runner. You can figure out a
physics for him. But sometimes that just goes haywire and contradicts
itself - bodies make holes in rocks, fly high in the sky, or stretch
wide.

All the juice is in these weird "rules of thumb".

Chomsky too failed to find consistent objects. He was supposed to push
past the highly successful learning of phoneme "objects", and find
"objects" for syntax. And he failed. And the most important reason
I've found, was that even for phonemes, learned category contradicted.

That hierarchy stuff, that wasn't supposed to appear in the data. That
could only be in our heads. Innate. Why? Well for one thing, because
the data contradicted. The "learning procedures" of the time generated
contradictory objects. This is a forgotten result. Machine learning is
still ignoring this old result from the '50s. (Fair to say the
DeepMind paper ignores it?) Chomsky insisted these contradictions
meant the "objects" must be innate. The idea cognitive objects might
be new all the time (and particularly the idea they might contradict!)
is completely orthogonal to his hierarchy (well, it might be
compatible with context sensitivity, if you accept that the real juice
is in the mechanism to implement the context sensitivity?)

If categories contradict, that is represented on the Chomsky hierarchy
how? I don't know. How would you represent contradictory categories on
the Chomsky hierarchy? A form of context sensitivity?

Actually, I think, probably, using entangled objects like quantum. Or
relation and variance based objects as in category theory.

I believe Coecke's team has been working on "learning" exactly this:

>From Conceptual Spaces to Quantum Concepts: Formalising and Learning
Structured Conceptual Models
Sean Tull, Razin A. Shaikh, Sara Sabrina Zemljiˇc and Stephen Clark
Quantinuum
https://browse.arxiv.org/pdf/2401.08585

I'm not sure. I think the symbolica.ai people may be working on
something similar: find some level of abstraction which applies even
across varying objects (contradictions?)

For myself, in contrast to Bob Coecke, and the category theory folks,
I think it's pointless, and maybe unduly limiting, to learn this
indeterminate object formalism from data, and then collapse it into
one or other contradictory observable form, each time you observe it.
(Or seek some way you can reason with it even in indeterminate object
formulation, as with the category theory folks?) I think you might as
well collapse observable objects directly from the data.

I believe this collapse "rule of thumb", is the whole game, one shot,
no real "learning" involved.

All the Chomsky hierarchy limitations identified in the DeepMind paper
would disappear too. They are all limitations of not identifying
objects. Context coding hacks like LSTM, or "attention", introduced in
lieu of actual objects, and grammars over those objects, stemming from
the fact grammars of contradictory objects are not "learnable."

On Sun, May 26, 2024 at 11:24 PM James Bowery  wrote:
>
> It's also worth reiterating a point I made before about the confusion between 
> abstract grammar as a prior (heuristic) for grammar induction and the 
> incorporation of so-induced grammars as priors, such as in "physics informed 
> machine learning".
>
> In the case of physics informed machine learning, the language of physics is 
> incorporated into the learning algorithm.  This helps the machine learning 
> algorithm learn things about the physical world without having to re-derive 
> the body of physics knowledge.
>
> Don't confuse the two levels here:
>
> 1) My suspicion that natural language learning may benefit from prioritizing 
> HOPDA as an abstract grammar to learn something about natural languages -- 
> such as their grammars.
>
> 2) My suspicion (supported by "X informed machine learning" exemplified by 
> the aforelinked work) that there may be prior knowledge about natural 
> language more specific than the level of abstract grammar -- such as specific 
> rules of thumb for, say, the English language that may greatly speed training 
> time on English corpora.
>
> On Sun, May 26, 2024 at 9:40 AM James Bowery  wrote:
>>
>> See the recent DeepMind paper "Neural Networks and the Chomsky Hierarchy" 
>> for the sense of "grammar" I'm using when talking about the HNet paper's 
>> connection to 

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-25 Thread Rob Freeman
Thanks Matt.

The funny thing is though, as I recall, finding semantic primitives
was the stated goal of Marcus Hutter when he instigated his prize.

That's fine. A negative experimental result is still a result.

I really want to emphasize that this is a solution, not a problem, though.

As the HNet paper argued, using relational categories, like language
embeddings, decouples category from pattern. It means we can have
categories, grammar "objects" even, it is just that they may
constantly be new. And being constantly new, they can't be finitely
"learned".

LLMs may have been failing to reveal structure, because there is too
much of it, an infinity, and it's all tangled up together.

We might pick it apart, and have language models which expose rational
structure, the Holy Grail of a neuro-symbolic reconciliation, if we
just embrace the constant novelty, and seek it as some kind of
instantaneous energy collapse in the relational structure of the data.
Either using a formal "Hamiltonian", or, as I suggest, finding
prediction symmetries in a network of language sequences, by
synchronizing oscillations or spikes.

On Sat, May 25, 2024 at 11:33 PM Matt Mahoney  wrote:
>
> I agree. The top ranked text compressors don't model grammar at all.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-Meac024d4e635bb1d9e8f34e9
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-24 Thread Rob Freeman
sing on relations defining objects
in ways which allow their internal "pattern" to vary.

That's what I see being presented in the HNet paper. Maybe I'm getting
ahead of its authors. Because that is the solution I'm presenting
myself. But I interpret the HNet paper to present that option also.
Cognitive objects, including "grammar", can emerge with a freedom
which resembles the LLM freedom of totally ignoring "objects" (which
seems to be necessary, both by the success of LLMs at generating text,
and by the observed failure of formal grammars historically) if you
specify them in terms of external relations.

Maybe the paper authors don't see it. But the way they talk about
generating grammars based on external relations, opens the door to it.

On Fri, May 24, 2024 at 10:12 PM James Bowery  wrote:
>
>
>
> On Thu, May 23, 2024 at 9:19 PM Rob Freeman  
> wrote:
>>
>> ...(Regarding the HNet paper)
>> The ideas of relational category in that paper might really shift the
>> needle for current language models.
>>
>> That as distinct from the older "grammar of mammalian brain capacity"
>> paper, which I frankly think is likely a dead end.
>
>
> Quoting the HNet paper:
>>
>> We conjecture that ongoing hierarchical construction of
>> such entities can enable increasingly “symbol-like” repre-
>> sentations, arising from lower-level “statistic-like” repre-
>> sentations. Figure 9 illustrates construction of simple “face”
>> configuration representations, from exemplars constructed
>> within the CLEVR system consisting of very simple eyes,
>> nose, mouth features. Categories (¢) and sequential rela-
>> tions ($) exhibit full compositionality into sequential rela-
>> tions of categories of sequential relations, etc.; these define
>> formal grammars (Rodriguez & Granger 2016; Granger
>> 2020). Exemplars (a,b) and near misses (c,d) are presented,
>> initially yielding just instances, which are then greatly re-
>> duced via abductive steps (see Supplemental Figure 13).
>
> Artificial General Intelligence List / AGI / see discussions + participants + 
> delivery options Permalink

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M9f8daceca7b091a0b823481d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-23 Thread Rob Freeman
James,

Not sure whether all that means you think category theory might be
useful for AI or not.

Anyway, I was moved to post those examples by Rich Hickey and Bartoz
Milewsky in my first post to this thread, by your comment that ideas
of indeterminate categories might annoy what you called 'the risible
tradition of so-called "type theories" in both mathematics and
programming languages'. I see the Hickey and Milewsky refs as examples
of ideas of indeterminate category entering computer programming
theory too.

Whether posted on the basis of a spurious connection or not, thanks
for the Granger HNet paper. That's maybe the most interesting paper
I've seen this year. As I say, it's the only reference I've seen other
than my own presenting the idea that relational categories liberate
category from any given pattern instantiating it. Which I see as
distinct from regression.

The ideas of relational category in that paper might really shift the
needle for current language models.

That as distinct from the older "grammar of mammalian brain capacity"
paper, which I frankly think is likely a dead end.

Real time "energy relaxation" finding new relational categories, as in
the Hamiltonian Net paper, is what I am pushing for. I see current
LLMs as incorporating a lot of that power by accident. But because
they still concentrate on the patterns, and not the relational
generating procedure, they do it only by becoming "large". We need to
understand the (relational) theory behind it in order to jump out of
the current LLM "local minimum".

On Thu, May 23, 2024 at 11:47 PM James Bowery  wrote:
>
>
> On Wed, May 22, 2024 at 10:34 PM Rob Freeman  
> wrote:
>>
>> On Wed, May 22, 2024 at 10:02 PM James Bowery  wrote:
>> > ...
>> > You correctly perceive that the symbolic regression presentation is not to 
>> > the point regarding the HNet paper.  A big failing of the symbolic 
>> > regression world is the same as it is in the rest of computerdom:  Failure 
>> > to recognize that functions are degenerate relations and you had damn well 
>> > better have thought about why you are degenerating when you do so.  But 
>> > likewise, when you are speaking about second-order theories (as opposed to 
>> > first-order theories), such as Category Theory, you had damn well have 
>> > thought about why you are specializing second-order predicate calculus 
>> > when you do so.
>> >
>> > Not being familiar with Category Theory I'm in no position to critique 
>> > this decision to specialize second-order predicate calculus.  I just 
>> > haven't seen Category Theory presented as a second-order theory.  Perhaps 
>> > I could understand Category Theory thence where the enthusiasm for 
>> > Category Theory comes from if someone did so.
>> >
>> > This is very much like my problem with the enthusiasm for type theories in 
>> > general.
>>
>> You seem to have an objection to second order predicate calculus.
>
>
> On the contrary; I see second order predicate calculus as foundational to any 
> attempt to deal with process which, in the classical case, is computation.
>
>> Dismissing category theory because you equate it to that. On what
>> basis do you equate them? Why do you reject second order predicate
>> calculus?
>
>
> I don't "dismiss" category theory.  It's just that I've never seen a category 
> theorist describe it as a second order theory.   Even in type theories 
> covering computation one finds such phenomena as the Wikipedia article on 
> "Type theory as a logic" lacking any reference to "second order".
>
> If I appear to "equate" category theory and second order predicate calculus 
> it is because category theory is a second order theory.  But beyond that, I 
> have an agenda related to Tom Etter's attempt to flesh out his theory of 
> "mind and matter" which I touched on in my first response to this thread 
> about fixing quantum logic.  An aspect of this project is the proof that 
> identity theory belongs to logic in the form of relative identity theory.  My 
> conjecture is that it ends up belonging to second order logic (predicate 
> calculus), which is why I resorted to Isabelle (HOL proof assistant).
>
>> What I like about category theory (as well as quantum formulations) is
>> that I see it as a movement away from definitions in terms of what
>> things are, and towards definitions in terms of how things are
>> related. Which fits with my observations of variation in objects
>> (grammar initially) defying definition, but being accessible to
>> definition in terms of relations.
>
>
> On 

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-22 Thread Rob Freeman
On Wed, May 22, 2024 at 10:02 PM James Bowery  wrote:
> ...
> You correctly perceive that the symbolic regression presentation is not to 
> the point regarding the HNet paper.  A big failing of the symbolic regression 
> world is the same as it is in the rest of computerdom:  Failure to recognize 
> that functions are degenerate relations and you had damn well better have 
> thought about why you are degenerating when you do so.  But likewise, when 
> you are speaking about second-order theories (as opposed to first-order 
> theories), such as Category Theory, you had damn well have thought about why 
> you are specializing second-order predicate calculus when you do so.
>
> Not being familiar with Category Theory I'm in no position to critique this 
> decision to specialize second-order predicate calculus.  I just haven't seen 
> Category Theory presented as a second-order theory.  Perhaps I could 
> understand Category Theory thence where the enthusiasm for Category Theory 
> comes from if someone did so.
>
> This is very much like my problem with the enthusiasm for type theories in 
> general.

You seem to have an objection to second order predicate calculus.
Dismissing category theory because you equate it to that. On what
basis do you equate them? Why do you reject second order predicate
calculus?

What I like about category theory (as well as quantum formulations) is
that I see it as a movement away from definitions in terms of what
things are, and towards definitions in terms of how things are
related. Which fits with my observations of variation in objects
(grammar initially) defying definition, but being accessible to
definition in terms of relations.

> But I should also state that my motivation for investigating Granger et al's 
> approach to ML is based not the fact that it focuses on abduced relations -- 
> but on its basis in "The grammar of mammalian brain capacity" being a 
> neglected order of grammar in the Chomsky Hierarchy: High Order Push Down 
> Automata.  The fact that the HNet paper is about abduced relations was one of 
> those serendipities that the prospector in me sees as a of gold in them thar 
> HOPDAs.

Where does the Granger Hamiltonian net paper mention "The grammar of
mammalian brain capacity"? If it's not mentioned, how do you think
they imply it?

> To wrap up, your definition of "regression" seems to differ from mine in the 
> sense that, to me, "regression" is synonymous with data-driven modeling which 
> is that aspect of learning, including machine learning, concerned with what 
> IS as opposed to what OUGHT to be the case.

The only time that paper mentions regression seems to indicate that
they are also making a distinction between their relational encoding
and regression:

'LLMs ... introduce sequential information supplementing the standard
classification-based “isa” relation, although much of the information
is learned via regression, and remains difficult to inspect or
explain'

How do you relate their relational encoding to regression?

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M2f9210fa34834e5bb8e46d0c
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-22 Thread Rob Freeman
On Thu, May 23, 2024 at 10:10 AM Quan Tesla  wrote:
>
> The paper is specific to a novel and quantitative approach and method for 
> association in general and specifically.

John was talking about the presentation James linked, not the paper,
Quan. He may be right that in that presentation they use morphisms etc
to map learned knowledge from one domain to another.

He's not criticising the paper though. Only the presentation. And the
two were discussing different techniques. John isn't criticising the
Granger et al. "relational encoding" paper at all.

> The persistence that pattern should be somehow decoupled doesn't make much 
> sense to me. Information itself is as a result of pattern. Pattern is 
> everything. Light itself is a pattern, so are the four forces. Ergo.  I 
> suppose, it depends on how you view it.

If you're questioning my point, it is that definition in terms of
relations means the pattern can vary. It's like the gap filler example
in the paper:

"If John kissed Mary, Bill kissed Mary, and Hal kissed Mary, etc.,
then a novel category ¢X can be abduced such that ¢X kissed Mary.
Importantly, the new entity ¢X is not a category based on the features
of the members of the category, let alone the similarity of such
features. I.e., it is not a statistical cluster in any usual sense.
Rather, it is a “position-based category,” signifying entities that
stand in a fixed relation with other entities. John, Bill, Hal may not
resemble each other in any way, other than being entities that all
kissed Mary. Position based categories (PBCs) thus fundamentally
differ from “isa” categories, which can be similarity-based (in
unsupervised systems) or outcome-based (in supervised systems)."

If you define your category on the basis of kissing Mary, then who's
to say that you might not find other people who have kissed Mary, and
change your category from moment to moment. As you discovered clusters
of former lovers by fits and starts, the actual pattern of your
"category" might change dramatically. But it would still be defined by
its defining relation of having kissed Mary.
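A minimal sketch of that gap-filler example, with toy facts invented here: the category ¢X is defined purely by standing in the fixed relation "kissed Mary", so its extension changes the moment a new fact arrives, with no feature similarity required among members.

```python
# Toy relational facts (hypothetical data for illustration).
facts = [("John", "kissed", "Mary"),
         ("Bill", "kissed", "Mary"),
         ("Hal",  "kissed", "Mary"),
         ("Sue",  "kissed", "Ann")]

def abduce(relation, obj):
    # The position-based category cX: all subjects standing in the
    # fixed relation (relation, obj). Defined by the relation slot,
    # not by any feature similarity among the members.
    return {s for s, r, o in facts if r == relation and o == obj}

cX = abduce("kissed", "Mary")
print(sorted(cX))  # ['Bill', 'Hal', 'John']

# Discovering a new former lover changes the category's extension,
# while its defining relation stays fixed.
facts.append(("Zoe", "kissed", "Mary"))
print(sorted(abduce("kissed", "Mary")))  # ['Bill', 'Hal', 'John', 'Zoe']
```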

That might also talk to the "regression" distinction. Or
characterizing the system, or indeed all cognition, as "learning"
period. It elides both "similarity-based" unsupervised, and
supervised, "learning". The category can in fact grow as you "learn"
of new lovers. A process which I also have difficulty equating with
regression.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M8c58bf8eb0a279da79ea34eb
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [fluid-dev] IIR low pass filter

2024-05-22 Thread Freeman Gilmore
Thanks Tom:

I printed the Wikipedia page to study; that will help a lot.
What I was thinking was that the IIR would be less likely to be used than
the default low pass filter by organ fonts. From what you said, that
may not be true.

Thank you for your help *fg*

On Wed, May 22, 2024 at 2:13 PM Tom M.  wrote:

> Every SF2 compliant synthesizer has a low pass filter in place. To use
> that, open a SF2 editor, select an instrument and specify filter
> cutoff frequency and filter Q.
>
> To automate this via MIDI events, you should learn about Soundfont
> modulators, and how they influence the values that you have entered in
> the previous step. Then, by using the SF2 editor, set up a modulator
> for that / those instrument(s) that you would like to control.
>
> To avoid copy the same modulators to all instruments and all
> soundfonts, you can use fluid_synth_add_default_mod() to set up a
> default modulator. This is fluidsynth-specific and not portable to
> other SF2 synths, though.
>
> Alternatively, you can use MIDI NRPNs to manipulate the filter cutoff
> freq. and Q.
>
> fluid_synth_set_custom_filter() is intended for advanced use-cases and
> most likely not what you want or need.
>
> An int is an integer, refer to
> https://en.wikipedia.org/wiki/Integer_(computer_science)
>
> Tom
>
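The NRPN route Tom mentions can be sketched as raw MIDI control-change bytes. The controller numbers below follow my reading of the SoundFont 2.01 NRPN convention (CC 99 = 120 selects the SF2 generator bank, CC 98 = generator number, generator 8 = initialFilterFc, and a 14-bit data-entry value biased by 8192), so treat the specific numbers as assumptions to verify against the spec and the fluidsynth documentation:

```python
def cc(channel, controller, value):
    # One raw MIDI Control Change message: status byte 0xB0 | channel.
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

def sf2_nrpn_filter_cutoff(channel, offset_cents):
    # Assumed SF2.01 NRPN convention: CC99 (NRPN MSB) = 120 selects
    # SoundFont generators, CC98 (NRPN LSB) = generator number
    # (8 = initialFilterFc). The 14-bit data entry value is biased by
    # 8192, so 8192 means "no offset" from the preset's cutoff.
    data = offset_cents + 8192
    msgs = b""
    msgs += cc(channel, 99, 120)                # NRPN MSB: SF2 generator bank
    msgs += cc(channel, 98, 8)                  # NRPN LSB: initialFilterFc
    msgs += cc(channel, 6, (data >> 7) & 0x7F)  # data entry MSB
    msgs += cc(channel, 38, data & 0x7F)        # data entry LSB
    return msgs

# Lower the cutoff by 1200 cents (one octave) on channel 0.
print(sf2_nrpn_filter_cutoff(0, -1200).hex(" "))
```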
___
fluid-dev mailing list
fluid-dev@nongnu.org
https://lists.nongnu.org/mailman/listinfo/fluid-dev


[fluid-dev] IIR low pass filter

2024-05-22 Thread Freeman Gilmore
I would like to use the low pass IIR filter on a channel. I found the API,
Effect – IIR Filter: fluid_iir_filter_type {FLUID_IIR_DISABLED = 0, FLUID_IIR_LOWPASS}.

And fluid_synth_set_custom_filter() ….

Things not mentioned: channel number, cut-off frequency, and what "int" is.

The main thing is turning the filter on, and setting the cut-off frequency
and channel. A simple filter like an RC (no gain), cut-off frequency 145/sec.

If possible, turn the filter off and on by MIDI. And the least important
would be to change the cut-off frequency by MIDI.

Where can I get the information that explains this?  How to use it?  Or an
example.

Thank you, *fg*
___
fluid-dev mailing list
fluid-dev@nongnu.org
https://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-21 Thread Rob Freeman
James,

The Hamiltonian paper was nice for identifying gap filler tasks as
decoupling meaning from pattern: "not a category based on the features
of the members of the category, let alone the similarity of such
features".

Here, for anyone else:

A logical re-conception of neural networks: Hamiltonian bitwise
part-whole architecture
E.F.W.Bowen,1 R.Granger,2* A.Rodriguez
https://openreview.net/pdf?id=hP4dxXvvNc8

"Part-whole architecture". A new thing. Though they 'share some
characteristics with “embeddings” in transformer architectures'.

So it's a possible alternate reason for the surprise success of
transformers. That's good. The field blunders about surprising itself.
But there's no theory behind it. Transformers just stumbled into
embedding representations because they looked at language. We need to
start thinking about why these things work. Instead of just blithely
talking about the miracle of more data. Disingenuously scaring the
world with idiotic fears about "more data" becoming conscious by
accident. Or insisting like LeCun that the secret is different data.

But I think you're missing the point of that Hamiltonian paper if you
think this decoupling of meaning from pattern is regression. I think
the point of this, and also the category theoretic representations of
Symbolica, and also quantum mechanical formalizations, is
indeterminate symbolization, even novelty.

Yeah, maybe regression will work for some things. But that ain't
language. And it ain't cognition. They are more aligned with a
different "New Kind of Science", that touted by Wolfram, new
structure, all the time. Not regression, going backward, but novelty,
creativity.

In my understanding the point with the Hamiltonian paper is that a
"position-based encoding" decouples meaning from any given pattern
which instantiates it.

Whereas the NN presentation is talking about NNs regressing to fixed
encodings. Not about an operator which "calculates energies" in real
time.

Unless I've missed something in that presentation. Is there anywhere
in the hour long presentation where they address a decoupling of
category from pattern, and the implications of this for novelty of
structure?

On Tue, May 21, 2024 at 11:36 PM James Bowery  wrote:
>
> Symbolic Regression is starting to catch on but, as usual, people aren't 
> using the Algorithmic Information Criterion so they end up with unprincipled 
> choices on the Pareto frontier between residuals and model complexity if not 
> unprincipled choices about how to weight the complexity of various "nodes" in 
> the model's "expression".
>
> https://youtu.be/fk2r8y5TfNY
>
> A node's complexity is how much machine language code it takes to implement 
> it on a CPU-only implementation.  Error residuals are program literals aka 
> "constants".
>
> I don't know how many times I'm going to have to point this out to people 
> before it gets through to them (probably well beyond the time maggots have 
> forgotten what I tasted like) .

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M8418e9bd5e49f7ca08dfb816
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [gentoo-user] Re: PCIe version 2, 3 etc and how to know which a card is.

2024-05-21 Thread Rich Freeman
On Tue, May 21, 2024 at 6:38 AM Dale  wrote:
>
> So, I created a new link to slot 4.  The network came up.  So,
> basically, it changed names as you suggested. I thought the purpose of
> the enp* names was that they are consistent.  Adding or removing cards
> wouldn't change the names of cards, like network cards.

Nope, persistent names are only persistent as long as there are no
hardware changes.

Under the old system if you had 10 NICs on a host, on any reboot some
of them could change names, at least in theory.  Under the new system
if you have 10 NICs on one host and don't touch the hardware, the
names will never change.

Under the old system if you had 1 NIC in a host, the name would never
change even if the hardware did change.  Under the new system if you
have 1 NIC in a host, the name could change if the hardware changes.

It is basically a tradeoff, which makes life much better if you have
multiple NICs, and marginally worse if you have only one.  However,
hardware changes that can cause a name change are probably rare, and
if you have only one NIC then ideally your network manager can just
use wildcards to not care so much about the name.  I usually stick e*
in my networkd config for the device name on single-NIC hosts.  If you
have multiple NICs then maybe there is a better way to go about it -
perhaps a network manager that can use more data from the NIC
itself to track them.
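For reference, a minimal systemd-networkd match on any ethernet-style name looks like this (the file name and DHCP choice are illustrative, not from the message):

```ini
# /etc/systemd/network/50-wired.network (illustrative path)
# Matches any ethernet-style interface name (enp3s0, eno1, ens4, ...),
# so a renamed single NIC still comes up.
[Match]
Name=e*

[Network]
DHCP=yes
```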

-- 
Rich



Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-20 Thread Rob Freeman
"Importantly, the new entity ¢X is not a category based on the
features of the members of the category, let alone the similarity of
such features"

Oh, nice. I hadn't seen anyone else making that point. This paper 2023?

That's what I was saying. Nice. A vindication. Such categories
decouple the pattern itself from the category.

But I'm astonished they don't cite Coecke, as the obvious quantum
formulation precedent (though I noticed it for language in the '90s.)

I wonder how their formulation relates to what Symbolica are doing
with their category theoretic formulations:

https://youtu.be/rie-9AEhYdY?si=9RUB3O_8WeFSU3ni

I haven't read closely enough to know if they make that decoupling of
category from pattern a sense for "creativity" the way I'm suggesting.
Perhaps that's because a Hamiltonian formulation is still too trapped
in symbolism. We need to remain trapped in the symbolism for physics.
Because for physics we don't have access to an underlying reality.
That's where AI, and particularly language, has an advantage. Because,
especially for language, the underlying reality of text is the only
reality we do have access to (though Chomsky tried to swap that
around, and insist we only access our cognitive insight.)

For AI, and especially for language, we have the opportunity to get
under even a quantum formalism. It will be there implicitly, but
instead of laboriously formulating it, and then collapsing it at run
time, we can simply "collapse" structure directly from observation.
But that "collapse" must be flexible, and allow different structures
to arise from different symmetries found in the data from moment to
moment. So it requires the abandonment of back-prop.

In theory it is easy though. Everything can remain much as it is for
LLMs. Only, instead of trying to "learn" stable patterns using
back-prop, we must "collapse" different symmetries in the data in
response to a different "prompt", at run time.
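The paper's abduction example quoted below (John, Bill, and Hal all kissed Mary, so a category ¢X is abduced such that ¢X kissed Mary) can be put in a toy sketch: group entities not by their own features but by the relational slot they fill. The data, representation, and threshold here are all illustrative, not from the paper:

```python
# Toy sketch of abducing a "position-based category": entities grouped not by
# their own features but by standing in the same relation to the same object.
facts = [
    ("John", "kissed", "Mary"),
    ("Bill", "kissed", "Mary"),
    ("Hal",  "kissed", "Mary"),
    ("John", "saw",    "Sue"),
]

def abduce_positional_categories(triples, min_members=2):
    """Map each (relation, object) slot to the set of subjects filling it;
    slots with enough members become abduced categories like ¢X."""
    slots = {}
    for subj, rel, obj in triples:
        slots.setdefault((rel, obj), set()).add(subj)
    return {slot: members for slot, members in slots.items()
            if len(members) >= min_members}

cats = abduce_positional_categories(facts)
# one abduced slot: ¢X such that ¢X kissed Mary
```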

On Tue, May 21, 2024 at 5:01 AM James Bowery  wrote:
>
> From A logical re-conception of neural networks: Hamiltonian bitwise 
> part-whole architecture
>
>> From hierarchical statistics to abduced symbols
>> It is perhaps useful to envision some of the ongoing devel-
>> opments that are arising from enlarging and elaborating the
>> Hamiltonian logic net architecture. As yet, no large-scale
>> training whatsoever has gone into the present minimal HNet
>> model; thus far it is solely implemented at a small, introduc-
>> tory scale, as an experimental new approach to representa-
>> tions. It is conjectured that with large-scale training, hierar-
>> chical constructs would be accreted as in large deep network
>> systems, with the key difference that, in HNets, such con-
>> structs would have relational properties beyond the “isa”
>> (category) relation, as discussed earlier.
>> Such relational representations lend themselves to abduc-
>> tive steps (McDermott 1987) (or “retroductive” (Pierce
>> 1883)); i.e., inferential generalization steps that go beyond
>> warranted statistical information. If John kissed Mary, Bill
>> kissed Mary, and Hal kissed Mary, etc., then a novel cate-
>> gory ¢X can be abduced such that ¢X kissed Mary.
>> Importantly, the new entity ¢X is not a category based on
>> the features of the members of the category, let alone the
>> similarity of such features. I.e., it is not a statistical cluster
>> in any usual sense. Rather, it is a “position-based category,”
>> signifying entities that stand in a fixed relation with other
>> entities. John, Bill, Hal may not resemble each other in any
>> way, other than being entities that all kissed Mary. Position-
>> based categories (PBCs) thus fundamentally differ from
>> “isa” categories, which can be similarity-based (in unsuper-
>> vised systems) or outcome-based (in supervised systems).
>> PBCs share some characteristics with “embeddings” in
>> transformer architectures.
>> Abducing a category of this kind often entails overgener-
>> alization, and subsequent learning may require learned ex-
>> ceptions to the overgeneralization. (Verb past tenses typi-
>> cally are formed by appending “-ed”, and a language learner
>> may initially overgeneralize to “runned” and “gived,” neces-
>> sitating subsequent exception learning of “ran” and “gave”.)
>
>
> The abduced "category" ¢X bears some resemblance to the way Currying (as in 
> combinator calculus) binds a parameter of a symbol to define a new symbol.  
> In practice it only makes sense to bother creating this new symbol if it, in 
> concert with all other symbols, compresses the data in evidence.  (As for 
> "overgeneralization", that applies to any error in prediction encountered 
> during learning and, in the ideal compressor, increases the algorithm's 
> length even if only by appending the exceptional data in a conditional -- NOT 
> "falsifying" anything as would that rascal Popper).
>
> This is "related" to quantum-logic in the sense that Tom Etter calls out in 
> the linked 

Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-20 Thread Rob Freeman
Well, I don't know number theory well, but what axiomatization of
maths are you basing the predictions in your series on?

I have a hunch the distinction I am making is similar to a distinction
about the choice of axiomatization. Which will be random. (The
randomness demonstrated by Goedel's diagonalization lemma? "True" but
not provable/predictable within the system?)

On Mon, May 20, 2024 at 9:09 PM James Bowery  wrote:
>
>
>
> On Sun, May 19, 2024 at 11:32 PM Rob Freeman  
> wrote:
>>
>> James,
>>
>> My working definition of "truth" is a pattern that predicts. And I'm
>> tending away from compression for that.
>
>
> 2, 4, 6, 8
>
> does it mean
> 2n?
>
> or does it mean
> 10?
>
>
>
>> Related to your sense of "meaning" in (Algorithmic Information)
>> randomness. But perhaps not quite the same thing.
>
>
> or does it mean a probability distribution of formulae that all produce 2, 4, 
> 6, 8 whatever they may subsequently produce?
>
> or does it mean a probability distribution of sequences
> 10, 12?
> 10, 12, 14?
> 10, 13, 14?
> ...
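James's 2, 4, 6, 8 example can be made concrete with a crude two-part code-length comparison: score each hypothesis by model bits plus residual bits and keep the cheapest. The complexity weights here are toy stand-ins (string length as a proxy for program length), not a real Algorithmic Information Criterion:

```python
# Toy two-part code-length comparison for hypotheses explaining 2, 4, 6, 8.
# Model cost is the formula string length in bytes * 8 -- a crude stand-in
# for machine-code complexity; residual cost is 8 bits per misprediction.
data = [2, 4, 6, 8]

hypotheses = {
    "2*n": lambda n: 2 * n,  # generalizes: predicts 10, 12, ...
    "memorized [2,4,6,8]": lambda n: data[n - 1] if n <= len(data) else 0,
}

def description_length(name, model):
    model_bits = 8 * len(name)  # crude proxy for model complexity
    residual_bits = sum(8 for n, x in enumerate(data, 1) if model(n) != x)
    return model_bits + residual_bits

scores = {name: description_length(name, f) for name, f in hypotheses.items()}
best = min(scores, key=scores.get)  # the shorter total wins
```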

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M086013ed4b196bdfe9a874c8
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-19 Thread Rob Freeman
James,

My working definition of "truth" is a pattern that predicts. And I'm
tending away from compression for that.

Related to your sense of "meaning" in (Algorithmic Information)
randomness. But perhaps not quite the same thing.

I want to emphasise a sense in which "meaning" is an expansion of the
world, not a compression. By expansion I mean more than one,
contradictory, predictive pattern from a single set of data.

Note I'm saying a predictive pattern, not a predictable pattern.
(Perhaps as a random distribution of billiard balls might predict the
evolution of the table, without being predictable itself?)

There's randomness at the heart of that. Contradictory patterns
require randomness. A single, predictable, pattern, could not have
contradictory predictive patterns either? But I see the meaning coming
from the prediction, not any random pattern that may be making the
prediction.

Making meaning about prediction, and not any specific pattern itself,
opens the door to patterns which are meaningful even though new. Which
can be a sense for creativity.

Anyway, the "creative" aspect of it would explain why LLMs get so big,
and don't show any interpretable structure.

With a nod to the topic of this thread, it would also explain why
symbolic systems would never be adequate. It would undermine the idea
of stable symbols, anyway.

So, not consensus through a single, stable, Algorithmic Information
most compressed pattern, as I understand you are suggesting (the most
compressed pattern not knowable anyway?) Though dependent on
randomness, and consistent with your statement that "truth" should be
"relative to a given set of observations".

On Sat, May 18, 2024 at 11:57 PM James Bowery  wrote:
>
> Rob, the problem I have with things like "type theory" and "category theory" 
> is that they almost always elide their foundation in HOL (high order logic) 
> which means they don't really admit that they are syntactic sugars for 
> second-order predicate calculus.  The reason I describe this as "risible" is 
> the same reason I rather insist on the Algorithmic Information Criterion for 
> model selection in the natural sciences:
>
> Reduce the argument surface that has us all going into hysterics over "truth" 
> aka "the science" aka what IS the case as opposed to what OUGHT to be the 
> case.
>
> Note I said "reduce" rather than "eliminate" the argument surface.  All I'm 
> trying to do is get people to recognize that relative to a given set of 
> observations the Algorithmic Information Criterion is the best operational 
> definition of the truth.
>
> It's really hard for people to take even this baby step toward standing down 
> from killing each other in a rhyme with The Thirty Years War, given that 
> social policy is so centralized that everyone must become a de facto 
> theocratic supremacist as a matter of self defence.  It's really obvious that 
> the trend is toward capturing us in a control system, e.g. a Valley-Girl 
> flirtation friendly interface to Silicon Chutulu that can only be fought at 
> the physical level such as sniper bullets through the cooling systems of data 
> centers.  This would probably take down civilization itself given the 
> over-emphasis on efficiency vs resilience in civilization's dependence on 
> information systems infrastructure.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-M8a84fef3037323602ea7dcca
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Can symbolic approach entirely replace NN approach?

2024-05-16 Thread Rob Freeman
James,

For relevance to type theories in programming I like Bartosz
Milewski's take on it here. An entire lecture series, but the part
that resonates with me is in the introductory lecture:

"maybe composability is not a property of nature"

Cued up here:

Category Theory 1.1: Motivation and Philosophy
Bartosz Milewski
https://youtu.be/I8LbkfSSR58?si=nAPc1f0unpj8i2JT=2734

Also Rich Hickey, the creator of Clojure language, had some nice
interpretations in some of his lectures, where he argued for the
advantages of functional languages over object oriented languages.
Basically because, in my interpretation, the "objects" can only ever
be partially "true".

Maybe summarized well here:

https://twobithistory.org/2019/01/31/simula.html

Or here:

https://www.flyingmachinestudios.com/programming/the-unofficial-guide-to-rich-hickeys-brain/

Anyway, the code guys are starting to notice it too.

-Rob

On Fri, May 17, 2024 at 7:25 AM James Bowery  wrote:
>
> First, fix quantum logic:
>
> https://web.archive.org/web/20061030044246/http://www.boundaryinstitute.org/articles/Dynamical_Markov.pdf
>
> Then realize that empirically true cases can occur not only in multiplicity 
> (OR), but with structure that includes the simultaneous (AND) measurement 
> dimensions of those cases.
>
> But don't tell anyone because it might obviate the risible tradition of 
> so-called "type theories" in both mathematics and programming languages 
> (including SQL and all those "fuzzy logic" kludges) and people would get 
> really pissy at you.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T682a307a763c1ced-Mea3f554271a532a282d58fa0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[jira] [Resolved] (CXF-9014) org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH OpenJDK

2024-05-16 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CXF-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang resolved CXF-9014.
---
Fix Version/s: 4.1.0
   Resolution: Fixed

> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK
> 
>
> Key: CXF-9014
> URL: https://issues.apache.org/jira/browse/CXF-9014
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4
>Reporter: Jamie Mark Goodyear
>Assignee: Freeman Yue Fang
>Priority: Minor
> Fix For: 4.1.0
>
> Attachments: bob-modified.jks, request-with-comment.xml, 
> request-with-trailing-whitespace.xml
>
>
> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK.
> In a full build of CXF 4.1.x (main) the SignatureWhitespaceTest suite will 
> fail when built on RH OpenJDK.
> Likely due to certs/algorithms supported by RH (see CXF-9006).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CXF-9014) org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH OpenJDK

2024-05-15 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846667#comment-17846667
 ] 

Freeman Yue Fang commented on CXF-9014:
---

PR is
https://github.com/apache/cxf/pull/1875

> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK
> 
>
> Key: CXF-9014
> URL: https://issues.apache.org/jira/browse/CXF-9014
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4
>Reporter: Jamie Mark Goodyear
>Assignee: Freeman Yue Fang
>Priority: Minor
> Attachments: bob-modified.jks, request-with-comment.xml, 
> request-with-trailing-whitespace.xml
>
>
> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK.
> In a full build of CXF 4.1.x (main) the SignatureWhitespaceTest suite will 
> fail when built on RH OpenJDK.
> Likely due to certs/algorithms supported by RH (see CXF-9006).





[jira] [Assigned] (CXF-9014) org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH OpenJDK

2024-05-15 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CXF-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang reassigned CXF-9014:
-

Assignee: Freeman Yue Fang

> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK
> 
>
> Key: CXF-9014
> URL: https://issues.apache.org/jira/browse/CXF-9014
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4
>Reporter: Jamie Mark Goodyear
>Assignee: Freeman Yue Fang
>Priority: Minor
> Attachments: bob-modified.jks, request-with-comment.xml, 
> request-with-trailing-whitespace.xml
>
>
> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK.
> In a full build of CXF 4.1.x (main) the SignatureWhitespaceTest suite will 
> fail when built on RH OpenJDK.
> Likely due to certs/algorithms supported by RH (see CXF-9006).





[jira] [Commented] (CXF-9014) org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH OpenJDK

2024-05-15 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846625#comment-17846625
 ] 

Freeman Yue Fang commented on CXF-9014:
---

Thanks [~jgoodyear] for the confirmation, I will send a PR soon

Freeman

> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK
> 
>
> Key: CXF-9014
> URL: https://issues.apache.org/jira/browse/CXF-9014
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4
>Reporter: Jamie Mark Goodyear
>Priority: Minor
> Attachments: bob-modified.jks, request-with-comment.xml, 
> request-with-trailing-whitespace.xml
>
>
> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK.
> In a full build of CXF 4.1.x (main) the SignatureWhitespaceTest suite will 
> fail when built on RH OpenJDK.
> Likely due to certs/algorithms supported by RH (see CXF-9006).





[jira] [Updated] (CXF-9014) org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH OpenJDK

2024-05-14 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CXF-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang updated CXF-9014:
--
Attachment: bob-modified.jks
request-with-comment.xml
request-with-trailing-whitespace.xml

> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK
> 
>
> Key: CXF-9014
> URL: https://issues.apache.org/jira/browse/CXF-9014
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4
>Reporter: Jamie Mark Goodyear
>Priority: Minor
> Attachments: bob-modified.jks, request-with-comment.xml, 
> request-with-trailing-whitespace.xml
>
>
> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK.
> In a full build of CXF 4.1.x (main) the SignatureWhitespaceTest suite will 
> fail when built on RH OpenJDK.
> Likely due to certs/algorithms supported by RH (see CXF-9006).





[jira] [Commented] (CXF-9014) org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH OpenJDK

2024-05-14 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846455#comment-17846455
 ] 

Freeman Yue Fang commented on CXF-9014:
---

Hi [~jgoodyear],

Thanks for reporting this. I believe this is because bob-modified.jks (used
in SignatureWhitespaceTest) uses a weak 1024-bit RSA subject public key,
which isn't allowed in modern JDK versions. So I regenerated
bob-modified.jks with RSA 2048/SHA-256.

Please see the attached affected files. Could you please overwrite those in
the systests/ws-security/src/test/resources/org/apache/cxf/systest/ws/action
folder and retest?

Thanks!
Freeman

> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK
> 
>
> Key: CXF-9014
> URL: https://issues.apache.org/jira/browse/CXF-9014
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4
>Reporter: Jamie Mark Goodyear
>Priority: Minor
> Attachments: bob-modified.jks, request-with-comment.xml, 
> request-with-trailing-whitespace.xml
>
>
> org.apache.cxf.systest.ws.action.SignatureWhitespaceTest test fail on RH 
> OpenJDK.
> In a full build of CXF 4.1.x (main) the SignatureWhitespaceTest suite will 
> fail when built on RH OpenJDK.
> Likely due to certs/algorithms supported by RH (see CXF-9006).





Re: [gentoo-user] Encrypted drives, password generation and management howto, guide.

2024-05-14 Thread Rich Freeman
On Tue, May 14, 2024 at 7:28 AM Dale  wrote:
>
> First, I needed to generate a password.

Honestly, I'd stop right there, and think about WHY you're encrypting
your disks, and WHY you need a password to decrypt them.  There are
many use cases and threat models to consider.

I have a whole bunch of encrypted drives on my Ceph cluster, and none
of them have a traditional "password" and I couldn't tell you what any
of them are.  They're keys stored in files on the OS drive, and I do
have a backup of them as well.  I don't have to go looking up anything
to do anything because the file is referenced in crypttab and so LUKS
just does its thing during boot.

Obviously anybody who has physical access to the host can decrypt the
drives.  The OS disks aren't even encrypted.  So why bother? Well, my
threat model is this - I have huge amounts of data on disks, and disks
eventually fail, and they're a real pain to wipe, especially if
they've failed.  With my solution, those physical disks are completely
unreadable when separated from the OS drive.  There is no risk of
brute-force attacks as there is no memorable passphrase to crack -
they're just random keys, so it is a basic brute force attack on AES
itself.  When things need rebooting I don't need to be present to type
anything in, and I don't need any fancy TPM-based solutions to make
that possible either.

The more traditional approach uses memorable passphrases, and for that
you can use pwgen, or xkcdpass.  Or you can just come up with
something memorable but not likely to be guessed, with plenty of
rounds.

The most common approach (outside of Linux) is to use a TPM to manage
the key with verified boot.  This is possible on Linux, but no distro
I'm aware of other than maybe ChromeOS does it (and ChromeOS doesn't
really do it the traditional way).  This lets you have a desktop that
makes the disk unreadable when separated from the PC, and it can only
be read if the disk is booted normally.  It is a very elegant
solution, assuming you trust the security of the TPM, but without
distro support I probably wouldn't mess with it.  On Windows it is
very common, and on ChromeOS it isn't even optional - they all do it.
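The keyfile setup described above looks roughly like this (the device path, key path, and mapping name are placeholders, not taken from the message):

```
# Generate a random key (once), then enroll it in the LUKS header:
#   dd if=/dev/urandom of=/etc/luks-keys/data1.key bs=64 count=1
#   cryptsetup luksAddKey /dev/disk/by-id/ata-EXAMPLE /etc/luks-keys/data1.key

# /etc/crypttab -- LUKS unlocks automatically at boot using the keyfile:
# <name>  <device>                     <keyfile>                 <options>
data1     /dev/disk/by-id/ata-EXAMPLE  /etc/luks-keys/data1.key  luks
```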

-- 
Rich



Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-10 Thread Rob Freeman
In the corporate training domain, you must have come across Edward de
Bono? I recall he also focuses on discontinuous change and novelty.

Certainly I would say there is broad scope for the application of,
broadly quantum flavoured, AI based insights about meaning in broader
society. Not just project management. But not knowing how your
"Essence" works, I can't comment how much that coincides with what I
see.

There's a lot of woo woo which surrounds quantum, so I try to use
analogies sparingly. But for ways to present it, you might look at Bob
Coecke's books. I believe he has invented a whole visual,
diagrammatic, system for talking about quantum systems. He is proud of
having used it to teach high school students. The best reference for
that might be his book "Picturing Quantum Processes".

Thanks for your interest in reading more about the solutions I see. I
guess I've been lazy in not putting out more formal presentations.
Most of what I have written has been fairly technical, and directed at
language modeling.

The best non-technical summary might be an essay I posted on substack, end '22:

https://robertjohnfreeman.substack.com/p/essay-response-to-question-which

That touches briefly on the broader social implications of subjective
truth, and how a subjective truth which is emergent of objective
structural principles, might provide a new objective social consensus.

On quantum indeterminacy emerging from the complexity of combinations
of perfectly classical and observable elements, I tried to present
myself in contrast to Bob Coecke's top-down quantum grammar approach,
on the Entangled Things podcast:

https://www.entangledthings.com/entangled-things-rob-freeman

You could look at my Facebook group, Oscillating Networks for AI.
Check out my Twitter, @rob_freeman.

Technically, the best summary is probably still my AGI-21
presentation. Here's the workshop version of that, with discussion at
the end:

https://www.youtube.com/watch?v=YiVet-b-NM8

On Fri, May 10, 2024 at 9:18 PM Quan Tesla  wrote:
>
> Rob.
>
> Thank you for being candid. My verbiage isn't deliberate. I don't seek 
> traction, or funding for what I do. There's no real justification for your 
> mistrust.
>
> Perhaps, let me provide some professional background instead. As an 
> independent researcher, I follow scientific developments among multiple 
> domains, seeking coherence and sense-making for my own scientific endeavor, 
> spanning 25 years. AGI has been a keen interest of mine since 2013. For AGI, 
> I advocate pure machine consciousness, shying away from biotech approaches.
>
> My field of research interest stems from a previous career in cross-cultural 
> training, and the many challenges it presented in the 80's. As 
> designer/administrator/manager and trainer, one could say I fell in love with 
> optimal learning methodologies and associated technologies.
>
> Changing careers, I started in mainframe operating to advance to programming, 
> systems analysis and design, information and business engineering and 
> ultimately contracting consultant. My one, consistent research area remained 
> knowledge engineering, especially tacit-knowledge engineering. Today, I 
> promote the idea for a campus specializing in quantum systems engineering. 
> I'm generally regarded as being a pracademic of sorts.
>
> Like many of us practitioners here, I too was fortunate to learn with a 
> number of founders and world-class methodologists.
>
> In 1998, my job in banking was researcher/architect to the board of a 5-bank 
> merger, today part of the Barclays Group. As futurist architect and peer 
> reviewer, I was introduced to quantum physics. Specifically, in context of 
> the discovery of the quark.
>
> I realized that future, exponential complexity was approaching, especially 
> for knowledge organizations. I researched possible solutions worldwide, but 
> found none at that time, which concerned me deeply.
>
> Industries seemed to be rushing into the digital revolution without a 
> rekiable, methodological management foundation in place. As architect, I had 
> nothing to offer as a useful, 10-year futures outlook either. I didn't feel 
> competent to be the person to address that apparent gap.
>
> A good colleague of mine was a proven IE methodologist and consultant to IBM 
> Head Office. I approached him twice with my concerns, asking him to adapt his 
> proven IE methodology to address the advancing future. He didn't take my 
> concerns seriously at all.
>
> For the next year, the future seemed ever-more clearer to me, yet I couldn't 
> find anyone to develop a future aid for enterprises as a roadmap toolkit, or 
> a coping mechanism for a complex-adaptive reality.  The world was hung up on 
> UML and Object oriented technologies.
>
> In desperation, I decided h

Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-09 Thread Rob Freeman
Quan. You may be talking sense, but you've got to tone down the
buzzwords by a whole bunch. It's suspicious when you jam so many in
together.

If you think there's a solution there, what are you doing about it in practice?

Be more specific. For instance, within the span of what I understand
here I might guess at relevance for Coecke's "Togetherness":

From quantum foundations via natural language meaning to a theory of everything
https://arxiv.org/pdf/1602.07618.pdf

Or Tomas Mikolov's (key instigator of word2vec?) attempts to get
funding to explore evolutionary computational automata.

Tomas Mikolov - "We can design systems where complexity seems to be
growing" (Another one from AGI-21. It can be hard to motivate yourself
to listen to a whole conference, but when you pay attention, there can
be interesting stuff on the margins.)
https://youtu.be/CnsqHSCBgX0?t=10859

There's also an Artificial Life, ALife, community. Which seems to be
quite big in Japan. A group down in Okinawa under Tom Froese, anyway.
(Though they seem to go right off the edge and focus on some kind of
community consciousness.) But also in the ALife category I think of
Bert Chan, recently moved to Google(?).

https://biofish.medium.com/lenia-beyond-the-game-of-life-344847b10a72

All of that. And what Dreyfus called Heideggerian AI. Associated with
Rodney Brooks, and his "Fast, Cheap, and Out of Control", Artificial
Organism bots. It had a time in Europe especially, Luc Steels, Rolf
Pfeifer? The recently lost Daniel Dennett.

Why Heideggerian AI failed and how fixing it would require making it
more Heideggerian
Hubert L. Dreyfus
https://cid.nada.kth.se/en/HeideggerianAI.pdf

How would you relate what you are saying to all of these?

I'm sympathetic to them all. Though I think they miss the insight of
predictive symmetries. Which language drives you to. And what LLMs
stumbled on too. And that's held them up. Held them up for 30 years or
more.

ALife had a spike around 1995. Likely influencing Ben and his Chaotic
Logic book, too. They had the complex system idea back then, they just
didn't have a generative principle to bring it all together.

Meanwhile LLMs have kind of stumbled on the generative principle.
Though they remain stuck in the back-prop paradigm, and unable to
fully embrace the complexity.

I put myself in the context of all those threads. Though I kind of
worked back to them, starting with the language problem, and finding
the complexity as I went. As I say, language drives you to deal with
predictive symmetries. I think ALife has stalled for 30 years because
it hasn't had a central generative principle. What James might call a
"prior". Language offers a "prior" (predictive symmetries.) Combine
that with ALife complex systems, and you start to get something.

But that's to go off on my own tangent again.

Anyway, if you can be more specific, or put what you're saying in the
context of something someone else is doing, you might get more
traction.

On Thu, May 9, 2024 at 3:10 PM Quan Tesla  wrote:
>
> Rob, not butting in, but rather adding to what you said (see quotation below).
>
> The conviction across industries that hierarchy (systems robustness) persists 
> only in descending and/or ascending structures, though true, can be proven to 
> be somewhat incomplete.
>
> There's another computational way to derive systems-control hierarchy(ies) 
> from. This is the quantum-engineering way (referred to before), where 
> hierarchy lies hidden within contextual abstraction, identified via case-based 
> decision making and represented via compound functionality outcomes. 
> Hierarchy as a centre-outwards, in the sense of emergent, essential 
> characteristic of a scalable system. Not deterministically specified.
>
> In an evolutionary sense, hierarchies are N-nestable and self discoverable. 
> With the addition of integrated vectors, knowledge graphs may also be 
> derived, instead of crafted.
>
> Here, I'm referring to 2 systems hierarchies in particular. 'A', a hierarchy 
> of criticality (aka constraints) and 'B', a hierarchy of priority (aka 
> systemic order).
>
> Over the lifecycles of a growing system, as it mutates and evolves in 
> relevance (optimal semantics), hierarchy would start resembling - without 
> compromising - NNs and LLMs.
>
> Yes, a more-holistic envelope then, a new, quantum reality, where 
> fully-recursive functionality wasn't only guaranteed, but correlation and 
> association became foundational, architectural principles.
>
> This is the future of quantum systems engineering, which I believe quantum 
> computing would eventually lead all researchers to. Frankly, without it, 
> we'll remain stuck in the quagmire of early 1990s+ functional 
> analysis-paralysis, by any name.
>
> I'll hold out hope for that one, enlightened developer to make that quant

Re: [gentoo-user] Hard drive and PWDIS or pin 3 power disable/reset.

2024-05-09 Thread Rich Freeman
On Thu, May 9, 2024 at 5:12 PM Dale  wrote:
>
> I'm looking at buying another drive.  I'm trying to avoid buying one
> with the PWDIS pin.  I'm looking at the specs to see if it says anything
> about the feature, there or not there.  I'm not seeing anything.  This
> is what I'm looking at.
>
> https://www.seagate.com/files/www-content/datasheets/pdfs/exos-x16-DS2011-1-1904US-en_US.pdf
>
> Can someone tell me how to know when a drive has PWDIS and when it
> doesn't?  Is there some term for it that shows in the specs and I'm
> missing it?  Or is there no way to really know?

I think it would be labeled as such.  That is for a genuine retail
version of the drive with retail labeling.

So if you get the drive and it has the pretty Exos logo and green
colors and the model number that matches the datasheet and all that
stuff, then it probably won't have issues.

However, if you're buying something off ebay, and the drive just has a
plain white label, and a model number that doesn't actually match the
datasheet, but some random webpage or reddit post assures you that it
is the same thing, well, it probably is the same thing, but it might
very well have that power issue.

Those shucked drives generally come from USB enclosures, and the drive
on the inside might be a rebranded Exos with alternative firmware/etc,
but the label isn't going to actually say that, and the package will
say "EasyStore USB Drive" or whatever it is sold as.  If you use it
the way it is sold, then you again won't have issues since its
internal USB HBA will do the right thing.  It is just that when you
rip open the box that all bets are off.

The actual drives sold for enterprise use generally aren't sold in
retail packaging as I understand it.  To get one of those officially
you need to buy them through a server vendor or some other
enterprise-oriented partner, who probably has a nice sales person who
will treat you to a free lunch while you talk about the PWDIS
requirements of the $10M pallet of drives you're about to buy.

-- 
Rich



[gentoo-commits] repo/gentoo:master commit in: sys-process/systemd-cron/

2024-05-09 Thread Richard Freeman
commit: 650a30d52a41b06a9b5687c43d9e075f58066710
Author: Richard Freeman  gentoo  org>
AuthorDate: Thu May  9 10:50:57 2024 +
Commit: Richard Freeman  gentoo  org>
CommitDate: Thu May  9 10:52:44 2024 +
URL:https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=650a30d5

sys-process/systemd-cron: stabilize 2.4.0 for amd64

Bug: https://bugs.gentoo.org/931626
Signed-off-by: Richard Freeman  gentoo.org>

 sys-process/systemd-cron/systemd-cron-2.4.0.ebuild | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sys-process/systemd-cron/systemd-cron-2.4.0.ebuild 
b/sys-process/systemd-cron/systemd-cron-2.4.0.ebuild
index 00738f1c0e07..293661ce4869 100644
--- a/sys-process/systemd-cron/systemd-cron-2.4.0.ebuild
+++ b/sys-process/systemd-cron/systemd-cron-2.4.0.ebuild
@@ -10,7 +10,7 @@ 
SRC_URI="https://github.com/systemd-cron/${PN}/archive/v${PV}.tar.gz -> systemd-
 
 LICENSE="MIT"
 SLOT="0"
-KEYWORDS="~amd64 ~arm ~arm64 ~hppa ~ia64 ~ppc ~ppc64 ~riscv ~sparc ~x86"
+KEYWORDS="amd64 ~arm ~arm64 ~hppa ~ia64 ~ppc ~ppc64 ~riscv ~sparc ~x86"
 IUSE="cron-boot etc-crontab-systemd minutely +runparts setgid yearly"
 # We can't run the unshare tests within sandbox/with low privs, and the
 # 'test-nounshare' target just does static analysis (shellcheck etc).



Re: [agi] α, αGproton, Combinatorial Hierarchy, Computational Irreducibility and other things that just don't matter to reaching AGI

2024-05-09 Thread Rob Freeman
On Thu, May 9, 2024 at 6:15 AM James Bowery  wrote:
>
> Shifting this thread to a more appropriate topic.
>
> -- Forwarded message -
>>
>> From: Rob Freeman 
>> Date: Tue, May 7, 2024 at 8:33 PM
>> Subject: Re: [agi] Hey, looks like the goertzel is hiring...
>> To: AGI 
>
>
>> I'm disappointed you don't address my points James. You just double
>> down that there needs to be some framework for learning, and that
>> nested stacks might be one such constraint.
> ...
>> Well, maybe for language a) we can't find top down heuristics which
>> work well enough and b) we don't need to, because for language a
>> combinatorial basis is actually sitting right there for us, manifest,
>> in (sequences of) text.
>
>
> The origin of the Combinatorial Hierarchy thence ANPA was the Cambridge 
> Language Research Unit.

Interesting tip about the Cambridge Language Research Unit. Inspired
by Wittgenstein?

But this history means what?

> PS:  I know I've disappointed you yet again for not engaging directly your 
> line of inquiry.  Just be assured that my failure to do so is not because I 
> in any way discount what you are doing -- hence I'm not "doubling down" on 
> some opposing line of thought -- I'm just not prepared to defend Granger's 
> work as much as I am prepared to encourage you to take up your line of 
> thought directly with him and his school of thought.

Well, yes.

Thanks for the link to Granger's work. It looks like he did a lot on
brain biology, and developed a hypothesis that the biology of the
brain split into different regions is consistent with aspects of
language suggesting limits on nested hierarchy.

But I don't see it engages in any way with the original point I made
(in response to Matt's synopsis of OpenCog language understanding.)
That OpenCog language processing didn't fail because it didn't do
language learning (or even because it didn't attempt "semantic"
learning first.) That it was somewhat the opposite. That OpenCog
language failed because it did attempt to find an abstract grammar.
And LLMs succeed to the extent they do because they abandon a search
for abstract grammar, and just focus on prediction.

That's just my take on the OpenCog (and LLM) language situation.
People can take it or leave it.

Criticisms are welcome. But just saying, oh, but hey look at my idea
instead... Well, it might be good for people who are really puzzled
and looking for new ideas.

I guess it's a problem for AI research in general that people rarely
attempt to engage with other people's ideas. They all just assert
their own ideas. Like Matt's reply to the above... "Oh no, the real
problem was they didn't try to learn semantics..."

If you think OpenCog language failed instead because it didn't attempt
to learn grammar as nested stacks, OK, that's your idea. Good luck
trying to learn abstract grammar as nested stacks.

Actual progress in the field stumbles along by fits and starts. What's
happened in 30 years? Nothing much. A retreat to statistical
uncertainty about grammar in the '90s with HMMs? A first retreat to
indeterminacy. Then, what, 8 years ago the surprise success of
transformers, a cross-product of embedding vectors which ignores
structure and focuses on prediction. Why did it succeed? You, because
transformers somehow advance the nested stack idea? Matt, because
transformers somehow advance the semantics first idea?

My idea is that they advance the idea that a search for an abstract
grammar is flawed (in practice if not in theory.)

My idea is consistent with the ongoing success of LLMs. Which get
bigger and bigger, and don't appear to have any consistent structure.
But also their failures. That they still try to learn that structure
as a fixed artifact.

Actually, as far as I know, the first model in the LLM style of
indeterminate grammar as a cross-product of embedding vectors, was
mine.

***If anyone can point to an earlier precedent I'd love to see it.***

So LLMs feel like a nice vindication of those early ideas to me.
Without embracing the full extent of them. They still don't grasp the
full point. I don't see reason to be discouraged in it.

And it seems by chance that the idea seems consistent with the
emergent structure theme of this thread. With the difference that with
language, we have access to the emergent system, bottom-up, instead of
top down, the way we do with physics, maths.

But everyone is working on their own thing. I just got drawn in by
Matt's comment that OpenCog didn't do language learning.

-Rob

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Teaac2c1a9c4f4ce3-Mc80863f9a44a6d34f3ba12a6
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[jira] [Commented] (CXF-9011) WSDLTo JAXWS Frontend service.vm Velocity template uses deprecated URL constructor

2024-05-08 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844771#comment-17844771
 ] 

Freeman Yue Fang commented on CXF-9011:
---

Hi [~fjmateo],

Thanks for reporting this! A patch is welcome to address it!

Freeman

> WSDLTo JAXWS Frontend service.vm Velocity template uses deprecated URL 
> constructor
> --
>
> Key: CXF-9011
> URL: https://issues.apache.org/jira/browse/CXF-9011
> Project: CXF
>  Issue Type: Bug
>  Components: Soap Binding
>Affects Versions: 4.0.4
>Reporter: Francisco Mateo
>Priority: Minor
>
> The URL constructors were deprecated in Java 20 
> [https://bugs.openjdk.org/browse/JDK-8294241].
> The template uses the deprecated constructor: 
> [https://github.com/apache/cxf/blob/cxf-4.0.4/tools/wsdlto/frontend/jaxws/src/main/java/org/apache/cxf/tools/wsdlto/frontend/jaxws/template/service.vm#L123]
> This becomes an issue when applications compile with warnings enabled, for 
> example {{-Xlint:all -Werror}}
> Seems we could just switch to using {{URI.create(...).toURL()}} in the 
> template since that has been available since Java 1.4
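For illustration, a minimal sketch of the substitution the report suggests. The class name and the URL string here are placeholders, not taken from the CXF `service.vm` template:

```java
import java.net.MalformedURLException;
import java.net.URI;
import java.net.URL;

public class UrlConstructorMigration {

    // Before (deprecated since Java 20, see JDK-8294241):
    //     URL url = new URL(address);
    // After (java.net.URI and URI.toURL() have existed since Java 1.4):
    static URL toUrl(String address) throws MalformedURLException {
        return URI.create(address).toURL();
    }

    public static void main(String[] args) throws MalformedURLException {
        URL wsdl = toUrl("http://localhost:8080/services/HelloService?wsdl");
        System.out.println(wsdl.getHost() + ":" + wsdl.getPort());
        // prints localhost:8080
    }
}
```

One caveat with this swap: `URI.create` enforces RFC 2396/3986 syntax, so some strings the old `URL` constructor tolerated (e.g. unencoded spaces) will now throw an unchecked `IllegalArgumentException` instead of a checked `MalformedURLException`.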



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-08 Thread Rob Freeman
Is a quantum basis fractal?

To the extent you're suggesting some kind of quantum computation might
be a good implementation for the structures I'm suggesting, though,
yes. At least, Bob Coecke thinks quantum computation will be a good
fit for his quantum style grammar formalisms, which kind of parallel
what I'm suggesting in some ways. That's what they are working on with
their Quantinuum, Honeywell and Cambridge Quantum spin-off (recently
another 300 million from JP Morgan.) Here's a recent paper from their
language formalism team (Stephen Clark a recent hire from DeepMind, I
think, though I think Coecke did the original quantum and category
theoretic combinatorial semantics papers with him when they were
together at Oxford back from 2008 or so.)

From Conceptual Spaces to Quantum Concepts:
Formalising and Learning Structured Conceptual Models
Sean Tull, Razin A. Shaikh, Sara Sabrina Zemljiˇc and Stephen Clark
https://browse.arxiv.org/pdf/2401.08585

Personally I think they've gone off on the wrong tangent with that. I
like the fact that Coecke has recognized a quantum indeterminacy to
natural language grammar. But I think it is pointless to try to
actually apply a quantum formalization to it. If it emerges, just let
it emerge. You don't need to formalize it at all. It's pointless to
bust a gut pushing the data into a formalism. And then bust a gut
picking the formalism apart again to "collapse" it into something
observable at run time.

But these maths guys love their formalisms. That's the approach they
are taking. And they think they need the power of quantum computation
to pull it apart again once they do it. So there's quantum computation
as a natural platform for that, yes.

For the rest of what you've written, I don't well understand what you
are saying. But if you're talking about the interpretability of the
kind of self structuring sequence networks I'm talking about,
paradoxically, allowing the symmetry groupings to emerge chaotically,
should result in more "visible" and "manageable" structure, not less.
It should give us nice, interpretable, cognitive hierarchies, objects,
concepts, etc, that you can use to do logic and reasoning, much like
the nodes of one of OpenCog's hypergraphs (it's just that you need an
on-the-fly structuring system like I'm talking about to get adequate
representation for the nodes of an OpenCog hypergraph. They don't
exist as "primitives". Though Ben's probably right they could emerge
on top of whatever nodes he does have. But he's never had either the
computing power, or, actually the LLM like relational parameters, to
even start doing that.) So I see it as the answer for
interpretability, logic, "truthiness", and all the problems we have
now with LLMs (as well as the novelty, creativity, new "ideas" bit
associated with the complex system side.) You only get the quantum
like woo woo when you insist on squeezing the variability into a
single global formalism. Practically, the whole system should resolve
from moment to moment as clearly as the alternative perspectives of an
Escher sketch appear to us. One or the other. Clear in and of
themselves. Just that they would be able to flip to another state
discontinuously depending on context (and essentially both be there at
the same time until they are resolved.)

On Wed, May 8, 2024 at 1:00 PM Quan Tesla  wrote:
>
> If I understood your assertions correctly, then I'd think that a 
> quantum-based (fractal), evolutionary (chemistry-like) model would be 
> suitable for extending the cohesive cognition to x levels.
>
>  If the boundaried result emerges as synonymous with an LLM, or NN, then it 
> would be useful. However, if it emerges as an as-of-yet unnamed, 
> recombinatory lattice, it would be groundbreaking.
>
> My primary thought here relates to inherent constraints in visualizing 
> quantum systems. Once the debate between simple and complex systems end 
> (e.g., when an absolutist perspective is advanced), then the observational 
> system stops learning. Volume does not equate real progress.
>
> With an evolutionary model, "brainsnaps in time" may be possible. This 
> suggests  that scaling would be managable within relativistic and relevance 
> boundaries/targets.
>
> In the least, trackable evolutionary pathways and predictability of the 
> pattern of tendency of a system should become visible, and manageability 
> would be increased.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M210f900801eb7251599971d1
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-07 Thread Rob Freeman
I'm disappointed you don't address my points James. You just double
down that there needs to be some framework for learning, and that
nested stacks might be one such constraint.

I replied that nested stacks might be emergent on dependency length.
So not a constraint based on actual nested stacks in the brain, but a
"soft" constraint based on the effect of dependency
 length on groups/stacks generated/learned from sequence networks.

BTW just noticed your "Combinatorial Hierarchy, Computational
Irreducibility and other things that just don't matter..." thread.
Perhaps that thread is a better location to discuss this. Were you
positing in that thread that all of maths and physics might be
emergent on combinatorial hierarchies? Were you saying yes, but it
doesn't matter to the practice of AGI, because for physics we can't
find the combinatorial basis, and in practice we can find top down
heuristics which work well enough?

Well, maybe for language a) we can't find top down heuristics which
work well enough and b) we don't need to, because for language a
combinatorial basis is actually sitting right there for us, manifest,
in (sequences of) text.

With language we don't just have the top-down perception of structure
like we do with physics (or maths.) Language is different to other
perceptual phenomena that way. Because language is the brain's attempt
to generate a perception in others. So with language we're also privy
to what the system looks like bottom up. We also have the, bottom up,
"word" tokens which are the combinatorial basis which generates a
perception.

Anyway, it seems like my point is similar to your point: language
structure, and cognition, might be emergent on combinatorial
hierarchies.

LLMs go part way to implementing that emergent structure. They succeed
to the extent they abandon an explicit search for top-down structure,
and just allow the emergent structure to balloon. Seemingly endlessly.
But they are a backwards implementation of emergent structure.
Succeeding by allowing the structure to grow. But failing because
back-prop assumes the structure will somehow not grow too. That there
will be an end to growth. Which will somehow be a compression of the
growth it hasn't captured yet... Actually, if it grows, you can't
capture it all. And in particular, back-prop can't capture all of the
emergent structure, because, like physics, that emergent structure
manifests some entanglement, and chaos.

In this thesis, LLMs are on the right track. We just need to replace
back-prop with some other way of finding emergent hierarchies of
predictive symmetries, and do it generatively, on the fly.

In practical terms, maybe, as I said earlier, the variational
estimation with heat of Extropic. Or maybe some kind of distributed
reservoir computer like LiquidAI are proposing. Otherwise just
straight out spiking NNs should be a good fit. If we focus on actively
seeking new variational symmetries using the spikes, and not
attempting to (mis)fit them to back-propagation.

On Tue, May 7, 2024 at 11:32 PM James Bowery  wrote:
>...
>
> At all levels of abstraction where natural science is applicable, people 
> adopt its unspoken presumption which is that mathematics is useful.  This is 
> what makes Solomonoff's proof relevant despite the intractability of proving 
> that one has found the ideal mathematical model.  The hard sciences are 
> merely the most obvious level of abstraction in which one may recognize this.
>...
>
> Any constraint on the program search (aka search for the ultimate algorithmic 
> encoding of all data in evidence at any given level of abstraction) is a 
> prior.  The thing that makes the high order push down automata (such as 
> nested stacks) interesting is that it may provide a constraint on program 
> search that evolution has found useful enough to hard wire into the structure 
> of the human brain -- specifically in the ratio of "capital investment" 
> between sub-modules of brain tissue.  This is a constraint, the usefulness of 
> which, may be suspected as generally applicable to the extent that human 
> cognition is generally applicable.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-M321384a83da19a33df5ba986
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[jira] [Resolved] (CXF-9006) TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red Hat OpenJDK on PPC64LE

2024-05-07 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CXF-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang resolved CXF-9006.
---
Fix Version/s: 4.1.0
   Resolution: Fixed

> TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red 
> Hat OpenJDK on PPC64LE
> -
>
> Key: CXF-9006
> URL: https://issues.apache.org/jira/browse/CXF-9006
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4, 4.1.0
> Environment: Java version: 17.0.6, vendor: Red Hat, Inc., runtime: 
> /usr/lib/jvm/java-17-openjdk-17.0.6.0.10-3.el9.ppc64le
> OS name: "linux", version: "5.14.0-378.el9.ppc64le", arch: "ppc64le", family: 
> "unix"
>Reporter: Jamie Mark Goodyear
>Priority: Minor
> Fix For: 4.1.0
>
> Attachments: wss40.cer, wss40CA.cer, wss40CACRL.cer, wss40rev.cer
>
>
> {{TrustedAuthorityValidatorCRLTest#testIsCertChainValid}} failing when using 
> Red Hat OpenJDK 17. The error revealed when using 
> {{-Djava.security.debug=certpath}} is that the JVM can not find a valid 
> certification path to the requested target -- note: this unit test works on 
> Bellsoft, Temurin, IBM, Corretto, Zulu, on other platforms, and is confirmed 
> to work with Semeru/Temurin Java on PPC64LE.
> Given this is restricted to one particular JVM distribution/platform, would 
> this be a candidate for a skip test?
> {{sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
>  at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
>  at 
> java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297) 
> at 
> org.apache.cxf.xkms.x509.validator.TrustedAuthorityValidator.isCertificateChainValid(TrustedAuthorityValidator.java:84)}}





[jira] [Commented] (CXF-9006) TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red Hat OpenJDK on PPC64LE

2024-05-07 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844311#comment-17844311
 ] 

Freeman Yue Fang commented on CXF-9006:
---

No, I dropped my PR
https://github.com/apache/cxf/pull/1842
without merging it, as I realized your PR already has the fix. I just merged your PR
https://github.com/apache/cxf/pull/1836

Please try it again. Sorry for the confusion.

Freeman

> TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red 
> Hat OpenJDK on PPC64LE
> -
>
> Key: CXF-9006
> URL: https://issues.apache.org/jira/browse/CXF-9006
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4, 4.1.0
> Environment: Java version: 17.0.6, vendor: Red Hat, Inc., runtime: 
> /usr/lib/jvm/java-17-openjdk-17.0.6.0.10-3.el9.ppc64le
> OS name: "linux", version: "5.14.0-378.el9.ppc64le", arch: "ppc64le", family: 
> "unix"
>Reporter: Jamie Mark Goodyear
>Priority: Minor
> Attachments: wss40.cer, wss40CA.cer, wss40CACRL.cer, wss40rev.cer
>
>
> {{TrustedAuthorityValidatorCRLTest#testIsCertChainValid}} failing when using 
> Red Hat OpenJDK 17. The error revealed when using 
> {{-Djava.security.debug=certpath}} is that the JVM can not find a valid 
> certification path to the requested target -- note: this unit test works on 
> Bellsoft, Temurin, IBM, Corretto, Zulu, on other platforms, and is confirmed 
> to work with Semeru/Temurin Java on PPC64LE.
> Given this is restricted to one particular JVM distribution/platform, would 
> this be a candidate for a skip test?
> {{sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
>  at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
>  at 
> java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297) 
> at 
> org.apache.cxf.xkms.x509.validator.TrustedAuthorityValidator.isCertificateChainValid(TrustedAuthorityValidator.java:84)}}





[jira] [Commented] (CXF-9006) TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red Hat OpenJDK on PPC64LE

2024-05-07 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844307#comment-17844307
 ] 

Freeman Yue Fang commented on CXF-9006:
---

Hi [~jgoodyear],

It's weird, the attached cert's validity extends until 2034
{code}
Not After : Mar 11 22:46:11 2034 GMT
{code}

Perhaps you are somehow still using the old certificates? I noticed that your PR
https://github.com/apache/cxf/pull/1836
is not merged yet.

Freeman

> TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red 
> Hat OpenJDK on PPC64LE
> -
>
> Key: CXF-9006
> URL: https://issues.apache.org/jira/browse/CXF-9006
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4, 4.1.0
> Environment: Java version: 17.0.6, vendor: Red Hat, Inc., runtime: 
> /usr/lib/jvm/java-17-openjdk-17.0.6.0.10-3.el9.ppc64le
> OS name: "linux", version: "5.14.0-378.el9.ppc64le", arch: "ppc64le", family: 
> "unix"
>Reporter: Jamie Mark Goodyear
>Priority: Minor
> Attachments: wss40.cer, wss40CA.cer, wss40CACRL.cer, wss40rev.cer
>
>
> {{TrustedAuthorityValidatorCRLTest#testIsCertChainValid}} failing when using 
> Red Hat OpenJDK 17. The error revealed when using 
> {{-Djava.security.debug=certpath}} is that the JVM can not find a valid 
> certification path to the requested target -- note: this unit test works on 
> Bellsoft, Temurin, IBM, Corretto, Zulu, on other platforms, and is confirmed 
> to work with Semeru/Temurin Java on PPC64LE.
> Given this is restricted to one particular JVM distribution/platform, would 
> this be a candidate for a skip test?
> {{sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
>  at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
>  at 
> java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297) 
> at 
> org.apache.cxf.xkms.x509.validator.TrustedAuthorityValidator.isCertificateChainValid(TrustedAuthorityValidator.java:84)}}





Re: [gentoo-user] Hard drive and PWDIS or pin 3 power disable/reset.

2024-05-07 Thread Rich Freeman
On Tue, May 7, 2024 at 6:04 AM Michael  wrote:
>
> On Tuesday, 7 May 2024 08:50:26 BST Dale wrote:
> >
> > I'm aware of what it is and the cable part.  I was curious what it looks
> > like to BIOS and the OS when one is connected and that pin has the drive
> > disabled.  From what I've read in some places, the drive doesn't power
> > up at all.
>
> I don't have a drive like this, but as I understand it when the drive receives
> voltage on pin 3 it powers down.  This requires a MoBo and firmware which
> supports such a function - probably unlikely to be found on consumer kit.

I have had these drives.  If the drive is connected to many ATX power
supplies via a standard cable, the drive simply will not be detected
by the computer.  With some power supplies it will work fine.  It all
depends on whether the power supply follows the original SATA spec, or
was designed to be compatible with enterprise drives which use the
revised spec, which isn't backwards compatible (I don't know who the
genius was who had that idea).

In order to actually toggle the reset line you need SOMETHING able to
switch the line in-between the drive and the PSU.  That might be a
motherboard (especially with the newer trend towards running all the
power through the motherboard), or some other accessory card.  Unless
the HBA provides the power it won't be there.

However, you don't need any fancy hardware for the drive to just work
- that is only needed to send the hardware reset to the drive.  All
you need is to not have that pin powered.  That just means the right
power supply, the right cable, the right adapter, or some improvised
solution (tape over the pin is a common one).

In any case, if the pin is the problem, the drive simply won't be
detected.  Your SATA issues are due to something else.  It might be a
bad drive, an incompatibility (maybe the drive isn't in the
smartmontools database yet), or maybe an issue with the HBA (for USB
HBAs in particular you often need to pass command line parameters as
there apparently isn't a standard way to pass these commands over
USB).  I doubt the power line is your problem.
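For what it's worth, a short illustration of the kind of device-type hints smartmontools accepts for USB bridges. The device path is a placeholder, and which `-d` value works depends on the specific bridge chip, so treat these as examples to adapt rather than a recipe:

```shell
# Plain probe; behind many USB-SATA bridges this fails outright:
smartctl -i /dev/sdb

# Tell smartctl to tunnel ATA commands through a SAT-capable bridge:
smartctl -d sat -i /dev/sdb

# Some bridges need a chip-specific type instead, e.g. older JMicron chips:
smartctl -d jmicron -i /dev/sdb

# The full list of supported device types is in `man smartctl` under -d/--device.
```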

As far as shucked drives go - that is typically indicated by the
label/model.  If it isn't branded in any way it may have been shucked.
That shouldn't be a problem as long as you don't have the power issue
- the drive might simply be bad.

-- 
Rich



Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-06 Thread Rob Freeman
Addendum: another candidate for this variational model for finding
distributions to replace back-prop (and consequently with the
potential to capture predictive structure in the form of chaotic
attractors, though they don't appreciate the need yet). There's
Extropic, which proposes using heat noise. And another, LiquidAI. If
it's true LiquidAI's nodes are little reservoir computers, they might
work on a similar variational estimation/generation-of-distributions
basis. Joscha Bach is involved with that, though I don't know in what
capacity.

James: "Physics Informed Machine Learning". "Building models from data
using optimization and regression techniques".

Fine. If you have a physics to constrain it to. We don't have that
"physics" for language.

Richard Granger you say? The brain is constrained to be a "nested stack"?

https://www.researchgate.net/publication/343648662_Toward_the_quantification_of_cognition

Language is a nested stack? Possibly. Certainly you get a (softish)
ceiling on recursion starting at level 3. The famous level 2: "The rat
the cat chased escaped" (OK) vs. level 3: "The rat the cat the dog bit
chased escaped" (borderline not OK).

How does that contradict my assertion that such nested structures must
be formed on the fly, because they are chaotic attractors of
predictive symmetry on a sequence network?

On the other hand, can fixed, pre-structured, nested stacks explain
contradictory (semantic) categories, like "strong tea" (OK) vs
"powerful tea" (not OK)?

Unless stacks form on the fly, and can contradict, how can we explain
that "strong" can be a synonym (fit in the stack?) for "powerful" in
some contexts, but not others?

On the other hand, a constraint like an observation of limitations on
nesting, might be a side effect of the other famous soft restriction,
the one on dependency length. A restriction on dependency length is an
easier explanation for nesting limits, and fits with the model that
language is just a sequence network, which gets structured (into
substitution groups/stacks?) on the fly.

On Mon, May 6, 2024 at 11:06 PM James Bowery  wrote:
>
> Let's give the symbolists their due:
>
> https://youtu.be/JoFW2uSd3Uo?list=PLMrJAkhIeNNQ0BaKuBKY43k4xMo6NSbBa
>
> The problem isn't that symbolists have nothing to offer, it's just that 
> they're offering it at the wrong level of abstraction.
>
> Even in the extreme case of LLM's having "proven" that language modeling 
> needs no priors beyond the Transformer model and some hyperparameter 
> tweaking, there are language-specific priors acquired over the decades if not 
> centuries that are intractable to learn.
>
> The most important, if not conspicuous, one is Richard Granger's discovery 
> that Chomsky's hierarchy elides the one grammar category that human cognition 
> seems to use.

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb63883dd9d6b59cc-Me078486d3e7a407326e33a8a
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[fluid-dev] channel filter

2024-05-06 Thread Freeman Gilmore
Can the cutoff frequency of the channel's built-in high- or low-pass filter
be programmed so that it can be manipulated by MIDI?

On Sat, May 4, 2024 at 3:11 PM Freeman Gilmore 
wrote:

> I would like to add a low- or high-pass channel filter that I can control
> by MIDI.   It is not listed as a feature, but it is listed in the library
> functions under Synthesizer, Effect - IIR Filter. I do not want to disable
> the synthesizer's filter function if it is used for SoundFonts.
> Also I could not find how to use the library function.
> Thank you, fg
>
___
fluid-dev mailing list
fluid-dev@nongnu.org
https://lists.nongnu.org/mailman/listinfo/fluid-dev


Re: [agi] Hey, looks like the goertzel is hiring...

2024-05-05 Thread Rob Freeman
On Sat, May 4, 2024 at 4:53 AM Matt Mahoney  wrote:
>
> ... OpenCog was a hodgepodge of a hand coded structured natural language 
> parser, a toy neural vision system, and a hybrid fuzzy logic knowledge 
> representation data structure that was supposed to integrate it all together 
> but never did after years of effort. There was never any knowledge base or 
> language learning algorithm.

Good summary of the OpenCog system Matt.

But there was a language learning algorithm. Actually there was more
of a language learning algorithm in OpenCog than there is now in LLMs.
That's been the problem with OpenCog. By contrast LLMs don't try to
learn grammar. They just try to learn to predict words.

Rather than the mistake being that they had no language learning
algorithm, the mistake was OpenCog _did_ try to implement a language
learning algorithm.

By contrast the success, with LLMs, came to those who just tried to
predict words. Using a kind of vector cross product across word
embedding vectors, as it turns out.

Trying to learn grammar was linguistic naivety. You could have seen it
back then. Hardly anyone in the AI field has any experience with
language, actually, that's the problem. Even now with LLMs. They're
all linguistic naifs. A tragedy of wasted effort for OpenCog. Formal
grammars for natural language are unlearnable. I was telling Linas
that since 2011. I posted about it here numerous times. They spent a
decade, and millions(?) trying to learn a formal grammar.

Meanwhile vector language models which don't coalesce into formal
grammars, swooped in and scooped the pool.

That was NLP. But more broadly in OpenCog too, the problem seems to be
that Ben is still convinced AI needs some kind of symbolic
representation to build chaos on top of. A similar kind of error.

I tried to convince Ben otherwise the last time he addressed the
subject of semantic primitives in this AGI Discussion Forum session
two years ago, here:

March 18, 2022, 7AM-8:30AM Pacific time: Ben Goertzel leading
discussion on semantic primitives
https://singularitynet.zoom.us/rec/share/qwLpQuc_4UjESPQyHbNTg5TBo9_U7TSyZJ8vjzudHyNuF9O59pJzZhOYoH5ekhQV.2QxARBxV5DZxtqHQ?startTime=164761312

Starting timestamp 1:24:48, Ben says, disarmingly:

"For f'ing decades, which is ridiculous, it's been like, OK, I want to
explore these chaotic dynamics and emergent strange attractors, but I
want to explore them in a very fleshed out system, with a rich
representational capability, interacting with a complex world, and
then we still haven't gotten to that system ... Of course, an
alternative approach could be taken as you've been attempting, of ...
starting with the chaotic dynamics but in a simpler setting. ... But I
think we have agreed over the decades that to get to human level AGI
you need structure emerging from chaos. You need a system with complex
chaotic dynamics, you need structured strange attractors there, you
need the system's own pattern recognition to be recognizing the
patterns in these structured strange attractors, and then you have
that virtuous cycle."

So he embraces the idea cognitive structure is going to be chaotic
attractors, as he did when he wrote his "Chaotic Logic" book back in
1994. But he's still convinced the chaos needs to emerge on top of
some kind of symbolic representation.

I think there's a sunken cost fallacy at work. So much is invested in
the paradigm of chaos appearing on top of a "rich" symbolic
representation. He can't try anything else.

As I understand it, Hyperon is a re-jig of the software for this
symbol based "atom" network representation, to make it easier to
spread the processing load over networks.

As a network representation, the potential is there to merge the
insight that has worked for LLMs, dropping formal symbolic
representation, with the chaos on top that was Ben's earlier insight.

I presented on that potential at a later AGI Discussion Forum session.
But mysteriously the current devs failed to upload the recording for
that session.

> Maybe Hyperon will go better. But I suspect that LLMs on GPU clusters will 
> make it irrelevant.

Here I disagree with you. LLMs are at their own dead-end. What they
got right was to abandon formal symbolic representation. They likely
generate their own version of chaos, but they are unaware of it. They
are still trapped in their own version of the "learning" idea. Any
chaos generated is frozen and tangled in their enormous
back-propagated networks. That's why they exhibit no structure,
hallucinate, and their processing of novelty is limited to rough
mapping to previous knowledge. The solution will require a different
way of identifying chaotic attractors in networks of sequences.

A Hyperon style network might be a better basis to make that advance.
It would have to abandon the search for a symbolic representation.
LLMs can show the way there. Make prediction not representation the
focus. Just start with any old (sequential) tokens. But in contrast to
LLMs, 

[jira] [Created] (CXF-9008) FIPS support

2024-05-05 Thread Freeman Yue Fang (Jira)
Freeman Yue Fang created CXF-9008:
-

 Summary: FIPS support
 Key: CXF-9008
 URL: https://issues.apache.org/jira/browse/CXF-9008
 Project: CXF
  Issue Type: New Feature
Reporter: Freeman Yue Fang


The goal of this ticket is to ensure we can run CXF with FIPS-compliant 
security algorithms on FIPS-enabled machines.

We should ensure all security-related tests (unless those tests specifically 
test algos which are not allowed in FIPS mode) can also pass in FIPS mode



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (CXF-9008) FIPS support

2024-05-05 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CXF-9008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang reassigned CXF-9008:
-

Assignee: Freeman Yue Fang

> FIPS support
> 
>
> Key: CXF-9008
> URL: https://issues.apache.org/jira/browse/CXF-9008
> Project: CXF
>  Issue Type: New Feature
>    Reporter: Freeman Yue Fang
>    Assignee: Freeman Yue Fang
>Priority: Major
>
> The goal of this ticket is to ensure we can run CXF with FIPS-compliant 
> security algorithms on FIPS-enabled machines.
> We should ensure all security-related tests (unless those tests specifically 
> test algos which are not allowed in FIPS mode) can also pass in FIPS mode





[fluid-dev] channel filter

2024-05-04 Thread Freeman Gilmore
I would like to add a low- or high-pass channel filter that I can control by
MIDI.   It is not listed as a feature, but it is listed in the library
functions under Synthesizer, Effect - IIR Filter. I do not want to disable the
synthesizer's filter function if it is used for SoundFonts.    Also I could
not find how to use the library function.
Thank you, fg


[Birdnews]Sad news: Bruce Falls has passed away at age 100

2024-04-30 Thread Lynne Freeman via birdnews
Hello dear birding community,

Bruce Falls has passed away.  Such sad news for all who knew him, and a
loss for the birding community. He was instrumental in founding NCC and
Birds Canada. He was the first president of Ontario Nature and the editor
of the first atlas. He was humble and funny and so generous with everyone.

Here is the obit:

https://necrocanada.com/obituaries-2023/canada-ontario-toronto-dr-j-bruce-falls-oc/

Lynne
-- 
Lynne Freeman (she/her/hers)
lynnef...@gmail.com
--
Ontbirds and Birdnews are moderated email Listservs provided by the Ontario 
Field Ornithologists (OFO) as a service to all birders in Ontario.

Birdnews is reserved for announcements, location summaries, first of year 
reports, etc. To post a message on Birdnews, send an email to: 
birdnews@ontbirds.ca.

If you have any questions or concerns, contact the Birdnews Moderators by email 
at birdn...@ofo.ca. Please review posting rules and guidelines at 
http://ofo.ca/site/content/listserv-guidelines

During the COVID-19 pandemic, all Ontario birders should be taking extra 
precautions and following local, provincial, and federal regulations regarding 
physical distancing and non-essential travel.

To find out more about OFO, please visit our website at ofo.ca or Facebook page 
at https://www.facebook.com/OntarioFieldOrnithologists.


[gentoo-commits] repo/gentoo:master commit in: sys-process/systemd-cron/

2024-04-30 Thread Richard Freeman
commit: 9650ada19875e027a46c5ce23dc1bd0044d0
Author: Richard Freeman  gentoo  org>
AuthorDate: Tue Apr 30 11:45:05 2024 +
Commit: Richard Freeman  gentoo  org>
CommitDate: Tue Apr 30 12:51:00 2024 +
URL:https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=9650ada1

sys-process/systemd-cron: add 2.4.0

Bug: https://bugs.gentoo.org/930950
Signed-off-by: Richard Freeman  gentoo.org>

 sys-process/systemd-cron/Manifest  |  1 +
 sys-process/systemd-cron/systemd-cron-2.4.0.ebuild | 93 ++
 2 files changed, 94 insertions(+)

diff --git a/sys-process/systemd-cron/Manifest 
b/sys-process/systemd-cron/Manifest
index 06b7f8013a30..830f4e8bbd72 100644
--- a/sys-process/systemd-cron/Manifest
+++ b/sys-process/systemd-cron/Manifest
@@ -1,2 +1,3 @@
 DIST systemd-cron-2.2.0.tar.gz 55825 BLAKE2B 
ca4b02fdea5084439aa56b3f04603000d811f21922c11cd26a22ea6387e4b54575587ff4e1eb7fc7a3260d2f656ea0eb91365942c135982f4bd26aead1a080f1
 SHA512 
f26c7d7e2da7eb5cd5558f352aff852585bfefd961de6ecc2409a4a53b63f82662a89bdbf71f739ea8e44ef9e3e1fdec15cdc63ce1e90c289fb0e636ff679ca0
 DIST systemd-cron-2.3.4.tar.gz 58458 BLAKE2B 
594fff8f7cc126aa33b1dcbf74293a39b5939576203c11f8f0fc300285462f266c35503a6cfe46ee797e5e617e54e09b92dd6ba8a4044f962d1efd2822f0a87c
 SHA512 
2a9743df6d0e1a83b65d15609e47b901fde1d77d1207c4cc0617395be8d9e94daece91aec9a3398c3d09f86383e01cfff301614df727ca598efe873453f5a3c9
+DIST systemd-cron-2.4.0.tar.gz 60462 BLAKE2B 
6a4450637b69ed9c32ea5711018be9265db96a6bf19896bb72c13184817750e7d64d2fdd00ac885d5ae3393b671c04c89d1bf46f73fbb817c1b1798a4809b955
 SHA512 
88ce99307101d33e6fc6a5dfa25f16db9754785809b44da78c6b05b52592385c9a957770ee781b97a248ab475304bd7eb234bffa47114031bd804e2aa5f79c06

diff --git a/sys-process/systemd-cron/systemd-cron-2.4.0.ebuild 
b/sys-process/systemd-cron/systemd-cron-2.4.0.ebuild
new file mode 100644
index ..00738f1c0e07
--- /dev/null
+++ b/sys-process/systemd-cron/systemd-cron-2.4.0.ebuild
@@ -0,0 +1,93 @@
+# Copyright 1999-2024 Gentoo Authors
+# Distributed under the terms of the GNU General Public License v2
+
+EAPI=8
+inherit systemd toolchain-funcs
+
+DESCRIPTION="systemd units to create timers for cron directories and crontab"
+HOMEPAGE="https://github.com/systemd-cron/systemd-cron/"
+SRC_URI="https://github.com/systemd-cron/${PN}/archive/v${PV}.tar.gz -> 
systemd-cron-${PV}.tar.gz"
+
+LICENSE="MIT"
+SLOT="0"
+KEYWORDS="~amd64 ~arm ~arm64 ~hppa ~ia64 ~ppc ~ppc64 ~riscv ~sparc ~x86"
+IUSE="cron-boot etc-crontab-systemd minutely +runparts setgid yearly"
+# We can't run the unshare tests within sandbox/with low privs, and the
+# 'test-nounshare' target just does static analysis (shellcheck etc).
+RESTRICT="test"
+
+BDEPEND="virtual/pkgconfig"
+RDEPEND="
+   !sys-process/cronie[anacron]
+   acct-user/_cron-failure
+   acct-group/_cron-failure
+   app-crypt/libmd:=
+   sys-process/cronbase
+   >=sys-apps/systemd-255[-split-usr(-)]
+   !etc-crontab-systemd? ( !sys-process/dcron )
+   runparts? ( sys-apps/debianutils )
+"
+DEPEND="
+   dev-libs/openssl:=
+   sys-process/cronbase
+"
+
+src_prepare() {
+   sed -i \
+   -e 's/^crontab/crontab-systemd/' \
+   -e 's/^CRONTAB/CRONTAB-SYSTEMD/' \
+   -- "${S}/src/man/crontab."{1,5}".in" || die
+
+   if use etc-crontab-systemd
+   then
+   sed -i \
+   -e "s!/etc/crontab!/etc/crontab-systemd!" \
+   -- "${S}/src/man/crontab."{1,5}".in" \
+   "${S}/src/bin/systemd-crontab-generator.cpp" \
+   "${S}/test/test-generator" || die
+   fi
+
+   default
+}
+
+my_use_enable() {
+   if use ${1}; then
+   echo --enable-${2:-${1}}=yes
+   else
+   echo --enable-${2:-${1}}=no
+   fi
+}
+
+src_configure() {
+   tc-export PKG_CONFIG CXX CC
+
+   ./configure \
+   --prefix="${EPREFIX}/usr" \
+   --mandir="${EPREFIX}/usr/share/man" \
+   --unitdir="$(systemd_get_systemunitdir)" \
+   --generatordir="$(systemd_get_systemgeneratordir)" \
+   $(my_use_enable cron-boot boot) \
+   $(my_use_enable minutely) \
+   $(my_use_enable runparts) \
+   $(my_use_enable yearly) \
+   $(my_use_enable yearly quarterly) \
+   $(my_use_enable yearly semi_annually) || die
+
+   export CRONTAB=crontab-systemd
+}
+
+src_compile() {
+   emake PCH=
+}
+
+src_install() {
+   emake DESTDIR="${D}" PCH= install
+   rm -f "${ED}"/usr/lib/sysusers.d/systemd-cron.conf
+}
+
+pkg_postinst() {
+   elog "This package now support

[jira] (CXF-9006) TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red Hat OpenJDK on PPC64LE

2024-04-29 Thread Freeman Yue Fang (Jira)


[ https://issues.apache.org/jira/browse/CXF-9006 ]


Freeman Yue Fang deleted comment on CXF-9006:
---

was (Author: ffang):
Send PR to upgrade the cer files
https://github.com/apache/cxf/pull/1842

> TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red 
> Hat OpenJDK on PPC64LE
> -
>
> Key: CXF-9006
> URL: https://issues.apache.org/jira/browse/CXF-9006
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4, 4.1.0
> Environment: Java version: 17.0.6, vendor: Red Hat, Inc., runtime: 
> /usr/lib/jvm/java-17-openjdk-17.0.6.0.10-3.el9.ppc64le
> OS name: "linux", version: "5.14.0-378.el9.ppc64le", arch: "ppc64le", family: 
> "unix"
>Reporter: Jamie Mark Goodyear
>Priority: Minor
> Attachments: wss40.cer, wss40CA.cer, wss40CACRL.cer, wss40rev.cer
>
>
> {{TrustedAuthorityValidatorCRLTest#testIsCertChainValid}} failing when using 
> Red Hat OpenJDK 17. The error revealed when using 
> {{-Djava.security.debug=certpath}} is that the JVM can not find a valid 
> certification path to the requested target -- note: this unit test works on 
> Bellsoft, Temurin, IBM, Corretto, Zulu, on other platforms, and is confirmed 
> to work with Semeru/Temurin Java on PPC64LE.
> Given this is restricted to one particular JVM distribution/platform, would 
> this be a candidate for a skip test?
> {{sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
>  at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
>  at 
> java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297) 
> at 
> org.apache.cxf.xkms.x509.validator.TrustedAuthorityValidator.isCertificateChainValid(TrustedAuthorityValidator.java:84)}}





[jira] [Commented] (CXF-9006) TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red Hat OpenJDK on PPC64LE

2024-04-29 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842138#comment-17842138
 ] 

Freeman Yue Fang commented on CXF-9006:
---

Send PR to upgrade the cer files
https://github.com/apache/cxf/pull/1842

> TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red 
> Hat OpenJDK on PPC64LE
> -
>
> Key: CXF-9006
> URL: https://issues.apache.org/jira/browse/CXF-9006
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4, 4.1.0
> Environment: Java version: 17.0.6, vendor: Red Hat, Inc., runtime: 
> /usr/lib/jvm/java-17-openjdk-17.0.6.0.10-3.el9.ppc64le
> OS name: "linux", version: "5.14.0-378.el9.ppc64le", arch: "ppc64le", family: 
> "unix"
>Reporter: Jamie Mark Goodyear
>Priority: Minor
> Attachments: wss40.cer, wss40CA.cer, wss40CACRL.cer, wss40rev.cer
>
>
> {{TrustedAuthorityValidatorCRLTest#testIsCertChainValid}} failing when using 
> Red Hat OpenJDK 17. The error revealed when using 
> {{-Djava.security.debug=certpath}} is that the JVM can not find a valid 
> certification path to the requested target -- note: this unit test works on 
> Bellsoft, Temurin, IBM, Corretto, Zulu, on other platforms, and is confirmed 
> to work with Semeru/Temurin Java on PPC64LE.
> Given this is restricted to one particular JVM distribution/platform, would 
> this be a candidate for a skip test?
> {{sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
>  at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
>  at 
> java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297) 
> at 
> org.apache.cxf.xkms.x509.validator.TrustedAuthorityValidator.isCertificateChainValid(TrustedAuthorityValidator.java:84)}}





[gentoo-commits] repo/gentoo:master commit in: app-backup/duplicity/

2024-04-29 Thread Richard Freeman
commit: 0f5b07d5d4f94d5088198ddd207dd664fae9ea27
Author: Richard Freeman  gentoo  org>
AuthorDate: Mon Apr 29 19:15:52 2024 +
Commit: Richard Freeman  gentoo  org>
CommitDate: Mon Apr 29 19:17:34 2024 +
URL:https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=0f5b07d5

app-backup/duplicity: stabilize 2.2.3 for amd64

Signed-off-by: Richard Freeman  gentoo.org>

 app-backup/duplicity/duplicity-2.2.3.ebuild | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/app-backup/duplicity/duplicity-2.2.3.ebuild 
b/app-backup/duplicity/duplicity-2.2.3.ebuild
index 71908351c86d..0594bd819a43 100644
--- a/app-backup/duplicity/duplicity-2.2.3.ebuild
+++ b/app-backup/duplicity/duplicity-2.2.3.ebuild
@@ -13,7 +13,7 @@ HOMEPAGE="https://duplicity.gitlab.io/"
 
 LICENSE="GPL-3"
 SLOT="0"
-KEYWORDS="~amd64 ~x86 ~amd64-linux ~x86-linux ~x64-macos"
+KEYWORDS="amd64 ~x86 ~amd64-linux ~x86-linux ~x64-macos"
 IUSE="s3 test"
 
 CDEPEND="



[jira] [Commented] (CXF-9006) TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red Hat OpenJDK on PPC64LE

2024-04-29 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842022#comment-17842022
 ] 

Freeman Yue Fang commented on CXF-9006:
---

Hi [~jgoodyear],

Could you please copy the cer files I attached here to
services/xkms/xkms-x509-handlers/src/test/resources/trustedAuthorityValidator/
folder to override the old ones and rerun the test against the JDKs you are 
testing?

These new cer files are RSA2048 instead of the previous DSA1024.

Thanks
Freeman

> TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red 
> Hat OpenJDK on PPC64LE
> -
>
> Key: CXF-9006
> URL: https://issues.apache.org/jira/browse/CXF-9006
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4, 4.1.0
> Environment: Java version: 17.0.6, vendor: Red Hat, Inc., runtime: 
> /usr/lib/jvm/java-17-openjdk-17.0.6.0.10-3.el9.ppc64le
> OS name: "linux", version: "5.14.0-378.el9.ppc64le", arch: "ppc64le", family: 
> "unix"
>Reporter: Jamie Mark Goodyear
>Priority: Minor
> Attachments: wss40.cer, wss40CA.cer, wss40CACRL.cer, wss40rev.cer
>
>
> {{TrustedAuthorityValidatorCRLTest#testIsCertChainValid}} failing when using 
> Red Hat OpenJDK 17. The error revealed when using 
> {{-Djava.security.debug=certpath}} is that the JVM can not find a valid 
> certification path to the requested target -- note: this unit test works on 
> Bellsoft, Temurin, IBM, Corretto, Zulu, on other platforms, and is confirmed 
> to work with Semeru/Temurin Java on PPC64LE.
> Given this is restricted to one particular JVM distribution/platform, would 
> this be a candidate for a skip test?
> {{sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
>  at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
>  at 
> java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297) 
> at 
> org.apache.cxf.xkms.x509.validator.TrustedAuthorityValidator.isCertificateChainValid(TrustedAuthorityValidator.java:84)}}





[jira] [Updated] (CXF-9006) TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red Hat OpenJDK on PPC64LE

2024-04-29 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CXF-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang updated CXF-9006:
--
Attachment: wss40.cer
wss40CA.cer
wss40CACRL.cer
wss40rev.cer

> TrustedAuthorityValidatorCRLTest#testIsCertChainValid fails when using Red 
> Hat OpenJDK on PPC64LE
> -
>
> Key: CXF-9006
> URL: https://issues.apache.org/jira/browse/CXF-9006
> Project: CXF
>  Issue Type: Test
>Affects Versions: 4.0.4, 4.1.0
> Environment: Java version: 17.0.6, vendor: Red Hat, Inc., runtime: 
> /usr/lib/jvm/java-17-openjdk-17.0.6.0.10-3.el9.ppc64le
> OS name: "linux", version: "5.14.0-378.el9.ppc64le", arch: "ppc64le", family: 
> "unix"
>Reporter: Jamie Mark Goodyear
>Priority: Minor
> Attachments: wss40.cer, wss40CA.cer, wss40CACRL.cer, wss40rev.cer
>
>
> {{TrustedAuthorityValidatorCRLTest#testIsCertChainValid}} failing when using 
> Red Hat OpenJDK 17. The error revealed when using 
> {{-Djava.security.debug=certpath}} is that the JVM can not find a valid 
> certification path to the requested target -- note: this unit test works on 
> Bellsoft, Temurin, IBM, Corretto, Zulu, on other platforms, and is confirmed 
> to work with Semeru/Temurin Java on PPC64LE.
> Given this is restricted to one particular JVM distribution/platform, would 
> this be a candidate for a skip test?
> {{sun.security.provider.certpath.SunCertPathBuilderException: unable to find 
> valid certification path to requested target at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
>  at 
> java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
>  at 
> java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297) 
> at 
> org.apache.cxf.xkms.x509.validator.TrustedAuthorityValidator.isCertificateChainValid(TrustedAuthorityValidator.java:84)}}





[jira] [Created] (WSS-711) Introduce a system property "fips.enabled" so that WSS4J can work easier in FIPS mode

2024-04-25 Thread Freeman Yue Fang (Jira)
Freeman Yue Fang created WSS-711:


 Summary: Introduce a system property "fips.enabled" so that WSS4J 
can work easier in FIPS mode
 Key: WSS-711
 URL: https://issues.apache.org/jira/browse/WSS-711
 Project: WSS4J
  Issue Type: New Feature
Reporter: Freeman Yue Fang
Assignee: Colm O hEigeartaigh


Currently WSS4J has some default security algo settings which are not 
applicable on FIPS machines.

For example AES_CBC, RSA-OAEP and PBEWithMD5AndTripleDES are not FIPS 
compliant, while we should use AES_GCM, RSA-1_5 and 
PBEWithHmacSHA512AndAES_256 on FIPS machines.

So I propose to introduce a system property "fips.enabled": when this property 
is set to true, the FIPS-compliant algos will be used accordingly, and this 
newly introduced system property won't affect the current default behaviour.




-
To unsubscribe, e-mail: dev-unsubscr...@ws.apache.org
For additional commands, e-mail: dev-h...@ws.apache.org



Re: [NWRUG] Getting Into Rails Development

2024-04-20 Thread Paul Bennett-Freeman
> Is this normal or am I doing something wrong?

Oh, it’s absolutely “normal”.

In particular never turn down another offer until you’ve got a signed
contract from a company and not just a verbal offer via a recruiter.



On Sat, 20 Apr 2024 at 10:19, DAZ  wrote:

> Also, you mentioned recruiters and they do seem quite soulless ... I've
> already come across a few that seem to have 'led me on' and then completely
> ghosted me - not even replying to my emails for updates. Is this normal or
> am I doing something wrong?
>
> --
> You received this message because you are subscribed to the Google Groups
> "North West Ruby User Group (NWRUG)" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to nwrug-members+unsubscr...@googlegroups.com.
> To view this discussion on the web, visit
> https://groups.google.com/d/msgid/nwrug-members/30a69172-7d1e-4371-83ea-5dc943cd765bn%40googlegroups.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"North West Ruby User Group (NWRUG)" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to nwrug-members+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/nwrug-members/CAPNVErqkWLUbSdsyuQ7xBHTUe01%2B135orqP3Kz19M1fyCxJz9w%40mail.gmail.com.


Re: [NWRUG] Getting Into Rails Development

2024-04-20 Thread Paul Bennett-Freeman
> I'm currently working on building a portfolio, but is there anything I
should focus on in particular?

Since you've changed careers, is there anything you could build that plays
on that? Something you can spin a narrative around: "As a teacher, I
always wanted to do X, and so I built this tool that does it."

Something like "Every term, I'd need to log on to the school web site and
copy a pupil's grades one by one into a spreadsheet, so I built a web
scraper that does that automatically"

When it comes to interviewing, being able to demonstrate you have
transferable skills and experience you can apply can give you the edge over
another junior developer who might be able to code as well as you, but has
nothing else to offer - that's a bit blunt, but recruiters are soulless
monsters mostly.

From a more personal perspective, and definitely more controversial, so
take this as one person's opinion: frontend in Rails (Turbo and Hotwire) is
a hot mess and very few companies actually use it. Learning some React, and
building against APIs you've written, or other people's APIs, is a much more
transferable skill set. I'd recommend Noel Rappin's Modern Front-End
Development for Rails, which covers all bases by including Turbo, Stimulus,
React, and TypeScript. That has a more balanced approach than a random
person on the internet screaming "Hotwire sucks!" ;)



On Thu, 18 Apr 2024 at 12:47, DAZ  wrote:
>
> Hi everyone,
>
> I've been a long time member of this group and been coding in Ruby since
Rails first came out, but it's only ever been a hobby for me.
>
> I've recently decided to have a career change from teaching to web
development and would like to get into Rails development, preferably in
Manchester or remote.
>
> I know a lot of you on here are already working as Rails devs - does
anybody have any tips about what the best things I should be doing? I'm
currently working on building a portfolio, but is there anything I should
focus on in particular? And any tips about the best way to find Rails
vacancies or opportunities?
>
> I'd also be interested to hear if anyone knows of any opportunities just
to get any unpaid Rails development experience.
>
> Thanks,
>
> Daz
>
> --
> You received this message because you are subscribed to the Google Groups
"North West Ruby User Group (NWRUG)" group.
> To unsubscribe from this group and stop receiving emails from it, send an
email to nwrug-members+unsubscr...@googlegroups.com.
> To view this discussion on the web, visit
https://groups.google.com/d/msgid/nwrug-members/16950019-53fa-4139-a45d-6ba94c3c587dn%40googlegroups.com
.

-- 
You received this message because you are subscribed to the Google Groups 
"North West Ruby User Group (NWRUG)" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to nwrug-members+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/nwrug-members/CAPNVErobv6Cg-5mTOmMdNT%3DJMHW8QBYEZ_9JHgMyhvuoz0EW7w%40mail.gmail.com.


[jira] [Resolved] (CXF-9004) Jetty12 : always use pre-saved HTTP_REQUEST from InMessage to populate SecurityContext

2024-04-19 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CXF-9004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang resolved CXF-9004.
---
Resolution: Fixed

> Jetty12 : always use pre-saved HTTP_REQUEST from InMessage to populate 
> SecurityContext
> --
>
> Key: CXF-9004
> URL: https://issues.apache.org/jira/browse/CXF-9004
> Project: CXF
>  Issue Type: Bug
>  Components: Transports
>Reporter: Freeman Yue Fang
>Assignee: Freeman Yue Fang
>Priority: Major
> Fix For: 4.1.0
>
>
> Ensure we use HttpServletRequest from the one saved in inMessage as this 
> could be the cachedInput one in oneway and ReplyTo is specified when 
> ws-addressing is used, which means we need to switch thread context and 
> underlying transport might discard any data on the original stream.
> So always retrieve the pre-saved HTTP_REQUEST from InMessage, so this could 
> be the original HttpServletRequest or the Cached HttpServletRequest when it's 
> necessary



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (CXF-9004) Jetty12 : always use pre-saved HTTP_REQUEST from InMessage to populate SecurityContext

2024-04-19 Thread Freeman Yue Fang (Jira)
Freeman Yue Fang created CXF-9004:
-

 Summary: Jetty12 : always use pre-saved HTTP_REQUEST from 
InMessage to populate SecurityContext
 Key: CXF-9004
 URL: https://issues.apache.org/jira/browse/CXF-9004
 Project: CXF
  Issue Type: Bug
  Components: Transports
Reporter: Freeman Yue Fang
Assignee: Freeman Yue Fang
 Fix For: 4.1.0


Ensure we use HttpServletRequest from the one saved in inMessage as this could 
be the cachedInput one in oneway and ReplyTo is specified when ws-addressing is 
used, which means we need to switch thread context and underlying transport 
might discard any data on the original stream.

So always retrieve the pre-saved HTTP_REQUEST from InMessage, so this could be 
the original HttpServletRequest or the Cached HttpServletRequest when it's 
necessary



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-17 Thread Rich Freeman
On Wed, Apr 17, 2024 at 9:33 AM Dale  wrote:
>
> Rich Freeman wrote:
>
> > All AM5 CPUs have GPUs, but in general motherboards with video outputs
> > do not require the CPU to have a GPU built in.  The ports just don't
> > do anything if this is lacking, and you would need a dedicated GPU.
> >
>
> OK.  I read that a few times.  If I want to use the onboard video I have
> to have a certain CPU that supports it?  Do those have something so I
> know which is which?  Or do I read that as all the CPUs support onboard
> video but if one plugs in a video card, that part of the CPU isn't
> used?  The last one makes more sense but asking to be sure.

To use onboard graphics, you need a motherboard that supports it, and
a CPU that supports it.  I believe that internal graphics and an
external GPU card can both be used at the same time.  Note that
internal graphics solutions typically steal some RAM from other system
use, while an external GPU will have its own dedicated RAM (and those
can also make use of internal RAM too).

The 7600X has a built-in RDNA2 GPU.   All the original Ryzen zen4 CPUs
had GPU support, but it looks like they JUST announced a new line of
consumer zen4 CPUs that don't have it - they all end in an F right
now.

In any case, if you google the CPU you're looking at it will tell you
if it supports integrated graphics.  Most better stores/etc have
filters for this feature as well (places like Newegg or PCPartPicker
or whatever).

If you don't play games, then definitely get integrated graphics.
Even if the CPU costs a tiny bit more, it will give you a free empty
16x PCIe slot at whatever speed the CPU supports (v5 in this case -
which is as good as you can get right now).

> That could mean a slight price drop for the things I'm looking at then.
> One can hope.  Right???

Everything comes down in price eventually...

>
> I might add, simply right clicking on the desktop can take sometimes 20
> or 30 seconds for the menu to pop up.  Switching from one desktop to
> another can take several seconds, sometimes 8 or 10.  This rig is
> getting slower.  Actually, the software is just getting bigger.  You get
> my meaning tho.  I bet the old KDE3 would be blazingly fast compared to
> the rig I ran it on originally.

That sounds like RAM but I couldn't say for sure.  In any case a
modern system will definitely help.

> Given the new rig can have 128GBs, I assume it comes in 32GB sticks.

Consumer DDR5 seems to come as large as 48GB, though that seems like
an odd size.

> I'd get 32GBs at first.  Maybe a month or so later get another 32GB.
> That'll get me 64Gbs.  Later on, a good sale maybe, buy another 32GB or
> a 64GB set and max it out.

You definitely want to match the timings, and you probably want to
match the sticks themselves.  Also, you generally need to be mindful
of how many channels you're occupying, though as I understand it DDR5
is essentially natively dual channel.  If you just stick one DDR4
stick in a system it will not perform as well as two sticks of half
the size.  I forget the gory details but I believe it comes down to
the timings of switching between two different channels vs moving
around within a single one.  DDR RAM timings get really confusing, and
it comes down to the fact that addresses are basically grouped in
various ways and randomly seeking from one address to another can take
a different amount of time depending on how the new address is related
to the address you last read.  The idea of "seeking" with RAM may seem
odd, but recent memory technologies are a bit like storage, and they
are accessed in a semi-serial manner.  Essentially the latencies and
transfer rates are such that even dynamic RAM chips are too slow to
work in the conventional sense.  I'm guessing it gets into a lot of
gory details with reactances and so on, and just wiring up every
memory cell in parallel like in the old days would slow down all the
voltage transitions.

> I've looked at server type boards.  I'd like to have one.  I'd like one
> that has SAS ports.

So, I don't really spend much time looking at them, but I'm guessing
SAS is fairly rare on the motherboards themselves.  They probably
almost always have an HBA/RAID controller in a PCIe slot.  You can put
the same cards in any PC, but of course you're just going to struggle
to have a slot free.  You can always use a riser or something to cram
an HBA into a slot that is too small for it, but then you're going to
suffer reduced performance.  For just a few spinning disks though it
probably won't matter.

Really though I feel like the trend is towards NVMe and that gets into
a whole different world.  U.2 allows either SAS or PCIe over the bus,
and there are HBAs that will handle both.  Or if you only want NVMe it
looks like you can use bifurcation-based solutions to more cheaply
break slots out.

I'm kinda thinking about going that direct

Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-17 Thread Rich Freeman
On Wed, Apr 17, 2024 at 6:33 AM Dale  wrote:
>
> On the AM5 link, I found a mobo that I kinda like.  I still wish it had
> more PCIe slots tho.

AM5 has 28 PCIe lanes.  Anything above that comes from a switch on the
motherboard.

0.1% of the population cares about having anything on their
motherboard besides a 16x slot for the GPU.  So, that's what all the
cheaper boards deliver these days.  The higher end boards often have a
switch and will deliver extra lanes, and MAYBE those will go into
another PCIe slot (probably not wired for 16x but it might have that
form factor), and more often those go into additional M.2 slots and
USB3 ports.  (USB3 is very high bandwidth, especially later
generations, and eats up PCIe lanes as a result.)

Keep in mind those 28 v5 lanes have the bandwidth of over 100 v3
lanes, which is part of why the counts are being reduced.  The problem
is that hardware to do that conversion is kinda niche right now.  It
is much easier to bifurcate a larger slot, but that doesn't buy you
more lanes.
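As a back-of-envelope check of that "28 v5 lanes have the bandwidth of over 100 v3 lanes" figure (using the approximate usable one-direction throughput per lane: PCIe v3 runs at 8 GT/s with 128b/130b encoding, about 0.985 GB/s; PCIe v5 runs at 32 GT/s, about 3.94 GB/s):

```shell
# Rough per-lane throughput in GB/s, one direction, after 128b/130b encoding:
#   PCIe v3: 8 GT/s  -> ~0.985 GB/s
#   PCIe v5: 32 GT/s -> ~3.94 GB/s
v3_per_lane=0.985
v5_per_lane=3.94
lanes=28
total_v5=$(awk -v n="$lanes" -v r="$v5_per_lane" 'BEGIN { printf "%.1f", n * r }')
v3_equiv=$(awk -v t="$total_v5" -v r="$v3_per_lane" 'BEGIN { printf "%.0f", t / r }')
# 28 v5 lanes come out to roughly 110 GB/s, i.e. ~112 v3-lane equivalents
echo "28 PCIe v5 lanes ~ ${total_v5} GB/s ~ ${v3_equiv} v3 lanes"
```

So the claim holds: the raw per-lane rate has quadrupled twice between v3 and v5, which is exactly a factor of four per lane.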

> It supports not only the Ryzen 9
> series but also supports Ryzen 5 series.

That is because the 9 and 5 are branding and basically convey no
information at all besides the price point.

The Ryzen 7 1700X has about half the performance of the Ryzen 5 7600X,
and that would be because the first chip came out in 2017, and the
second came out in 2022 and is three generations newer.

Likewise the intel branding of "i3" or "i7" and so on also conveys no
information beyond the general price level they were introduced at.
You can expect the bigger numbers to offer more performance/features
than the smaller ones OF THE SAME GENERATION.  The same branding keeps
getting re-applied to later generations of chips, and IMO it is
intentionally confusing.

> I looked up the Ryzen 5 7600X
> and 8600G.  I think the X has no video and the G has video support.

Both have onboard graphics.  The G designates zen1-3 chips with a GPU
built in, and all zen4 CPUs have this as a standard feature.  The
7600X is zen4.

See what I mean about the branding getting confusing?

> I
> haven't researched yet to see if the mobo requires the G since it has
> video ports, two to be more precise which is the minimum I need.

All AM5 CPUs have GPUs, but in general motherboards with video outputs
do not require the CPU to have a GPU built in.  The ports just don't
do anything if this is lacking, and you would need a dedicated GPU.

> Anyway, those two CPUs are cheaper than the Ryzen 9 I was looking at.  I
> could upgrade later on as prices drop.  I'm sure a new Ryzen is lurking
> around the corner.

Zen5 is supposedly coming out later this year.  It will be very
expensive.  Zen4 is still kinda expensive I believe though I haven't
gone looking recently at prices.  I have a zen4 system and it was
expensive (particularly the motherboard, and the DDR5 is more
expensive, and if you want NVMe that does v5 that is more expensive as
well).

 > I have a FX-8350 8 core CPU now.  Would the Ryzen 5's mentioned above be
> a good bit faster, a lot, a whole lot?

So, that very much depends on what you're doing.

Single-thread performance of that 7600X is 2-3x faster.  Total
performance is almost 5x faster.  The 7600X will use moderately less
power at full load, and I'm guessing WAY less power at less than full
load.  It will also have much better performance than those numbers
reflect for very short bursts of work, since modern CPUs can boost.

That's just pure CPU performance.

The DDR5 performance of the recent CPU is MUCH better than that of the
DDR3 you're using now.  Your old motherboard might be PCIe v2 (I think
the controller for that was on the motherboard back then?).  If so
each lane delivers 8x more bandwidth on the recent CPU, which matters
a great deal for graphics, or for NVMe performance if you're using an
NVMe that supports it and have a workload that benefits from it.

Gaming tends to be a workload that benefits the most from all of these
factors.  If your system is just acting as a NAS and all the storage
is on hard drives, I'm guessing you won't see much of a difference at
all, except maybe in boot time, especially if you put the OS on an
NVMe.

If this is just for your NAS I would not drop all that money on zen4,
let alone zen5.  I'd look for something older, possibly used, that is
way cheaper.

> Still, I need more memory too.  32GBs just isn't much when running
> Seamonkey, three Firefox profiles and torrent software.

Ok, if this is for a desktop you'll benefit more from a newer CPU.
RAM is really expensive though these days.  Getting something
off-lease is going to save you a fortune as the RAM is practically
free in those.  You can get something with 32GB of DDR4 for $150 or
less in a SFF PC.

> I'm not running
> out but at times, it's using a lot of it.  I was hoping for a mobo that
> would handle more than 128GB but that is a lot of memory.

Any recent motherboard will handle 128GB.  You'll just need to use
large DIMMs as 

[gentoo-commits] repo/gentoo:master commit in: sys-process/systemd-cron/files/, sys-process/systemd-cron/

2024-04-16 Thread Richard Freeman
commit: 46d48ef3f778d1ec33a8115b17f8cd0a53b60b51
Author: Richard Freeman  gentoo  org>
AuthorDate: Tue Apr 16 15:08:08 2024 +
Commit: Richard Freeman  gentoo  org>
CommitDate: Tue Apr 16 15:16:01 2024 +
URL:https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=46d48ef3

sys-process/systemd-cron: drop 1.16.7-r1, 2.3.0-r1, 2.3.0-r2

Signed-off-by: Richard Freeman  gentoo.org>

 sys-process/systemd-cron/Manifest  |   2 -
 .../files/systemd-cron-2.3.0-pch.patch |  46 -
 .../systemd-cron/systemd-cron-1.16.7-r1.ebuild |  95 --
 .../systemd-cron/systemd-cron-2.3.0-r1.ebuild  |  92 --
 .../systemd-cron/systemd-cron-2.3.0-r2.ebuild  | 106 -
 5 files changed, 341 deletions(-)

diff --git a/sys-process/systemd-cron/Manifest 
b/sys-process/systemd-cron/Manifest
index 06aa4d41d515..06b7f8013a30 100644
--- a/sys-process/systemd-cron/Manifest
+++ b/sys-process/systemd-cron/Manifest
@@ -1,4 +1,2 @@
-DIST systemd-cron-1.16.7.tar.gz 37887 BLAKE2B 
a900058cef1cd02ac464d3ecdd43ce2f264bdba386f349ef82f0a915104302b1e88d94331d5fbaabe2c54f526900f3e1ac65ea6bdc2f27a6464e6d7514561a19
 SHA512 
d65d641fd449cdc0e91db3ae6ebe464bc4e24027c501b30a8ab17e7cc40de290cc6141bfb7880a724d97248861587e6f5fea113a6aa6e468d971aff3a13b056f
 DIST systemd-cron-2.2.0.tar.gz 55825 BLAKE2B 
ca4b02fdea5084439aa56b3f04603000d811f21922c11cd26a22ea6387e4b54575587ff4e1eb7fc7a3260d2f656ea0eb91365942c135982f4bd26aead1a080f1
 SHA512 
f26c7d7e2da7eb5cd5558f352aff852585bfefd961de6ecc2409a4a53b63f82662a89bdbf71f739ea8e44ef9e3e1fdec15cdc63ce1e90c289fb0e636ff679ca0
-DIST systemd-cron-2.3.0.tar.gz 56873 BLAKE2B 
3efe8adc1b735ed5eb91c64d0936edceec50ff476d42ba5c1e9941c196a7bc8c777b0c293c8ed71894dae31c5b721a45a2876cab0143298e1b1ab3e82fcb7ceb
 SHA512 
abb7c34d6901160395d64cfc4e5124887909b963bcfee027f64642b25bb138b3f085eb45595197a380faf39b7f5980e32c50d083be6307d7c985a55057962565
 DIST systemd-cron-2.3.4.tar.gz 58458 BLAKE2B 
594fff8f7cc126aa33b1dcbf74293a39b5939576203c11f8f0fc300285462f266c35503a6cfe46ee797e5e617e54e09b92dd6ba8a4044f962d1efd2822f0a87c
 SHA512 
2a9743df6d0e1a83b65d15609e47b901fde1d77d1207c4cc0617395be8d9e94daece91aec9a3398c3d09f86383e01cfff301614df727ca598efe873453f5a3c9

diff --git a/sys-process/systemd-cron/files/systemd-cron-2.3.0-pch.patch 
b/sys-process/systemd-cron/files/systemd-cron-2.3.0-pch.patch
deleted file mode 100644
index e27f253a62ca..
--- a/sys-process/systemd-cron/files/systemd-cron-2.3.0-pch.patch
+++ /dev/null
@@ -1,46 +0,0 @@
-https://bugs.gentoo.org/917646
-https://github.com/systemd-cron/systemd-cron/issues/141
-https://github.com/systemd-cron/systemd-cron/commit/1662b899b206f00face30b9d4671551427262b07
-
-From 1662b899b206f00face30b9d4671551427262b07 Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?=D0=BD=D0=B0=D0=B1?= 
-Date: Tue, 21 Nov 2023 19:40:05 +0100
-Subject: [PATCH] Add PCH= for broken compilers like #141
-
 a/Makefile.in
-+++ b/Makefile.in
-@@ -1,6 +1,7 @@
- CFLAGS ?= -O2
- SHELLCHECK ?= shellcheck
- CRONTAB ?= crontab
-+PCH ?= y
- 
- version   := @version@
- schedules := @schedules@
-@@ -208,12 +209,12 @@ $(builddir)/include/%.hpp: $(srcdir)/include/%.hpp
- CXXVER := $(shell $(CXX) --version | { read -r l; echo "$$l"; })
- ifneq "$(findstring clang,$(CXXVER))" ""
-   # clang doesn't use PCHs automatically
--  PCH_ARG := -include-pch $(builddir)/include/libvoreutils.hpp.gch 
-Wno-gcc-compat
-+  PCH_ARG := $(if $(PCH),-include-pch 
$(builddir)/include/libvoreutils.hpp.gch) -Wno-gcc-compat
- else
-   PCH_ARG :=
- endif
- 
--common_headers := $(builddir)/include/configuration.hpp 
$(builddir)/include/libvoreutils.hpp.gch $(builddir)/include/util.hpp
-+common_headers := $(builddir)/include/configuration.hpp 
$(builddir)/include/libvoreutils.hpp$(if $(PCH),.gch) 
$(builddir)/include/util.hpp
- CFLAGS += -Wall -Wextra -fno-exceptions -Wno-psabi
- $(builddir)/include/libvoreutils.hpp.gch : 
$(builddir)/include/libvoreutils.hpp
-   $(CXX) $(CFLAGS) $(CPPFLAGS) -std=c++20 -I $(builddir)/include  
  $< -o $@
 a/README.md
-+++ b/README.md
-@@ -146,6 +146,8 @@ without the override, the jobs would run twice since 
native-timer detection woul
- If there is already a perfect 1:1 mapping between `/etc/cron./` 
and `/usr/lib/systemd/system/.timer`,
- then it is not needed to add an entry to these tables.
- 
-+If your compiler's [PCH compilation is 
broken](https://github.com/systemd-cron/systemd-cron/issues/141), build with 
`make PCH=`.
-+
- ### Caveat
- 
- Your package should also run these extra commands before starting cron.target
-

diff --git a/sys-process/systemd-cron/systemd-cron-1.16.7-r1.ebuild 
b/sys-process/systemd-cron/systemd-cron-1.16.7-r1.ebuild
deleted file mode 100644
index b779832b971b..
--- a/sys-process/systemd-cron/systemd-cron-1.16.7-r1.ebuild
+++ /dev/null
@@ -1,95 +0,

[gentoo-commits] repo/gentoo:master commit in: sys-process/systemd-cron/

2024-04-16 Thread Richard Freeman
commit: 641c536fe4afa656f34ebb656bf1e8bca8bd87bc
Author: Richard Freeman  gentoo  org>
AuthorDate: Tue Apr 16 15:14:12 2024 +
Commit: Richard Freeman  gentoo  org>
CommitDate: Tue Apr 16 15:16:02 2024 +
URL:https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=641c536f

sys-process/systemd-cron: Require split-usr in deps and no longer die.

Closes: https://bugs.gentoo.org/928893
Signed-off-by: Richard Freeman  gentoo.org>

 ...{systemd-cron-2.3.4.ebuild => systemd-cron-2.3.4-r1.ebuild} | 10 +-
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/sys-process/systemd-cron/systemd-cron-2.3.4.ebuild 
b/sys-process/systemd-cron/systemd-cron-2.3.4-r1.ebuild
similarity index 90%
rename from sys-process/systemd-cron/systemd-cron-2.3.4.ebuild
rename to sys-process/systemd-cron/systemd-cron-2.3.4-r1.ebuild
index 32892c37b102..00738f1c0e07 100644
--- a/sys-process/systemd-cron/systemd-cron-2.3.4.ebuild
+++ b/sys-process/systemd-cron/systemd-cron-2.3.4-r1.ebuild
@@ -23,7 +23,7 @@ RDEPEND="
acct-group/_cron-failure
app-crypt/libmd:=
sys-process/cronbase
-   >=sys-apps/systemd-253
+   >=sys-apps/systemd-255[-split-usr(-)]
!etc-crontab-systemd? ( !sys-process/dcron )
runparts? ( sys-apps/debianutils )
 "
@@ -32,14 +32,6 @@ DEPEND="
sys-process/cronbase
 "
 
-pkg_pretend() {
-   if use runparts && ! [ -x /usr/bin/run-parts ] ; then
-   eerror "Please complete the migration to merged-usr."
-   eerror "https://wiki.gentoo.org/wiki/Merge-usr"
-   die "systemd-cron no longer supports split-usr"
-   fi
-}
-
 src_prepare() {
sed -i \
-e 's/^crontab/crontab-systemd/' \



[jira] [Resolved] (CAMEL-20683) camel-spring-boot-examples/soap-cxf: use spring-boot-starter-undertow as the embedded servlet server and demonstrate how to configure undertow handlers

2024-04-16 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CAMEL-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang resolved CAMEL-20683.
--
Resolution: Fixed

> camel-spring-boot-examples/soap-cxf: use spring-boot-starter-undertow as the 
> embedded servlet server and demonstrate how to configure undertow handlers
> ---
>
> Key: CAMEL-20683
> URL: https://issues.apache.org/jira/browse/CAMEL-20683
> Project: Camel
>  Issue Type: Improvement
>  Components: camel-spring-boot
>Reporter: Freeman Yue Fang
>Assignee: Freeman Yue Fang
>Priority: Major
> Fix For: 4.x
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CAMEL-20683) camel-spring-boot-examples/soap-cxf: use spring-boot-starter-undertow as the embedded servlet server and demonstrate how to configure undertow handlers

2024-04-16 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CAMEL-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837745#comment-17837745
 ] 

Freeman Yue Fang commented on CAMEL-20683:
--

By implementing WebServerFactoryCustomizer we 
can demonstrate how to configure undertow handlers in this example

> camel-spring-boot-examples/soap-cxf: use spring-boot-starter-undertow as the 
> embedded servlet server and demonstrate how to configure undertow handlers
> ---
>
> Key: CAMEL-20683
> URL: https://issues.apache.org/jira/browse/CAMEL-20683
> Project: Camel
>  Issue Type: Improvement
>  Components: camel-spring-boot
>Reporter: Freeman Yue Fang
>Assignee: Freeman Yue Fang
>Priority: Major
> Fix For: 4.x
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (CAMEL-20683) camel-spring-boot-examples/soap-cxf: use spring-boot-starter-undertow as the embedded servlet server and demonstrate how to configure undertow handlers

2024-04-16 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CAMEL-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang updated CAMEL-20683:
-
Fix Version/s: 4.x

> camel-spring-boot-examples/soap-cxf: use spring-boot-starter-undertow as the 
> embedded servlet server and demonstrate how to configure undertow handlers
> ---
>
> Key: CAMEL-20683
> URL: https://issues.apache.org/jira/browse/CAMEL-20683
> Project: Camel
>  Issue Type: Improvement
>  Components: camel-spring-boot
>Reporter: Freeman Yue Fang
>Assignee: Freeman Yue Fang
>Priority: Major
> Fix For: 4.x
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (CAMEL-20683) camel-spring-boot-examples/soap-cxf: use spring-boot-starter-undertow as the embedded servlet server and demonstrate how to configure undertow handlers

2024-04-16 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CAMEL-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang reassigned CAMEL-20683:


Assignee: Freeman Yue Fang

> camel-spring-boot-examples/soap-cxf: use spring-boot-starter-undertow as the 
> embedded servlet server and demonstrate how to configure undertow handlers
> ---
>
> Key: CAMEL-20683
> URL: https://issues.apache.org/jira/browse/CAMEL-20683
> Project: Camel
>  Issue Type: Improvement
>  Components: camel-spring-boot
>Reporter: Freeman Yue Fang
>Assignee: Freeman Yue Fang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (CAMEL-20683) camel-spring-boot-examples/soap-cxf: use spring-boot-starter-undertow as the embedded servlet server and demonstrate how to configure undertow handlers

2024-04-16 Thread Freeman Yue Fang (Jira)
Freeman Yue Fang created CAMEL-20683:


 Summary: camel-spring-boot-examples/soap-cxf: use 
spring-boot-starter-undertow as the embedded servlet server and demonstrate how 
to configure undertow handlers
 Key: CAMEL-20683
 URL: https://issues.apache.org/jira/browse/CAMEL-20683
 Project: Camel
  Issue Type: Improvement
  Components: camel-spring-boot
Reporter: Freeman Yue Fang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-13 Thread Rich Freeman
On Sat, Apr 13, 2024 at 8:20 AM Dale  wrote:
>
> Right now, I have a three drive setup in a removable cage for the NAS
> box.

If you only need three drives I'm sure you can find cheap used
hardware that will handle that.  Odds are it will use way less power
and perform better than whatever you're going to upgrade your system
to.

> I'm not familiar with Ceph but I've seen it mentioned before.

Do NOT deploy Ceph with three drives on one host.

Ceph is what you think about using when you are tired of stacking HBAs
to cram a dozen SATA ports in a single host.  It isn't what you'd use
for backup/etc storage.

Honestly, if you're just looking for backup drives I'd consider USB3
drives you just plug into a host and run in a zpool or whatever.
Export the filesystem and unplug the drives and you're done.  That is
how I backup Ceph right now (k8s job that runs restic against ceph
dumping it on a zpool).

-- 
Rich



Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-13 Thread Rich Freeman
On Sat, Apr 13, 2024 at 8:11 AM Dale  wrote:
>
> My biggest thing right now, finding a mobo with plenty of PCIe slots.
> They put all this new stuff, wifi and such, but remove things I do need,
> PCIe slots.

PCIe and memory capacity seem to have become the way the
server/workstation and consumer markets are segmented.

AM5 gets you 28x v5 lanes.  SP5 gets you 128x v5 lanes.  The server
socket also has way more memory capacity, though I couldn't quickly
identify exactly how much more due to the ambiguous way in which DDR5
memory channels are referenced all over the place.  Suffice it to say
you can put several times as many DIMMs into a typical server
motherboard, especially if you have two CPUs on it (two CPUs likewise
increases the PCIe capacity).

It would be nice if there were switches out there that would let you
take a v5 PCIe slot on newer consumer hardware and break it out into a
bunch of v3/4 NVMe adapters (U.2, M.2, PCIe, whatever).

-- 
Rich



Re: [gentoo-user] NAS box and switching from Phenom II X6 1090T to FX-6300

2024-04-13 Thread Rich Freeman
On Sat, Apr 13, 2024 at 3:58 AM Dale  wrote:
>
> Given the FX-6300 has a higher clocks speed, 3.8GHz versus 3.2GHz for
> the Phenom, I'd think the FX would be a upgrade, quite a good one at
> that.  More L2 cache too.  Both are 6 cores according to what I found.
> Anyone know something I don't that would make switching to the FX-6300 a
> bad idea?

The most obvious issue is that you're putting money into a very obsolete system.

Obviously hardware of this generation is fairly cheap, but it isn't
actually the best bang for the buck, ESPECIALLY when you factor in
power use.  Like most AMD chips of that generation (well, most chips
in general when you get that old), that CPU uses quite a bit of power
at idle, and so that chip which might cost you $35 even at retail
might cost you double that amount per year just in electricity.

If your goal is to go cheap you also need to consider alternatives.
You can get used hardware from various places, and most of it is 3-5
years old.  Even commodity hardware of that age is far more powerful
than a 15 year old CPU socket and often it starts at $100 or so - and
that is for a complete system.  Often you can get stuff that is
ex-corporate that has a fair bit of RAM as well, since a lot of
companies need to deal with compatibility with office productivity
software that might be a little RAM hungry.  RAM isn't cheap these
days, and they practically give it away when they dispose of old
hardware.

The biggest issue you're going to have with NAS is finding something
with the desired number of drive bays, as a lot of used desktop
hardware is SFF (but also super-low-power, which is something
companies consider in their purchasing decisions when picking
something they're going to be buying thousands of).

Right now most of my storage is on Ceph on SFF PCs.  I do want to try
to get future expansion onto NVMe but even used systems that support
much of that are kinda expensive still (mostly servers since desktop
CPUs have so few PCIe lanes, and switches aren't that common).  One of
my constraints using Ceph though is I need a lot of RAM, which is part
of why I'm going the SFF route - for $100 you can get one with 32GB of
RAM and 2-3 SATA ports, plus USB3 and an unused 4-16x PCIe slot.  That
is a lot of RAM/IO compared to most options at that price point (ARM
in particular tends to lack both - not that it doesn't support it, but
rather nobody makes cheap ARM hardware with PCIe+DIMM slots).

-- 
Rich



Re: [PROPOSAL] Branch of 4.0.x, move main to 4.1.x (Jakarta EE 10)

2024-04-11 Thread Freeman Fang
+1 to move main branch to CXF 4.1.x, thanks Andriy!

Freeman

On Thu, Apr 11, 2024 at 7:48 AM Andriy Redko  wrote:

> Hey Folks,
>
> I would like to resume the discussion regarding 4.1.x (Jakarta EE 10): the
> Jakarta EE 9.x was a transitional release, most (if not all) already
> jumped off.
> From the CXF side, quite a lot of work has been done already on 4.1.x
> (Jakarta EE 10),
> streamlining it would definitely help. So far only @Francesco responded
> (thanks a lot)
> but would be great to hear from others. Thanks!
>
> Best Regards,
> Andriy Redko
>
> > +1 thanks Andriy
>
> > Regards.
>
> > On 09/07/23 20:29, Andriy Redko wrote:
> >> Hey Folks,
> >> The work on Jakarta EE 10 had started already (so far the progress
> was a bit slower
> >> than desired but the things should accelerate I hope). It becomes
> increasingly difficult
> >> to keep the separate pull request [1] intact (particularly, dependabot
> is constantly trying
> >> to update to latest specs). To ease the implementation efforts, I would
> like to propose
> >> to:
> >>   - branch of 4.0.x-fixes from main
> >>   - move main to 4.1.x
> >>   - work on [1] to be ready for merge with main
> >>  From the maintenance perspective, that would increase a bit the amount
> of work required:
> >>   - 3.5.x and 3.6.x: those are the last (hopefully) branches for
> javax.* to maintain
> >>   - 4.0.x and 4.1.x (upcoming): those are the ones to support going
> forward for jakarta.* to maintain
> >> Any feedback / thoughts / suggestions are very welcome!
> >> Thanks!
> >> [1] https://issues.apache.org/jira/browse/CXF-8671
> >> [2] https://github.com/apache/cxf/pull/1201
> >> Best Regards,
> >>  Andriy Redko
>
>


WWW: missing link to plus75.html on plus74.html

2024-04-09 Thread Ryan Freeman
Quickly reviewed other plusXX.html pages, only plus74.html seems to omit it.

Index: plus74.html
===
RCS file: /cvs/www/plus74.html,v
diff -u -p -r1.3 plus74.html
--- plus74.html 4 Oct 2023 15:06:06 -   1.3
+++ plus74.html 10 Apr 2024 04:31:28 -
@@ -91,6 +91,7 @@ For changes in other releases, click bel
 7.1,
 7.2,
 7.3,
+7.5,
 current.
 



fix closing tag in plus75.html

2024-04-09 Thread Ryan Freeman
Hi, this fixes the closing </a> tag on the -current line.

-ryan

18:14 ryan@build-amd64:/usr/src/www$ cvs diff -uNp plus75.html
Index: plus75.html
===
RCS file: /cvs/www/plus75.html,v
diff -u -p -u -p -r1.1 plus75.html
--- plus75.html 5 Apr 2024 12:16:50 -   1.1
+++ plus75.html 10 Apr 2024 01:14:38 -
@@ -92,7 +92,7 @@ For changes in other releases, click bel
 7.2,
 7.3,
 7.4,
--current/a>.
+-current.
 
 
 



ls flag to show number of hard links

2024-04-09 Thread Tony Freeman
I seem to recall that some years ago ls -l would show the number of hard
links to a file. This may have been on an AIX system. I needed to use this
today on a Red Hat 8 machine but couldn't figure it out, so I used ls -i | sort.

Am I overlooking a flag to ls that would show me the number of hard links?
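For reference, ls -l does still print the hard-link count: it is the second column of the long listing, on Linux and AIX alike. A quick sketch showing the count and two common ways to inspect it (stat -c is GNU coreutils syntax, which Red Hat 8 ships):

```shell
# Demonstrate where ls -l shows the hard-link count.
tmp=$(mktemp -d)
touch "$tmp/file"
ln "$tmp/file" "$tmp/link"         # create a second hard link to the same inode

ls -l "$tmp/file"                  # second field of the output is now 2
stat -c '%h' "$tmp/file"           # prints the link count directly: 2
find "$tmp" -samefile "$tmp/file"  # lists every path sharing that inode

rm -r "$tmp"
```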


[jira] [Commented] (CXF-8987) Java 21 - HttpClientHTTPConduit thread locked during shutdown

2024-04-04 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834027#comment-17834027
 ] 

Freeman Yue Fang commented on CXF-8987:
---

Hi [~carnevalegiacomo],

Is it possible for you to attach a reproducer project so that we can reproduce
this behaviour easily?

Thanks!
Freeman

> Java 21 - HttpClientHTTPConduit thread locked during shutdown 
> --
>
> Key: CXF-8987
> URL: https://issues.apache.org/jira/browse/CXF-8987
> Project: CXF
>  Issue Type: Bug
>  Components: Transports
>Affects Versions: 4.0.3, 4.0.4
> Environment: [^thdump2]
> *OpenJDK 21.0.2*
> *Apache CXF 4.0.4*
> *Apache Camel 4.4.1*
>Reporter: Giacomo Carnevale
>Priority: Blocker
> Attachments: thdump2
>
>
> Hi,
> I am using Apache CXF client via the Apache Camel CXF connector.
> After I updated from OpenJDK 17.x to OpenJDK 21.0.2, during application 
> shutdown, the following lock occurs:
> *at java.lang.Thread.join(java.base@21.0.2/Thread.java:2072)*
>     *- locked <0x00061cd2ab80> (a 
> jdk.internal.net.http.HttpClientImpl$SelectorManager)*
>     *at java.lang.Thread.join(java.base@21.0.2/Thread.java:2200)*
>     *at 
> jdk.internal.net.http.HttpClientImpl.awaitTermination(java.net.http@21.0.2/HttpClientImpl.java:628)*
>     *at 
> java.net.http.HttpClient.{color:#de350b}close{color}(java.net.http@21.0.2/HttpClient.java:900)*
>     *at 
> jdk.internal.net.http.HttpClientFacade.{color:#de350b}close{color}(java.net.http@21.0.2/HttpClientFacade.java:192)*
>     *at 
> org.apache.cxf.transport.http.HttpClientHTTPConduit.{color:#de350b}close{color}(HttpClientHTTPConduit.java:125)*
> HttpClientHTTPConduit.close
> {code:java}
> public void close() {
> if (client instanceof AutoCloseable) {
> try {
> ((AutoCloseable)client).close();
> } catch (Exception e) {
> //ignore
> }
> } else if (client != null) {
> String name = client.toString();
> client = null;
> tryToShutdownSelector(name);
> }
> defaultAddress = null;
> super.close();
> } {code}
>  
> java.net.HttpClient.close
>  
> {code:java}
> public void close() {
> boolean terminated = isTerminated();
> if (!terminated) {
> shutdown();
> boolean interrupted = false;
> while (!terminated) {
> try {
> terminated = awaitTermination(Duration.ofDays(1L));
> } catch (InterruptedException e) {
> if (!interrupted) {
> interrupted = true;
> shutdownNow();
> if (isTerminated()) break;
> }
> }
> }
> if (interrupted) {
> Thread.currentThread().interrupt();
> }
> }
> } {code}
> My workaround
> {code:java}
> public void close() {
> if (client instanceof AutoCloseable) {
> try {
> client.shutdownNow();
> //((AutoCloseable)client).close();
> } catch (Exception e) {
> //ignore
> }
> } else if (client != null) {
> String name = client.toString();
> client = null;
> tryToShutdownSelector(name);
> }
> defaultAddress = null;
> super.close();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CXF-8998) cxf-codegen-plugin - debug output velocity

2024-04-03 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-8998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833655#comment-17833655
 ] 

Freeman Yue Fang commented on CXF-8998:
---

Hi [~khmarbaise],

I just quickly tested the samples/wsdl_first shipped with the CXF 4.0.4 kit, but
I can't see the DEBUG org.apache.velocity output there.

Any chance you can attach a reproducer project so that we can take a closer 
look?

Thanks!
Freeman

> cxf-codegen-plugin - debug output velocity
> --
>
> Key: CXF-8998
> URL: https://issues.apache.org/jira/browse/CXF-8998
> Project: CXF
>  Issue Type: Improvement
>  Components: Configuration
>Affects Versions: 4.0.4
>Reporter: Karl Heinz Marbaise
>Priority: Minor
>
> Currently we are using {{cxf-codegen-plugin}}, which always produces 
> Velocity debug output on the console
> {code}
> 15:59:45  [INFO] 
> 15:59:45  [INFO] --- cxf-codegen:4.0.4:wsdl2java (default) @ wsdls ---
> 15:59:45  [INFO] Running code generation in fork mode...
> 15:59:45  [INFO] bin/java
> 15:59:45  [INFO] Building jar: /tmp/cxf-tmp-7/cxf-codegen.jar
> 15:59:45  [WARNING] Picked up JAVA_TOOL_OPTIONS: .
> 15:59:46  [INFO] 15:59:46.356 [main] DEBUG org.apache.velocity -- 
> Initializing Velocity, Calling init()...
> 15:59:46  [INFO] 15:59:46.360 [main] DEBUG org.apache.velocity -- Starting 
> Apache Velocity v2.3
> 15:59:46  [INFO] 15:59:46.362 [main] DEBUG org.apache.velocity -- Default 
> Properties resource: org/apache/velocity/runtime/defaults/velocity.properties
> 15:59:53  [INFO] 
> {code}
> Is there an easy way to suppress the debug output for the execution of the 
> plugin?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [gentoo-user] Re: [gentoo-dev] Current unavoidable use of xz utils in Gentoo

2024-03-31 Thread Rich Freeman
On Sun, Mar 31, 2024 at 5:36 PM Wol  wrote:
>
> On 31/03/2024 20:38, Håkon Alstadheim wrote:
> > For commercial entities, the government could just contact the company
> > and apply pressure, no need to sneak the backdoor in. Cf. RSA .
>
> Serving a "secret compliance" notice on a third party is always fraught
> with danger. Okay, I probably can't trust my own government to protect
> me, but if the US Government served a compliance notice on me I'd treat
> it with the respect it deserved - probably use it as loo paper!

I imagine most large companies would just comply with their local
government, but there are some major limitations all the same:

1. It isn't necessarily the local government who wants to plant the
back door.  The FBI can't just call up Huawei and get the same results
they would with Google.
2. Even if the company complies, there are going to be more people who
are aware of the back door.  Some of those could be foreign agents.
If you infiltrate the company and obfuscate your code, then only your
own agents are aware there is an intrusion.
3. The methods employed in your attack might also be sensitive, and so
that's another reason to not want to disclose them.  If you have some
way of subtly compromising some encryption scheme, you might not want
any employees of the company to even know the cryptosystem weakness
even exists, let alone the fact that you're exploiting it.  When the
methods are secret in this way it is that much easier to obfuscate a
clandestine attack as well.

When you look at engineer salaries against national defense budgets,
it wouldn't surprise me if a LOT of FOSS (and other) contributors are
being paid to add back doors.  On the positive side, that probably
also means that they're getting paid to fix a lot of bugs and add
features just to give them cover.

To bomb a power plant might take the combined efforts of 1-2 dozen
military aircraft in various roles, at $100M+ each (granted, that's
acquisition cost and not operational cost).  Installing a trojan that
would cause the plant to blow itself up on command might just require
paying a few developers for a few years, for probably less than $1M
total, and it isn't even that obvious that you were involved if it
gets discovered, or even after the plant blows up.

-- 
Rich



Re: [gentoo-user] Re: [gentoo-dev] Current unavoidable use of xz utils in Gentoo

2024-03-31 Thread Rich Freeman
On Sun, Mar 31, 2024 at 10:59 AM Michael  wrote:
>
> On Sunday, 31 March 2024 13:33:20 BST Rich Freeman wrote:
> > (moving this to gentoo-user as this is really getting off-topic for -dev)
>
> Thanks for bringing this to our attention Rich.
>
> Is downgrading to app-arch/xz-utils-5.4.2 all that is needed for now, or are
> we meant to rebuild any other/all packages, especially if we rebuilt our
> @world only a week ago as part of the move to profile 23.0?

It is not necessary to rebuild anything, unless you're doing something
so unusual that you'd already know the answer to the question.

-- 
Rich



[gentoo-user] Re: [gentoo-dev] Current unavoidable use of xz utils in Gentoo

2024-03-31 Thread Rich Freeman
(moving this to gentoo-user as this is really getting off-topic for -dev)

On Sun, Mar 31, 2024 at 7:32 AM stefan1
 wrote:
>
> Had I seen someone say that a bad actor would spend years gaining the
> trust of FOSS
> project maintainers in order to gain commit access and introduce such
> sophisticated
> back doors, I would have told them to take their meds.
> This is insane.

It makes quite a bit of sense though.  For a low-activity FOSS
project, how much manpower does it take to gain a majority share of
the governance?  In this case it is one person, but even for a big
project (such as Gentoo) I suspect that 3-4 people working full time
could probably hit upwards of 50% of the commit volume.  That doesn't
have to be 3-4 "Gentoo developers."  It could be 3-4 human beings with
1 admin assistant who manages 50 email addresses that the commits get
spread across, and they sign up as 50 Gentoo developers and get 50
votes for the Council (and probably half the positions there if they
want them), the opportunity to peer review "each other's"
contributions, and so on.

I just use Gentoo as an example as we're all familiar with it and
probably assume it couldn't happen here.  As you go on, the actual
targets are likely to be other projects...

> If this happened to something like firefox, I don't think anyone would
> have found out.
> No one bats an eye if a website loads 0.5s longer.

It seems likely that something like this has ALREADY happened to firefox.

It might also happen with commercial software, but the challenge there
is HR as you can't just pay 1 person to masquerade as 10 when they all
need to deal with payroll taxes.

We're going on over a decade since the Snowden revelations, and back
then the NSA was basically doing intrusion on an industrial scale.
You'd have dev teams building zero days and rootkits, sysadmin teams
who just administrate those back doors to make sure there are always
2-3 ways in just in case one gets closed, SMEs who actually make sense
of the stolen data, rooms full of engineers who receive intercepted
shipments of hardware and install backdoors on them, and so on.

We're looking at what probably only one person can do if they can
dedicate full time to something like this.  Imagine what a cube farm
full of supervised developers with a $50M budget could do, and that is
pocket change to most state actors.  The US government probably spends
more than that in a year on printer paper.

-- 
Rich



Re: [gentoo-dev] Current unavoidable use of xz utils in Gentoo

2024-03-30 Thread Rich Freeman
On Sat, Mar 30, 2024 at 10:57 AM Eddie Chapman  wrote:
>
> No, this is the the bad actor *themselves* being a
> principal author of the software, working stealthily and in very
> sophisticated ways for years, to manoeuvrer themselves and their software
> into a position of trust in the ecosystem whereby they were almost able to
> pull off the mother of all security nightmares for the world.

This is entirely speculative at this point.  It isn't certain that the
author is the one behind the exploit, and if they were, it is not
known for how long their intentions were malicious, or even what their
motivations were.  It is also unclear what pseudonymous accounts with
what projects are associated with the attacker.

You could end up being right, but it probably makes sense to at least
give things a few days for more facts to become available, before
making decisions to retool the entire distro.

I think the bigger challenge is what could have been done to prevent
this sort of problem in the first place.  There are so many projects
that end up with code running as root that have one or two people
taking care of them, and if somebody does the work to become one of
those maintainers, there aren't many people looking out for problems.

I think one thing that would help here is for distros to have better
ways to ensure that the code in the scm matches the code in the
tarball.  It is pretty common for releases to be manipulated in some
way (even if only to gpg sign them, but often to switch from commit
IDs to version numbers and so on), and that can be a place where stuff
gets added.  That still says nothing about obfuscated code, which this
also involved.

-- 
Rich



Re: [gentoo-dev] Current unavoidable use of xz utils in Gentoo

2024-03-30 Thread Rich Freeman
On Sat, Mar 30, 2024 at 3:06 AM Dale  wrote:
>
> when I got to the part about it not likely to affect Gentoo, my level of 
> concern dropped significantly.  If this is still true, there's no need to be 
> concerned.

"not likely" is the best way to characterize this.  The exploit has
not been fully analyzed, and it could have additional malicious
behavior, either designed by its author, or perhaps even unintended by
its author.

I just wanted to toss in that caveat, but agree that the defaults
deployed in Gentoo seem the most sensible for general use.  There is
nothing magical about xz - ANY widely-used library could have
something like this embedded in it, and the attacker exploited what
they had access to in order to go after a configuration that was going
to be widely deployed and reachable (xz+deb/rpm+systemd+openssh).  If
the attacker had an intended target that used gentoo+openrc and access
to something in our supply chain, this could have been a vulnerability
that only impacted Gentoo.

I think the big lesson here is that FOSS continues to suffer from core
dependencies that are challenged for resources, and that efforts to
fix this have to constantly keep up with the changing landscape.  xz
is going on 15 years old, but I don't think it was nearly as commonly
used until fairly recently.

libz has been a pretty well-known source of security flaws for ages
(granted, usually not intentional like this).  It isn't too surprising
that in both cases compression libraries were targeted, as these are
so widely depended on.

This is getting tangential, but part of me wonders if there is a
better way to do authentication.  Programs like ssh tend to run as
root so that they can authenticate users and then fork and suid to the
appropriate user.  Could some OS-level facility be created to allow
unprivileged processes to run the daemons and then as part of the
authentication process they can have the OS accept a controlled and
minimal set of data to create the process as the new user and hand
over the connection?  PAM already has a large amount of control over
the authentication process, so it seems like we just need to change
the security context that this runs in.  That's just
brainstorming-level thinking though - there could be obvious issues
with this that just haven't occurred to me.
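A primitive along these lines already exists: a process can hand an open file descriptor (e.g. an accepted connection) to another process over a Unix-domain socket via SCM_RIGHTS. This is only an illustration of the "hand over the connection" step, not a full privilege-separation design; it uses socket.send_fds/recv_fds, available since Python 3.9.

```python
import os
import socket

# Channel between a low-privilege listener and a per-user worker process
# (here just two ends of a socketpair in one process, for illustration).
listener_side, worker_side = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Stand-in for an accepted client connection: a pipe with pending data.
r, w = os.pipe()
os.write(w, b"hello")

# The listener ships the descriptor itself, not the data it carries.
socket.send_fds(listener_side, [b"fd"], [r])

# The worker receives a duplicate of the descriptor and reads from it.
msg, fds, flags, addr = socket.recv_fds(worker_side, 1024, maxfds=1)
data = os.read(fds[0], 5)
print(data)  # b'hello'
```

The missing piece, as the paragraph says, is an OS- or PAM-level way to create the worker in the right security context before the handoff.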

-- 
Rich



[gentoo-commits] repo/gentoo:master commit in: sys-process/systemd-cron/

2024-03-29 Thread Richard Freeman
commit: 5bf8858b838e87def072cbc01927768799b7cc89
Author: Richard Freeman  gentoo  org>
AuthorDate: Fri Mar 29 11:31:04 2024 +
Commit: Richard Freeman  gentoo  org>
CommitDate: Fri Mar 29 11:31:24 2024 +
URL:https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=5bf8858b

sys-process/systemd-cron: add 2.3.4

Signed-off-by: Richard Freeman  gentoo.org>

 sys-process/systemd-cron/Manifest  |   1 +
 sys-process/systemd-cron/systemd-cron-2.3.4.ebuild | 101 +
 2 files changed, 102 insertions(+)

diff --git a/sys-process/systemd-cron/Manifest 
b/sys-process/systemd-cron/Manifest
index 8da4bc90b8c5..06aa4d41d515 100644
--- a/sys-process/systemd-cron/Manifest
+++ b/sys-process/systemd-cron/Manifest
@@ -1,3 +1,4 @@
 DIST systemd-cron-1.16.7.tar.gz 37887 BLAKE2B 
a900058cef1cd02ac464d3ecdd43ce2f264bdba386f349ef82f0a915104302b1e88d94331d5fbaabe2c54f526900f3e1ac65ea6bdc2f27a6464e6d7514561a19
 SHA512 
d65d641fd449cdc0e91db3ae6ebe464bc4e24027c501b30a8ab17e7cc40de290cc6141bfb7880a724d97248861587e6f5fea113a6aa6e468d971aff3a13b056f
 DIST systemd-cron-2.2.0.tar.gz 55825 BLAKE2B 
ca4b02fdea5084439aa56b3f04603000d811f21922c11cd26a22ea6387e4b54575587ff4e1eb7fc7a3260d2f656ea0eb91365942c135982f4bd26aead1a080f1
 SHA512 
f26c7d7e2da7eb5cd5558f352aff852585bfefd961de6ecc2409a4a53b63f82662a89bdbf71f739ea8e44ef9e3e1fdec15cdc63ce1e90c289fb0e636ff679ca0
 DIST systemd-cron-2.3.0.tar.gz 56873 BLAKE2B 
3efe8adc1b735ed5eb91c64d0936edceec50ff476d42ba5c1e9941c196a7bc8c777b0c293c8ed71894dae31c5b721a45a2876cab0143298e1b1ab3e82fcb7ceb
 SHA512 
abb7c34d6901160395d64cfc4e5124887909b963bcfee027f64642b25bb138b3f085eb45595197a380faf39b7f5980e32c50d083be6307d7c985a55057962565
+DIST systemd-cron-2.3.4.tar.gz 58458 BLAKE2B 
594fff8f7cc126aa33b1dcbf74293a39b5939576203c11f8f0fc300285462f266c35503a6cfe46ee797e5e617e54e09b92dd6ba8a4044f962d1efd2822f0a87c
 SHA512 
2a9743df6d0e1a83b65d15609e47b901fde1d77d1207c4cc0617395be8d9e94daece91aec9a3398c3d09f86383e01cfff301614df727ca598efe873453f5a3c9

diff --git a/sys-process/systemd-cron/systemd-cron-2.3.4.ebuild 
b/sys-process/systemd-cron/systemd-cron-2.3.4.ebuild
new file mode 100644
index ..32892c37b102
--- /dev/null
+++ b/sys-process/systemd-cron/systemd-cron-2.3.4.ebuild
@@ -0,0 +1,101 @@
+# Copyright 1999-2024 Gentoo Authors
+# Distributed under the terms of the GNU General Public License v2
+
+EAPI=8
+inherit systemd toolchain-funcs
+
+DESCRIPTION="systemd units to create timers for cron directories and crontab"
+HOMEPAGE="https://github.com/systemd-cron/systemd-cron/"
+SRC_URI="https://github.com/systemd-cron/${PN}/archive/v${PV}.tar.gz -> 
systemd-cron-${PV}.tar.gz"
+
+LICENSE="MIT"
+SLOT="0"
+KEYWORDS="~amd64 ~arm ~arm64 ~hppa ~ia64 ~ppc ~ppc64 ~riscv ~sparc ~x86"
+IUSE="cron-boot etc-crontab-systemd minutely +runparts setgid yearly"
+# We can't run the unshare tests within sandbox/with low privs, and the
+# 'test-nounshare' target just does static analysis (shellcheck etc).
+RESTRICT="test"
+
+BDEPEND="virtual/pkgconfig"
+RDEPEND="
+   !sys-process/cronie[anacron]
+   acct-user/_cron-failure
+   acct-group/_cron-failure
+   app-crypt/libmd:=
+   sys-process/cronbase
+   >=sys-apps/systemd-253
+   !etc-crontab-systemd? ( !sys-process/dcron )
+   runparts? ( sys-apps/debianutils )
+"
+DEPEND="
+   dev-libs/openssl:=
+   sys-process/cronbase
+"
+
+pkg_pretend() {
+   if use runparts && ! [ -x /usr/bin/run-parts ] ; then
+   eerror "Please complete the migration to merged-usr."
+   eerror "https://wiki.gentoo.org/wiki/Merge-usr"
+   die "systemd-cron no longer supports split-usr"
+   fi
+}
+
+src_prepare() {
+   sed -i \
+   -e 's/^crontab/crontab-systemd/' \
+   -e 's/^CRONTAB/CRONTAB-SYSTEMD/' \
+   -- "${S}/src/man/crontab."{1,5}".in" || die
+
+   if use etc-crontab-systemd; then
+           sed -i \
+   -e "s!/etc/crontab!/etc/crontab-systemd!" \
+   -- "${S}/src/man/crontab."{1,5}".in" \
+   "${S}/src/bin/systemd-crontab-generator.cpp" \
+   "${S}/test/test-generator" || die
+   fi
+
+   default
+}
+
+my_use_enable() {
+   if use ${1}; then
+   echo --enable-${2:-${1}}=yes
+   else
+   echo --enable-${2:-${1}}=no
+   fi
+}
+
+src_configure() {
+   tc-export PKG_CONFIG CXX CC
+
+   ./configure \
+   --prefix="${EPREFIX}/usr" \
+   --mandir="${EPREFIX}/usr/share/man" \
+   --unitdir="$(systemd_get_systemunitdir)" \
+   --generatordir="$(systemd_get_systemgeneratordir)" \

[gentoo-commits] repo/gentoo:master commit in: app-backup/duplicity/, app-backup/duplicity/files/

2024-03-29 Thread Richard Freeman
commit: 0692ee17f584257f8e0dce50e2849c823bf02b81
Author: Richard Freeman  gentoo  org>
AuthorDate: Fri Mar 29 11:18:50 2024 +
Commit: Richard Freeman  gentoo  org>
CommitDate: Fri Mar 29 11:19:16 2024 +
URL:https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=0692ee17

app-backup/duplicity: add 2.2.3

Signed-off-by: Richard Freeman  gentoo.org>

 app-backup/duplicity/Manifest  |  1 +
 app-backup/duplicity/duplicity-2.2.3.ebuild| 51 ++
 .../files/duplicity-2.2.3-fix-docs-cmd.patch   | 21 +
 3 files changed, 73 insertions(+)

diff --git a/app-backup/duplicity/Manifest b/app-backup/duplicity/Manifest
index 7e9e1a4f388a..945214bd10d2 100644
--- a/app-backup/duplicity/Manifest
+++ b/app-backup/duplicity/Manifest
@@ -1,2 +1,3 @@
 DIST duplicity-2.1.1.tar.gz 1420132 BLAKE2B 
35cfa7c6c2caa647f3b2046783185973203b5d838c0d1a1a8e24982f1c7f74a1d025e0b0740c0c7bc14d516c59d3e691a2712b19b30882e9dbb411cecb90f4be
 SHA512 
fb19b1723e1e220ca72a41c3678ca29d889b2315c7fd043334d55cc2040d991e66480d71c6cc3f2ee5d17d9e1d9fb24ddc4c0ed771bbbefb6f1f6aa14cbe0347
 DIST duplicity-2.1.4.tar.gz 1556341 BLAKE2B 
d8302a7097519fd593fc05c8390101e615eaf11333e9d15e1ba7756b8ed9764709db80df41c741ee39eda0fa6de22c910b53db32d558c1ab09867c66724a056c
 SHA512 
91804c6f4dc13d700cbe4747317f9611f530996de8a22a0907d714fb6f8a7fadc3371c270a2257c24324c0233bb4501a4b7d33aea7631862568c8530f7173ef1
+DIST duplicity-2.2.3.tar.gz 1978008 BLAKE2B 
29a88eb059c3dd6faa7d08d52216cd0f9d96255eae1e613e2c5432bf8f36ad014484953e20b4a0dfaa2704dd6ac426a3285ff40a8cc82f287a8a89199df5a2c5
 SHA512 
b667092317899674c5e9d4b221815f24a7eae177d3d2b6d298f07d3e2d4a7badd6c976a6317331b7c6cea940a7885a3da397ab7197d5fd671d33278316f86916

diff --git a/app-backup/duplicity/duplicity-2.2.3.ebuild 
b/app-backup/duplicity/duplicity-2.2.3.ebuild
new file mode 100644
index ..71908351c86d
--- /dev/null
+++ b/app-backup/duplicity/duplicity-2.2.3.ebuild
@@ -0,0 +1,51 @@
+# Copyright 1999-2024 Gentoo Authors
+# Distributed under the terms of the GNU General Public License v2
+
+EAPI=8
+PYTHON_COMPAT=( python3_10 python3_11 python3_12 )
+DISTUTILS_USE_PEP517=setuptools
+DISTUTILS_EXT=1
+
+inherit distutils-r1 pypi
+
+DESCRIPTION="Secure backup system using gnupg to encrypt data"
+HOMEPAGE="https://duplicity.gitlab.io/"
+
+LICENSE="GPL-3"
+SLOT="0"
+KEYWORDS="~amd64 ~x86 ~amd64-linux ~x86-linux ~x64-macos"
+IUSE="s3 test"
+
+CDEPEND="
+   net-libs/librsync
+   app-crypt/gnupg
+   dev-python/fasteners[${PYTHON_USEDEP}]
+"
+DEPEND="${CDEPEND}
+   dev-python/setuptools[${PYTHON_USEDEP}]
+   dev-python/setuptools-scm[${PYTHON_USEDEP}]
+   test? (
+   app-arch/par2cmdline
+   dev-python/mock[${PYTHON_USEDEP}]
+   dev-python/pexpect[${PYTHON_USEDEP}]
+   )
+"
+RDEPEND="${CDEPEND}
+   dev-python/paramiko[${PYTHON_USEDEP}]
+   s3? ( dev-python/boto3[${PYTHON_USEDEP}] )
+"
+
+RESTRICT="test"
+
+PATCHES=(
+   "${FILESDIR}/${P}-fix-docs-cmd.patch"
+)
+
+python_test() {
+   esetup.py test
+}
+
+pkg_postinst() {
+   elog "Duplicity has many optional dependencies to support various 
backends."
+   elog "Currently it's up to you to install them as necessary."
+}

diff --git a/app-backup/duplicity/files/duplicity-2.2.3-fix-docs-cmd.patch 
b/app-backup/duplicity/files/duplicity-2.2.3-fix-docs-cmd.patch
new file mode 100644
index ..13e4d909f46a
--- /dev/null
+++ b/app-backup/duplicity/files/duplicity-2.2.3-fix-docs-cmd.patch
@@ -0,0 +1,21 @@
+--- a/setup.py 2024-03-29 07:04:27.847027200 -0400
 b/setup.py 2024-03-29 07:05:03.924506321 -0400
+@@ -93,18 +93,6 @@
+ "man/duplicity.1",
+ ],
+ ),
+-(
+-f"share/doc/duplicity-{Version}",
+-[
+-"CHANGELOG.md",
+-"AUTHORS.md",
+-"COPYING",
+-"README.md",
+-"README-LOG.md",
+-"README-REPO.md",
+-"README-TESTING.md",
+-],
+-),
+ ]
+ 
+ # short circuit fot READTHEDOCS



[jira] [Commented] (CXF-8990) Cxf plugins (codegen, xjc): fails in versions 4.0.4, 4.0.1

2024-03-22 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829981#comment-17829981
 ] 

Freeman Yue Fang commented on CXF-8990:
---

You are very welcome Claus!

> Cxf plugins (codegen, xjc): fails in versions 4.0.4, 4.0.1 
> ---
>
> Key: CXF-8990
> URL: https://issues.apache.org/jira/browse/CXF-8990
> Project: CXF
>  Issue Type: Bug
>Reporter: Jiri Ondrusek
>    Assignee: Freeman Yue Fang
>Priority: Major
>
> I was building Camel right after the upgrade of plugins (cxf-codegen to 4.0.4 
> and cxf-xjc to 4.0.1) See 
> [https://github.com/apache/camel/commit/a1bc751d7d8f5fb2016ed54e0735de5e4e9ec18b]
>  
> The build failed with:
> {code:java}
> [INFO] < org.apache.camel:camel-itest 
> >
> [INFO] Building Camel :: Integration Tests 4.5.0-SNAPSHOT             
> [557/559]
> [INFO]   from tests/camel-itest/pom.xml
> [INFO] [ jar 
> ]-
> [INFO] 
> [INFO] --- clean:3.3.2:clean (default-clean) @ camel-itest ---
> [INFO] Deleting /home/jondruse/git/community/camel/tests/camel-itest/target
> [INFO] 
> [INFO] --- cxf-codegen:4.0.4:wsdl2java (generate-test-sources) @ camel-itest 
> ---
> [INFO] Running code generation in fork mode...
> [INFO] The java executable is 
> /home/jondruse/.sdkman/candidates/java/17.0.9-tem/bin/java
> [INFO] Building jar: 
> /tmp/cxf-tmp-14372731791450387841/cxf-codegen4454546328799244847.jar
> [WARNING] Exception in thread "main" 
> org.apache.cxf.tools.common.ToolException: Not a valid jaxb or jaxws binding 
> file, please check the namespace
>  {code}
> {code:java}
> ...
> ERROR] Failed to execute goal 
> org.apache.cxf:cxf-codegen-plugin:4.0.4:wsdl2java (generate-test-sources) on 
> project camel-itest: 
> [ERROR] Exit code: 1
> [ERROR] Command line was: 
> /home/jondruse/.sdkman/candidates/java/17.0.9-tem/bin/java 
> --add-exports=jdk.xml.dom/org.w3c.dom.html=ALL-UNNAMED 
> --add-exports=java.xml/com.sun.org.apache.xerces.internal.impl.xs=ALL-UNNAMED 
> --add-opens java.base/java.security=ALL-UNNAMED --add-opens 
> java.base/java.net=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED 
> --add-opens java.base/java.util=ALL-UNNAMED --add-opens 
> java.base/java.util.concurrent=ALL-UNNAMED -jar 
> /tmp/cxf-tmp-14372731791450387841/cxf-codegen4454546328799244847.jar 
> /tmp/cxf-tmp-14372731791450387841/cxf-w2j15211708284601236974args
> {code}
> Issue can be simulated by building of Camel from this commit: 
> https://github.com/apache/camel/commit/a1bc751d7d8f5fb2016ed54e0735de5e4e9ec18b



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (CXF-8990) Cxf plugins (codegen, xjc): fails in versions 4.0.4, 4.0.1

2024-03-22 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CXF-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang reassigned CXF-8990:
-

Assignee: Freeman Yue Fang

> Cxf plugins (codegen, xjc): fails in versions 4.0.4, 4.0.1 
> ---
>
> Key: CXF-8990
> URL: https://issues.apache.org/jira/browse/CXF-8990
> Project: CXF
>  Issue Type: Bug
>Reporter: Jiri Ondrusek
>    Assignee: Freeman Yue Fang
>Priority: Major
>
> I was building Camel right after the upgrade of plugins (cxf-codegen to 4.0.4 
> and cxf-xjc to 4.0.1) See 
> [https://github.com/apache/camel/commit/a1bc751d7d8f5fb2016ed54e0735de5e4e9ec18b]
>  
> The build failed with:
> {code:java}
> [INFO] < org.apache.camel:camel-itest 
> >
> [INFO] Building Camel :: Integration Tests 4.5.0-SNAPSHOT             
> [557/559]
> [INFO]   from tests/camel-itest/pom.xml
> [INFO] [ jar 
> ]-
> [INFO] 
> [INFO] --- clean:3.3.2:clean (default-clean) @ camel-itest ---
> [INFO] Deleting /home/jondruse/git/community/camel/tests/camel-itest/target
> [INFO] 
> [INFO] --- cxf-codegen:4.0.4:wsdl2java (generate-test-sources) @ camel-itest 
> ---
> [INFO] Running code generation in fork mode...
> [INFO] The java executable is 
> /home/jondruse/.sdkman/candidates/java/17.0.9-tem/bin/java
> [INFO] Building jar: 
> /tmp/cxf-tmp-14372731791450387841/cxf-codegen4454546328799244847.jar
> [WARNING] Exception in thread "main" 
> org.apache.cxf.tools.common.ToolException: Not a valid jaxb or jaxws binding 
> file, please check the namespace
>  {code}
> {code:java}
> ...
> ERROR] Failed to execute goal 
> org.apache.cxf:cxf-codegen-plugin:4.0.4:wsdl2java (generate-test-sources) on 
> project camel-itest: 
> [ERROR] Exit code: 1
> [ERROR] Command line was: 
> /home/jondruse/.sdkman/candidates/java/17.0.9-tem/bin/java 
> --add-exports=jdk.xml.dom/org.w3c.dom.html=ALL-UNNAMED 
> --add-exports=java.xml/com.sun.org.apache.xerces.internal.impl.xs=ALL-UNNAMED 
> --add-opens java.base/java.security=ALL-UNNAMED --add-opens 
> java.base/java.net=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED 
> --add-opens java.base/java.util=ALL-UNNAMED --add-opens 
> java.base/java.util.concurrent=ALL-UNNAMED -jar 
> /tmp/cxf-tmp-14372731791450387841/cxf-codegen4454546328799244847.jar 
> /tmp/cxf-tmp-14372731791450387841/cxf-w2j15211708284601236974args
> {code}
> Issue can be simulated by building of Camel from this commit: 
> https://github.com/apache/camel/commit/a1bc751d7d8f5fb2016ed54e0735de5e4e9ec18b



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (CXF-8990) Cxf plugins (codegen, xjc): fails in versions 4.0.4, 4.0.1

2024-03-22 Thread Freeman Yue Fang (Jira)


[ 
https://issues.apache.org/jira/browse/CXF-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829920#comment-17829920
 ] 

Freeman Yue Fang commented on CXF-8990:
---

Hi [~jondruse],

I just pushed the fix on camel side.
https://github.com/apache/camel/commit/c2f719109610538374003e66049a83b6f5d5b323

Freeman

> Cxf plugins (codegen, xjc): fails in versions 4.0.4, 4.0.1 
> ---
>
> Key: CXF-8990
> URL: https://issues.apache.org/jira/browse/CXF-8990
> Project: CXF
>  Issue Type: Bug
>Reporter: Jiri Ondrusek
>Priority: Major
>
> I was building Camel right after the upgrade of plugins (cxf-codegen to 4.0.4 
> and cxf-xjc to 4.0.1) See 
> [https://github.com/apache/camel/commit/a1bc751d7d8f5fb2016ed54e0735de5e4e9ec18b]
>  
> The build failed with:
> {code:java}
> [INFO] < org.apache.camel:camel-itest 
> >
> [INFO] Building Camel :: Integration Tests 4.5.0-SNAPSHOT             
> [557/559]
> [INFO]   from tests/camel-itest/pom.xml
> [INFO] [ jar 
> ]-
> [INFO] 
> [INFO] --- clean:3.3.2:clean (default-clean) @ camel-itest ---
> [INFO] Deleting /home/jondruse/git/community/camel/tests/camel-itest/target
> [INFO] 
> [INFO] --- cxf-codegen:4.0.4:wsdl2java (generate-test-sources) @ camel-itest 
> ---
> [INFO] Running code generation in fork mode...
> [INFO] The java executable is 
> /home/jondruse/.sdkman/candidates/java/17.0.9-tem/bin/java
> [INFO] Building jar: 
> /tmp/cxf-tmp-14372731791450387841/cxf-codegen4454546328799244847.jar
> [WARNING] Exception in thread "main" 
> org.apache.cxf.tools.common.ToolException: Not a valid jaxb or jaxws binding 
> file, please check the namespace
>  {code}
> {code:java}
> ...
> ERROR] Failed to execute goal 
> org.apache.cxf:cxf-codegen-plugin:4.0.4:wsdl2java (generate-test-sources) on 
> project camel-itest: 
> [ERROR] Exit code: 1
> [ERROR] Command line was: 
> /home/jondruse/.sdkman/candidates/java/17.0.9-tem/bin/java 
> --add-exports=jdk.xml.dom/org.w3c.dom.html=ALL-UNNAMED 
> --add-exports=java.xml/com.sun.org.apache.xerces.internal.impl.xs=ALL-UNNAMED 
> --add-opens java.base/java.security=ALL-UNNAMED --add-opens 
> java.base/java.net=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED 
> --add-opens java.base/java.util=ALL-UNNAMED --add-opens 
> java.base/java.util.concurrent=ALL-UNNAMED -jar 
> /tmp/cxf-tmp-14372731791450387841/cxf-codegen4454546328799244847.jar 
> /tmp/cxf-tmp-14372731791450387841/cxf-w2j15211708284601236974args
> {code}
> Issue can be simulated by building of Camel from this commit: 
> https://github.com/apache/camel/commit/a1bc751d7d8f5fb2016ed54e0735de5e4e9ec18b



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (CXF-8990) Cxf plugins (codegen, xjc): fails in versions 4.0.4, 4.0.1

2024-03-22 Thread Freeman Yue Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/CXF-8990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Freeman Yue Fang resolved CXF-8990.
---
Resolution: Not A Bug

> Cxf plugins (codegen, xjc): fails in versions 4.0.4, 4.0.1 
> ---
>
> Key: CXF-8990
> URL: https://issues.apache.org/jira/browse/CXF-8990
> Project: CXF
>  Issue Type: Bug
>Reporter: Jiri Ondrusek
>    Assignee: Freeman Yue Fang
>Priority: Major
>
> I was building Camel right after the upgrade of plugins (cxf-codegen to 4.0.4 
> and cxf-xjc to 4.0.1) See 
> [https://github.com/apache/camel/commit/a1bc751d7d8f5fb2016ed54e0735de5e4e9ec18b]
>  
> The build failed with:
> {code:java}
> [INFO] < org.apache.camel:camel-itest 
> >
> [INFO] Building Camel :: Integration Tests 4.5.0-SNAPSHOT             
> [557/559]
> [INFO]   from tests/camel-itest/pom.xml
> [INFO] [ jar 
> ]-
> [INFO] 
> [INFO] --- clean:3.3.2:clean (default-clean) @ camel-itest ---
> [INFO] Deleting /home/jondruse/git/community/camel/tests/camel-itest/target
> [INFO] 
> [INFO] --- cxf-codegen:4.0.4:wsdl2java (generate-test-sources) @ camel-itest 
> ---
> [INFO] Running code generation in fork mode...
> [INFO] The java executable is 
> /home/jondruse/.sdkman/candidates/java/17.0.9-tem/bin/java
> [INFO] Building jar: 
> /tmp/cxf-tmp-14372731791450387841/cxf-codegen4454546328799244847.jar
> [WARNING] Exception in thread "main" 
> org.apache.cxf.tools.common.ToolException: Not a valid jaxb or jaxws binding 
> file, please check the namespace
>  {code}
> {code:java}
> ...
> ERROR] Failed to execute goal 
> org.apache.cxf:cxf-codegen-plugin:4.0.4:wsdl2java (generate-test-sources) on 
> project camel-itest: 
> [ERROR] Exit code: 1
> [ERROR] Command line was: 
> /home/jondruse/.sdkman/candidates/java/17.0.9-tem/bin/java 
> --add-exports=jdk.xml.dom/org.w3c.dom.html=ALL-UNNAMED 
> --add-exports=java.xml/com.sun.org.apache.xerces.internal.impl.xs=ALL-UNNAMED 
> --add-opens java.base/java.security=ALL-UNNAMED --add-opens 
> java.base/java.net=ALL-UNNAMED --add-opens java.base/java.lang=ALL-UNNAMED 
> --add-opens java.base/java.util=ALL-UNNAMED --add-opens 
> java.base/java.util.concurrent=ALL-UNNAMED -jar 
> /tmp/cxf-tmp-14372731791450387841/cxf-codegen4454546328799244847.jar 
> /tmp/cxf-tmp-14372731791450387841/cxf-w2j15211708284601236974args
> {code}
> Issue can be simulated by building of Camel from this commit: 
> https://github.com/apache/camel/commit/a1bc751d7d8f5fb2016ed54e0735de5e4e9ec18b



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

