Re: [FRIAM] links for this morning's FRIAM: Special Unitary Groups and Quaternions

2023-05-05 Thread Russell Standish
On Fri, May 05, 2023 at 03:22:49PM -0600, Stephen Guerin wrote:
> I think that's the same as when I said "I knew how to solve n-body systems
> with particle N^2/2 forces (corrected) with some quadtree or octree
> optimizations to get from n^2 to n log(n)." Or are you saying something
> different?
> 

Similar. Quad/octrees are good for inhomogeneous simulations.
Particle-in-cell methods are for homogeneous simulations.
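
A minimal sketch of the particle-in-cell idea (illustrative only, with
hypothetical names; nearest-grid-point deposit onto a uniform grid, which
suits a roughly homogeneous particle distribution):

    import numpy as np

    def deposit_to_grid(pos, n_cells, box=1.0):
        """Nearest-grid-point particle-in-cell deposit: bin N particle
        positions onto a uniform n_cells x n_cells grid in O(N),
        with no pairwise loop at all."""
        idx = np.clip((pos / box * n_cells).astype(int), 0, n_cells - 1)
        density = np.zeros((n_cells, n_cells))
        np.add.at(density, (idx[:, 0], idx[:, 1]), 1.0)  # accumulate counts
        return density

    # e.g. pos = np.random.rand(10_000, 2); rho = deposit_to_grid(pos, 64)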

Note a technique I developed in conjunction with Duraid Madina in the
early noughties, which we called GraphCode: it is effectively a
particle-in-cell method on an arbitrary graph. The graph can change
dynamically, and the cells can be redistributed to rebalance a
calculation using a graph partitioning algorithm - it makes for
something more general than quad/octrees.

I have written this up a couple of times, but have moved on to other
things now, and the work has had little impact. But I did discover
that a Spanish group took my code to develop an ABM system that was at
least twice as performant as the equivalent model in RepastHPC, so we
obviously did something right (J. Supercomputing, 2018,
doi:10.1007/s11227-018-2688-8).


Re: [FRIAM] links for this morning's FRIAM: Special Unitary Groups and Quaternions

2023-05-05 Thread Stephen Guerin
I think that's the same as when I said "I knew how to solve n-body systems
with particle N^2/2 forces (corrected) with some quadtree or octree
optimizations to get from n^2 to n log(n)." Or are you saying something
different?



Re: [FRIAM] links for this morning's FRIAM: Special Unitary Groups and Quaternions

2023-05-05 Thread Angel Edward
Here’s another connection I had forgotten. Consider particles on a 2D rectangle
with 1/r^2 repulsion. If you break up the rectangle into smaller rectangles in
which particles can only stay in their own rectangles or move to neighbor
rectangles, the N^2 force calculation comes down to N log N, same as the limit
on good sorting algorithms. This technique came up when we were using particles
to form an isosurface in 3D.
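
A minimal sketch of that subdivision in NumPy (illustrative only, with
hypothetical names; it assumes, per the description above, that interactions
are truncated to a particle's own cell and its neighbors, so the pair loop is
local rather than over all N^2/2 pairs):

    import numpy as np
    from collections import defaultdict

    def repulsion_forces(pos, cell_size, eps=1e-12):
        """Cell-binned 1/r^2 repulsion: each particle interacts only
        with particles in its own or the 8 neighboring cells."""
        cells = defaultdict(list)
        for i, p in enumerate(pos):
            cells[tuple((p // cell_size).astype(int))].append(i)
        forces = np.zeros_like(pos)
        for (cx, cy), members in cells.items():
            nbrs = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    for j in cells.get((cx + dx, cy + dy), [])]
            for i in members:
                d = pos[i] - pos[nbrs]            # vectors from neighbors to i
                r2 = np.sum(d * d, axis=1) + eps  # squared distances
                mask = r2 > eps                   # drop the self-pair
                # magnitude ~ 1/r^2, direction d/|d|  =>  d / |d|^3
                forces[i] += np.sum(d[mask] / r2[mask, None] ** 1.5, axis=0)
        return forces

    # e.g. pos = np.random.rand(1000, 2); F = repulsion_forces(pos, 0.05)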

Ed
__

Ed Angel

Founding Director, Art, Research, Technology and Science Laboratory (ARTS Lab)
Professor Emeritus of Computer Science, University of New Mexico

1017 Sierra Pinon
Santa Fe, NM 87501
505-984-0136 (home) edward.an...@gmail.com
505-453-4944 (cell) http://www.cs.unm.edu/~angel


[FRIAM] A poem: On Kronecker's Loom was: (Fwd: links for this morning's FRIAM: Special Unitary Groups and Quaternions)

2023-05-05 Thread Stephen Guerin
In celebration of my newfound understanding of the Kronecker operator
and FFTs with Laplacians, I asked GPT's help to pen some verse as a bard
might for a court :-)

For context: a couple of weeks ago, Frank gave me a ride to Friam while my
Volvo was in the shop, and this week I returned the coffee mug he left
behind. Here comes the metaphor: consider our iterated pairwise interactions
in friendship as a cyclic block matrix, our interactions with you as adding
the diagonal, and the full tapestry/matrix of our community as the Kronecker
tiling ;-) Here's the result:

On Kronecker's Loom: Tapestry

Weeks ago, Stephen's Volvo under repair,
Frank's BMW arrived, a lift to share,
horizontal row as thread in friendship,
Begins the weave tales of minds entwined

Today's mug with logo returned, a vertical thread, a column sewn,
With knowledge and random chatter they create,
The tapestry of ideas, together grown,
In loom of thought, friendships elevate.

Ed joins the Diagonal, the knowledge grows,
With Frank, a formal language bond they know,
Symbolic language, to Stephen once opaque,
Now grasped, the power of the operator in hand.

In past, with clumsy nested loops, Stephen worked,
Yet, newfound insight in linear algebra gained,
C = np.kron(A, B) the operator, unshirked,
Through friendships forged, knowledge unrestrained.

Ideas exchanged, insights embraced,
Connections formed, in wisdom traced,
A realm of learning, friendships laced,
Block matrices, the foundation laid.

In Kronecker's touch, a pattern forms,
Integrating all the pairwise friendships made,
Their union, a dance that transforms,
The tapestry of knowledge, unswayed.




___
stephen.gue...@simtable.com 
CEO, https://www.simtable.com 
1600 Lena St #D1, Santa Fe, NM 87505
office: (505)995-0206 mobile: (505)577-5828



Re: [FRIAM] links for this morning's FRIAM: Special Unitary Groups and Quaternions

2023-05-05 Thread Stephen Guerin
Thanks Roger and Ed!

I've spent some time with Ed and Frank discussing this, and I've really
filled in some gaps in my knowledge of parallel algorithms. E.g., I knew how
to solve n-body systems with N^2/2 pairwise forces, with quadtree or octree
optimizations to get from n^2 to n log(n). But using the FFT on Laplacians
to solve the Poisson equation was new to me, and I can now see the beauty.
Today, Ed quickly threw out the Kronecker operator/product, which Frank knew
but I didn't. Frank flashed me a wikipedia article on his phone with symbols
that I couldn't immediately grok. But after asking chatGPT to explain the
operator to a 3D graphics person, I immediately got it, with the added
realization that I would usually have implemented this function with two
inner loops over rows and columns instead of using the Kronecker routines
available in optimized linear algebra/graphics libraries. Often this was
happening under the hood of my tools, but I didn't realize it.
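
For concreteness, a minimal NumPy sketch of the two routes just described,
the hand-rolled double loop versus the library call (illustrative code with
a hypothetical helper name, assuming small dense matrices):

    import numpy as np

    def kron_loops(A, B):
        """Kronecker product via explicit loops over rows and columns,
        the way one might hand-roll it without a library."""
        m, n = A.shape
        p, q = B.shape
        C = np.zeros((m * p, n * q))
        for i in range(m):
            for j in range(n):
                # each entry A[i, j] scales a full copy of B
                C[i * p:(i + 1) * p, j * q:(j + 1) * q] = A[i, j] * B
        return C

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.eye(2)
    assert np.allclose(kron_loops(A, B), np.kron(A, B))  # same result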

As a 3D graphics developer, understanding the Kronecker product can be very
useful. The Kronecker product is often used in computer graphics and
computer vision applications, such as texture mapping, geometric
transformations, and image processing. Here are a few specific ways in
which the Kronecker product can be useful to a 3D graphics developer (a
small sketch of the first use follows the summary below):
   1. Texture mapping: The Kronecker product can be used to create
   repetitive patterns in textures, such as brick walls, tiles, or grass. By
   creating a base texture and applying a Kronecker product with a smaller
   texture, a developer can create a seamless and repeating texture that
   covers a larger surface.
   2. Geometric transformations: The Kronecker product can be used to
   perform geometric transformations, such as scaling, rotation, and
   translation, on 3D objects. By creating a Kronecker matrix with a
   transformation matrix, a developer can apply the transformation to every
   vertex of an object, resulting in a transformed object.
   3. Image processing: The Kronecker product can be used to perform image
   processing operations, such as blurring, sharpening, or edge detection, on
   3D images. By creating a Kronecker matrix with a filter matrix, a developer
   can apply the filter to every pixel of an image, resulting in a processed
   image.

In summary, the Kronecker product is a powerful tool that can be used in
various ways by 3D graphics developers. Whether it's creating textures,
transforming objects, or processing images, understanding the Kronecker
product can help a developer achieve their desired results more efficiently
and effectively.
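
For instance, the texture-tiling use in item 1 above is a one-liner (a
minimal sketch, assuming a grayscale tile and a uniform intensity pattern):

    import numpy as np

    tile = np.array([[1.0, 0.0],
                     [0.0, 1.0]])      # a 2x2 checker "tile"
    pattern = np.ones((3, 3))          # 3x3 layout of tile intensities

    # np.kron lays the tile out in a 3x3 grid -> a seamless 6x6 texture
    texture = np.kron(pattern, tile)
    print(texture.shape)               # (6, 6)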



___
stephen.gue...@simtable.com 
CEO, https://www.simtable.com 
1600 Lena St #D1, Santa Fe, NM 87505
office: (505)995-0206 mobile: (505)577-5828


On Fri, Apr 28, 2023 at 7:50 PM Angel Edward  wrote:

> Most of my dissertation (1968) was on numerical solution of potential
> problems. One of the parts was a proof that some of the known iterative
> methods converged. The argument loosely went something like this. Consider
> the 2D Poisson equation on a square. If you use an N x N approximation with
> the usual discretization of the Laplacian
>
> u_ij = (u_i(j-1) + u_i(j+1) + u_(i-1)j + u_(i+1)j)/4
>
> i.e., the average of the four surrounding points, the problem reduces to the
> solution of a set of N^2 linear equations
>
> Ax = b
>
> where x is a vector of the unknowns {u_ij} arranged by rows or columns, b
> is determined by the boundary conditions and the right side of the Poisson
> equation. The interesting part is A which is block tridiagonal. With only a
> small error A can be made block cyclic. You can then diagonalize A with a
> sine transform and I was able to use that for proofs.
>
> A few years later when the FFT came about, we realized that we could use
> the FFT to do the sine transform and the resulting numerical method was at
> least as efficient as any other method people had come up with.
>
> Ed
>
> Here’s a reference from 1986 that I think was based on paper at a Bellman
> Continuum
>
> ``From Dynamic Programming to Fast Transforms,'' E. Angel, J. Math. Anal.
> Appl., 119, 1986.
>
> Ed
> __
>
> Ed Angel
>
> Founding Director, Art, Research, Technology and Science Laboratory (ARTS
> Lab)
> Professor Emeritus of Computer Science, University of New Mexico
>
> 1017 Sierra Pinon
> Santa Fe, NM 87501
> 505-984-0136 (home)   edward.an...@gmail.com
> 505-453-4944 (cell)  http://www.cs.unm.edu/~angel
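
To make the quoted construction concrete, here is a minimal NumPy/SciPy
sketch (an illustration, not Ed's code; it assumes zero Dirichlet boundary
conditions on a square grid) that assembles the block-tridiagonal A as a sum
of Kronecker products and then solves Ax = b with the type-I discrete sine
transform, which diagonalizes this A exactly:

    import numpy as np
    from scipy.fft import dstn, idstn

    N = 32                       # interior grid points per side
    h = 1.0 / (N + 1)            # grid spacing on the unit square
    T = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)  # 1D 2nd difference
    I = np.eye(N)
    A = (np.kron(I, T) + np.kron(T, I)) / h**2  # block-tridiagonal Laplacian

    b = np.random.rand(N, N)     # sample right-hand side on the grid
    # eigenvalues of T/h^2 in the DST-I basis, k = 1..N
    lam = (2 - 2 * np.cos(np.pi * np.arange(1, N + 1) / (N + 1))) / h**2
    # transform, divide by the 2D eigenvalues, transform back
    u = idstn(dstn(b, type=1) / (lam[:, None] + lam[None, :]), type=1)

    assert np.allclose(A @ u.ravel(), b.ravel())  # u solves A x = b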
>
> On Apr 28, 2023, at 8:18 AM, Stephen Guerin 
> wrote:
>
> Special Unitary Groups and Quaternions
>
> Mostly for Ed from the context of last week's Physical Friam if you're
> coming today.
>
> Discussion was around potential ways of visualizing the dynamics of SU(3),
> SU(2), SU(1) that highlights Special Unitary Groups. (wiki link from Frank
> ), and can we
> foreground how quate

Re: [FRIAM] Cory Doctorow on AI hype vs Crypto Hype

2023-05-05 Thread Roger Critchlow
Oh, the "We Have No Moat" posting is just holding on to the front page of
hackernews 24 hours later, 954 comments so far
https://news.ycombinator.com/item?id=35813322

-- rec --


Re: [FRIAM] connAI.pdf

2023-05-05 Thread Marcus Daniels
Two quantum computing companies use Fahlman’s CMU Common Lisp in the core of 
their product.   And their products are used to implement Boltzmann machines.  
So there.

From: Friam  On Behalf Of Frank Wimberly
Sent: Friday, May 5, 2023 12:12 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: [FRIAM] connAI.pdf

Scott Fahlman's wife and my wife have been best friends for over 50 years.  We 
have spent a lot of time with them over the decades.
---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM


Re: [FRIAM] Cory Doctorow on AI hype vs Crypto Hype

2023-05-05 Thread Roger Critchlow
On Fri, May 5, 2023 at 12:57 PM Roger Critchlow  wrote:

> Ah, found the RSS feed that sends text around the paywall.
>
> -- rec --
>
Geoffrey Hinton tells us why he’s now scared of the tech he helped build

MIT Technology Review, by Will Douglas Heaven, May 02, 2023



Re: [FRIAM] Cory Doctorow on AI hype vs Crypto Hype

2023-05-05 Thread Roger Critchlow
Ah, found the RSS feed that sends text around the paywall.

-- rec --

I met Geoffrey Hinton at his house on a pretty street in north London just
four days before the bombshell announcement that he is quitting Google.
Hinton is a pioneer of deep learning who helped develop some of the most
important techniques at the heart of modern artificial intelligence, but
after a decade at Google, he is stepping down to focus on new concerns he
now has about AI.

Stunned by the capabilities of new large language models like GPT-4,
Hinton wants to raise public awareness of the serious risks that he now
believes may accompany the technology he ushered in.

At the start of our conversation, I took a seat at the kitchen table, and
Hinton started pacing. Plagued for years by chronic back pain, Hinton
almost never sits down. For the next hour I watched him walk from one end
of the room to the other, my head swiveling as he spoke. And he had plenty
to say.

The 75-year-old computer scientist, who was a joint recipient with Yann
LeCun and Yoshua Bengio of the 2018 Turing Award for his work on deep
learning, says
he is ready to shift gears. “I’m getting too old to do technical work that
requires remembering lots of details,” he told me. “I’m still okay, but I’m
not nearly as good as I was, and that’s annoying.”

But that’s not the only reason he’s leaving Google. Hinton wants to spend
his time on what he describes as “more philosophical work.” And that will
focus on the small but—to him—very real danger that AI will turn out to be
a disaster.

Leaving Google will let him speak his mind, without the self-censorship a
Google executive must engage in. “I want to talk about AI safety issues
without having to worry about how it interacts with Google’s business,” he
says. “As long as I’m paid by Google, I can’t do that.”

That doesn’t mean Hinton is unhappy with Google by any means. “It may
surprise you,” he says. “There’s a lot of good things about Google that I
want to say, and they’re much more credible if I’m not at Google anymore.”

Hinton says that the new generation of large language models—especially
GPT-4, which OpenAI released in March—has made him realize that machines
are on track to be a lot smarter than he thought they’d be. And he’s scared
about how that might play out.

“These things are totally different from us,” he says. “Sometimes I think
it’s as if aliens had landed and people haven’t realized because they speak
very good English.”
Foundations

Hinton is best known for his work on a technique called backpropagation,
which he proposed (with a pair of colleagues) in the 1980s. In a nutshell,
this is the algorithm that allows machines to learn. It underpins almost
all neural networks today, from computer vision systems to large language
models.

It took until the 2010s for the power of neural networks trained via
backpropagation to truly make an impact. Working with a couple of graduate
students, Hinton showed that his technique was better than any others at
getting a computer to identify objects in images. They also trained a
neural network to predict the next letters in a sentence, a precursor to
today’s large language models.

One of these graduate students was Ilya Sutskever, who went on to cofound
OpenAI and lead the development of ChatGPT.
“We got the first inklings that this stuff could be amazing,” says Hinton.
“But it’s taken a long time to sink in that it needs to be done at a huge
scale to be good.” Back in the 1980s, neural networks were a joke. The
dominant idea at the time, known as symbolic AI, was that intelligence
involved processing symbols, such as words or numbers.

But Hinton wasn’t convinced. He worked on neural networks, software
abstractions of brains in which neurons and the connections between them
are represented by code. By changing how those neurons are
connected—changing the numbers used to represent them—the neural network
can be rewired on the fly. In other words, it can be made to learn.

“My father was a biologist, so I was thinking in biological terms,” says
Hinton. “And symbolic reasoning is clearly not at the core of biological
intelligence.

“Crows can solve puzzles, and they don’t have language. They’re not doing
it by storing strings of symbols and manipulating them. They’re doing it by
changing the strengths of connections between neurons in their brain. And
so it has to be possible to learn complicated things by changing the
strengths of connecti

Re: [FRIAM] Cory Doctorow on AI hype vs Crypto Hype

2023-05-05 Thread Pieter Steenekamp
I don't have the article, but maybe as a second best the following
interview with Geoffrey Hinton:
https://www.youtube.com/watch?v=sitHS6UDMJc

On Fri, 5 May 2023 at 08:16, Roger Critchlow  wrote:

> Merle --
>
> I tried, but it's paywalled to me now.
>
> -- rec --
>
> On Thu, May 4, 2023 at 4:39 PM Roger Critchlow  wrote:
>
>> Didn't read Cory's blog, though I'm still laughing at the blurb for Red
>> Team Blues.
>>
>> But I read Geoffrey Hinton's interview with MIT Tech Review yesterday.
>>
>>
>> https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai
>>
>> It's not hype that chatgpt dazzled everyone with a model which is much
>> smaller than a human brain, even though it took a fairly huge budget for
>> OpenAI to build it.
>>
>> And I read this posting from an anonymous googler today via hackernews.
>>
>>https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
>>
>> It's not hype that the open source community has rapidly figured out how
>> to produce equally dazzling models with drastically smaller budgets of
>> resources, and is continuing to iterate the process.
>>
>> -- rec --
>>
>> On Thu, May 4, 2023 at 10:11 AM Gary Schiltz 
>> wrote:
>>
>>> I love the graphic! I've had the misfortune of twice jumping on that
>>> roller coaster just before the Peak of Inflated Expectations - once for the
>>> AI boom/bust of the mid 1980s and once for the dotcom boom/bust of the late
>>> 1990s. Jumped on too late to make a killing, but didn't get too badly
>>> damaged by the Trough of Disillusionment either.
>>>
>>> On Thu, May 4, 2023 at 10:34 AM Steve Smith  wrote:
>>>

 https://doctorow.medium.com/the-ai-hype-bubble-is-the-new-crypto-hype-bubble-74e53028631e

 I *am* a fan of LLMs (not so much image generators) and blockchain (not
 so much crypto or NFTs) in their "best" uses (not that I or anyone else
 really knows what that is) in spite of my intrinsic neoLuddite affect.

 Nevertheless I think Doctorow in his usual acerbic and penetrating
 style really nails it well here IMO.

 I particularly appreciated his reference/quote to Emily Bender's "High
 on Supply" and "word/meaning conflation" in the sense of "don't mistake an
 accent for a personality" in the dating scene.

 A lot of my own contrarian comments on this forum come from resisting
 what Doctorow introduces (to me) as "CritiHype" (attributed to Lee
 Vinsel)...  the feeling that some folks make a (a)vocation out of kneejerk
 criticism.   It is much easier to *poke* at something than to *do*
 something worthy of being *poked at*.   I appreciate that Doctorow doesn't
 seem to (by my fairly uncritical eye) engage in this much himself...  which
 is why I was drawn into this article...

 I also very much appreciate his quote from Charlie Stross:

 *corporations are Slow AIs, autonomous artificial lifeforms that
 consistently do the wrong thing even when the people who nominally run them
 try to steer them in better directions:*


 https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future


 I could go on quoting and excerpting and commenting on his whole
 article and the myriad links/references he offers up but will curb my
 enthusiasm and leave it to the astute FriAM readers to choose how much to
 indulge in.   It was a pretty good antidote for my own AI-thusiasm driven
 by long chats with GPT4 (converging on being more like long sessions
 wandering through Wikipedia after the first 100 hours of engagement).



