Re: Formatting in the REPL?

2020-05-03 Thread Wilhelm Fitzpatrick




Perhaps by setting '*Prompt'?

https://software-lab.de/doc/refP.html#*Prompt

☺/ A!ex


Thanks for the tip! I think I'm on to something...

: (de *Prompt (cond ((isa '+Fixed @) (format> @)) (T "")))
# *Prompt redefined
-> *Prompt
:
: (fixed 1234 2)
-> $171307330464572
12.34: (fixed+ @ (fixed 123 1))
-> $171307330465675
24.64:

..fun :)
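For plain scaled integers (no '+Fixed' class involved) the same trick can be
sketched with the built-in 'format' - assuming, as in the transcript above, that
'@' holds the last REPL result when *Prompt is evaluated; the scale of 2 is just
an arbitrary choice for the sketch:

(de *Prompt
   (cond
      ((num? @) (format @ 2))   # a number: show it with 2 decimal places
      (T "") ) )                # anything else: no prompt prefix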

-wilhelm


--
UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe


Re: divmod?

2020-05-03 Thread Mike
May 3, 2020 1:13 AM, "Wilhelm Fitzpatrick"  wrote:

> I'm not finding such a thing in the function reference, but asking on the off 
> chance I'm
> overlooking it. Is there a way in Picolisp to get a division result and 
> remainder as a single
> operation?

Sure
http://ix.io/2kBM
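In case the paste ever expires: a minimal sketch of what such a helper might
look like in plain PicoLisp (not necessarily what the paste contains) - '/'
gives the truncated quotient and '%' the remainder, so bundling them is a
one-liner:

: (de divmod (A B)
   (cons (/ A B) (% A B)) )    # (quotient . remainder)
-> divmod
: (divmod 17 5)
-> (3 . 2)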

--
UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe


Structure sharing bignums in pil32

2020-05-03 Thread Andras Pahi
Hello all,

I would like to ask whether structure sharing is used in pil32 bignums or not.
In doSub() and doAbs() structure sharing is used, but otherwise big.c
copies the argument and then modifies it.

Thanks,
Andras




-- 
UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe



Re: divmod?

2020-05-03 Thread Wilhelm Fitzpatrick




> I'm not finding such a thing in the function reference, but asking on the off
> chance I'm overlooking it. Is there a way in Picolisp to get a division result
> and remainder as a single operation?
>
> Sure
> http://ix.io/2kBM


Thanks! But as Alex intuited, I was looking to leverage the underlying 
processor operation that returns both parts of the integer divide in a 
single operation. But if I follow his response correctly, the cost of 
building the memory representation of the answer swamps the actual cost 
of the divide, and that's going to be similar regardless of if the 
divide and remainder wind up being one machine instruction or two.


-wilhelm



--
UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe


Re: divmod?

2020-05-03 Thread John Duncan
For heavy number crunching, picolisp might not be appropriate. In modern
systems you would probably want something that used the vector
instructions. But if it’s a few divisions here and there, you’d be
surprised how little the efficiency in clock cycles  matters anymore.

On Sun, May 3, 2020 at 14:28 Wilhelm Fitzpatrick  wrote:

>
> >> I'm not finding such a thing in the function reference, but asking on
> the off chance I'm
> >> overlooking it. Is there a way in Picolisp to get a division result and
> remainder as a single
> >> operation?
> > Sure
> > http://ix.io/2kBM
>
> Thanks! But as Alex intuited, I was looking to leverage the underlying
> processor operation that returns both parts of the integer divide in a
> single operation. But if I follow his response correctly, the cost of
> building the memory representation of the answer swamps the actual cost
> of the divide, and that's going to be similar regardless of if the
> divide and remainder wind up being one machine instruction or two.
>
> -wilhelm
>
>
>
> --
> UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe
>
-- 
John Duncan


Talking about algorithms and their BIG(O) behaviour, efficiency, other JIT IR compilers ...

2020-05-03 Thread Guido Stepken
Certainly you've heard of Fusion¹ Trees, Exponential² Trees, Ryabko³ Trees,
Bit Twiddling Hacks⁴ in C ... this is what I consider the high class of
programming excellence that every programmer not only should have heard of, but
also should have implemented on his/her own.

This class of algorithms shows much better BIG(O) behaviour, often
O(log(log(n))) or O(log₃₂(n)) on 32 bit, O(log₆₄(n)) on 64 bit, when searching
through vast amounts of data in almost ZERO time.

They're quite rare in e.g. Apache Foundation or Linux Foundation software
("pile of shit") packages, since they allow even better search times on a
$25 Raspberry Pi Zero than on a $100,000 28-core dual-CPU Intel Xeon
machine. Of course, this is not in the interest of US industry.

Most interesting for Lisp implementors is the Ryabko aka Fenwick Tree,
since it easily allows implementing highly efficient multi-generational
(ageing memory pools) garbage collectors, as found in our German
version of LLVM, the brilliant LuaJIT DYNASM engine, which has almost
everything that is publicly considered 'state of the art' JIT
compiler technology.
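For readers who have not met it before, here is an illustrative Ryabko/Fenwick
tree sketch in plain PicoLisp - it only shows the prefix-sum idea (update and
query both touch O(log n) positions), uses an ordinary list instead of the
O(1)-indexed array the original algorithm assumes, and says nothing about any
particular GC:

(de lowbit (I)                       # lowest set bit of I
   (let B 1
      (until (bit? B I)
         (inc 'B B) )
      B ) )

(de fenwickAdd (Tree I Val)          # add Val at position I
   (while (>= (length Tree) I)
      (inc (nth Tree I) Val)
      (inc 'I (lowbit I)) ) )

(de fenwickSum (Tree I)              # sum of positions 1 .. I
   (let S 0
      (while (gt0 I)
         (inc 'S (car (nth Tree I)))
         (dec 'I (lowbit I)) )
      S ) )

(setq Fen (make (do 8 (link 0))))    # eight counters, all zero
(fenwickAdd Fen 3 5)                 # add 5 at position 3
(fenwickAdd Fen 7 2)                 # add 2 at position 7
(fenwickSum Fen 4)                   # -> 5
(fenwickSum Fen 7)                   # -> 7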

Of course DYNASM is faster, smaller, and better than LLVM, generates machine
code for even the smallest embedded devices (and even runs on those!!!), and
has the much superior "Four Color Multi Color, Multi Generational Mark Sweep GC".

That little thing with the long name is far superior to every other garbage
collector I've seen in my life:

http://wiki.luajit.org/SSA-IR-2.0
http://wiki.luajit.org/New-Garbage-Collector

Lua is a very Lisp-like language, since its smallest data structure is a
two-field (cons-like) cell, one data field and one pointer to the next cell.
But it comes with INFIX notation.

And since the Lua programming language's data structures are so similar to
those of Lisp, there is also a Lisp port onto that tiny LuaJIT DYNASM engine,
a kind of transpiler, written in - Lua! A piece of cake in terms of LoC
(compare to pil21):

https://github.com/bakpakin/Fennel/blob/master/README.md

And, of course, you get C speed with that stuff, with far fewer lines of
code. It is maintainable, security reviews can easily be done ... much smaller
memory footprint, much faster JIT compile times, thanks to DYNASM:

http://luajit.org/dynasm.html

Needless to say, this stuff is far superior to anything you can find
made by US Foundations with their billions of lines of unmaintainable bloat
...

I have been using that stuff for a while now - both the superior algorithms as
well as some of our own German/EU software stacks - and I can only tell you
what I've already mentioned:

"Don't use US Software Stacks!!! - Billions of lines of code, millions of
bugs, thousands of NSA backdoors, hundreds of slow algorithms!!!"

I hope you regard my "findings" as quite interesting and convincing, and hope
to see you using the next generation of high-quality software that does not
fall under US software export restrictions, since it's - "Made in Germany"!!! ;-)

Finally, you should also have a look at the last link, "Bit Twiddling
Hacks in C". You will be _very surprised_ by what you find there.
Needless to say, this collection of tips and tricks also applies to
other programming languages, making your code much faster (and much
less readable and understandable, if you don't put a link into the
comments)! ;-)
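One classic entry from that collection translates directly to PicoLisp; a tiny
sketch for positive N (the function name is just made up for the example):

: (de pow2? (N)
   (=0 (& N (dec N))) )   # non-NIL (namely 0) exactly when N is a power of two
-> pow2?
: (filter pow2? (range 1 20))
-> (1 2 4 8 16)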

Have fun!

Best regards, Guido Stepken

¹) https://en.wikipedia.org/wiki/Fusion_tree
²) https://en.wikipedia.org/wiki/Exponential_tree
³) https://en.wikipedia.org/wiki/Fenwick_tree
⁴) https://graphics.stanford.edu/~seander/bithacks.html


Re: Talking about algorithms and their BIG(O) behaviour, efficiency, other JIT IR compilers ...

2020-05-03 Thread John Duncan
I took my algorithms class from one of the inventors of fusion trees. It’s
more of a stunt than anything else. He invented it just to prove the point.
The constant factor of each operation dwarfs the comparison it avoids. But
the big-O is relatively small. They are still impractical.

Pico operates on lists. Its fundamental data structure tends toward linear
algorithms. Trees are also possible where needed. In general, this is not
the fastest, but it’s usually good enough.

On Sun, May 3, 2020 at 16:57 Guido Stepken  wrote:

> [...]

-- 
John Duncan


Re: divmod?

2020-05-03 Thread Guido Stepken
Plain wrong. Christian Schafmeister will teach you the use of Lisp in
high(est) end number crunching:

https://youtube.com/watch?v=8X69_42Mj-g

He's the Super Brain behind all the compute stuff of that famous Genomic
Research Institute in NY (protein folding ... Corona) ... ;-)

In fact, he's using the AI language Lisp to compose all those mighty C/C++
libraries into new libraries. That means: his Lisp AI is (re-)writing software.

I fear you're a decade behind what's 'state of the art' in programming!
Lisp is, to this day, a highly important language. It also optimizes
machine code within GCC, generating highly efficient machine code for any
CPU in the world - see MELT, a Lisp dialect:

http://www.starynkevitch.net/Basile/gcc-melt/

Binding GSL (the GNU Scientific Library) and the magic OpenBLAS (searching
through huge graph structures in zero time) to PicoLisp is a piece of cake.

https://picolisp.com/wiki/?interfacing

Automated marshalling and unmarshalling of C interfaces in Lisp is a
no-brainer: simply extract the C header files. Finished!
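As a minimal illustration of such a binding - assuming a pil64/pil21 system
where 'native' is available, and that the C runtime can be reached through the
main program handle "@" - a direct call of getenv() looks like this:

(native "@" "getenv" 'S "HOME")   # 'S = return a string
# -> the value of $HOME, e.g. "/home/me"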

Have fun!

Best regards, Guido Stepken

On Sunday, May 3, 2020, John Duncan wrote:
> For heavy number crunching, picolisp might not be appropriate. In modern
> systems you would probably want something that used the vector
> instructions. But if it’s a few divisions here and there, you’d be
> surprised how little the efficiency in clock cycles matters anymore.
> [...]


Re: Talking about algorithms and their BIG(O) behaviour, efficiency, other JIT IR compilers ...

2020-05-03 Thread Guido Stepken
Certainly, the Fusion Tree, as well as its successor, the Exponential Tree, and
especially the Van Emde Boas Tree, have a certain "overhead". But it pays off
quickly when searching through huge amounts of data on terabyte+ SSDs, which
you can find in any home today.

Remember: "You'll never become a car even if you put your body in a garage!"

Regards, Guido Stepken

On Sunday, May 3, 2020, John Duncan wrote:
> I took my algorithms class from one of the inventors of fusion trees.
> It’s more of a stunt than anything else. He invented it just to prove the
> point. The constant factor of each operation dwarfs the comparison it
> avoids. But the big-O is relatively small. They are still impractical.
> Pico operates on lists. Its fundamental data structure tends toward
> linear algorithms. Trees are also possible where needed. In general, this
> is not the fastest, but it’s usually good enough.
> On Sun, May 3, 2020 at 16:57 Guido Stepken  wrote:
>> [...]
>
> --
> John Duncan


Re: Talking about algorithms and their BIG(O) behaviour, efficiency, other JIT IR compilers ...

2020-05-03 Thread Guido Stepken
"Pico operates on lists. Its fundamental data structure tends toward linear
algorithms."

This is what you've been told? Plain wrong!

(cons (cons 'A 'B) 'C) already is a minimal (directed) graph.

What does that (200 lines of C) Lisp implementation tell you?

typedef struct List {
   struct List *next;   /* pointer to the next cell */
   void *data;          /* payload carried by this cell */
} List;

https://carld.github.io/2017/06/20/lisp-in-less-than-200-lines-of-c.html

Lisp is a (recursive) graph (syntax "tree") engine by default, scaling -
endlessly!

That's what PicoLisp is, too.
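And beyond raw cons cells, stock PicoLisp already ships a binary index tree via
'idx'; a minimal sketch (the symbols and the local 'Tree' are only illustrative):

: (let Tree NIL
   (for S '(d b f a c e g)
      (idx 'Tree S T) )   # insert each symbol into the binary tree
   (idx 'Tree) )          # walking the tree returns its contents sorted
-> (a b c d e f g)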

Have fun!

> On Sun, May 3, 2020 at 16:57 Guido Stepken  wrote:
>> [...]


Re: divmod?

2020-05-03 Thread Alexander Shendi (Web.DE)
Isn't Christian Schafmeister the guy attempting to make a Common Lisp frontend 
to the dreaded LLVM infrastructure?

SCNR 😇 

On May 3, 2020 at 23:17:49 CEST, Guido Stepken wrote:
>Plain wrong. Christian Schafmeister will teach you the use of Lisp in
>high(est) end number crunching:
>
>https://youtube.com/watch?v=8X69_42Mj-g
>
>He's the Super Brain behind all the compute stuff of that famous
>Genomic
>Reasearch Institute in NY (proteine folding ... Corona) ... ;-)
>
>In fact, he's using the AI Lisp language to compose all those mighty
>C/C++
>libraries to new libraries. Means: His Lisp AI is (re-)writing
>software.
>
>I fear, you're a decade behind of what's 'state of the art' in
>programming!
>Lisp, until today, is a highly important language. It also optimizes
>machine code within GCC, generating highest efficient machine code for
>any
>CPU in the world  - see MELT, a Lisp dialect:
>
>http://www.starynkevitch.net/Basile/gcc-melt/
>
>Binding GSL (GNU Scientific Library) and magic OpenBLAS (searching
>through
>huge graph structures in zero time) to PicoLisp is piece of cake.
>
>https://picolisp.com/wiki/?interfacing
>
>Automated marshalling and unmarshalling C interfaces in Lisp is a
>nobrainer, simply extract .c header files. Finished!
>
>Have fun!
>
>Best regards, Guido Stepken
>
>Am Sonntag, 3. Mai 2020 schrieb John Duncan :
>> For heavy number crunching, picolisp might not be appropriate. In
>modern
>systems you would probably want something that used the vector
>instructions. But if it’s a few divisions here and there, you’d be
>surprised how little the efficiency in clock cycles  matters anymore.
>> On Sun, May 3, 2020 at 14:28 Wilhelm Fitzpatrick 
>wrote:
>>>
>>> >> I'm not finding such a thing in the function reference, but
>asking on
>the off chance I'm
>>> >> overlooking it. Is there a way in Picolisp to get a division
>result
>and remainder as a single
>>> >> operation?
>>> > Sure
>>> > http://ix.io/2kBM
>>>
>>> Thanks! But as Alex intuited, I was looking to leverage the
>underlying
>>> processor operation that returns both parts of the integer divide in
>a
>>> single operation. But if I follow his response correctly, the cost
>of
>>> building the memory representation of the answer swamps the actual
>cost
>>> of the divide, and that's going to be similar regardless of if the
>>> divide and remainder wind up being one machine instruction or two.
>>>
>>> -wilhelm
>>>
>>>
>>>
>>> --
>>> UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe
>>
>> --
>> John Duncan

--
You have zero privacy anyway. Get over it.

Scott McNealy 1999

Subscribe

2020-05-03 Thread Narendra Joshi



Re: Structure sharing bignums in pil32

2020-05-03 Thread Alexander Burger
Hi Andras,

> I would like to ask whether structure sharing is used in pil32 bignums or not.
> In doSub() and doAbs() structure sharing is used, but otherwise big.c
> copies the argument and then modifies it.

Correct. And also in pil64 and pil21.

All arithmetic operations other than add/sub cannot share structures, because
the whole number is modified. What *is* used whenever possible are destructive
operations to avoid unnecessary copies.
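Just to illustrate the add/sub case with a toy model (Lisp-level digit lists,
least significant digit first - not how big.c actually represents bignums):
adding a small number only rewrites the low digits until the carry dies out, so
the untouched tail can be shared instead of copied:

: (de addDig (Ds N)
   (if (=0 N)
      Ds                                 # carry is gone: share the rest as-is
      (let S (+ (or (car Ds) 0) N)
         (cons (% S 10) (addDig (cdr Ds) (/ S 10))) ) ) )
-> addDig
: (setq A (list 1 2 3))      # 321
-> (1 2 3)
: (setq B (addDig A 7))      # 328
-> (8 2 3)
: (== (cdr A) (cdr B))       # the (2 3) tail is literally the same cells
-> T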

☺/ A!ex

-- 
UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe