"Lab rat retires on crypto mine in 'research app', drinks at 6."
--
Phobrain.com
On 2025-04-29 23:24, Ralf Gommers via NumPy-Discussion wrote:
> On Wed, Apr 30, 2025 at 6:08 AM Bill Ross wrote:
>
>> Why run someone else's code? Can't they monitor through git?
Why run someone else's code? Can't they monitor through git?
[reads more] .. why?
---
On 2025-04-29 15:38, Charles R Harris via NumPy-Discussion wrote:
> Just thought I'd pass this along for discussion.
>
> Chuck
>
> -- Forwarded message -
> From:
> Date:
> It is perfectly possible that the AI will largely or completely reproduce
> some existing GPL code for A, from its training data. There is no way that I
> could know that the AI has done that without some substantial research.
Even if it did, what if the common code were arrived at independently?
Could a sub-Python layer spin up extra threads (or processes)? Search could
easily benefit.
I switched back to Java for number crunching because one gets to share
memory without using OS-supplied shared memory. Maybe put a JVM behind
Python, or run Python on the JVM?
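For what it's worth, the stdlib now covers part of what the JVM gives you here: a minimal sketch of sharing a NumPy array across processes with `multiprocessing.shared_memory` (Python 3.8+; the `worker` name and array contents are illustrative, not from this thread):

```python
# Sketch: sharing a NumPy array across processes without a JVM,
# using the stdlib multiprocessing.shared_memory module.
import numpy as np
from multiprocessing import Process, shared_memory

def worker(shm_name, shape, dtype):
    # Attach to the existing block by name; no copy is made.
    shm = shared_memory.SharedMemory(name=shm_name)
    arr = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    arr *= 2            # in-place update, visible to the parent
    shm.close()

if __name__ == "__main__":
    src = np.arange(8, dtype=np.float64)
    shm = shared_memory.SharedMemory(create=True, size=src.nbytes)
    arr = np.ndarray(src.shape, dtype=src.dtype, buffer=shm.buf)
    arr[:] = src
    p = Process(target=worker, args=(shm.name, src.shape, src.dtype))
    p.start(); p.join()
    print(arr[3])       # 6.0
    shm.close(); shm.unlink()
```

This shares the raw buffer only, so each process still needs the shape/dtype passed separately.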
Bill
ByteBuffer.order().
Bill
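ByteBuffer.order() is the Java side of the fix; a sketch of the NumPy side of the same byte-order issue (the filename is illustrative): Java's DataInputStream and the default ByteBuffer order are big-endian, while native NumPy arrays on x86 are little-endian ('<f8'), so either call ByteBuffer.order(ByteOrder.LITTLE_ENDIAN) in Java or write big-endian from NumPy:

```python
# Sketch: write doubles big-endian so Java's defaults read them as-is.
import numpy as np

a = np.arange(4, dtype=np.float64)       # native (little-endian) order
a.astype('>f8').tofile('doubles.bin')    # big-endian doubles on disk

# Round-tripping from Python needs the same explicit dtype:
b = np.fromfile('doubles.bin', dtype='>f8')
assert (a == b).all()
```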
On 2023-01-01 08:31, Jerome Kieffer wrote:
> On Sun, 01 Jan 2023 05:31:55 -0800
> Bill Ross wrote:
>
> Thanks!
>
> Java is known to be big-endian ... your CPU is probably little-endian.
> $ lscpu | grep -i endian
> Byte Order: Little Endian
).flush()
Bill
On 2023-01-01 08:31, Jerome Kieffer wrote:
> On Sun, 01 Jan 2023 05:31:55 -0800
> Bill Ross wrote:
>
> Thanks!
>
> Java is known to be big-endian ... your CPU is probably little-endian.
> $ lscpu | grep -i endian
> Byte Order: Little Endian
Bill
On 2023-01-01 05:13, Jerome Kieffer wrote:
> On Sat, 31 Dec 2022 23:45:54 -0800
> Bill Ross wrote:
>
>> How best to write a 1D ndarray as a block of doubles, for reading in
>> java as double[] or a stream of double?
>>
>> Maybe the perfor
... the array, which is ~5% of memory; rather one gets 1 process with
e.g. 1475% CPU.)
Thanks,
Bill Ross
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@p
> ... binary
> JSON - in case there is such a need and interest among numpy users
>
> specifically, I want to first follow up with Bill's question below regarding
> loading time
>
> On 8/25/22 11:02, Bill Ross wrote:
>
>> Can you give load times for these?
Can you give load times for these?
> 8000128  eye5chunk.npy
> 5004297  eye5chunk_bjd_raw.jdb
>   10338  eye5chunk_bjd_zlib.jdb
>    2206  eye5chunk_bjd_lzma.jdb
For my case, I'd be curious about the time to add one 1T-entries file to
another.
Thanks,
Bill
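A minimal way to measure the kind of load times asked about above; this uses a placeholder array and filename, not the eye5chunk* files from the thread:

```python
# Sketch: timing np.load on a .npy file.
import time
import numpy as np

a = np.eye(1000)                    # ~8 MB of float64
np.save('eye.npy', a)

t0 = time.perf_counter()
b = np.load('eye.npy')
dt = time.perf_counter() - t0
print(f'np.load took {dt:.4f} s')
assert (a == b).all()
```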
On 2022-08-24 20
Thanks, np.lib.format.open_memmap() works great! With prediction procs
using minimal sys memory, I can get twice as many on GPU, with fewer
optimization warnings.
Why even have the number of records in the header? Shouldn't record size
plus system-reported/growable file size be enough?
I'd lov
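A minimal sketch of the open_memmap() usage mentioned above (the file name and shape are illustrative): the .npy header is where the record count lives, and the data pages are file-backed rather than heap RAM:

```python
# Sketch: np.lib.format.open_memmap writes a normal .npy header
# (shape, dtype) and memory-maps the data region.
import numpy as np

mm = np.lib.format.open_memmap('preds.npy', mode='w+',
                               dtype=np.float32, shape=(4, 3))
mm[:] = 1.0        # writes go to the mapped file
mm.flush()
del mm             # close the map

# Any later reader sees an ordinary .npy file and can map it read-only.
arr = np.load('preds.npy', mmap_mode='r')
assert arr.shape == (4, 3) and arr.dtype == np.float32
```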