When the values are nonnegative, their ratios are, too.
There must be something different happening if there are negative values.
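A quick numeric check of that sign observation (Python here, purely illustrative): successive ratios of nonnegative values are nonnegative, while a single negative value flips the sign of the ratios around it.

```python
# With nonnegative values, every successive ratio is nonnegative.
nonneg = [4.0, 2.0, 8.0]
nonneg_ratios = [b / a for a, b in zip(nonneg, nonneg[1:])]

# One negative value flips the sign of the ratios on either side of it.
mixed = [4.0, -2.0, 8.0]
mixed_ratios = [b / a for a, b in zip(mixed, mixed[1:])]
```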
On 03.04.20 at 03:29, HH PackRat wrote:
On 4/2/20, Raul Miller wrote:
One other thing -- after sleeping on this, I realized I had two
conflicting views about negative numbers in the stock values you were
working with:
(*) One is that negative numbers may appear in the data.
(*) The other is that negative numbers do not appear in the data.
Obviously, these cannot both be true.
On 3/31/20, Raul Miller wrote:
> If you have enough memory for the intermediate results, you would have
> no problems with a file that large. You need an order of magnitude
> more memory for intermediate results than the raw data, though.
I have a desktop PC with 12 GB memory and 2 TB + 1 TB hard drives.
So... I'm stuck at home and am explicitly not allowed to work on
production systems, so... anyways... here's a partially tested
implementation of something close to request #4. (I tested with
blocksize being 150 instead of 1e7 and with the test data provided):
example use:
(jpath '~user/output
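The implementation referenced above isn't reproduced in full here. As a rough, language-neutral sketch of the block-wise idea (read a fixed number of bytes at a time and carry the partial trailing line into the next block; the thread tested with blocksize 150 in place of 1e7), here is a Python version — the function name and callback are illustrative, not from the thread:

```python
def process_in_blocks(path, handle_lines, blocksize=10**7):
    """Read path in blocksize-byte chunks, passing complete lines to
    handle_lines and carrying any partial trailing line over into the
    next chunk. A small blocksize (e.g. 150) is handy for testing."""
    carry = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(blocksize)
            if not chunk:
                break
            chunk = carry + chunk
            lines = chunk.split(b"\n")
            carry = lines.pop()          # last piece may be incomplete
            handle_lines(lines)
    if carry:
        handle_lines([carry])            # final line lacking a newline
```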
There is definitely a way to do this by mapping the files, but if you don't know what mmap is you're going to have a hard time doing it. I think you might also have a hard time writing a better parser than the one in the csv addon (which is pretty slow, last I checked).
Jd is really the right way to go.
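For readers unfamiliar with mmap: the idea is to let the operating system page file data in on demand instead of reading the whole file up front. A minimal Python sketch (the helper name is mine, and this is not the csv addon's parser) that finds line starts in a large file without loading it into memory:

```python
import mmap

def line_starts(path):
    """Byte offsets where lines begin, found via a memory map so that
    only the pages actually touched are read into memory."""
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            offsets = [0]
            pos = mm.find(b"\n")
            while pos != -1:
                offsets.append(pos + 1)
                pos = mm.find(b"\n", pos + 1)
            # drop the trailing offset if the file ends with a newline
            if offsets[-1] >= len(mm):
                offsets.pop()
            return offsets
```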
On 3/31/20, 'Jim Russell' via Programming wrote:
If I may, and having heard no reply to my defense of component files, I'd
suggest jfiles as a potential solution. Each record can take whatever size you
are comfortable with: a month, year, or decade. Or am I missing something (as
usual)?
On Mar 31, 2020, at 8:05 PM, Raul Miller wrote:
If you have enough memory for the intermediate results, you would have
no problems with a file that large. You need an order of magnitude
more memory for intermediate results than the raw data, though.
Me, if I was working with something that big, I'd probably break it into pieces first, textually.
You should probably take a look at what Jd has for ingesting delimited files like this into a database.
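The "break it into pieces, textually" approach amounts to streaming the file once and appending each record to a per-symbol output file, so memory use stays flat no matter how large the input is. A hedged Python sketch, assuming a CSV with the symbol in the first column (the function name and file layout are illustrative, not from the thread):

```python
import csv
import os

def split_by_symbol(src_path, out_dir, symbol_col=0):
    """Stream src_path once, appending each row to a per-symbol CSV.

    Keeps one open handle per symbol seen so far; each output file
    gets a copy of the header row."""
    os.makedirs(out_dir, exist_ok=True)
    handles = {}
    try:
        with open(src_path, newline="") as src:
            reader = csv.reader(src)
            header = next(reader)
            for row in reader:
                sym = row[symbol_col]
                if sym not in handles:
                    f = open(os.path.join(out_dir, sym + ".csv"),
                             "w", newline="")
                    w = csv.writer(f)
                    w.writerow(header)
                    handles[sym] = (f, w)
                handles[sym][1].writerow(row)
    finally:
        for f, _ in handles.values():
            f.close()
```

With very many distinct symbols, the open-handle cache would need an eviction policy, but for a few thousand stocks this stays well under typical OS file-descriptor limits.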
On Tue, Mar 31, 2020 at 12:58 AM HH PackRat wrote:
Finishing up with function #4...
I have a very large file consisting of multiple sets of historical
stock prices that I would like to split into individual files for each
stock. (I'll probably first have to write out all the files to a USB
flash drive [I have limited hard drive space, but it m