> On Jun 1, 2017, at 10:52 AM, Steve Byan's Lists wrote:
>
> Hi Jon,
>
>> On May 31, 2017, at 6:41 PM, Steve Byan's Lists wrote:
>>
>>> On May 31, 2017, at 6:14 PM, Jon Zeppieri wrote:
>>>
>>> This way,
On Thu, Jun 1, 2017 at 11:07 AM, Jon Zeppieri wrote:
>
>> On Jun 1, 2017, at 10:52 AM, Steve Byan's Lists wrote:
>>
>> Hi Jon,
On May 31, 2017, at 6:41 PM, Steve Byan's Lists wrote:
On May
> On Jun 1, 2017, at 10:52 AM, Steve Byan's Lists wrote:
>
> Hi Jon,
>
>>> On May 31, 2017, at 6:41 PM, Steve Byan's Lists wrote:
>>>
>>> On May 31, 2017, at 6:14 PM, Jon Zeppieri wrote:
>>>
>>> So, for
Hi Jon,

> On May 31, 2017, at 6:41 PM, Steve Byan's Lists wrote:
>
>> On May 31, 2017, at 6:14 PM, Jon Zeppieri wrote:
>>
>> So, for example:
>>
>> (define (map-trace stat%-set in-port)
>>   (for/fold ([sexp-count 0])
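Jon's example is cut off in the archive preview. Here is one hedged guess at how such a fold over a port might continue; the loop body and the renamed parameters are mine, not Jon's (I avoid naming the port parameter `in-port` so it doesn't shadow Racket's `in-port` sequence constructor):

```racket
#lang racket/base

;; Hypothetical completion of the truncated sketch: fold over the data
;; read from the port, one datum at a time, counting records.
;; `note-record!` is an assumed stand-in for whatever per-record
;; statistics the real tool gathers.
(define (map-trace note-record! in)
  (for/fold ([sexp-count 0])
            ([sexp (in-port read in)])
    (note-record! sexp)
    (add1 sexp-count)))

;; Example: count three records from a string port.
(map-trace void (open-input-string "(a 1) (b 2) (c 3)"))  ; => 3
```

Because `in-port` drives the iteration, each s-expression is read, handled, and then becomes garbage before the next one is read.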
> On Jun 1, 2017, at 12:25 AM, Neil Van Dyke wrote:
>
> Steve Byan's Lists wrote on 05/31/2017 10:05 PM:
>> I'd appreciate a short example of what you mean by using `apply` and
>> `lambda` to destructure the list.
>
> I'll babble more than you want here, in case anyone
Steve Byan's Lists wrote on 05/31/2017 10:05 PM:
> I'd appreciate a short example of what you mean by using `apply` and
> `lambda` to destructure the list.

I'll babble more than you want here, in case anyone on the list is
curious in general...

#lang racket/base
(define
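Neil's actual example is truncated in the archive. A minimal illustration of the general `apply`-plus-`lambda` idiom he names (the field names here are guesses borrowed from the `pmem_flush` record elsewhere in the thread, not Neil's code):

```racket
#lang racket/base

;; Destructure a fixed-length list by `apply`-ing a `lambda` whose
;; parameter list names the positions. The lambda binds one parameter
;; per list element, so no explicit car/cdr chains are needed.
(apply (lambda (thread-id start-time elapsed-time result addr len)
         elapsed-time)
       '(140339047277632 923983542377819 160 0 #x7fa239055954 8))
;; => 160
```

This works only when the list's length matches the lambda's arity; a record of unexpected shape raises an arity error, which can double as a cheap validity check.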
On Wed, May 31, 2017 at 10:05 PM, Steve Byan's Lists wrote:
>
> I did consider using an association list representation for the attributes,
> but I'm depending on s-expression pattern matching for parsing the records.
> It's wonderfully convenient for this. I'm under
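The kind of s-expression pattern matching Steve describes might look like the following sketch with `racket/match`; the record shape is assumed from the `pmem_flush` example in the thread (the `length` value `8` is illustrative):

```racket
#lang racket/base
(require racket/match)

;; Parse one tagged trace record with a quasiquote pattern; each
;; unquoted variable binds the value of one attribute sublist.
(define (parse-record sexp)
  (match sexp
    [`(pmem_flush (threadId ,tid) (startTime ,start) (elapsedTime ,dt)
                  (result ,res) (addr ,addr) (length ,len))
     (list tid dt)]
    [_ #f]))

(parse-record '(pmem_flush (threadId 1) (startTime 2) (elapsedTime 3)
                           (result 0) (addr 4) (length 8)))
;; => '(1 3)
```

The convenience is real, but the pattern also fixes the attribute order, which is part of why a positional record (as suggested later in the thread) loses little expressiveness.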
Hi Neil,

Thanks for the comments.

> On May 31, 2017, at 8:21 PM, Neil Van Dyke wrote:
>
> In addition to what others have mentioned, at this scale, you might get
> significant gains by adjusting your s-expression language.
>
> For example, instead of this:
In addition to what others have mentioned, at this scale, you might get
significant gains by adjusting your s-expression language.

For example, instead of this:

    (pmem_flush
      (threadId 140339047277632)
      (startTime 923983542377819)
      (elapsedTime 160)
      (result 0)
      (addr 0x7fa239055954)
      (length
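Neil's message is cut off here, so the following is only a guess at where it is headed: keep the record tag, drop the per-field tags. The `length` value (`8`) is assumed, and `#x` replaces the tracer's `0x` notation so Racket's reader accepts the address:

```racket
#lang racket/base

;; The tagged record shape from the trace above:
(define tagged
  '(pmem_flush (threadId 140339047277632) (startTime 923983542377819)
               (elapsedTime 160) (result 0)
               (addr #x7fa239055954) (length 8)))

;; A positional variant of the same record:
(define positional
  '(pmem_flush 140339047277632 923983542377819 160 0 #x7fa239055954 8))

;; Same threadId either way, but the positional form allocates one
;; 7-element list per record instead of a list of six 2-element
;; sublists, and is roughly half the characters on disk.
(equal? (cadr positional)
        (cadr (assq 'threadId (cdr tagged))))  ; => #t
```

At millions of records per second, both the smaller file and the reduced cons-cell count per record add up.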
> On May 31, 2017, at 6:32 PM, Matthias Felleisen wrote:
>
>> On May 31, 2017, at 6:14 PM, Jon Zeppieri wrote:
>>
>> This way, you don't build up a list or a lazy stream; you just process
>> each datum as it's read.
>
> Yes, that’s what I
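The approach Jon describes can be sketched as a plain read loop (a minimal sketch, not code from the thread; `handle!` is a hypothetical per-record consumer):

```racket
#lang racket/base

;; Read and handle one datum at a time, so only the current record is
;; live; nothing accumulates unless the handler chooses to keep it.
(define (for-each-record handle! in)
  (let loop ()
    (define datum (read in))
    (unless (eof-object? datum)
      (handle! datum)
      (loop))))

;; Example: collect records from a string port.
(define seen '())
(for-each-record (lambda (d) (set! seen (cons d seen)))
                 (open-input-string "(a 1) (b 2)"))
(reverse seen)  ; => '((a 1) (b 2))
```

With a 10 GB trace, this keeps peak memory proportional to one record rather than to the whole file.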
Hi Jon,

> On May 31, 2017, at 6:14 PM, Jon Zeppieri wrote:
>
> On Wed, May 31, 2017 at 5:54 PM, Steve Byan's Lists wrote:
>> So, I don't want to try to fit all the records in memory at once. I thought
>> that the lazy stream would accomplish
> On May 31, 2017, at 6:14 PM, Jon Zeppieri wrote:
>
> On Wed, May 31, 2017 at 5:54 PM, Steve Byan's Lists wrote:
>> Hi Matthias,
>>
>> Thanks for taking a look.
>>
>>> On May 31, 2017, at 4:13 PM, Matthias Felleisen wrote:
On Wed, May 31, 2017 at 5:54 PM, Steve Byan's Lists wrote:
> Hi Matthias,
>
> Thanks for taking a look.
>
>> On May 31, 2017, at 4:13 PM, Matthias Felleisen wrote:
>>
>> Can you explain why you create a lazy stream instead of a plain list?
Hi Matthias,

Thanks for taking a look.

> On May 31, 2017, at 4:13 PM, Matthias Felleisen wrote:
>
> Can you explain why you create a lazy stream instead of a plain list?

The current size of a short binary trace file is about 10 GB, and I want to
scale to traces many
Can you explain why you create a lazy stream instead of a plain list? Your code
is strict in the stream, so the extra memory for a stream is substantial and
probably wasted. Perhaps the code below is simplified and you really need only
portions of the lazy stream. — Matthias
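Matthias's point can be illustrated with a sketch (mine, not code from the thread): a lazy stream built from a port buys nothing if the consumer forces every element anyway.

```racket
#lang racket/base
(require racket/stream)

;; Build a lazy stream of data from a port; each tail is read only
;; when forced.
(define (port->stream in)
  (define datum (read in))
  (if (eof-object? datum)
      empty-stream
      (stream-cons datum (port->stream in))))

;; A strict consumer such as stream-length forces every element, so
;; each record still gets read -- plus a promise allocated on top of
;; its cons cell. A direct read loop does the same work with less
;; allocation.
(stream-length (port->stream (open-input-string "(a) (b) (c)")))  ; => 3
```

Laziness only pays off when some tail of the stream is never forced, e.g. when analysis stops after a prefix of the trace.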
I've written a command-line tool in Racket to analyze the files produced by a
tool that traces accesses to persistent memory by an application. The traces
are large: about 5 million records per second of application run time. While
developing the tool in Racket was a pleasant, productive, and