What about logging aggressively but also aggressively dumping older log
entries? Keep only, say, the last 100 or 1000 entries. Something like:

(require '[clojure.java.io :as io])

(def logger (atom nil))   ; newest entry first

(def max-log-entries 1000)

(def log-file "whatever.log")   ; placeholder, any path you like

(defn log [msg]
  (swap! logger
    (fn [oldlog]
      (let [newlog (doall (take max-log-entries (cons msg oldlog)))]
        ;; rewrite the file with the current window of entries;
        ;; prn (rather than print) writes a readable form, so the file
        ;; can be slurped and read back after a crash
        (with-open [wr (io/writer log-file)]
          (binding [*out* wr]
            (prn newlog)))
        newlog))))

...

(log "something interesting happened in foo.core/quux!")

...

BOOM!

...

; examine log-file for last 1000 messages produced before the crash
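
For that last step, a minimal read-back sketch (assuming the prn-based
writer above; `recovered` is just an illustrative name):

(require '[clojure.edn :as edn])

;; newest message first, at most max-log-entries of them
(def recovered (edn/read-string (slurp log-file)))

(first recovered)   ; the last message logged before the crash
(count recovered)   ; how many entries survived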

NOTE: yeah, there's a side effect in the swap! function. But it's
semantically idempotent! Whenever the swap! completes, the contents of
the file equal the contents of the atom. So it just makes the atom
durable. :)
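
And if you'd rather keep the swap! function pure, roughly the same
durability can be had with a watcher instead. A sketch along these
lines (the ::persist key is arbitrary; io is the alias required above):

(add-watch logger ::persist
  (fn [_key _ref _old newlog]
    ;; called synchronously after every change to the atom
    (with-open [wr (io/writer log-file)]
      (binding [*out* wr]
        (prn newlog)))))

(defn log [msg]
  (swap! logger
    (fn [oldlog]
      (doall (take max-log-entries (cons msg oldlog))))))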



On Tue, May 28, 2013 at 2:17 PM, Lee Spector <lspec...@hampshire.edu> wrote:

>
> On May 28, 2013, at 12:37 PM, Cedric Greevey wrote:
> >
> > I think locals clearing is simply incompatible with ever having full
> debug info at an error site without anticipation of an error at that site,
> and with anticipation you can put debug prints, logging, watchers, and
> suchlike at the site.
>
> Too bad, if that's really the final word. It's so very useful to get the
> locals at the point of (unanticipated) error, and this is information that
> is already "known" to the system... and we just want it not to throw it
> away.
>
> > The lesson for clojure developers being, strive for reproducibility! You
> should already be using lots of pure functions and few side-effecting ones,
> and your IDE should keep a REPL history that can be used to reproduce, and
> perhaps to distill down to a small failing test, any kabooms you get while
> experimenting in the REPL. And if the domain inherently is going to produce
> hard-to-reproduce circumstances, invest proactively in aggressive and
> detailed logging.
>
> My domain (mostly research in evolutionary computation) is indeed
> inherently "going to produce hard-to-reproduce circumstances." And, as it
> happens, logging that is sufficiently "aggressive and detailed" to
> capture the things I'll want to see after a crash would produce
> astronomically (impractically) large log files. It'd be so much nicer just
> to see the values of the locals at the time of the crash, which are already
> there in memory if we could only grab them in time.
>
>  -Lee
>
