I haven't read all of the messages in this thread, so please pardon the top
post and possibly useless or redundant info, but my lazy and oh-so-very-wrong
method of grepping an entire fossil repo is to use fossil export and some
grepping. No need to put the whole expanded repo on disk.

First, find the mark record. I was looking for some code where I started
using a transaction:

$ fsl export | egrep '^data\s+[0-9]\s*$|^blob|^mark\s+:[0-9]+$|transaction' | less
....
mark :1904
blob
mark :1906
    (with-transaction
blob

Then use grep to find the commit comment and check-in info. In my case it was
in tasks.scm around Oct 23 (just convert the committer timestamp from epoch
seconds to a date):

commit refs/heads/trunk
mark :1909
committer matt <matt> 1319432626 +0000
data 31
Monitor based runs working well
from :1895
M 100644 :1896 Makefile
M 100644 :1898 db.scm
M 100644 :1900 runs.scm
M 100644 :1906 tasks.scm
M 100644 :1902 tests/Makefile
M 100644 :1904 tests/megatest.config

commit refs/heads/trunk
mark :1915
committer matt <matt> 1319449002 +0000
data 43
Added missing dashboard-guimonitor.scm file
from :1909
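Scripted, the two steps above can be sketched roughly as below. The mark
(:1906) and the export records are taken from the example output in this
message; in a real repo you would pipe fossil export in directly, with a
here-doc standing in for it here so the sketch runs on its own:

```shell
#!/bin/sh
# Sketch of the workflow above: find the commit record that references a
# given blob mark, then convert its committer timestamp to a date.
# A here-doc stands in for `fossil export` so this runs without a repo.
export_stream() {
cat <<'EOF'
commit refs/heads/trunk
mark :1909
committer matt <matt> 1319432626 +0000
data 31
Monitor based runs working well
from :1895
M 100644 :1906 tasks.scm

commit refs/heads/trunk
mark :1915
committer matt <matt> 1319449002 +0000
data 43
Added missing dashboard-guimonitor.scm file
from :1909
EOF
}

mark=':1906'   # the blob mark found in step 1

# Paragraph mode (RS='') makes awk treat each blank-line-separated record
# as one unit, so only commit records mentioning the mark are printed.
record=$(export_stream | awk -v RS='' -v m="$mark" 'index($0, m)')
echo "$record"

# Pull the epoch seconds out of the committer line and convert them.
epoch=$(printf '%s\n' "$record" | awk '/^committer/ { print $(NF-1) }')
date -u -d "@$epoch"    # GNU date; on BSD/macOS use: date -u -r "$epoch"
```

The paragraph-mode awk trick is just one way to slice fast-export output;
for a quick look, grep -B 2 on the export stream gets you the mark lines
above a match almost as easily.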




On Mon, Jan 28, 2013 at 1:06 PM, Joerg Sonnenberger <jo...@britannica.bec.de
> wrote:

> On Mon, Jan 28, 2013 at 08:35:57PM +0100, j. v. d. hoff wrote:
> > this would not prevent,
> > that people run into the exponential run time problem when using the
> > "naive" pattern instead the anchored one, but this could be
> > explained by a FAQ entry making
> > the problem practically irrelevant. or do I still miss the relevant
> point?
>
> The very existence of "naive patterns" is the point. It is kind of a FAQ
> with the canonical answer being
> http://swtch.com/~rsc/regexp/regexp1.html.
>
> Joerg
> _______________________________________________
> fossil-users mailing list
> fossil-users@lists.fossil-scm.org
> http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users
>