Thanks Niphlod. I'm intrigued by the diffbook project, so I'll take a look 
at your GitHub repo. My main interest is in streamlining my workflow (not 
just webdev, but my academic research and teaching as well) around a large 
folder of flat files where changes automatically trigger other events (like 
publishing to a blog).

I think you're right that I've confused this kind of flat-file storage with 
static site generators like Nikola. So I'll have to think more about it.

When you suggest auth.wiki(), do you mean using it to publish text in blog 
format? It doesn't have anything built in to pull in external text files, or does it?
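
From the book it looks like auth.wiki() is exposed from a single controller 
action, with the pages stored in database tables it creates on first use 
(MARKMIN markup), e.g.:

    # controllers/default.py
    def index():
        # serves web2py's built-in wiki; pages live in db tables
        # (wiki_page and friends), not in flat files on disk
        return auth.wiki()

so I'm guessing I'd still need my own code to pull external files in.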

Ian

On Tuesday, June 25, 2013 4:24:45 PM UTC-4, Niphlod wrote:
>
> If you want to reinvent the wheel you can play with that, but a "spate" 
> (as you describe it) is actually something that serializes a somewhat 
> fixed structure of flat files into STATIC webpages, on a given command.
>
> I actually did it for the diffbook, http://niphlod.github.io/diffbook/, 
> to use gh-pages as a hosting platform. That is a web2py application that 
> does its own processing and then gets serialized by a script (a simple 
> wget).
>
> AFAIK, none of the recent "spates" retrieves/recompiles blog posts in 
> real time.
>
> If you need that kind of "process", just use auth.wiki().
>
> On Tuesday, June 25, 2013 8:55:58 PM UTC+2, Ian W. Scott wrote:
>>
>> I'm attracted to the spate of recent flat-file blogging platforms that 
>> use plain-text (Markdown) files to store the blog posts. I especially like 
>> the idea of keeping the files in Dropbox or a git repo. 
>>
>> Has anyone experimented with this in web2py? I can imagine at least two 
>> ways of doing it:
>>
>> 1. A controller retrieves the text file at runtime based on its filename, 
>> parses it as necessary, and passes the result through to the view. This 
>> strikes me as both the simplest and the slowest option. It would bypass the 
>> web2py model structure altogether, but it could be sped up significantly by 
>> caching pages on the server (rough sketch below). 
>>
>> 2. A watcher of some sort (a cron/scheduler job, another utility watching 
>> for file changes, a git hook?) notices new or changed files in the content 
>> directory and triggers a background process that parses each file and 
>> stores its content in the db. This has the advantage of using a model and 
>> the associated data abstraction, and I assume first page access would be 
>> faster. But I'm not sure the small speed-up is worth the extra complexity, 
>> especially if the advantage is negated by server-side caching (second 
>> sketch below).
>>
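>> Roughly what I have in mind for option 1, untested and with placeholder 
>> names (the private/content folder and the slug convention are just my 
>> assumptions):
>>
>>     # controllers/default.py -- serve one post from a flat file
>>     import os
>>     from gluon.contrib.markdown import WIKI as render_markdown
>>
>>     def post():
>>         slug = request.args(0) or redirect(URL('index'))
>>         path = os.path.join(request.folder, 'private', 'content',
>>                             slug + '.md')
>>         if not os.path.isfile(path):
>>             raise HTTP(404)
>>         # cache the rendered page in RAM for 5 minutes so the
>>         # file isn't re-read and re-parsed on every request
>>         body = cache.ram('post:' + slug,
>>                          lambda: render_markdown(open(path).read()),
>>                          time_expire=300)
>>         return dict(body=body)
>>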
>> I suppose that something like 2 would have to be present in 1 anyway, 
>> since the system would have to recognize the presence of a new file and add 
>> it to the index of available posts.
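>>
>> For option 2, I imagine a web2py scheduler task along these lines (again 
>> untested; the table and task names are made up):
>>
>>     # in a model file -- poll the content folder and upsert posts
>>     import os
>>     from gluon.scheduler import Scheduler
>>
>>     db.define_table('post',
>>                     Field('slug', unique=True),
>>                     Field('body', 'text'),
>>                     Field('mtime', 'double'))
>>
>>     def import_posts():
>>         folder = os.path.join(request.folder, 'private', 'content')
>>         for name in os.listdir(folder):
>>             if not name.endswith('.md'):
>>                 continue
>>             slug, path = name[:-3], os.path.join(folder, name)
>>             mtime = os.path.getmtime(path)
>>             row = db(db.post.slug == slug).select().first()
>>             if row is None or row.mtime < mtime:
>>                 # new or changed file: (re)load it into the db
>>                 db.post.update_or_insert(db.post.slug == slug,
>>                                          slug=slug,
>>                                          body=open(path).read(),
>>                                          mtime=mtime)
>>         db.commit()
>>
>>     scheduler = Scheduler(db, tasks=dict(import_posts=import_posts))
>>     # queue once (e.g. from the admin shell) to poll every minute:
>>     # scheduler.queue_task('import_posts', period=60, repeats=0)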
>>
>> Are there other approaches I'm not thinking of? Tools or libraries that 
>> would be useful in the process? Or do you think it's all just not worth the 
>> trouble?
>>
>> Thanks,
>>
>> Ian 
>>
>
