2009/6/25 Eric Van Hensbergen <[email protected]>:
>
> 2009/6/25 Devon H. O'Dell <[email protected]>:
>>
>> 2009/6/25 Manzur <[email protected]>:
>>>
>>> On 24 Jun, 00:26, "Federico G. Benavento" <[email protected]> wrote:
>>>> I have a particular question: in the end, are you going to be able to use
>>>> rm, cp, etc.?
>>>>
>>>> Why not just "touch file" instead of "echo add"?
>>>>
>>>
>>> For what purpose do you want to use cp and rm?
>>
>> I think the idea is, `why can't I just cp /foo/bar /mnt/git' and have
>> that function as `git/add bar'; rm as git/remove or whatever. The idea
>> of having it in a fileserver is novel only when you are able to
>> interact with the fileserver using standard utilities. I think we're
>> not understanding what git/add is supposed to do that a cp wouldn't
>> accomplish.
>>
>
> A potential point of confusion is repository actions versus sandbox
> actions. Git (like other source control systems) forces explicit
> actions to add stuff from the sandbox to the repository (and to
> remove it) so that transient files (like object files or editor
> temporary files) don't flow into the repo.
>
> Conceptually, dynamic binding and private namespaces could be used to
> address these, but how to do this isn't completely clear (to me). You
> could explicitly mark the "source" files with chmod (or, by default,
> mark all files in the sandbox as temporary and explicitly chmod out
> the temporary flag via an explicit command or script). Another option
> would be to have a copy-on-change synthetic file system sandbox bound
> over the gitfs, but then you'd need to explicitly identify which of
> the files you changed are source files.
>
> In my mind, figuring out how to do this naturally is the true crux of
> this project and probably should have been worked out during the
> proposal phase. That being said, the current approach being taken by
> the student should suffice as a low-level file system, with a follow-up
> project being the implementation of a higher-level abstraction (which
> is in part defined by the development workflow) that actually gives
> the natural file system semantics most of the responders
> (including myself) seem to desire.
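(For the chmod idea quoted above, here is a purely hypothetical sketch.
Assume a gitfs sandbox mounted at /n/git/sandbox in which every file starts
out marked temporary; the mount point and the convention are invented, and
it only works if gitfs honours the Plan 9 temporary bit the way Eric
describes:

$ cd /n/git/sandbox
$ chmod -t main.c    # clear the temporary bit: main.c is now "source"
$ mk                 # main.o and friends stay temporary and never reach the repo

That is just one way the marking could look.)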
To me this is more a question of how much internal Git machinery
one wants to expose/see. The power of Git on a conventional
UNIX system clearly stems from the fact that it uses a three-stage
approach:
[working tree] <---> [index] <---> [history]
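In plain git terms (these are just the standard commands, nothing
gitfs-specific), the pipeline reads left to right like this:

$ echo hello > README
$ git add README                  # working tree -> index
$ git commit -m 'add README'      # index -> history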
It is clear (to me at least) that history is a sort of singleton object.
It has a single instance and that's it. There may be different ways
of manipulating it, but you are always manipulating the same
thing.
The working tree, on the other hand, has to be "polymorphic".
IOW, I would like to see as many working trees as I have SHA ids
in my history. And I also want these trees to be addressable
by tags, refs, etc.
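Purely hypothetically (none of these paths exist in the current gitfs; the
layout is only meant to illustrate the idea), that could look like:

$ ls /n/git/tree
HEAD    master    v1.0    f00ba4
$ ls /n/git/tree/v1.0
README    mkfile    src

where f00ba4 stands in for a full SHA id.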
Finally, the index is how you get from a working tree to history.
Perhaps it is worth exposing as well, so that regardless
of what you do, your trees are all read-only, and you'd always
have to do
$ bind -ac /path/to/index/ /path/to/a/working/tree
to make a working tree writable.
At that point, the question of adding to history becomes
more or less a question of committing the state of that
extra tree representing the index.
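To make that concrete, the kind of session I imagine would be something like
the following (the ctl file and all of the paths are invented for
illustration; the current gitfs has nothing of the sort):

$ bind -ac /n/git/index /n/git/tree/HEAD    # the index becomes the writable layer
$ sam /n/git/tree/HEAD/README               # edits land in the index, not in history
$ echo commit 'tweak README' > /n/git/ctl   # turn the index state into a new commit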
But, as you said, this might be something to consider for
the next project.
As Ron pointed out, at this point it is more of a webfs
than what we all want.
Thanks,
Roman.