Christian Couder <[email protected]> writes:

> Git can store its objects only in the form of loose objects in
> separate files or packed objects in a pack file.
> To be able to better handle some kinds of objects, for example big
> blobs, it would be nice if Git could store its objects in other object
> databases (ODBs).
>
> To do that, this patch series makes it possible to register commands,
> using "odb.<odbname>.command" config variables, to access external
> ODBs. Each specified command will then be called in the following ways:

Hopefully it is done via a cheap RPC instead of forking/execing the
command for each and every object lookup.
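
(To make sure I am reading the registration part correctly, I imagine
it would look something like this in the repository's config, where
"magic" and the helper path are of course made up by me:

    [odb "magic"]
        command = /usr/local/bin/git-odb-magic

and Git would then run that command with "have", "get" or "put" as
its first argument, as described below.)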

>   - "<command> have": the command should output the sha1, size and
> type of all the objects the external ODB contains, one object per
> line.

Why are the size and type needed by the clients at this point?  They
are more expensive to compute than just a bare list of object names.
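
(To be concrete, I take it each of these lines would look like

    2ab5d8b7b8e54c3f9a0d1e6c4b7a8f90d2c3e4f5 8276 blob

with made-up values here, i.e. the helper must already know the size
and type of every single object it holds, which is exactly the cost
I am worried about.)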

>   - "<command> get <sha1>": the command should then read from the
> external ODB the content of the object corresponding to <sha1> and
> output it on stdout.

The type and size should be given at this point.
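
(For example, a header line in front of the raw content would do,
similar to what "git cat-file --batch" prints:

    <sha1> SP <type> SP <size> LF
    <raw contents>

The exact format does not matter much; the point is that the reader
learns the type and size when it actually fetches the object, not
from a separate expensive listing.)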

>   - "<command> put <sha1> <size> <type>": the command should then read
> from stdin an object and store it in the external ODB.

Is the ODB required to sanity-check that <sha1> matches what the
data hashes down to?
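
Something along these lines (a rough Python sketch; the flat-file
store, the ".meta" sidecar files and the helper layout are all made
up by me, not something the series prescribes) is what I would
expect a careful helper to do, in particular on "put":

#!/usr/bin/env python3
# Rough sketch of an external ODB helper speaking the proposed
# "have"/"get"/"put" protocol.  The storage layout (one file per
# object plus a "<sha1>.meta" sidecar holding "<size> <type>") is an
# assumption, not part of the series.
import hashlib
import os
import sys

STORE = os.path.expanduser("~/.git-external-odb")   # made-up location

def path_of(sha1):
    return os.path.join(STORE, sha1)

def cmd_have():
    # Print "<sha1> <size> <type>", one object per line.
    if not os.path.isdir(STORE):
        return
    for name in os.listdir(STORE):
        if name.endswith(".meta"):
            continue
        with open(path_of(name) + ".meta") as f:
            size, objtype = f.read().split()
        print(f"{name} {size} {objtype}")

def cmd_get(sha1):
    # Stream the raw object content to stdout.
    with open(path_of(sha1), "rb") as f:
        sys.stdout.buffer.write(f.read())

def cmd_put(sha1, size, objtype):
    data = sys.stdin.buffer.read(int(size))
    # Sanity check: recompute the object name the way Git does,
    # sha1("<type> <size>\0" + payload), and refuse mismatching data.
    header = f"{objtype} {len(data)}\0".encode()
    if hashlib.sha1(header + data).hexdigest() != sha1:
        sys.exit(f"fatal: data does not hash to {sha1}")
    os.makedirs(STORE, exist_ok=True)
    with open(path_of(sha1), "wb") as f:
        f.write(data)
    with open(path_of(sha1) + ".meta", "w") as f:
        f.write(f"{len(data)} {objtype}\n")

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: helper {have | get <sha1> | put <sha1> <size> <type>}")
    cmd, args = sys.argv[1], sys.argv[2:]
    if cmd == "have":
        cmd_have()
    elif cmd == "get":
        cmd_get(*args)
    elif cmd == "put":
        cmd_put(*args)
    else:
        sys.exit("usage: helper {have | get <sha1> | put <sha1> <size> <type>}")

Hashing "<type> <size>\0" plus the payload is how Git names its
objects anyway, so such a check is cheap to do on the helper side.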

If this thing is primarily to offload large blobs, you might also
want not "get" but "checkout <sha1> <path>" to bypass Git entirely,
but I haven't thought it through.