On 13/08/08 12:53 PM, "Gudmundur A. Thorisson" <[EMAIL PROTECTED]> wrote:

> Cool, so nobody has requested this before, hm. I am not working on
> Mart directly myself; somebody else in our group is working to
> integrate a core set of data from several locus-specific databases
> (LSDBs), and though the initial plan is to bring all the data to a
> central location, the idea of distributed 'sub-marts' did come up.
> This use case is near-identical to what Henrikki wants to do, which is
> why I brought it up on the list.
>
>    So, do you guys on the dev team want to just put the basic idea on
> your ToDo list, or should we provide a more fleshed-out use case /
> user story ?
>

Definitely a use case please :)
A.


>
>                   Mummi
>
> On 13 Aug 2008, at 16:57, Arek Kasprzyk wrote:
>>
>>
>>
>> On 13/08/08 10:01 AM, "Gudmundur A. Thorisson"
>> <[EMAIL PROTECTED]> wrote:
>>
>>> Hi Arek. Do I then understand correctly that there is currently no
>>> way
>> to have multiple remote (but identically-configured) marts appear as
>> *one* dataset on a single portal mart? Basically like the existing
>>> partitions I suppose, but with each partition residing in a remote
>>> mart.
>>
>> Hi Mummi,
>> No, you can't at present. This is because partitions are defined in a
>> single
>> MartEditor config file which lives in a single mart, thereby assuming
>> that all partitions (e.g. what appear in Ensembl to be separate
>> species
>> datasets) will reside in the same mart.
>>
>>
>>
>>> Moving all data to one instance as Henrikki is basically forced
>>> to do isn't  always possible, e.g. for licensing reasons, or due to
>>> large size.
>>>   Sorry if this has come up before on mart-dev and I'm repeating a
>>> previous discussion, but are there any plans to support something
>>> like
>>> this in a future version?
>>
>> This is an interesting use case that we have not come across yet. We
>> do have
>> merges on our to-do list but never considered distributed
>> partitions :) It
>> certainly sounds interesting :) Let us review our requirements for
>> the next
>> release.
>>
>> A.
>>
>
